Most Common Data Modelling Interview Questions
Data modeling is the process of creating models for the data to be stored in a database. Technically, it comprises a conceptual representation of data objects, the associations between different data objects, and the rules governing them. It involves normalization, enforcing data integrity, and describing business entities along with their relations. Read our list of Data Modelling Interview Questions to understand this subject on a deeper level and crack that job interview.
Most Frequently Asked Data Modelling Interview Questions
| OLTP | OLAP |
|---|---|
| An online transactional system. | An online analysis and data-retrieval system. |
| Comprises a large number of short online transactions. | Comprises a large volume of data. |
| Uses a traditional DBMS. | Uses a data warehouse. |
| Tables are normalized. | Tables are not normalized. |
| Designed for real-time business operations, so it needs a fast response rate. | Designed for the analysis of business measures by attributes and categories. |
There are three types of data models, in general:
1. Conceptual Model: This data model defines what the system contains. The conceptual model is typically created and used by business stakeholders and data architects. Its purpose is to organize, scope, and define business concepts and rules.
2. Logical Model: The logical model defines how the system should be implemented, independent of any specific DBMS. It is usually created by business analysts and data architects. Its primary purpose is to develop a technical map of the rules and data structures. It shows the entity names, relationships, attributes, primary keys, and foreign keys of each entity.
3. Physical Model: The physical data model describes how the system will be implemented using a specific DBMS. This model is usually created by DBAs and developers. Its purpose is the actual implementation of the database. It displays the table names, column names, column data types, and the primary and foreign keys.
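To make the physical level concrete, here is a minimal sketch (using a hypothetical `customer` table in SQLite; the table and column names are illustrative, not from the original) of how a physical model pins down table names, column names, data types, and keys:

```python
import sqlite3

# Hypothetical physical model for a "customer" entity: concrete
# table/column names, SQLite data types, and key constraints.
conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,  -- primary key
    full_name   TEXT NOT NULL,
    email       TEXT UNIQUE,
    created_at  TEXT                  -- ISO-8601 timestamp string
)
""")

# Inspect the columns the physical model defines.
cols = [row[1] for row in conn.execute("PRAGMA table_info(customer)")]
print(cols)  # -> ['customer_id', 'full_name', 'email', 'created_at']
```

Note how decisions that the logical model leaves open (exact data types, nullability, uniqueness) are fixed here for one specific DBMS.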
There are two types of data modeling schemas:
1. Star Schema: The simplest form of schema, consisting of a fact table at the center referencing multiple dimension tables around it. Every dimension table is connected directly to the fact table, and the fact table holds foreign keys that reference the primary keys of the dimension tables.
2. Snowflake Schema: A more complex schema with a higher degree of normalization. The fact table is the same as in the star schema, but the dimension tables are normalized into multiple related tables. Because of these multiple levels of dimension tables, the diagram resembles a snowflake, hence the name.
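The star schema above can be sketched in DDL. This is a hypothetical retail example (table and column names such as `fact_sales` and `dim_product` are illustrative), showing a central fact table whose foreign keys reference the dimension tables' primary keys:

```python
import sqlite3

# Hypothetical star schema for retail sales: one fact table
# surrounded by two dimension tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_product (
    product_id   INTEGER PRIMARY KEY,
    product_name TEXT,
    category     TEXT
);
CREATE TABLE dim_date (
    date_id INTEGER PRIMARY KEY,
    day     TEXT,
    month   TEXT,
    year    INTEGER
);
-- The fact table's foreign keys reference the dimension tables'
-- primary keys; the measures (units_sold, revenue) live here.
CREATE TABLE fact_sales (
    product_id INTEGER REFERENCES dim_product(product_id),
    date_id    INTEGER REFERENCES dim_date(date_id),
    units_sold INTEGER,
    revenue    REAL
);
""")
```

In a snowflake schema, `dim_product` would itself be split further, e.g. into a separate `category` table referenced by `dim_product`.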
Denormalization is the process of adding redundant data to a normalized design in order to speed up reads, and it is commonly used when constructing a data warehouse. It should be used when a table is involved in frequent, join-heavy retrieval queries.
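A minimal sketch of the idea, using a hypothetical `sales_denorm` table: the product name is copied redundantly into the sales table, so a frequent read query can be answered with a single-table scan instead of a join:

```python
import sqlite3

# Hypothetical denormalized table: product_name is duplicated
# from the (omitted) product table to avoid a join on reads.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales_denorm (
    sale_id      INTEGER PRIMARY KEY,
    product_name TEXT,     -- redundant copy from the product table
    units_sold   INTEGER
);
INSERT INTO sales_denorm VALUES (1, 'Widget', 3), (2, 'Gadget', 5);
""")

# No join needed: the descriptive attribute is already in the row.
rows = conn.execute(
    "SELECT product_name, SUM(units_sold) FROM sales_denorm "
    "GROUP BY product_name ORDER BY product_name"
).fetchall()
print(rows)  # -> [('Gadget', 5), ('Widget', 3)]
```

The trade-off is write-side complexity: the redundant column must be kept consistent whenever the product name changes.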
Dimensions represent qualitative data, for example product, class, or plan. A dimension table holds textual or descriptive attributes; for example, product category and product name are two attributes of the product dimension table.
There are three types of measures:
1. Non-additive measures: Measures on which no aggregation function can be applied. Example: a ratio or percentage column.
2. Semi-additive measures: Measures on which some aggregation functions can be applied. Example: an account balance (it can be summed across accounts, but not across time) or a fee rate.
3. Additive measures: Measures on which all aggregation functions can be applied. Example: number of units purchased.
A primary key is a column or group of columns that uniquely identifies each and every row in a table. A primary key value cannot be null, and a table can have at most one primary key.
A foreign key is a column or group of columns used to link a child table to a parent table. The foreign key column lives in the child table, and its values reference the values of the corresponding primary key in the parent table.
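The parent/child relationship can be demonstrated with a hypothetical `department` (parent) and `employee` (child) pair; in SQLite, foreign-key enforcement must be switched on explicitly:

```python
import sqlite3

# Hypothetical parent/child tables: employee.department_id must
# match a primary-key value in department.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.executescript("""
CREATE TABLE department (
    department_id INTEGER PRIMARY KEY,   -- unique, non-null
    name          TEXT
);
CREATE TABLE employee (
    employee_id   INTEGER PRIMARY KEY,
    department_id INTEGER REFERENCES department(department_id)
);
INSERT INTO department VALUES (1, 'Engineering');
""")

conn.execute("INSERT INTO employee VALUES (100, 1)")  # OK: parent row exists
try:
    conn.execute("INSERT INTO employee VALUES (101, 99)")  # no department 99
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

The second insert fails because the foreign key value has no matching primary key in the parent table, which is exactly the integrity rule the relationship encodes.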