CS614-Midterm
1 / 50
People who design and build the data warehouse must be capable of working across the organization at all levels.
2 / 50
Naturally Evolving architecture occurred when an organization had a _______ approach to handling the whole process of hardware and software architecture.
3 / 50
Companies collect and record their own operational data, but at the same time they also use reference data obtained from _______ sources, such as codes, prices, etc.
4 / 50
There are many variants of the traditional nested-loop join. If the index is built as part of the query plan and subsequently dropped, it is called
5 / 50
In horizontal splitting, we split a relation into multiple tables on the basis of
6 / 50
7 / 50
Non-uniform distribution of data across the processors is called ______.
8 / 50
The goal of ideal parallel execution is to completely parallelize those parts of a computation that are not constrained by data dependencies. The __________ the portion of the program that must be executed sequentially, the greater the scalability of computation.
9 / 50
DTS allows us to connect through any data source or destination that is supported by ____________
10 / 50
The divide & conquer cube partitioning approach helps alleviate the ____________ limitations of MOLAP implementation.
11 / 50
The users of the data warehouse are knowledge workers; in other words, they are _______ in the organization.
12 / 50
Pre-computed _______ can solve performance problems
13 / 50
It is observed that every year the amount of data recorded in an organization:
14 / 50
Focusing on data warehouse delivery only often ends up _________.
15 / 50
_____________ is a process that involves gathering information about a column through execution of certain queries, with the intention of identifying erroneous records.
16 / 50
It is observed that every year the amount of data recorded in an organization is
17 / 50
Pakistan is one of the five major ________ countries in the world.
18 / 50
DSS queries do not involve a primary key
19 / 50
________ is the technique in which existing heterogeneous segments are reshuffled and relocated into homogeneous segments.
20 / 50
21 / 50
Suppose the amount of data recorded in an organization is doubled every year. This increase is __________ .
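Doubling every year is exponential (geometric) growth; a quick sketch with a hypothetical starting volume:

```python
base_gb = 100  # hypothetical starting volume in GB (illustrative figure)
sizes = [base_gb * 2 ** year for year in range(4)]
print(sizes)  # each year's volume is double the previous one's
```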
22 / 50
During the application specification activity, we also must give consideration to the organization of the applications.
23 / 50
When performing objective assessments, companies follow a set of principles to develop metrics specific to their needs; it is hard to have a “one size fits all” approach. Which of the following statements represents the pervasive functional forms?
24 / 50
25 / 50
If 'M' rows from table-A match the conditions in the query then table-B is accessed 'M' times. Suppose table-B has an index on the join column. If 'a' I/Os are required to read the data block for each scan and 'b' I/Os for each data block then the total cost of accessing table-B is _____________ logical I/Os approximately.
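The usual approximation for this indexed nested-loop probe cost is M × (a + b); a minimal sketch, assuming 'a' covers the index I/Os and 'b' the data-block I/Os per probe (the function name and the a/b split are illustrative assumptions):

```python
def indexed_nlj_cost(m_rows: int, a_index_ios: int, b_data_ios: int) -> int:
    """Total logical I/Os when table-B is probed once per matching row
    of table-A: each probe pays a + b, so M probes cost M * (a + b)."""
    return m_rows * (a_index_ios + b_data_ios)
```

For example, 1,000 matching rows with a = 3 and b = 1 cost about 4,000 logical I/Os.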
26 / 50
We must try to find the one access tool that will handle all the needs of the users.
27 / 50
De-Normalization normally speeds up
28 / 50
Normalization affects performance.
29 / 50
The degree of similarity between two records, often measured by a numerical value between _______, usually depends on application characteristics.
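One common way to get such a [0, 1] similarity score in record matching is a Jaccard coefficient over character trigrams (an illustrative choice; the question only fixes the range, not the measure):

```python
def jaccard_similarity(a: str, b: str) -> float:
    """Score in [0, 1]: 1.0 for identical strings, 0.0 for strings
    sharing no trigrams. The trigram choice is an assumption here."""
    def trigrams(s: str) -> set:
        s = s.lower()
        # fall back to the whole string when it is shorter than 3 chars
        return {s[i:i + 3] for i in range(len(s) - 2)} or {s}
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)
```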
30 / 50
To judge effectiveness we perform data profiling twice.
31 / 50
The need to synchronize data upon update is called
32 / 50
Data mining uses _________ algorithms to discover patterns and regularities in data.
33 / 50
B-Tree is used as an index to provide access to records
34 / 50
A dense index, if it fits into memory, costs only ______ disk I/O access to locate a record by a given key.
35 / 50
Data mining evolved as a mechanism to cater for the limitations of _____ systems in dealing with massive data sets with high dimensionality, new data types, multiple heterogeneous data sources, etc.
36 / 50
A virtual cube is used to query two similar cubes by creating a third “virtual” cube through a join between the two cubes.
37 / 50
38 / 50
Data warehousing and on-line analytical processing (OLAP) are _______ elements of decision support system.
39 / 50
The automated, prospective analyses offered by data mining move beyond the analyses of past events provided by _____________ tools typical of decision support systems.
40 / 50
An optimized structure which is built primarily for retrieval, with update being only a secondary consideration, is
41 / 50
To identify the __________________ required, we need to perform data profiling.
42 / 50
NUMA stands for __________
43 / 50
A ________ dimension is a collection of random transactional codes, flags, and/or text attributes that are unrelated to any particular dimension. The ______ dimension is simply a structure that provides a convenient place to store the ______ attributes.
44 / 50
The key idea behind ___________ is to take a big task and break it into subtasks that can be processed concurrently on a stream of data inputs in multiple, overlapping stages of execution.
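A minimal single-process sketch of the idea using Python generators: each stage starts consuming rows before the previous stage has finished producing them. (Real pipeline parallelism would run the stages on separate workers; this only illustrates the overlapping dataflow.)

```python
def source(rows):
    # Stage 1: emit one input row at a time
    for r in rows:
        yield r

def increment(stream):
    # Stage 2: begins work as soon as stage 1 yields its first row
    for r in stream:
        yield r + 1

def keep_even(stream):
    # Stage 3: overlaps with both earlier stages on the same stream
    for r in stream:
        if r % 2 == 0:
            yield r

pipeline = keep_even(increment(source([1, 2, 3, 4])))
print(list(pipeline))  # [2, 4]
```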
45 / 50
Data Warehouse provides the best support for analysis while OLAP carries out the _________ task.
46 / 50
The goal of star schema design is to simplify ________
47 / 50
________ gives total view of an organization
48 / 50
_____ contributes to an under-utilization of valuable and expensive historical data, and inevitably results in a limited capability to provide decision support and analysis.
49 / 50
_________ breaks a table into multiple tables based upon common column values.
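A sketch of the idea: one table becomes several, one per distinct value of a chosen column (the `region` column and the sample rows here are hypothetical):

```python
from collections import defaultdict

def horizontal_split(rows, column):
    """Partition rows into separate 'tables' keyed by the values found
    in `column` -- each row lands in exactly one partition."""
    partitions = defaultdict(list)
    for row in rows:
        partitions[row[column]].append(row)
    return dict(partitions)

sales = [
    {"region": "north", "amount": 10},
    {"region": "south", "amount": 20},
    {"region": "north", "amount": 5},
]
tables = horizontal_split(sales, "region")  # two tables: north, south
```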
50 / 50
The purpose of the House of Quality technique is to reduce ______ types of risk.