
Wednesday, March 4, 2009


A. DEFINITIONS: -DATA STRUCTURE-

1. A data structure is a structure formed and used by programmers to make it easier to store the data that needs to be processed. The choice of structure depends on what kind of data will be stored, so that the data can be held properly in memory. Memory matters whenever we talk about data structures: with a good data structure, it is convenient to work with data in memory. Examples of structures are: stacks, linked lists, queues, arrays, graphs, trees, etc. A data structure is also a way of storing data in the computer so that it can be used efficiently by the algorithms that operate on it in memory. -personal definition...

2. Data structure in computer science is a way of storing data in a computer so that it can be used efficiently. It is an organization of mathematical and logical concepts of data. Often a carefully chosen data structure will allow the most efficient algorithm to be used. The choice of the data structure often begins from the choice of an abstract data type -http://en.wikipedia.org/wiki/Data_structure

3. An organization of information, usually in memory, for better algorithm efficiency, such as queue, stack, linked list, heap, dictionary, and tree, or conceptual unity, such as the name and address of a person. It may include redundant information, such as length of the list or number of nodes in a subtree.
-http://www.itl.nist.gov/div897/sqg/dads/HTML/datastructur.html

4. In computer science, an organization in software of data that allows more optimal searching, categorizing, or storage of information. Examples: matrix, stack, queue, dequeue, list, vector, scene graph.
-http://en.wiktionary.org/wiki/data_structure

5. A data structure is a specialized format for organizing and storing data. General data structure types include the array, the file, the record, the table, the tree, and so on. Any data structure is designed to organize data to suit a specific purpose so that it can be accessed and worked with in appropriate ways. In computer programming, a data structure may be selected or designed to store data for the purpose of working on it with various algorithms.
-http://searchsqlserver.techtarget.com/sDefinition/0,,sid87_gci804744,00.html

6. A means of storing a collection of data. Computer science is in part the study of methods for effectively using a computer to solve problems, or in other words, determining exactly the problem to be solved. This process entails (1) gaining an understanding of the problem; (2) translating vague descriptions, goals, and contradictory requests, and often unstated desires, into a precisely formulated conceptual solution; and (3) implementing the solution with a computer program. This solution typically consists of two parts: algorithms and data structures.
-http://www.answers.com/topic/data-structure

7. A data structure is a group of data elements grouped together under one name. These data elements, known as members, can have different types and different lengths.
http://www.cplusplus.com/doc/tutorial/structures.html

8. The way data is encoded by a computer. Data structure is not usually the concern of the user. However, in certain cases, e.g. when the user tries to read a text file produced by a different word processor, he or she may become aware of the fact that a similar screen display does not necessarily mean an identical data structure for the computer.
-http://www.uni-duisburg-essen.de/CP/key_terms.htm

9. A way to store and organize data in order to facilitate access and modifications.
- www.azdoqit.com/LS4/Glossary%20of%20EHR%20Terms.doc

10. An organised aggregate of data items. -http://www.cs.ucc.ie/~dgb/courses/swd/glossary.html

------------------------------------------------------------------------------------------------------

B. 3 DATA STRUCTURES UNCOVERED

B.1 VLIST DATA STRUCTURES

-In computer science, the VList is a persistent data structure designed by Phil Bagwell in 2002 that combines the fast indexing of arrays with the easy extension of cons-based (or singly-linked) linked lists.[1]

OPERATIONS TO ENTER ELEMENTS (DATA)

The primary operations of a VList are:

  • Locate the kth element (O(1) average, O(log n) worst-case)
  • Add an element to the front of the VList (O(1) average, with an occasional allocation)
  • Obtain a new array beginning at the second element of an old array (O(1))
  • Compute the length of the list (O(log n))

STRUCTURE

The underlying structure of a VList can be seen as a singly-linked list of arrays whose sizes decrease geometrically; in its simplest form, the first contains the first half of the elements in the list, the next the first half of the remainder, and so on. Each of these blocks stores some information such as its size and a pointer to the next.

The average constant-time indexing operation comes directly from this structure; given a random valid index, we simply observe the size of the blocks and follow pointers until we reach the one it should be in.

DATA RETRIEVAL

Any particular reference to a VList is actually a <base, offset> pair indicating the position of its first element in the data structure described above. The base part indicates which of the arrays its first element falls in, while the offset part indicates its index in that array. This makes it easy to "remove" an element from the front of the list; we simply increase the offset, or increase the base and set the offset to zero if the offset goes out of range. If a particular reference is the last to leave a block, the block will be garbage-collected if such facilities are available, or otherwise must be freed explicitly.

Because the lists are constructed incrementally, the first array in the array list may not contain twice as many values as the next one, although the rest do; this does not significantly impact indexing performance. We nevertheless allocate this much space for the first array, so that if we add more elements to the front of the list in the future we can simply add them to this list and update the size. If the array fills up, we create a new array, twice as large again as this one, and link it to the old first array.
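The structure described above can be sketched in a few lines of Python. The class and method names below are illustrative, not part of any library; real VList implementations share blocks between list versions for persistence, which this sketch omits:

```python
# Minimal VList sketch: a singly linked list of blocks whose sizes
# halve as we move away from the front, giving O(1) average indexing.

class _Block:
    def __init__(self, capacity, next_block):
        self.slots = [None] * capacity  # fixed-size array
        self.used = 0                   # how many slots are filled
        self.next = next_block          # link to the next, smaller block

class VList:
    def __init__(self):
        self.first = _Block(1, None)

    def push_front(self, value):
        # If the front block is full, allocate one twice as large
        # and link it to the old first block.
        if self.first.used == len(self.first.slots):
            self.first = _Block(2 * len(self.first.slots), self.first)
        self.first.slots[self.first.used] = value
        self.first.used += 1

    def __getitem__(self, k):
        # Element 0 is the most recently pushed value.
        block = self.first
        while block is not None:
            if k < block.used:
                # Within a block, newer elements sit at higher slots.
                return block.slots[block.used - 1 - k]
            k -= block.used
            block = block.next
        raise IndexError(k)

    def __len__(self):
        # O(log n): one addition per block.
        n, block = 0, self.first
        while block is not None:
            n += block.used
            block = block.next
        return n
```

Pushing 1, 2, 3 to the front yields blocks of capacity 2 and 1; indexing walks at most O(log n) blocks, and on average stops in the first one.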



B.2 HASH TABLE

-In computer science, a hash table, or a hash map, is a data structure that associates keys with values.

THE STRUCTURE

The primary operation that hash tables support efficiently is lookup: given a key (e.g., a person's name), find the corresponding value (e.g., that person's telephone number). A lookup works by transforming the key, using a hash function, into a hash: a number that is used as an index into an array to locate the desired location ("bucket") where the values should be.

Hash tables support efficient lookup, insertion and deletion of elements in constant time on average (O(1)), independent of the number of elements stored in the table, although performance may vary somewhat depending on how full the table is.

BASIC OPERATION

The hash is normally converted into an array index by taking it modulo the array size; sometimes bit masking is used instead, where the array size is a power of two. The optimal hash function for any given use of a hash table can vary widely, however, depending on the nature of the key.
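As a sketch, the lookup path described above can be written in a few lines of Python. The `HashTable` class below is hypothetical, kept minimal to show the hash-then-modulo indexing and chained buckets; Python's built-in `dict` is a production-quality hash table:

```python
# Minimal hash table sketch: modulo indexing plus chaining
# (each bucket is a list of (key, value) pairs).

class HashTable:
    def __init__(self, size=8):
        self.buckets = [[] for _ in range(size)]  # one chain per bucket

    def _index(self, key):
        # Transform the key into a hash, then into an array index.
        return hash(key) % len(self.buckets)

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        raise KeyError(key)
```

For example, `t.put("alice", "555-1234")` followed by `t.get("alice")` hashes `"alice"` twice to the same bucket and finds the pair in its chain.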

It is also possible to create a hash table statically where, for example, there is a fairly limited fixed set of input values, such as the values representable by a single byte (or possibly two bytes), from which an index can be constructed directly. The hash table can simultaneously be used to test whether a given value is allowed.

LOAD FACTOR: ACCOMMODATE

An important property of a hash table is how "full" the table is, that is, the ratio between the number of entries n and the size s of the hash table (i.e. the size of the array it uses to store values). The quotient n/s is therefore called the load factor of the hash table.

Most hash table implementations only perform well if the load factor is kept in a certain range.
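In code, the load factor and a resize check are one-liners. The 0.75 threshold below is an illustrative choice (it happens to be a common default, e.g. in Java's `HashMap`, but it is not a universal constant):

```python
# Load factor sketch: n entries spread over s buckets.
# When the quotient n/s passes a threshold, a typical
# implementation grows the bucket array and rehashes.

def load_factor(n_entries, n_buckets):
    return n_entries / n_buckets

def needs_resize(n_entries, n_buckets, threshold=0.75):
    return load_factor(n_entries, n_buckets) > threshold
```

With 8 buckets, the seventh insertion pushes the load factor to 0.875 and triggers a resize under this policy.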

RETRIEVAL

Hash tables store data in pseudo-random locations, so accessing the data in a sorted manner is a very time-consuming operation. Other data structures such as self-balancing binary search trees generally operate more slowly (since their lookup time is O(log n)) and are rather more complex to implement than hash tables, but they maintain a sorted data structure at all times.


B.3 HEAP DATA STRUCTURE

THE STRUCTURE
-In computer science, a heap is a specialized tree-based data structure that satisfies the heap property: if B is a child node of A, then key(A) ≥ key(B). This implies that an element with the greatest key is always in the root node, and so such a heap is sometimes called a max heap. (Alternatively, if the comparison is reversed, the smallest element is always in the root node, which results in a min heap.) This is why heaps are used to implement priority queues. The efficiency of heap operations is crucial in several graph algorithms.

OPERATIONS ON THE NODES:

The operations commonly performed with a heap are:

  • delete-max or delete-min: removing the root node of a max- or min-heap, respectively
  • increase-key or decrease-key: updating a key within a max- or min-heap, respectively
  • insert: adding a new key to the heap
  • merge: joining two heaps to form a valid new heap containing all the elements of both.
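The first and third operations above can be sketched for a binary max-heap stored in a plain Python list. The function names are illustrative; insert sifts the new key up, delete-max moves the last element to the root and sifts it down:

```python
# Binary max-heap sketch over a Python list (zero-based):
# parent of index i is (i - 1) // 2, children are 2i + 1 and 2i + 2.

def heap_insert(heap, key):
    heap.append(key)
    i = len(heap) - 1
    while i > 0 and heap[(i - 1) // 2] < heap[i]:  # child beats parent?
        heap[i], heap[(i - 1) // 2] = heap[(i - 1) // 2], heap[i]
        i = (i - 1) // 2

def heap_delete_max(heap):
    top = heap[0]
    heap[0] = heap[-1]          # move the last element to the root
    heap.pop()
    i = 0
    while True:                 # sift down until the heap property holds
        largest = i
        for child in (2 * i + 1, 2 * i + 2):
            if child < len(heap) and heap[child] > heap[largest]:
                largest = child
        if largest == i:
            return top
        heap[i], heap[largest] = heap[largest], heap[i]
        i = largest
```

Both operations touch one root-to-leaf path, which is why they run in O(lg n) on a binary heap, matching the table below.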
COMPARISON OF THEORETIC BOUNDS FOR VARIANTS

The following complexities[1] are worst-case for binary and binomial heaps and amortized complexity for Fibonacci heap. O(f) gives asymptotic upper bound and Θ(f) is asymptotically tight bound (see Big O notation). Function names assume a min-heap.

Operation      Binary     Binomial           Fibonacci
createHeap     Θ(1)       Θ(1)               Θ(1)
findMin        Θ(1)       Θ(lg n) or Θ(1)    Θ(1)
deleteMin      Θ(lg n)    Θ(lg n)            O(lg n)
insert         Θ(lg n)    O(lg n)            Θ(1)
decreaseKey    Θ(lg n)    Θ(lg n)            Θ(1)
merge          Θ(n)       O(lg n)            Θ(1)

For pairing heaps, the insert and merge operations are conjectured to be of O(1) amortized complexity, but this has not yet been proven. decreaseKey is not of O(1) amortized complexity.[1][2]

HEAP RETRIEVAL

Heaps are a favorite data structure for many applications.

Interestingly, full and almost full binary heaps may be represented using an array alone. The first (or last) element will contain the root. The next two elements of the array contain its children. The next four contain the four children of the two child nodes, etc. Thus the children of the node at position n would be at positions 2n and 2n+1 in a one-based array, or 2n+1 and 2n+2 in a zero-based array. Balancing a heap is done by swapping elements which are out of order. As we can build a heap from an array without requiring extra memory (for the nodes, for example), heapsort can be used to sort an array in-place.
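The array layout and in-place heapsort described above can be sketched as follows (zero-based indexing, so the children of index i sit at 2i + 1 and 2i + 2; the function names are illustrative):

```python
# In-place heapsort sketch: build a max-heap bottom-up,
# then repeatedly swap the maximum to the end of the array.

def _sift_down(a, i, end):
    # Restore the heap property for a[i], only looking at indices < end.
    while True:
        largest = i
        for child in (2 * i + 1, 2 * i + 2):
            if child < end and a[child] > a[largest]:
                largest = child
        if largest == i:
            return
        a[i], a[largest] = a[largest], a[i]
        i = largest

def heapsort(a):
    # Bottom-up construction: sift down every internal node.
    # This is the linear-time build step mentioned below.
    for i in range(len(a) // 2 - 1, -1, -1):
        _sift_down(a, i, len(a))
    # Sort phase: the max moves to the shrinking tail of the array.
    for end in range(len(a) - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        _sift_down(a, 0, end)
    return a
```

No auxiliary nodes or pointers are needed; the entire heap lives in the array being sorted, which is exactly why heapsort is in-place.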

One more advantage of heaps over trees in some applications is that a heap can be constructed in linear time, using Floyd's bottom-up heap-construction algorithm.

HEAP IMPLEMENTATION

  • The C++ Standard Template Library provides the make_heap, push_heap and pop_heap algorithms for binary heaps, which operate on arbitrary random access iterators. It treats the iterators as a reference to an array, and uses the array-to-heap conversion detailed above.
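Python's standard-library heapq module offers analogous operations on a plain list, with the caveat that it implements a min-heap rather than a max-heap:

```python
# heapq mirrors the C++ STL heap algorithms, treating a Python
# list as a binary min-heap in the array layout described above.
import heapq

data = [5, 1, 4, 2, 3]
heapq.heapify(data)              # roughly make_heap, but a MIN-heap
heapq.heappush(data, 0)          # roughly push_heap
smallest = heapq.heappop(data)   # roughly pop_heap; yields the minimum, 0
```

To get max-heap behaviour out of heapq, a common trick is to push negated keys and negate them again on the way out.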





