Saturday, March 5, 2011

Characteristics of a Discrete System Simulation Language

    Some features of a discrete system simulation (DSS) language are:
1. Provision for the concept of model building
2. Timing control
3. Expressing stochastic processes
4. Program debugging facilities


For more details visit : http://www.gurukpo.com/

Wednesday, February 23, 2011

Data Blocks

Oracle Database allocates logical database space for all data in a database. The units of database space allocation are data blocks, extents, and segments. The following describes the relationships among these data structures.

At the finest level of granularity, Oracle Database stores data in data blocks (also called logical blocks, Oracle blocks, or pages). One data block corresponds to a specific number of bytes of physical database space on disk.

                                     
The next level of logical database space is an extent. An extent is a specific number of contiguous data blocks allocated for storing a specific type of information.
The level of logical database storage greater than an extent is called a segment. A segment is a set of extents, each of which has been allocated for a specific data structure and all of which are stored in the same tablespace. For example, each table's data is stored in its own data segment, while each index's data is stored in its own index segment. If the table or index is partitioned, each partition is stored in its own segment.
Oracle Database allocates space for segments in units of one extent. When the existing extents of a segment are full, Oracle Database allocates another extent for that segment. Because extents are allocated as needed, the extents of a segment may or may not be contiguous on disk.
A segment and all its extents are stored in one tablespace. Within a tablespace, a segment can include extents from more than one file; that is, the segment can span data files. However, each extent can contain data from only one data file.
Although you can allocate additional extents, the blocks themselves are allocated separately. If you allocate an extent to a specific instance, the blocks are immediately allocated to the free list. However, if the extent is not allocated to a specific instance, then the blocks themselves are allocated only when the high water mark moves. The high water mark is the boundary between used and unused space in a segment.
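The grow-by-one-extent behaviour described above can be sketched in Python. This is a toy model, not Oracle's actual allocation code; the names `Segment` and `EXTENT_BLOCKS` are invented for the illustration:

```python
# Toy model of segment growth: when all existing extents are full,
# one more extent (a run of contiguous blocks) is allocated.
EXTENT_BLOCKS = 8  # hypothetical extent size, in data blocks

class Segment:
    def __init__(self):
        self.extents = []      # each extent is a list of used-block flags
        self.used_blocks = 0

    def allocate_extent(self):
        self.extents.append([False] * EXTENT_BLOCKS)

    def use_block(self):
        # Allocate another extent only when every existing one is full.
        if self.used_blocks == len(self.extents) * EXTENT_BLOCKS:
            self.allocate_extent()
        ext, off = divmod(self.used_blocks, EXTENT_BLOCKS)
        self.extents[ext][off] = True
        self.used_blocks += 1

seg = Segment()
for _ in range(17):        # 17 blocks need 3 extents of 8
    seg.use_block()
print(len(seg.extents))    # 3
```

Note how the extents are allocated only as needed, which is why, as the text says, the extents of a real segment may or may not end up contiguous on disk.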
Overview of Data Blocks
Oracle Database manages the storage space in the data files of a database in units called data blocks. A data block is the smallest unit of data used by a database. In contrast, at the physical, operating system level, all data is stored in bytes. Each operating system has a block size. Oracle Database requests data in multiples of Oracle Database data blocks, not operating system blocks.
The standard block size is specified by the DB_BLOCK_SIZE initialization parameter. In addition, you can specify up to five nonstandard block sizes. The data block sizes should be a multiple of the operating system's block size, within the maximum limit, to avoid unnecessary I/O. Oracle Database data blocks are the smallest units of storage that Oracle Database can use or allocate.
Data Block Format

The Oracle Database data block format is similar regardless of whether the data block contains table, index, or clustered data. A data block has the following parts:
Header (Common and Variable)
The header contains general block information, such as the block address and the type of segment (for example, data or index).
Table Directory
This portion of the data block contains information about the table having rows in this block.
Row Directory
This portion of the data block contains information about the actual rows in the block (including addresses for each row piece in the row data area).
After the space has been allocated in the row directory of a data block's overhead, this space is not reclaimed when the row is deleted. Therefore, a block that is currently empty but had up to 50 rows at one time continues to have 100 bytes allocated in the header for the row directory. Oracle Database reuses this space only when new rows are inserted in the block.
Overhead
The data block header, table directory, and row directory are referred to collectively as overhead. Some block overhead is fixed in size; the total block overhead size is variable. On average, the fixed and variable portions of data block overhead total 84 to 107 bytes.
Row Data
This portion of the data block contains table or index data. Rows can span blocks.
Free Space
Free space is allocated for insertion of new rows and for updates to rows that require additional space (for example, when a trailing null is updated to a nonnull value).
In data blocks allocated for the data segment of a table or cluster, or for the index segment of an index, free space can also hold transaction entries. A transaction entry is required in a block for each INSERT, UPDATE, DELETE, and SELECT...FOR UPDATE statement accessing one or more rows in the block. The space required for transaction entries is operating system dependent; however, transaction entries in most operating systems require approximately 23 bytes.
Free Space Management
Free space can be managed automatically or manually.
Free space can be managed automatically inside database segments. The in-segment free/used space is tracked using bitmaps, as opposed to free lists. Automatic segment-space management offers the following benefits:
  • Ease of use
  • Better space utilization, especially for objects with highly varying row sizes
  • Better run-time adjustment to variations in concurrent access
  • Better multi-instance behavior in terms of performance and space utilization
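As a rough sketch of the bitmap idea (a simplification, not Oracle's on-disk bitmap format), in-segment free-space tracking might look like this:

```python
# Simplified in-segment free-space bitmap: one bit per data block,
# 1 = block has free space, 0 = block is full.
class SpaceBitmap:
    def __init__(self, nblocks):
        self.bits = [1] * nblocks          # all blocks start with free space

    def find_free_block(self):
        for i, bit in enumerate(self.bits):
            if bit:
                return i
        return None                        # no free block: extend the segment

    def mark_full(self, i):
        self.bits[i] = 0

bm = SpaceBitmap(4)
blk = bm.find_free_block()   # 0
bm.mark_full(blk)
print(bm.find_free_block())  # 1
```

Compared with a free list, a bitmap like this lets the database test and update a block's state with simple bit operations, which is part of why automatic segment-space management scales better under concurrent access.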

Monday, February 14, 2011

Log Files

Each Oracle database has a redo log. This redo log records all changes made to data files.
Purpose
The redo log makes it possible to replay the changes made to the database.
Before Oracle changes data in a data file, it writes these changes to the redo log. If something happens to one of the data files, a backed-up data file can be restored and the redo that was written since can be replayed, which brings the data file back to the state it had before it became unavailable.
Archive Log vs. No archive Log
As Oracle rotates through its redo log groups, it will eventually overwrite a group which it has already written to. Data that is being overwritten would of course be useless for a recovery scenario. In order to prevent that, a database can (and for production databases should) be run in archive log mode. Simply stated, in archive log mode, Oracle makes sure that online redo log files are not overwritten unless they have been safely archived somewhere.
A database can only be recovered from media failure if it runs in archive log mode.
Log Buffer
All changes that are covered by redo are first written into the log buffer. Storing them in memory first reduces disk I/O. Of course, when a transaction commits, the redo log buffer must be flushed to disk, because otherwise the recovery of that commit could not be guaranteed. It is LGWR (the log writer process) that does this flushing.
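The buffer-then-flush behaviour can be sketched as follows; `RedoBuffer` is a toy stand-in for the mechanism, not the real LGWR implementation:

```python
# Toy redo buffer: changes accumulate in memory; a commit forces a
# flush to the "disk" list, mimicking LGWR writing the buffer out.
class RedoBuffer:
    def __init__(self):
        self.buffer = []   # in-memory redo entries (cheap to append)
        self.disk = []     # entries made durable by the flush

    def log_change(self, entry):
        self.buffer.append(entry)          # no disk I/O yet

    def commit(self):
        self.disk.extend(self.buffer)      # flush: commit is now recoverable
        self.buffer.clear()

r = RedoBuffer()
r.log_change("update t set x=1")
r.log_change("update t set y=2")
r.commit()
print(len(r.disk))   # 2
```

The point of the sketch is the ordering: the change records exist in memory before the commit, and the commit is not acknowledged until they are on disk.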

Data modeling

Data modeling is the act of exploring data-oriented structures.  Like other modeling artifacts, data models can be used for a variety of purposes, from high-level conceptual models to physical data models.  From the point of view of an object-oriented developer, data modeling is conceptually similar to class modeling: with data modeling you identify entity types, whereas with class modeling you identify classes.

Data attributes are assigned to entity types just as you would assign attributes and operations to classes.  There are associations between entities, similar to the associations between classes – relationships, inheritance, composition, and aggregation are all applicable concepts in data modeling.

Traditional data modeling is different from class modeling because it focuses solely on data – class models allow you to explore both the behavior and data aspects of your domain, whereas with a data model you can only explore data issues.  Because of this focus, data modelers have a tendency to be much better at getting the data “right” than object modelers.  However, some people will model database methods (stored procedures, stored functions, and triggers) when they are physical data modeling.

For more detail  visit : http://www.gurukpo.com/

Thursday, February 10, 2011

SQL*Loader

SQL*Loader is a very flexible utility that allows you to load data from a flat file into one or more database tables. That's the sole reason for SQL*Loader's existence.
The basis for almost everything you do with SQL*Loader is a file known as the control file. The SQL*Loader control file is a text file into which you place a description of the data to be loaded. You also use the control file to tell SQL*Loader which database tables and columns should receive the data that you are loading.
Do not confuse SQL*Loader control files with database control files. Database control files are binary files containing information about the physical structure of your database. They have nothing to do with SQL*Loader. SQL*Loader control files, on the other hand, are text files containing commands that control SQL*Loader's operation.
Once you have a data file to load and a control file describing the data contained in that data file, you are ready to begin the load process. You do this by invoking the SQL*Loader executable and pointing it to the control file that you have written. SQL*Loader reads the control file to get a description of the data to be loaded. Then it reads the input file and loads the input data into the database.
SQL*Loader is a very flexible utility, and this short description doesn't begin to do it justice. The rest of this chapter provides a more detailed description of the SQL*Loader environment and a summary of SQL*Loader's many capabilities.
The SQL*Loader Environment
When we speak of the SQL*Loader environment, we are referring to the database, the SQL*Loader executable, and all the different files that you need to be concerned with when using SQL*Loader.

The SQL*Loader Control File

The SQL*Loader control file is the key to any load process. The control file provides the following information to SQL*Loader:
  • The name and location of the input data file
  • The format of the records in the input data file
  • The name of the table or tables to be loaded
  • The correspondence between the fields in the input record and the columns in the database tables being loaded
  • Selection criteria defining which records from the input file contain data to be inserted into the destination database tables.
  • The names and locations of the bad file and the discard file
Some of the items shown in this list may also be passed to SQL*Loader as command-line parameters. The name and location of the input file, for example, may be passed on the command line instead of in the control file. The same goes for the names and locations of the bad files and the discard files.
It's also possible for the control file to contain the actual data to be loaded. This is sometimes done when small amounts of data need to be distributed to many sites, because it reduces (to just one file) the number of files that need to be passed around. If the data to be loaded is contained in the control file, then there is no need for a separate data file.

The Log File

The log file is a record of SQL*Loader's activities during a load session. It contains information such as the following:
  • The names of the control file, log file, bad file, discard file, and data file
  • The values of several command-line parameters
  • A detailed breakdown of the fields and datatypes in the data file that was loaded
  • Error messages for records that cause errors
  • Messages indicating when records have been discarded
  • A summary of the load that includes the number of logical records read from the data file, the number of rows rejected because of errors, the number of rows discarded because of selection criteria, and the elapsed time of the load
Always review the log file after a load to be sure that no errors occurred, or at least that no unexpected errors occurred. This type of information is written to the log file, but is not displayed on the terminal screen.

The Bad File and the Discard File

Whenever you insert data into a database, you run the risk of that insert failing because of some type of error. Integrity constraint violations undoubtedly represent the most common type of error. However, other problems, such as the lack of free space in a tablespace, can also cause insert operations to fail. Whenever SQL*Loader encounters a database error while trying to load a record, it writes that record to a file known as the bad file.
Discard files, on the other hand, are used to hold records that do not meet selection criteria specified in the SQL*Loader control file. By default, SQL*Loader will attempt to load all the records contained in the input file. You have the option, though, in your control file, of specifying selection criteria that a record must meet before it is loaded. Records that do not meet the specified criteria are not loaded, and are instead written to a file known as the discard file.
Discard files are optional. You will only get a discard file if you've specified a discard file name, and if at least one record is actually discarded during the load. Bad files are not optional. The only way to avoid having a bad file generated is to run a load that results in no errors. If even one error occurs, SQL*Loader will create a bad file and write the offending input record (or records) to that file.
The format of your bad files and discard files will exactly match the format of your input files. That's because SQL*Loader writes the exact records that cause errors, or that are discarded, to those files. If you are running a load with multiple input files, you will get a distinct set of bad files and discard files for each input file.
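The routing of records into loaded rows, the bad file, and the discard file can be sketched in Python. This is an illustration of the behaviour described above, not the real SQL*Loader utility; the record layout, the selection rule, and the constraint are all invented for the example:

```python
# Toy version of SQL*Loader's record routing: records that fail the
# selection criterion go to the discard file; records that fail the
# database insert go to the bad file; the rest are loaded.
def run_load(records, selection, insert):
    loaded, bad, discard = [], [], []
    for rec in records:
        if not selection(rec):
            discard.append(rec)            # did not meet selection criteria
            continue
        try:
            insert(rec)
            loaded.append(rec)
        except ValueError:
            bad.append(rec)                # database error during insert
    return loaded, bad, discard

# Hypothetical data: load only rows whose dept is 'IT';
# an empty name simulates an integrity-constraint violation.
def insert(rec):
    if not rec["name"]:
        raise ValueError("NOT NULL constraint violated")

rows = [{"name": "a", "dept": "IT"},
        {"name": "",  "dept": "IT"},
        {"name": "b", "dept": "HR"}]
loaded, bad, discard = run_load(rows, lambda r: r["dept"] == "IT", insert)
print(len(loaded), len(bad), len(discard))   # 1 1 1
```

As in the real tool, a discarded record never reaches the insert step, while a bad record is one the database itself rejected.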


 

Wednesday, February 9, 2011

Database Administrator

A database administrator (DBA) is a person responsible for the design, implementation, maintenance, and repair of an organization's database. DBAs are also known by the titles Database Coordinator or Database Programmer, and the role is closely related to the Database Analyst, Database Modeler, Programmer Analyst, and Systems Manager. The role includes the development and design of database strategies, monitoring and improving database performance and capacity, and planning for future expansion requirements. DBAs may also plan, co-ordinate, and implement security measures to safeguard the database.
Personal Characteristics/Skills:
1. Strong organizational skills.
2. Strong logical and analytical thinking.
3. Ability to concentrate and pay close attention to detail.
4. Strong written and verbal communication skills.
5. Willingness to pursue education throughout one's career.


A database administrator's activities can be listed as below:
  1. Transferring Data
  2. Replicating Data
  3. Maintaining database and ensuring its availability to users
  4. Controlling privileges and permissions to database users
  5. Monitoring database performance
  6. Database backup and recovery
  7. Database security
For more details visit:- http://www.gurukpo.com/

Tuesday, February 8, 2011

Oracle Architecture


The terms related to the above figure are described below:

The System Global Area (SGA)

The SGA is a shared memory region that Oracle uses to store data and control information for one Oracle instance. The SGA is allocated when the Oracle instance starts and deallocated when the Oracle instance shuts down. Each Oracle instance that starts has its own SGA. The information in the SGA consists of the following elements, each of which has a fixed size and is created at instance startup:
  • The database buffer cache--This stores the most recently used data blocks. These blocks can contain modified data that has not yet been written to disk (sometimes known as dirty blocks), blocks that have not been modified, or blocks that have been written to disk since modification (sometimes known as clean blocks). Because the buffer cache keeps blocks based on a most recently used algorithm, the most active buffers stay in memory to reduce I/O and improve performance.
  • The redo log buffer--This stores redo entries, or a log of changes made to the database. The redo log buffers are written to the redo log as quickly and efficiently as possible. Remember that the redo log is used for instance recovery in the event of a system failure.
  • The shared pool--This is the area of the SGA that stores shared memory structures such as shared SQL areas in the library cache and internal information in the data dictionary. The shared pool is important because an insufficient amount of memory allocated to the shared pool can cause performance degradation. The shared pool consists of the library cache and the data-dictionary cache.
The Library Cache
The library cache is used to store shared SQL. Here the parse tree and the execution plan for every unique SQL statement are cached. If multiple applications issue the same SQL statement, the shared SQL area can be accessed by each to reduce the amount of memory needed and to reduce the processing time used for parsing and execution planning.
The Data-Dictionary Cache
The data dictionary contains a set of tables and views that Oracle uses as a reference to the database. Oracle stores information here about the logical and physical structure of the database. The data dictionary contains information such as the following:
  • User information, such as user privileges
  • Integrity constraints defined for tables in the database
  • Names and data types of all columns in database tables
  • Information on space allocated and used for schema objects
The data dictionary is frequently accessed by Oracle for the parsing of SQL statements. This access is essential to the operation of Oracle; performance bottlenecks in the data dictionary affect all Oracle users. Because of this, you should make sure that the data-dictionary cache is large enough to cache this data. If you do not have enough memory for the data-dictionary cache, you see a severe performance degradation. If you ensure that you have allocated sufficient memory to the shared pool, where the data-dictionary cache resides, you should see no performance problems.

The Program Global Area (PGA)

The PGA is a memory area that contains data and control information for the Oracle server processes. The size and content of the PGA depends on the Oracle server options you have installed. This area consists of the following components:
  • Stack space--This is the memory that holds the session's variables, arrays, and so on.
  • Session information--If you are not running the multithreaded server, the session information is stored in the PGA. If you are running the multithreaded server, the session information is stored in the SGA.
  • Private SQL area--This is an area in the PGA where information such as binding variables and runtime buffers is kept.
For more details visit:- http://www.gurukpo.com/ 

Thursday, February 3, 2011

Types of addressing modes

Each instruction of a computer specifies an operation on certain data. There are various ways of specifying the address of the data to be operated on. These different ways of specifying data are called the addressing modes. The most common addressing modes are:
  • Immediate addressing mode
  • Direct addressing mode
  • Indirect addressing mode
  • Register addressing mode
  • Register indirect addressing mode
  • Displacement addressing mode
  • Stack addressing mode
Immediate Addressing:
This is the simplest form of addressing. Here, the operand is given in the instruction itself. This mode is used to define a constant or set initial values of variables. The advantage of this mode is that no memory reference other than the instruction fetch is required to obtain the operand. The disadvantage is that the size of the number is limited to the size of the address field, which in most instruction sets is small compared to the word length.

Direct Addressing:
In direct addressing mode, the effective address of the operand is given in the address field of the instruction. It requires one memory reference to read the operand from the given location and provides only a limited address space. The length of the address field is usually less than the word length.
Ex: MOVE P, R0 and ADD Q, R0, where P and Q are the addresses of the operands.

Indirect Addressing:
In indirect addressing mode, the address field of the instruction refers to the address of a word in memory, which in turn contains the full-length address of the operand. The advantage of this mode is that for a word length of N, an address space of 2^N can be addressed. The disadvantage is that instruction execution requires two memory references to fetch the operand. Multilevel or cascaded indirect addressing can also be used.

Register Addressing:
Register addressing mode is similar to direct addressing. The only difference is that the address field of the instruction refers to a register rather than a memory location. Only 3 or 4 bits are needed in the address field to reference 8 to 16 general-purpose registers. The advantage of register addressing is that only a small address field is needed in the instruction.

Register Indirect Addressing:
This mode is similar to indirect addressing. The address field of the instruction refers to a register. The register contains the effective address of the operand. This mode uses one memory reference to obtain the operand. The address space is limited to the width of the registers available to store the effective address.

Displacement Addressing:
In displacement addressing mode there are 3 types of addressing. They are:
1) Relative addressing
2) Base register addressing
3) Indexed addressing
Displacement addressing is a combination of direct addressing and register indirect addressing. The value contained in one address field (A) is used directly, and the other address field refers to a register whose contents are added to A to produce the effective address.

Stack Addressing:
A stack is a linear array of locations organized as a last-in, first-out (LIFO) list. The stack is a reserved block of locations; items are appended to or deleted from only the top of the stack. The stack pointer is a register which stores the address of the top-of-stack location. This mode of addressing is also known as implicit addressing.
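The modes above can be illustrated with a toy machine in Python; the memory contents and register values are invented for the example, and each helper returns the operand value that mode would fetch:

```python
# Toy machine illustrating the addressing modes described above.
memory = [0] * 16
registers = [0] * 4

def immediate(value):            # operand is in the instruction itself
    return value

def direct(addr):                # instruction holds the operand's address
    return memory[addr]

def indirect(addr):              # memory[addr] holds the operand's address
    return memory[memory[addr]]

def register_mode(r):            # operand is in a register
    return registers[r]

def register_indirect(r):        # register holds the operand's address
    return memory[registers[r]]

def displacement(r, disp):       # effective address = register + constant
    return memory[registers[r] + disp]

memory[5] = 99          # the operand
memory[2] = 5           # pointer to it, for indirect mode
registers[0] = 5        # operand's address, for register-indirect mode
registers[1] = 3        # base register, for displacement mode (3 + 2 = 5)
print(direct(5), indirect(2), register_indirect(0), displacement(1, 2))
# all four fetch the same operand, 99
```

Counting the `memory[...]` lookups in each helper also confirms the cost noted in the text: zero extra references for immediate, one for direct, two for indirect.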

Monday, January 31, 2011

Artificial Intelligence

 It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence.

Saturday, January 29, 2011

Asynchronous Decade Counters

The binary counters introduced in the previous post have 2 to the power n states.  But counters with fewer states than this are also possible.  They are designed to have a reduced number of states in their sequences, which are called truncated sequences.  These sequences are achieved by forcing the counter to recycle before going through all of its normal states. A common modulus for counters with truncated sequences is ten.  A counter with ten states in its sequence is called a decade counter.  The circuit below is an implementation of a decade counter.


Once the counter counts to ten (1010), all the flip-flops are cleared.  Notice that only Q1 and Q3 are used to decode the count of ten.  This is called partial decoding, as none of the other states (zero to nine) have both Q1 and Q3 HIGH at the same time. The sequence of the decade counter is shown in the table below:
Sequence
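The truncated sequence can be simulated in Python; this sketch models only the counting and partial-decoding reset, not the flip-flop circuit itself:

```python
# Toy asynchronous decade counter: binary count with a clear when the
# partial-decoding condition (Q1 AND Q3, i.e. 1010) is detected.
def decade_sequence(pulses):
    states, count = [], 0
    for _ in range(pulses):
        states.append(count)
        count += 1
        q1 = (count >> 1) & 1
        q3 = (count >> 3) & 1
        if q1 and q3:          # count reached 1010 (ten): clear all FFs
            count = 0
    return states

print(decade_sequence(12))
# [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1] -- ten states, then it recycles
```

The simulation also shows why partial decoding is safe here: ten (1010) is the first count in the sequence with both Q1 and Q3 HIGH.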

Asynchronous (Ripple) Counters

A two-bit asynchronous counter is shown below.  The external clock is connected to the clock input of the first flip-flop (FF0) only.  So, FF0 changes state at the falling edge of each clock pulse, but FF1 changes only when triggered by the falling edge of the Q output of FF0.  Because of the inherent propagation delay through a flip-flop, the transition of the input clock pulse and a transition of the Q output of FF0 can never occur at exactly the same time.  Therefore, the flip-flops cannot be triggered simultaneously, producing an asynchronous operation.

2-bit Asynchronous Counter

Note that for simplicity, the transitions of Q0, Q1 and CLK in the timing diagram above are shown as simultaneous even though this is an asynchronous counter.  Actually, there is some small delay between the CLK, Q0 and Q1 transitions.

Usually, all the CLEAR inputs are connected together, so that a single pulse can clear all the flip-flops before counting starts.  The clock pulse fed into FF0 is rippled through the other counters after propagation delays, like a ripple on water, hence the name Ripple Counter.
 
The 2-bit ripple counter circuit above has four different states, each one corresponding to a count value.  Similarly, a counter with n flip-flops can have 2 to the power n states.  The number of states in a counter is known as its mod (modulo) number.  Thus a 2-bit counter is a mod-4 counter.

The following is a three-bit asynchronous binary counter  and its timing diagram for one cycle.  It works exactly the same way as a two-bit asynchronous binary counter mentioned above, except it has eight states due to the third flip-flop.
3-bit Asynchronous Binary Counter

Friday, January 28, 2011

Thought for the day

           A man is not finished when he is defeated.
                    He is finished when he quits.”

Indirect Addressing Technique

An indirect address is one that serves as a reference to the location of the data, rather than being the data's direct location.
For example, if a programmer writes data to an indirect address, the data is stored at the location whose address is held at that indirect address, rather than at an address named directly in the instruction.


For more detail visit :- http://www.gurukpo.com/

Thursday, January 27, 2011

Direct Addressing technique

Direct addressing is so named because the address of the operand is given directly in the instruction, and the value is retrieved from that memory location.
For example:
MOV A,30h
This instruction will read the data out of Internal RAM address 30 (hexadecimal) and store it in the Accumulator.


For more detail visit :- http://www.gurukpo.com/

Thought for the day

“A man is but the product of his thoughts
what he thinks, he becomes.”

Tuesday, January 25, 2011

Thought for the day

“You can fool some of the people all of the time,
and all of the people some of the time,
but you can not fool all of the people all of the time.”

Simulation

Simulation is the imitation of some real thing or process. The act of simulating something generally entails representing certain key characteristics or behaviours of a selected physical or abstract system.


For more details visit   http://www.gurukpo.com/

Monday, January 24, 2011

Shift registers

Shift registers are a type of sequential logic circuit, mainly for storage of digital data.  They are a group of flip-flops connected in a chain so that the output from one flip-flop becomes the input of the next flip-flop.  Most of the registers possess no characteristic internal sequence of states.  All the flip-flops are driven by a common clock, and all are set or reset simultaneously.

There are various types of shift registers:

Serial In - Serial Out
Serial In - Parallel Out
Parallel In - Serial Out
Parallel In - Parallel Out
Shift Register Counters
                          (i) Ring Counters
                          (ii) Johnson Counters
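The serial in - serial out case can be sketched in Python; this models only the data movement on each clock pulse, not the clocked hardware:

```python
# Toy 4-bit serial-in/serial-out shift register: on each clock pulse,
# every stage takes the value of the stage before it.
def shift(stages, serial_in):
    out = stages[-1]                       # bit leaving at the serial output
    return [serial_in] + stages[:-1], out

reg = [0, 0, 0, 0]
for bit in [1, 0, 1, 1]:                   # shift a pattern in, one per clock
    reg, _ = shift(reg, bit)
print(reg)   # [1, 1, 0, 1] -- the last bit in sits at the input end
```

A ring counter is the same structure with the serial output fed back into the serial input, which is why it appears above under shift register counters.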

 

Addressing techniques

Addressing techniques refer to the way that data is referenced in an assembler instruction.
There are so many ways to do this  but the major types of addressing technique are:-

Immediate Addressing
Direct Addressing
Indirect Addressing


For more detail visit  http://www.gurukpo.com/

Friday, January 21, 2011

Secure Electronic Transaction

SET incorporates the following features:
  • Confidentiality of information
  • Integrity of data
  • Cardholder account authentication
  • Merchant authentication


For more details visit : http://www.gurukpo.com/

Wednesday, January 19, 2011

Counters

A counter is a sequential device that stores (and sometimes displays) the number of times a particular event or process has occurred, often in relationship to a clock signal.
Counters can be implemented quite easily using register-type circuits such as the flip-flop, and a wide variety of designs exist, e.g.:
  • Asynchronous (ripple) counter – changing state bits are used as clocks to subsequent state flip-flops
  • Synchronous counter – all state bits change under control of a single clock
  • Decade counter – counts through ten states per stage
  • Up–down counter – counts both up and down, under command of a control input
  • Ring counter – formed by a shift register with feedback connection in a ring
  • Johnson counter – a twisted ring counter
For more details visit : http://www.gurukpo.com/

Sunday, January 16, 2011

RS flip flop

In the R-S flip-flop, when both the S and R inputs are equal to 1, the output is in an indeterminate state.
This is the major drawback of the R-S flip-flop.
We can see this in table 7.2.
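The behaviour, including the indeterminate case, can be sketched as a next-state function; this is a simple model written for illustration, with `None` standing in for the forbidden state:

```python
# Behaviour of the R-S flip-flop: S=R=1 is the invalid input combination.
def rs_next(s, r, q):
    if s == 1 and r == 1:
        return None            # indeterminate / forbidden state
    if s == 1:
        return 1               # set
    if r == 1:
        return 0               # reset
    return q                   # S=R=0: hold the previous output

print(rs_next(1, 0, 0))  # 1     (set)
print(rs_next(0, 1, 1))  # 0     (reset)
print(rs_next(0, 0, 1))  # 1     (no change)
print(rs_next(1, 1, 0))  # None  (indeterminate -- the drawback)
```

The last case is exactly the drawback described above: with S = R = 1 the next output is not defined.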


For more details visit: www.gurukpo.com

Wednesday, January 12, 2011

Race around Condition

In a JK flip-flop, the output is fed back to the input, and therefore a change in the output causes a change in the inputs. Because of this, during the positive half of the clock pulse, if J and K are both high, the output toggles repeatedly. This condition is known as the race-around condition.

For more details visit http://www.gurukpo.com/

Tuesday, January 11, 2011

Multiplexer



A multiplexer, sometimes referred to as a "multiplexor" or simply "mux", is a device that selects between a number of input signals. In its simplest form, a multiplexer has 2 to the power n inputs, n selection lines, and 1 output.




In the above diagram, C0 to C3 are the inputs, A and B are the selection lines, and f(A,B,C) is the output.
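A 4-to-1 mux like the one in the diagram can be sketched in Python; the input values are invented for the example:

```python
# 4-to-1 multiplexer: selection lines A and B pick one of inputs C0..C3.
def mux4(c, a, b):
    return c[(a << 1) | b]     # A is the high select bit, B the low bit

inputs = [10, 20, 30, 40]      # C0..C3
print(mux4(inputs, 0, 0))  # 10  (A=0, B=0 selects C0)
print(mux4(inputs, 1, 1))  # 40  (A=1, B=1 selects C3)
```

With n = 2 selection lines the mux addresses 2 to the power 2 = 4 inputs, matching the general form described above.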

For more details visit http://www.gurukpo.com/