Quick Search Box

Tuesday, March 17, 2009

Data Retrieval

For meaningful data retrieval, the availability of data that has been compiled from various sources and put together in a usable form is an essential prerequisite. A large number of databases exist on the Internet. These have been put together by commercially run data providers as well as by individuals or groups with a special interest in particular areas. To retrieve such data, a user needs to know the addresses of the Internet servers that hold it. Then, depending on the depth of information being sought, different databases have to be searched and the required information compiled. The work involved is similar to a search process in a large library, except that this Internet "library" is immense, dynamic because of regular updating, and entirely electronic. While some skill is required for searching, the user is able to access, search and check a very large collection of servers.

Communication

Communication on the Internet can be online or offline. When several users connect to a single server or online service at the same time, they can communicate in an "online chat". This can be truly "many to many", as in a room full of people talking to each other on a peer-to-peer basis. Alternatively, users send e-mail to each other, which can be read by the receiver whenever he/she finds the time. This is offline communication, either "one to one" or "one to many". Similarly, users can get together electronically with those sharing common interests in "usenet" groups. Users post messages to be read and answered by others at their convenience; the replies can in turn be read and answered by others, and so on.

Applications of Internet

The Internet's applications are many and depend on the innovation of the user. The common applications of the Internet can be classified into three primary types, namely: communication, data retrieval and data publishing.

Surfing on the Internet:

Many of the servers on the Internet provide information, specialising in a topic or subject. There are a large number of such servers on the Internet. When a user is looking for some information, it may be necessary for him/her to look for it on more than one server. The WWW links the computers on the Internet, like a spider web, facilitating users to go from one computer to another directly. When a user keeps hopping from one computer to another, it is called "surfing".
The Internet facilitates "many to many" communication. Modern technology had, so far, made possible communication "one to many" as in broadcasting; "one to one" as in telephony; "a few to a few" as in telephone conferencing; and "many to one" as in polling. In addition, the WWW works with "multi-media": information can be accessed and transmitted as text, voice, sound and/or video. Graphics and interactive communication are two distinctive features of the Internet and the WWW.

Uniform Resource Locators

The format of a URL is: protocol/Internet address/Web page address.
The protocol that the Web uses for HTML-coded Web pages is the HyperText Transfer Protocol (HTTP). For example, consider the Web page address: http://pages.prodigy.com/kdallas/index.htm.
The http:// specifies that HTTP will be used to process information to and from the Web server; pages.prodigy.com is the Web server's Internet address; and kdallas/index.htm is the address of the page on the server. Index.htm could have been omitted, because this is the default for the main page within a directory (i.e., kdallas in this example). Within HTML, there is the capability to display information in lists or tables and to create forms for users to send information to someone else. In addition, HTML provides the capability to specify graphic files to be displayed. These and other features let a user create complex Web pages.
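As a quick illustration, the three parts described above can be pulled apart with Python's standard urllib.parse module (a small sketch, not part of the original example):

    from urllib.parse import urlparse

    url = "http://pages.prodigy.com/kdallas/index.htm"
    parts = urlparse(url)
    print(parts.scheme)   # 'http' - the protocol
    print(parts.netloc)   # 'pages.prodigy.com' - the Web server's Internet address
    print(parts.path)     # '/kdallas/index.htm' - the page address on the server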

Monday, March 16, 2009

World Wide Web

The World Wide Web or the Web is a component of the Internet that provides access to large amounts of information located on many different servers. The Web also provides access to many of the services available on the Internet.
The fundamental unit of the Web is the Web page. The Web page is a text document that contains links to other Web pages, graphic and audio files, and other Internet services such as file transfer protocol (FTP) and E-mail.
Web pages reside on servers that run special software that allows users to access Web pages and to activate links to other Web pages and to Internet services. Tens of thousands of Web servers are currently connected to the Internet. A user can directly access any Web page on one of these servers and then follow the links to other pages. This process creates a web of links around the world and, thus, the name World Wide Web.

ARPANET Objectives

♦ The network would continue to function even if one or many of the computers or connections in the network failed.
♦ The network had to be useable by vastly different hardware and software platforms.
♦ The network had to be able to automatically reroute traffic around non-functioning parts of the network.
♦ The network had to be a network of networks, rather than a network of computers (Hoffman, 1995).
It rapidly became evident that people wanted to share information between networks, and as a result, commercial networks were developed to meet consumer demand. The Internet became the umbrella name for this combination of networks in the late 1980s. Today's Internet is somewhat difficult to describe. Essentially, the Internet is a network of computers that offers access to information and people.

History and Background

The history of the Internet is closely tied to the U.S. Department of Defense. The military was a leader in the use of computer technology in the 1960’s and saw the need to create a network that could survive a single network computer’s malfunction. In the 1970’s, the Advanced Research Projects Agency (ARPA) developed a network that has evolved into today’s Internet. The network was named ARPANET and had many objectives that are still relevant today.
Access controls are a common form of control encountered in the boundary subsystem. They restrict the use of system resources to authorized users, limit the actions authorized users can take with those resources, and ensure that users obtain only authentic system resources.
Current systems are designed to allow users to share their resources. This is done by having a single system simulate the operations of several systems, where each simulated system works as a virtual machine, allowing more efficient use of resources by lowering the idle capacity of the real system. Here, a major design problem is to ensure that each virtual system operates as if it were totally unaware of the operations of the other virtual systems. Besides, increased scope exists for unintentional or deliberate damage to system resources through users' actions.

Threats and Vulnerabilities:

The threats to the security of systems assets can be broadly divided into nine categories:
(i) Fire,
(ii) Water,
(iii) Energy variations like voltage fluctuations, circuit breakage, etc.,
(iv) Structural damages,
(v) Pollution,
(vi) Intrusion, such as physical intrusion and eavesdropping, which can be eliminated/minimized by physical access controls, prevention of electromagnetic emission, and proper location/siting of the facilities,
(vii) Viruses and worms (discussed in detail later),
(viii) Misuse of software, data and services, which can be avoided by preparing an employees' code of conduct, and
(ix) Hackers, the expected loss from whose activities can be mitigated only by robust logical access controls.

Sunday, March 15, 2009

Level of Security:

The task of security administration in an organization is to conduct a security program, which is a series of ongoing, regular and periodic reviews of controls exercised to ensure the safeguarding of assets and the maintenance of data integrity. A security program involves the following eight steps:
(i) Preparing project plan for enforcing security,
(ii) Assets identification,
(iii) Assets valuation,
(iv) Threats identification,
(v) Threats probability of occurrence assessment,
(vi) Exposure analysis (see the sketch after this list),
(vii) Controls adjustment,
(viii) Report generation outlining the levels of security to be provided for individual systems, end users, etc.
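To make step (vi) concrete, exposure analysis is usually taken to mean estimating expected loss as the probability of a threat times the loss if it materializes. Here is a minimal Python sketch; the asset values and probabilities are invented for illustration:

    # Exposure analysis: expected loss = threat probability x asset value.
    # All figures below are illustrative assumptions, not real estimates.
    assets = {"file server": 500000, "customer database": 2000000}
    threats = {"fire": 0.01, "hacker intrusion": 0.05}

    for asset, value in assets.items():
        for threat, probability in threats.items():
            exposure = probability * value  # annualized expected loss
            print(f"{threat} -> {asset}: expected loss {exposure:,.0f}")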

Need for security:

The basic objective of providing network security is twofold:
(i) to safeguard assets and (ii) to ensure and maintain data integrity.
The boundary subsystem is an interface between the potential users of a system and the system itself. Controls in the boundary subsystem have the following purposes:
(i) to establish the system resources that the users desire to employ and (ii) to restrict the actions undertaken by the users who obtain the system resources to an authorized set.
There are two types of systems security. Physical security is implemented to protect the physical system assets of an organization, like personnel, hardware, facilities, supplies and documentation. Logical security is intended to control (i) malicious and non-malicious threats to physical security and (ii) malicious threats to logical security itself.

Business Continuity Planning (BCP)

Disaster events:
(i) They have the potential to significantly interrupt normal business processing,
(ii) They are often associated with natural disasters like earthquakes, floods, tornadoes, thunderstorms, fire, etc.,
(iii) Not all disruptions are disasters,
(iv) Disasters are disruptions causing the entire facility to be inoperative for a lengthy period of time (usually more than a day),
(v) Catastrophes are disruptions resulting from the destruction of the processing facility.
A Business Continuity Plan (BCP) is a documented description of the actions, resources and procedures to be followed before, during and after an event, so that the functions vital to continuing business operations are recovered and operational within an acceptable time frame.

Hot site:

An alternative facility that has the equipment and resources to recover the business functions affected by a disaster. Hot sites may vary in the type of facilities offered (such as data processing, communications, or any other critical business functions needing duplication). The location and size of the hot site must be proportional to the equipment and resources needed.

Warm site:

An alternate processing site that is only partially equipped, as compared to a hot site, which is fully equipped. It can be shared (sharing server equipment) or dedicated (own servers).

Saturday, March 14, 2009

Cold site:

An alternative facility that is devoid of any resources or equipment, except air conditioning and raised flooring. Equipment and resources must be installed in such a facility to duplicate the critical business functions of an organisation. Cold sites have many variations depending on their communication facilities.

Disaster recovery sites

Data centers need to be equipped with appropriate disaster recovery systems that minimize downtime for their customers. This means that every data center needs to invest in solutions such as power backup and remote management. Downtime can be minimized by having proper disaster recovery (DR) plans in place for mission-critical types of organisations, so as to be prepared when disaster strikes. Some of the larger IT organizations, which cannot tolerate much downtime, tend to set up their DR site as a hot site, where the primary and DR sites are kept in real-time synchronization at all times.

Challenges faced by the management

(i) Maintaining skilled staff and the extensive infrastructure needed for daily data center operations: A company needs staff who are expert at network management and have software/OS skills and hardware skills. The company has to employ a large number of such people, as they have to work in rotational shifts. The company also needs additional cover in case a person leaves.
(ii) Maximising uptime and performance: While establishing sufficient redundancy and maintaining watertight security, data centers have to maintain maximum uptime and system performance.
(iii) Technology selection: Another challenge that enterprise data centers face is technology selection, which is crucial to the operations of the facility, keeping business objectives in mind. Another problem is compensating for obsolescence.

Leveraging the best

In both enterprise/captive and public data centers, the systems and infrastructure need to be leveraged fully to maximize ROI. For companies that host their online applications with public data centers, in addition to the primary benefit of cost savings, perhaps the biggest advantage is the value-added services available. Enterprises usually prefer to select a service provider which can function as a one-stop solution provider and give them an end-to-end outsourcing experience.
Data centers need to strike a careful balance between utilization and spare infrastructure capacity. They need to be able to provide additional infrastructure to customers who wish to scale their existing contracts with little or no advance notice. Thus it is necessary that there be additional infrastructure at all times. This infrastructure could include bandwidth and connectivity, storage, server or security infrastructure (firewalls, etc.).

Constituents of a Data Centre

To keep equipment running reliably, even under the worst circumstances, the data center is built with the following carefully engineered support infrastructure:
• Network connectivity with various levels of physical (optical fibre and copper) and service (both last mile and international bandwidth) provider redundancy
• Dual DG (diesel generator) sets and dual UPS systems
• HVAC systems for temperature control
• Fire extinguishing systems
• Physical security systems: swipe card/ biometric entry systems, CCTV, guards and so on.
• Raised flooring
• Network equipment
• Network management software
• Multiple optical fibre connectivity
• Network security: segregating the public and private network, installing firewalls and intrusion detection systems (IDS)

Friday, March 13, 2009

System monitoring and support

The data center should provide system monitoring and support, so that you can be assured that the servers are being monitored round the clock.
• 24x7x365 network monitoring
• Proactive customer notification
• Notification to customers for pre-determined events
• Monitoring of power supply, precision air conditioning system, fire and smoke detection systems, water detection systems, generators and uninterruptible power supply (UPS) systems.
For a data center to be considered world-class, there can be no shortcuts in the commissioning of the facility. Connectivity, electrical supply and security are perhaps the three paramount requirements of any data center.

Security

Physical security and systems security are critical to operations. Thus, a data center should provide both types of security measures to ensure the safety of the equipment and data placed in it.
(a) Physical security: It can be achieved through
• Security guards
• Proximity card and PIN for door access
• Biometrics access and PIN for door access
• 24 x 365 CCTV surveillance and recording
(b) Data security: Data security within a data center should be addressed at multiple levels.
• Perimeter security: This is to manage both internal and external threats. It consists of firewalls, intrusion detection and content inspection; host security; and anti-virus, access-control and administrative tools.
• Access management: This is for both applications and operating systems that host these critical applications.

Electrical and power systems:

A data center should provide the highest power availability, with uninterruptible power supply (UPS) systems.

Availability of Data

The goal of a data center is to maximize the availability of data, and to minimize potential downtime. To do this, redundancy has to be built in to all the mission critical infrastructure of the data center, such as connectivity, electrical supply, security and surveillance, air conditioning and fire suppression.

Data Security

Another issue critical for data centers is the need to ensure maximum data security and 100 per cent availability. Data centers have to be protected against intruders by controlling access to the facility and by video surveillance. They should be able to withstand natural disasters and calamities, like fire and power failures. Recovery sites must be well maintained as it is here that everything in the data center is replicated for failure recovery.

Wednesday, March 11, 2009

Size

Data centers are characterized foremost by the size of their operations. A financially viable data center could contain from a hundred to several thousand servers. This would require a minimum area of around 5,000 to 30,000 square metres. Apart from this, the physical structure containing a data center should be able to withstand the sheer weight of the servers to be installed inside. Thus, there is a need for high quality construction.

Storage on demand:

• It provides the back-end infrastructure as well as the expertise, best practices and proven processes needed to give a robust, easily managed and cost-effective storage strategy.
• It provides a data storage infrastructure that supports your ability to access information at any given moment, one that gives the security, reliability and availability needed to meet company demands.

What can they do?

(i) Database monitoring:
• This is done via a database agent, which enables the high availability of the database through comprehensive automated management.
(ii) Web monitoring:
• This is to assess and monitor website performance, availability, integrity and responsiveness from a site visitor's perspective.
• It also reports on HTTP, FTP service status, monitors URL availability and round-trip response times, and verifies Web content accuracy and changes.
(iii) Backup and restore:
• It provides centralized multi-system management capabilities.
• It is also a comprehensive integrated management solution for enterprise data storage using specialized backup agents for the operating system, database, open files and application.

Which sectors use them?

Any large volume of data that needs to be centralized, monitored and managed needs a data center. Of course, a data center is not mandatory for all organizations that have embraced IT; it depends on the size and criticality of the data. Data centers are extremely capital-intensive facilities. Commissioning costs run into millions of dollars, and the operational costs involved in maintaining levels of redundant connectivity, hardware and human resources can be stratospheric. The percentage of enterprises for which it makes business sense to commission and operate an enterprise data center is, consequently, extremely small.

Public data centers:

A public data center (also called an Internet data center) provides services ranging from equipment colocation to managed Web hosting. Clients typically access their data and applications via the Internet.
Typically, data centers can be classified in tiers, with tier 1 being the most basic and inexpensive, and tier 4 being the most robust and costly. The more 'mission critical' an application is, the more redundancy, robustness and security are required of the data center. A tier 1 data center does not necessarily need to have redundant power and cooling infrastructures. It only needs a lock for security and can tolerate up to 28.8 hours of downtime per year.
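The downtime figure maps directly to an availability percentage; a short worked calculation (not from the original text) shows that 28.8 hours a year is roughly 99.67 per cent availability:

    # Converting tolerated downtime into an availability percentage.
    hours_per_year = 365 * 24                  # 8760 hours
    downtime = 28.8                            # tier 1 tolerance quoted above
    availability = (1 - downtime / hours_per_year) * 100
    print(f"{availability:.2f}% availability") # about 99.67%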

Tuesday, March 10, 2009

Private Data Centre:

A private data center (also called an enterprise data center) is managed by the organization's own IT department, and it provides the applications, storage, web-hosting and e-business functions needed to maintain full operations. If an organization prefers to outsource these IT functions, it turns to a public data center.

WHAT IS A DATA CENTRE ?

A data center is a centralized repository for the storage, management and dissemination of data and information. Data centers can be defined as highly secure, fault-resistant facilities hosting customer equipment that connects to telecommunications networks. Often referred to as an Internet hotel, server farm, data farm, data warehouse, corporate data center, Internet service provider (ISP) or wireless application service provider (WASP), the purpose of a data center is to provide space and bandwidth connectivity for servers in a reliable, secure and scalable environment. These data centers are also referred to as public data centers because they are open to customers.

Change management:

Of course it's easier - and faster - to exchange a component on the server than to furnish numerous PCs with new program versions. To come back to our VAT example: it is quite easy to run the new version of a tax object in such a way that clients automatically work with the version valid from the exact date on which it has to apply. It is, however, compulsory that interfaces remain stable and that old client versions stay compatible. In addition, such components require a high standard of quality control, because low-quality components can, at worst, endanger the functioning of a whole set of client applications. At best, they will still irritate the systems operator.

The advantages of 3-tier architecture:

As previously mentioned, 3-tier architecture solves a number of problems that are inherent to 2-tier architectures. Naturally it also causes new problems, but these are outweighed by the advantages.
Clear separation of user-interface control and data presentation from application logic: Through this separation, more clients are able to access a wide variety of server applications. The two main advantages for client applications are clear: quicker development through the reuse of pre-built business-logic components, and a shorter test phase, because the server components have already been tested.
Dynamic load balancing: If bottlenecks in terms of performance occur, the server process can be moved to other servers at runtime.
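Runtime migration of server processes is a platform feature, but the underlying idea of spreading requests across servers can be sketched with a much cruder policy, simple round-robin dispatch; the server names here are invented:

    # Round-robin dispatch: each incoming request goes to the next server
    # in the pool (a toy stand-in for real dynamic load balancing).
    from itertools import cycle

    servers = cycle(["app-server-1", "app-server-2", "app-server-3"])
    for request_id in range(5):
        print(f"request {request_id} -> {next(servers)}")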

Data-server-tier:

This tier is responsible for data storage. Besides the widespread relational database systems, existing legacy systems databases are often reused here.
It is important to note that the boundaries between tiers are logical. It is quite possible to run all three tiers on one and the same (physical) machine. What matters most is that the system is neatly structured, and that there is a well-planned definition of the software boundaries between the different tiers.

Monday, March 9, 2009

Application-server-tier:

This tier is new, i.e. it isn’t present in 2-tier architecture in this explicit form. Business-objects that implement the business rules "live" here, and are available to the client-tier. This level now forms the central key to solving 2-tier problems. This tier protects the data from direct access by the clients.
Object-oriented analysis (OOA), on which many books have been written, aims at this tier: to record and abstract business processes in business objects. This makes it possible to map out the application-server tier directly from the CASE tools that support OOA.

Client-tier:

It is responsible for the presentation of data, receiving user events and controlling the user interface. The actual business logic (e.g. calculating value added tax) has been moved to an application server. Today, Java applets offer an alternative to traditionally written PC applications.

What is 3-tier and n-tier architecture?

From here on we will only refer to 3-tier architecture, that is to say, at least 3-tier architecture.
The following diagram shows a simplified form of the reference architecture, though in principle, all possibilities are illustrated.

Why 3-tier?

Unfortunately the 2-tier model shows striking weaknesses that make the development and maintenance of such applications much more expensive.
The complete development burden accumulates on the PC. The PC processes and presents information, which leads to monolithic applications that are expensive to maintain.
In a 2-tier architecture, business logic is implemented on the PC. Even though the business logic never makes direct use of the windowing system, programmers have to be trained for the complex API under Windows.
Windows 3.x and Mac systems have tough resource restrictions. For this reason application programmers also have to be well trained in systems technology, so that they can optimize scarce resources.

3-Tier and N-Tier Architecture

Through the appearance of Local-Area-Networks, PCs came out of their isolation, and were soon not only being connected mutually but also to servers. Client/Server-computing was born.
Servers today are mainly file and database servers; application servers are the exception. However, database-servers only offer data on the server; consequently the application intelligence must be implemented on the PC (client) Since there are only the architecturally tiered data server and client, this is called 2-tier architecture. This model is still predominant today, and is actually the opposite of its popular terminal based predecessor that had its entire intelligence on the host system.

Network-Node Intrusion Detection

Network-node intrusion detection was developed to work around the inherent flaws in traditional NID. Network-node pulls the packet-intercepting technology off of the wire and puts it on the host. With NNID, the "packet-sniffer" is positioned in such a way that it captures packets after they reach their final target, the destination host. The packet is then analyzed just as if it were traveling along the network through a conventional "packet-sniffer." This scheme came from a HID-centric assumption that each critical host would already be taking advantage of host-based technology. In this approach, network-node is simply another module that can attach to the HID agent. Network node's major disadvantage is that it only evaluates packets addressed to the host on which it resides.

Hybrid Intrusion Detection

Hybrid intrusion detection systems offer management of and alert notification from both network and host-based intrusion detection devices. Hybrid solutions provide the logical complement to NID and HID - central intrusion detection management.

Host-based Intrusion Detection

Host-based intrusion detection systems are designed to monitor, detect, and respond to user and system activity and attacks on a given host. Some more robust tools also offer audit policy management and centralization, supply data forensics, statistical analysis and evidentiary support, and in certain instances provide some measure of access control. The difference between host-based and network-based intrusion detection is that NID deals with data transmitted from host to host while HID is concerned with what occurs on the hosts themselves.

Network Intrusion Detection

Network intrusion detection deals with information passing on the wire between hosts. Typically referred to as "packet-sniffers," network intrusion detection devices intercept packets traveling along various communication mediums and protocols, usually TCP/IP. Once captured, the packets are analyzed in a number of different ways. Some NID devices will simply compare the packet to a signature database consisting of known attacks and malicious packet "fingerprints", while others will look for anomalous packet activity that might indicate malicious behavior. In either case, network intrusion detection should be regarded primarily as a perimeter defense.
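Signature-based matching of this kind can be sketched in a few lines of Python; the signatures and the sample packet below are invented for illustration, and a real NID engine is of course far more sophisticated:

    # Compare a captured payload against known malicious "fingerprints".
    SIGNATURES = {
        b"/etc/passwd": "path traversal attempt",
        b"' OR '1'='1": "SQL injection probe",
    }

    def inspect(payload):
        # Return the names of any known attack signatures found in the packet.
        return [name for sig, name in SIGNATURES.items() if sig in payload]

    print(inspect(b"GET /../../etc/passwd HTTP/1.0"))  # ['path traversal attempt']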

IDS Components

The goal of intrusion detection is to monitor network assets to detect anomalous behavior and misuse. This concept has been around for nearly twenty years but only recently has it seen a dramatic rise in popularity and incorporation into the overall information security infrastructure.

Friday, March 6, 2009

Proxy server:

A proxy server is designed to restrict access to information on the Internet. If, for example, you do not want your users to have access to pornographic materials, a proxy server can be configured to refuse to pass the request along to the intended Internet server.
A proxy server operates on a list of rules given to it by a system administrator. Some proxy software uses lists of specific forbidden sites, while other proxy software examines the content of a page before it is served to the requester. If certain keywords are found in the requested page, access to it is denied by the proxy server.
Technologically, there’s no substantial difference between a caching server and a proxy server. The difference comes in the desired outcome of such a server’s use.
If you wish to reduce the overall amount of traffic exchanged between your network and the Internet, a caching server may be your best bet. On the other hand, if you wish to restrict or prohibit the flow of certain types of information to your network, a proxy server will allow you to do that. There are several different packages that will allow a System Administrator to set up a caching or proxy server. Additionally, you can buy any of a number of turn-key solutions to provide these services.
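The two rule styles described above, a forbidden-site list and keyword inspection of page content, can be sketched in Python as follows; the rules themselves are invented examples:

    # A proxy's decision logic: check the site list, then the page content.
    FORBIDDEN_SITES = {"badsite.example.com"}
    FORBIDDEN_WORDS = {"forbidden-keyword"}

    def allow_request(host, page_text):
        if host in FORBIDDEN_SITES:          # rule 1: blocked site list
            return False
        if any(word in page_text for word in FORBIDDEN_WORDS):
            return False                     # rule 2: content inspection
        return True

    print(allow_request("badsite.example.com", ""))  # False - site is blocked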

Caching server:

A caching server is employed when you want to restrict your number of accesses to the Internet. There are many reasons to consider doing this. Basically, a caching server sits between the client computer and the server that would normally fulfill a client’s request. Once the client’s request is sent, it is intercepted by the caching server. The caching server maintains a library of files that have been requested in the recent past by users on the network. If the caching server has the requested information in its cache, the server returns the information without going out to the Internet.
Storing often-used information locally is a good way to reduce overall traffic to and from the Internet. A caching server does not restrict information flow. Instead, it makes a copy of requested information, so that frequently requested items can be served locally, instead of from the original Internet source. Caching servers can also be connected in a hierarchy so if the local cache does not have the information, it can pass the request to nearby caching servers that might also contain the desired files.
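The lookup logic of a caching server is simple enough to sketch; this toy version (with a stand-in for the real network fetch) shows the hit/miss behaviour described above:

    # Serve from the local library on a hit; fetch and remember on a miss.
    cache = {}

    def fetch_from_internet(url):
        # Stand-in for a real request to the origin server.
        return b"page body for " + url.encode()

    def get(url):
        if url in cache:                     # cache hit: no Internet traffic
            return cache[url]
        body = fetch_from_internet(url)      # cache miss: go out once...
        cache[url] = body                    # ...then keep a local copy
        return body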

Chat server:

Some organizations choose to run a server that will allow multiple users to have "real-time" discussions, called "chats", on the Internet. Some chat groups are moderated; most, however, are unmoderated public discussions. Further, most chat servers allow the creation of "private" chat rooms where participants can "meet" for private discussions. You can participate in chats on other servers without running a chat server yourself.
The popularity of chat rooms has grown dramatically over the past few years on the Internet; however, the ability to talk in small groups on the Internet is not new. "Chat" is a graphical form of an Internet service called IRC, or Internet Relay Chat. IRC was a replacement for a UNIX command called "talk." Using talk, and even IRC, can be cumbersome. Chat clients, on the other hand, are available for all platforms and are graphical in nature, opening up their utility to the majority of Internet users.

News server:

Usenet News is a worldwide discussion system consisting of thousands of newsgroups organized into hierarchies by subject. Users read and post articles to these newsgroups using client software. The "news" is held for distribution and access on the news server. Because newsgroups tend to generate large amounts of Internet traffic, you may wish to consider the method by which you intend to receive Usenet News.
There are two ways to accept Usenet News: as a "push" or "pull" feed. With a "push" feed, news articles are "pushed" onto your news server, whether or not your users read those articles. With a "pull" feed, your news server has all of the headers for the collection of Usenet News articles, but does not retrieve the article itself unless it is specifically requested by a user.

FTP server:

File Transfer Protocol (FTP) is an Internet-wide standard for the distribution of files from one computer to another. The computer that stores files and makes them available to others is the server. Client software is used to retrieve the files from the server. The two most common ways to transfer files are anonymous FTP, where anyone can retrieve files from or place files on a specific site, and logged file transfers, where an individual must log in to the FTP server with an ID and password.
For example, Merit Network, Inc. makes network configuration files such as Domain Name Registration templates available for anonymous FTP on ftp.merit.edu.
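An anonymous FTP session can be driven from Python's standard ftplib module; this sketch uses the host mentioned above, which may no longer offer anonymous access:

    from ftplib import FTP

    ftp = FTP("ftp.merit.edu")   # connect to the FTP server
    ftp.login()                  # ftplib logs in anonymously by default
    ftp.retrlines("LIST")        # list the files in the current directory
    ftp.quit()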

Thursday, March 5, 2009

Web server:

The World Wide Web (WWW) is a very popular Internet source of information. Web browsers present information to the user in hypertext format. When the user selects a word or phrase that a Web page’s author has established as a hypertext link, the Web browser queries another Web server or file to move to another Web page related to the link. For example, "Netscape" is a Web browser which queries Web servers on the Internet. Which Web server Netscape queries depends upon which hypertext link the user selects.

Gopher server:

Gopher is an Internet application that uses multiple Gopher servers to locate images, applications, and files stored on various servers on the Internet. Gopher offers menu choices to prompt users for information that interests them, and then establishes the necessary network connections to obtain the resource. For example, "Veronica" is a Gopher application that searches databases of the file contents of worldwide Gopher servers to help locate Gopher resources.

DNS server:

Domain Name Service is an Internet-wide distributed database system that documents and distributes network-specific information, such as the IP address associated with a host name, and vice versa. The host that stores this database is a name server. The library routines that query the name server, interpret the response and return the information to the program that requested it are resolvers.
For example: to determine the location of a remote computer on the Internet, communications software applications (such as NCSA Telnet) use resolver library routines to query DNS for the remote computer's IP address.
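The resolver routines are exposed directly in Python's standard socket module; here is a small sketch of a forward and a reverse lookup (the host name is just an example):

    import socket

    ip = socket.gethostbyname("www.example.com")  # forward lookup via the resolver
    print(ip)
    try:
        print(socket.gethostbyaddr(ip)[0])        # reverse lookup (needs a PTR record)
    except socket.herror:
        print("no reverse mapping registered")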

Mail server:

A mail server is the most efficient way to receive and store electronic mail messages for a community of users. A central mail server runs 24 hours a day. The mail server can also provide a global email directory for all community and school users, as well as email gateway and relay services for all other mail servers in the district. In such a scenario, user email boxes would be maintained on a separate email server located at each school.

File server:

A file server, one of the simplest servers, manages requests from clients for files stored on the server’s local disk. A central file server permits groups and users to share and access data in multiple ways. Central file servers are backed up regularly and administrators may implement disk space quotas for each user or group of users.
For example: Using a certain software client, PC users are able to "mount" remote UNIX server file systems. As a result, the remote network file system appears to be a local hard drive on the user's PC.

Wednesday, March 4, 2009

What Does Transaction Server Do?

To understand what MTS is and what it does, we need to first make one very important point clear. This software should really have been named Microsoft Component Server, not Microsoft Transaction Server. MTS is all about managing the way applications use components, and not just about managing transactions. Yes, transactions are a big part of many applications we write and MTS can help to manage these—but MTS also provides a very useful service for applications that don’t use transactions at all. To be able to define MTS accurately, we first need to understand what goes on inside it in the most basic way.

Transaction Servers:

MTS or Microsoft Transaction Server is an integral part of Windows NT, and is installed by default as part of the operating system in NT5. It is a service in much the same way as Internet Information Server or the File and Print services that we now take for granted. In other words, it is part of the system that is available in the background whenever one of our applications requires it.
Control and configuration of MTS is via either a snap-in to the Microsoft Management Console or the HTML administration pages that are included with MTS. This is very similar to the interface provided for Internet Information Server 4, and gives an integrated management function that is useful when building and setting up distributed applications.

Print Servers:

Print servers provide shared access to printers. Most LAN operating systems provide print service. Print service can run on a file server or on one or more separate print server machines. Non-file server print servers can be dedicated to the task of print service, or they can be non-dedicated workstations.
The disadvantages of using workstations as print servers are similar to the disadvantages of using file servers as workstations. The workstation may run a little slower when print services are being used, a user could shut down the server without warning, or an application could lock up the server. The consequences of a lock-up or shut-down on a print server, however, are usually less severe than the consequences of locking up a file server. The time involved in dealing with these problems, however, can be costly.

Active Application Server:

This type of server supports and provides a rich environment for server-side logic expressed as objects, rules and components. These types of servers are most suitable for dealing with Web-based e-commerce and decision processing.

Component Servers:

The main purpose of these servers is to provide database access and transaction processing services to software components, including DLLs, CORBA components, and JavaBeans. First, they provide an environment for server-side components. Second, they provide the components with access to databases and other services. These types of servers are stateless. Examples include MTS (which provides an interface for DLLs), Sybase Jaguar, and IBM Component Broker.

Tuesday, March 3, 2009

Web Information Servers:

This type of server employs HTML templates and scripts to generate pages that incorporate values from the database. These are stateless servers. Such servers include Netscape Server, HAHT, Allaire, Sybase, and SilverStream.

Management Console:

A single-point graphical management console for remotely monitoring clients and server clusters.

Load Balancing:

The capability to send requests to different servers depending on the load and availability of each server.

Fault Tolerance:

The ability of the application server to run with no single point of failure, defining policies for recovery and fail-over in case of the failure of one object or a group of objects.

Component Management:

Provides the manager with tools for handling all the components and runtime services, like session management and synchronous/asynchronous client notifications, as well as for executing server business logic.

Monday, March 2, 2009

Application Servers

An application server is a server program that resides in the server (computer) and provides the business logic for the application program. The server can be part of the network, more precisely part of a distributed network. The server program provides its services to client programs that reside either on the same computer or on other computers connected through the network.
Application servers are mainly used in Web-based applications that have a 3-tier architecture.

Database Servers

Database management systems (DBMS) can be divided into three primary components: development tools, user interface, and database engine. The database engine does all the selecting, sorting, and updating. Currently, most DBMS combine the interface and engine on each user's computer. Database servers split these two functions, allowing the user interface software to run on each user's PC (the client), and running the database engine on a separate machine (the database server) shared by all users. This approach can increase database performance as well as overall LAN performance, because only selected records are transmitted to the user's PC, not large blocks of files. However, because the database engine must handle multiple requests, the database server itself can become a bottleneck when a large number of requests are pending.
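The division of labour is visible even in a toy example. Python's built-in sqlite3 is an embedded engine rather than a true database server, so this is only a sketch of the principle: the engine evaluates the WHERE clause, and the interface receives just the matching rows, never the whole table. With a real database server, the same style of calls would travel over the LAN:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (order_id INTEGER, total REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)",
                     [(1, 250.0), (2, 1800.0), (3, 4200.0)])
    # The engine does the selecting; only matching records come back.
    for row in conn.execute("SELECT order_id, total FROM orders WHERE total > 1000"):
        print(row)    # (2, 1800.0) and (3, 4200.0)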

Network

The network hardware is the cabling, the communication cords, and the devices that link the server and the clients. The communication and data flow over the network is managed and maintained by network software. Network technology is not well understood by business people and end users, since it involves wiring in the wall and junction boxes that are usually in a closet.

Fat-client or Fat-server

Fat-client and fat-server are popular terms in the computer literature. These terms serve as vivid descriptions of the type of client/server system in place. In a fat-client system, more of the processing takes place on the client, as with a file server or database server. Fat-servers place more emphasis on the server and try to minimize the processing done by clients. Examples of fat-servers are transaction, groupware, and web servers. It is also common to hear fat-clients referred to as "2-tier" systems and fat-servers referred to as "3-tier" systems.

Middleware

The network system implemented within client/server technology is commonly called middleware by the computer industry. Middleware is all the distributed software needed to allow clients and servers to interact. General middleware allows for communication, directory services, queuing, distributed file sharing, and printing. Service-specific middleware includes software like ODBC. Middleware is typically composed of four layers: Service, Back-end Processing, Network Operating System, and Transport Stacks. The Service layer carries coded instructions and data from software applications to the Back-end Processing layer, which encapsulates network-routing instructions. Next, the Network Operating System adds additional instructions to ensure that the Transport layer transfers data packets to the designated receiver efficiently and correctly. During the early stages of middleware development, the transfer method was both slow and unreliable.

Sunday, March 1, 2009

Server

Servers await requests from the client and regulate access to shared resources. File servers make it possible to share files across a network by maintaining a shared library of documents, data, and images. Database servers use their processing power to execute Structured Query Language (SQL) requests from clients. Transaction servers execute a series of SQL commands (an online transaction-processing, or OLTP, program), as opposed to database servers, which respond to a single client command. Web servers allow clients and servers to communicate with a universal language called HTTP.

Client

Clients, which are typically PCs, are the "users" of the services offered by the servers described above. There are basically three types of clients. Non-Graphical User Interface (GUI) clients require a minimum amount of human interaction; non-GUI clients include ATMs, cell phones, fax machines, and robots. Second, GUI clients are human-interaction models usually involving object/action models, like the pull-down menus in Windows 3.x. Object-Oriented User Interface (OOUI) clients take GUI clients even further with expanded visual formats, multiple workplaces, and object interaction rather than application interaction. Windows 95 is a common OOUI client.

Characteristics of Client / Server Technology

The following characteristics reflect the key features of a client/server system:
1. Client/server architecture consists of a client process and a server process that can be distinguished from each other.
2. The client portion and the server portions can operate on separate computer platforms.
3. Either the client platform or the server platform can be upgraded without having to upgrade the other platform.
4. The server is able to service multiple clients concurrently; in some client/server systems, clients can access multiple servers.
5. The client/server system includes some sort of networking capability.
6. A significant portion of the application logic resides at the client end.

Benefits of the Client /Server Technology

Client/server systems have been hailed as bringing tremendous benefits to new users, especially users of mainframe systems. Consequently, many businesses are currently in the process of changing, or will soon change, from mainframe (or PC) systems to client/server systems. Client/server has become the IT solution of choice among the country's largest corporations. In fact, the whole transition process that a change to client/server invokes can benefit a company's long-run strategy.
• People in the field of information systems can use client/server computing to make their jobs easier.
• Reduced total cost of ownership.
• Increased productivity.
• End-user productivity.

Implementation examples of client / server technology

• Online banking applications
• Internal call centre applications
• Applications for end-users that are stored on the server
• E-commerce online shopping pages
• Intranet applications
• Financial and inventory applications based on client/server technology
• Telecommunication applications based on Internet technologies

Saturday, February 28, 2009

Why Change to Client/Server Computing

Client/server is described as a 'cost-reduction' technology. These technologies allow organizations to do what they currently do with computers much less expensively. They include client/server computing, open systems, fourth-generation languages, and relational databases. Cost reductions are usually quoted as the chief reason for changing to client/server. However, the list of reasons has grown to include improved control, increased data integrity and security, increased performance, and better connectivity. The key business issues driving adoption are:
• Improving the flow of management information
• Better service to end-user departments
• Lowering IT costs

What is Client/Server?

Client/Server (C/S) refers to computing technologies in which the hardware and software components (i.e., clients and servers) are distributed across a network. The client/server software architecture is a versatile, message-based and modular infrastructure that is intended to improve usability, flexibility, interoperability and scalability as compared to centralised, mainframe, time-sharing computing. The technology includes both the traditional database-oriented C/S technology and more recent general distributed computing technologies. The use of LANs has made the client/server model even more attractive to organisations.

Need for Client Server Model

Client/server technology, on the other hand, intelligently divides the processing work between the server and the workstation. The server handles all the global tasks while the workstation handles all the local tasks. The server sends only those records to the workstation that are needed to satisfy the information request, so network traffic is significantly reduced. The result is a system that is fast, secure, reliable, efficient, inexpensive, and easy to use.
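The request/response division described above can be sketched with Python's standard socket module; this bare-bones example runs the server in a background thread and has it return only the "records" the client asked for:

    import socket, threading

    srv = socket.create_server(("127.0.0.1", 5050))    # Python 3.8+

    def serve_one():
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024)                  # the workstation's request
            conn.sendall(b"records for " + request)    # reply with just what was asked

    threading.Thread(target=serve_one, daemon=True).start()

    with socket.create_connection(("127.0.0.1", 5050)) as client:
        client.sendall(b"customer 42")
        print(client.recv(1024))                       # b'records for customer 42'
    srv.close()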

File sharing architecture

The original PC networks were based on file sharing architectures, where the server downloads files from the shared location to the desktop environment. The requested user job is then run in the desktop environment.
The traditional file server architecture has many disadvantages, especially with the advent of less expensive but more powerful computer hardware. The server directs the data while the workstation processes the directed data. Essentially this is a dumb server-smart workstation relationship. The server will send the entire file over the network even though the workstation only requires a few records from the file to satisfy the information request. In addition, an easy-to-use graphical user interface (GUI) added to this model simply adds to the network traffic, increasing response times and limiting customer service.

Personal Computers

With the introduction of the PC and its operating system, independent computing workstations quickly became common. Disconnected, independent personal computing models allow processing loads to be removed from a central computer. Besides not being able to share data, disconnected personal workstation users cannot share the expensive resources that mainframe system users can share: disk drives, printers, modems, and other peripheral computing devices. The data (and peripheral) sharing problems of independent PCs and workstations quickly led to the birth of the network/file server computing model, which links PCs and workstations together in a local area network (LAN), so they can share data and peripherals.

Mainframe architecture

With mainframe software architectures, all intelligence is within the central host computer (processor). Users interact with the host through a dumb terminal that captures keystrokes and sends that information to the host. Centralized host-based computing models allow many users to share a single computer's applications, databases, and peripherals. Mainframe software architectures are not tied to a hardware platform; user interaction can be done using PCs and UNIX workstations.
A limitation of mainframe software architectures is that they do not easily support graphical user interfaces or access to multiple databases from geographically dispersed sites. Mainframes cost literally thousands of times more than PCs, but they certainly don't do thousands of times more work.

CLIENT / SERVER TECHNOLOGY

Recently, many organisations have been adopting a form of distributed processing called client/server computing. It can be defined as "a form of shared, or distributed, computing in which tasks and computing power are split between servers and clients (usually workstations or personal computers)". Servers store and process data common to users across the enterprise; these data can then be accessed by client systems. In this section we will discuss various aspects of client/server technology. But before that, let us first look at the characteristics of the traditional computing models and the various limitations that led to client/server computing.

Fiber Optic Cables

Many organisations are replacing the older copper wire cables in their networks with fiber optic cables. Fiber optic cables use light as the communications medium. To create the on-and-off bit code needed by computers, the light is rapidly turned on and off on the channel. Fiber optic channels are lightweight, can handle many times the telephone conversations or volumes of data handled by copper wire cabling, and can be installed in environments hostile to copper wire, such as wet areas or areas subject to a great deal of electromagnetic interference. Data is also more secure in fiber optic networks.

Coaxial cable

It is a well-established and long-used cabling system for terminals and computers. This cabling comes in a variety of sizes to suit different purposes. Coaxial cable is commonly used to connect computers and terminals in a local area such as an office, floor, building or campus.

Twisted-Pair wiring

Twisted-pair wiring or cabling is the same type of cabling system as is used for home and office telephone systems. It is inexpensive and easy to install. Technological improvements over the last few years have increased the capacity of twisted-pair wires so that they can now handle data communications at speeds up to 10 Mbps (millions of bits per second) over limited distances.

Wednesday, February 25, 2009

Network Cabling

Once the server, workstations and network interface cards are in place, network cabling is used to connect everything together. The most popular types of network cable are shielded twisted-pair, coaxial and fibre optic cabling, as discussed below. Please note that the cables and cards chosen should match each other.

Network interface card

As discussed earlier, every device connected to a LAN needs a network interface card (NIC) to plug into the LAN. For example, a PC may have an Ethernet card installed in it to connect to an Ethernet LAN.

Workstations

Workstations are attached to the server through the network interface card and the cabling. The dumb terminals used on mainframes and minicomputer systems are not supported on networks because they are not capable of processing on their own. Workstations are normally intelligent systems, such as the IBM PC. The concept of distributed processing relies on the fact that personal computers attached to the networks perform their own processing after loading programs and data from the server. Hence, a workstation is called an Active Device on the network. After processing, files are stored back on the server where they can be used by other workstations.
The workstation can also be a diskless PC, wherein the operating system is loaded from the file server. In short, a PC + a LAN card = a workstation.

The network operating system

It is loaded onto the server's hard disk along with the system management tools and user utilities. When the system is restarted, NetWare boots and the server comes under its control. At this point, DOS or Windows is no longer valid on the network drive, since it is running the network operating system, NetWare; however, most DOS/Windows programs can be run as normal. No processing is done on the server, and hence it is called a passive device. The choice of a dedicated or non-dedicated network server is basically a trade-off between the cost, performance, and operation of a network.

Components of a LAN

A typical local area network running under Novell NetWare has five basic components that make up the network. These are:
- File Servers
- Network operating system
- Personal Computers, Workstations or Nodes
- Network Interface Cards
- Cabling
(i) File Server - A network file server is a computer system used for the purpose of managing the file system, servicing the network printers, handling network communications, and other functions. A server may be dedicated, in which case all of its processing power is allocated to network functions, or it may be non-dedicated, which means that a part of the server's functions may be allocated to use as a workstation or DOS-based system.

Tuesday, February 24, 2009

LAN Requirements

There are certain features that every LAN should have, and users would do well to take note of these when they decide to implement their own network. These features essentially involve hardware and software components. Broadly, these are:
(i) Compatibility - A local area network operating system must provide a layer of compatibility at the software level so that software can be easily written and widely distributed. A LAN operating system must be flexible, which means that it must support a large variety of hardware. Novell NetWare is a network operating system that provides these features and has today become an industry standard.
(ii) Internetworking - Bridging of different LANs together is one of the most important requirements of any LAN. Users should be able to access resources from all workstations on the bridge network in a transparent way; no special commands should be required to cross the bridge. A network operating system must be hardware independent, providing the same user interface irrespective of the hardware.

Why Lans ?

One of the original reasons for users going in for LANs was that such a distributed environment gave them the ability to have their own independent processing stations while sharing expensive computer resources like disk files, printers and plotters. Today, however, more critical reasons have emerged for users to increasingly move towards LAN solutions. These include :
(i) Security - Security for programs and data can be achieved using servers that are locked through both software and by physical means. Diskless nodes also offer security by not allowing users to download important data on floppies or upload unwanted software or virus.
(ii) Expanded PC usage through inexpensive workstation - Once a LAN has been set up, it actually costs less to automate additional employees through diskless PCs. Existing PCs can be easily converted into nodes by adding network interface cards.
(iii) Distributed processing - Many companies operate as if they had a distributed system in place. If numerous PCs are installed around the office, these machines represent the basic platform for a LAN with inter-user communication and information exchange.

The Concept

While personal computers were becoming more powerful through the use of advanced processors and more sophisticated software, users of mainframes and minicomputers began to break with the tradition of having a centralized information systems division. PCs were easy to use and provided a better and more effective way of maintaining
data on a departmental level. In the mainframe and mini environment, the data required by individual departments was often controlled by the management information system department or some such similar department. Each user was connected to the main system through a dumb terminal that was unable to perform any of its own processing tasks. In the mainframe and minicomputer environment, processing and memory are centralized.

The Emergence of Local Area Networks

The advent of IBM PCs in the early 1980s set a whole new standard in both business and personal computing. Along with PCs came a new operating system called DOS. DOS provided an easy programming environment for software vendors developing and publishing software. The significance of the DOS standard is that it stimulates growth of new products by providing software and hardware vendors with an open development platform to build both accessories and software products. Since this brought in an abundance of software, the use of personal computers increased. As more and more people began to use computers, it became obvious that a way of connecting them together would provide many useful benefits, such as printer and hard disk sharing, especially when budgets became a constraint. This gave birth to the Local Area Network (LAN) concept.

Broad Band Networks (ISDN):

Integrated Services Digital Network (ISDN) is a system of digital phone connections that allows simultaneous voice and data transmission across the world. Such voice and data are carried by bearer channels (B channels) having a bandwidth of 64 kilobits per second. A data channel (D channel) can carry signals at 16 kbps or 64 kbps, depending on the nature of the service provided. There are two types of ISDN service: Basic Rate Interface (BRI) and Primary Rate Interface (PRI). BRI consists of two 64 kbps B channels and one 16 kbps D channel, for a total of 144 kbps, and is suitable for individual users. PRI consists of twenty-three B channels and one 64 kbps D channel, for a total of 1536 kbps, and is suitable for users with higher capacity requirements. It is possible to support multiple PRI lines with one 64 kbps D channel using Non-Facility Associated Signaling (NFAS).
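
As a cross-check on these figures, the channel arithmetic can be worked out in a few lines of Python (a simple sketch; the channel counts and rates are the ones quoted above):

    # ISDN capacity check: every bearer (B) channel carries 64 kbps.
    B_RATE = 64  # kbps per B channel

    def isdn_capacity(b_channels, d_rate):
        """Total bandwidth of an ISDN interface in kbps."""
        return b_channels * B_RATE + d_rate

    print(isdn_capacity(2, 16))   # BRI: 2 B channels + 16 kbps D channel -> 144
    print(isdn_capacity(23, 64))  # PRI: 23 B channels + 64 kbps D channel -> 1536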

Monday, February 23, 2009

OSI or the Open System Interconnection

It has been outlined by the International Organization for Standardization (ISO) to facilitate communication between heterogeneous hardware and software platforms with the help of the following seven layers of functions, with their associated controls:
Layer 1 or Physical Layer is a hardware layer which specifies the mechanical features as well as the electromagnetic features of the connection between the devices and the transmission medium. Network topology is a part of this layer.
Layer 2 or Data Link Layer is also a hardware layer which specifies channel access control method and ensures reliable transfer of data through the transmission medium.
Layer 3 or Network Layer makes a choice of the physical route for transmission of, say, a message packet; creates a virtual circuit for the upper layers to make them independent of data transmission and switching; establishes, maintains and terminates connections between the nodes; and ensures proper routing of data.
Layer 4 or Transport Layer ensures reliable transfer of data between user processes, assembles and disassembles message packets, provides error recovery and flow control. Multiplexing and encryption are undertaken at this layer level.
Layer 5 or Session Layer establishes, maintains and terminates sessions (dialogues) between user processes. Identification and authentication are undertaken at this layer level.
Layer 6 or Presentation Layer controls the on-screen display of data and transforms data to a standard application interface. Encryption and data compression can also be undertaken at this layer level.
Layer 7 or Application Layer provides services that directly support user applications, such as file transfer, electronic mail and terminal emulation.
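
One way to picture the seven layers cooperating is encapsulation: as data travels down the stack, each layer wraps the unit handed to it with its own control information, so the lowest layer's header ends up outermost. A purely illustrative Python sketch (the header strings are invented; real protocols use binary headers):

    # Illustrative encapsulation: each OSI layer wraps the data unit
    # handed down to it with its own (invented) header string.
    LAYERS = ["Application", "Presentation", "Session",
              "Transport", "Network", "Data Link", "Physical"]

    def encapsulate(payload):
        unit = payload
        for layer in LAYERS:          # walking down the stack, layer 7 to 1
            unit = "[" + layer + " hdr]" + unit
        return unit

    # The Physical Layer header ends up outermost:
    print(encapsulate("user data"))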

Transmission Protocols

For any network to exist, there must be connections between computers and agreements, or what are termed protocols, about the communications language. However, setting up connections and agreements between dispersed computers (from PCs to mainframes) is complicated by the fact that, over the last decade, systems have become increasingly heterogeneous in their software and hardware, as well as their intended functionality.
Protocols are implemented in software and perform a variety of actions necessary for data transmission between computers. Stated more precisely, protocols are a set of rules for inter-computer communication that have been agreed upon and implemented by many vendors, users and standards bodies. Ideally, a protocol standard allows heterogeneous computers to talk to each other.

Communications Software

Communications software manages the flow of data across a network. It performs the following functions:
• Access control: Linking and disconnecting the different devices; automatically dialing and answering telephones; restricting access to authorized users; and establishing parameters such as speed, mode, and direction of transmission.
• Network management: Polling devices to see whether they are ready to send or receive data; queuing input and output; determining system priorities; routing messages; and logging network activity, use, and errors.
• Data and file transmission: Controlling the transfer of data, files, and messages among the various devices.

Communication Services

Normally, an organization that wishes to transmit data uses one of the common carrier services to carry the messages from station to station. Following is a brief description of these services.
Narrow band Service - Usually, this service is used where data volume is relatively low; transmission rates usually range from 45 to 300 bits per second. Examples are the telephone companies' teletypewriter exchange service (TWX) and Telex service.
Voice band Services - Voice band services use ordinary telephone lines to send data messages. Transmission rates vary from 300 to 4,800 bits per second, and higher.
Wide band Services - Wide band services provide data transmission rates from several thousand to several million bits per second. These services are limited to high-volume users. Such services generally use coaxial cable or microwave communication. Space satellites, a more exotic development, have been employed to transmit data rapidly from one part of the world to another.

Communications Channels

A communications channel is the medium that connects the sender and the receiver in the data communications network. Common communications channels include telephone lines, fiber optic cables, terrestrial microwaves, satellite, and cellular radios. A communications network often uses several different media to minimize the total data transmission costs. Thus, it is important to understand the basic characteristics, and costs, of different communications channels.

Sunday, February 22, 2009

Packet switching

It is a sophisticated means of maximizing transmission capacity of networks. This is accomplished by breaking a message into transmission units, called packets, and routing them individually through the network depending on the availability of a channel for each packet. Passwords and all types of data can be included within the packet and the transmission cost is by packet and not by message, routes or distance. Sophisticated error and flow control procedures are applied on each link by the network.
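
The mechanism is easy to demonstrate with a toy sketch: split a message into numbered packets, let them arrive in arbitrary order (as if each had taken a different route), and reassemble by sequence number. The packet size and field names are invented for the illustration:

    # Toy packet switching: number the packets, shuffle them to simulate
    # independent routing, then restore order at the receiving end.
    import random

    PACKET_SIZE = 8  # bytes of payload per packet (arbitrary for the demo)

    def packetize(message):
        chunks = [message[i:i + PACKET_SIZE]
                  for i in range(0, len(message), PACKET_SIZE)]
        return [{"seq": n, "payload": c} for n, c in enumerate(chunks)]

    def reassemble(packets):
        ordered = sorted(packets, key=lambda p: p["seq"])
        return "".join(p["payload"] for p in ordered)

    message = "Packets may arrive out of order over different routes."
    packets = packetize(message)
    random.shuffle(packets)                # simulate independent routing
    assert reassemble(packets) == message
    print(len(packets), "packets delivered and reassembled")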

Message Switching

Some organisations with a heavy volume of data to transmit use a special computer for the purpose of data message switching. The computer receives all transmitted data; stores it; and, when an outgoing communication line is available, forwards it to the receiving point.

Circuit switching

Circuit switching is what most of us encounter on our home phones. We place a call and either reach our destination party or encounter a busy signal; if the line is busy, we cannot transmit any message. A single circuit is used for the duration of the call.

Transmission Modes

There are three different data communication modes:
(i) Simplex: A simplex communication mode permits data to flow in only one direction. A terminal connected to such a line is either a send-only or a receive-only device. Simplex mode is seldom used, because a return path is generally needed to send acknowledgements, control or error signals.
(ii) Half duplex: Under this mode, data can be transmitted back and forth between two stations, but data can only go in one of the two directions at any given point of time.
(iii) Full duplex: A full duplex connection can simultaneously transmit and receive data between two stations. It is the most commonly used communication mode. A full duplex line is faster, since it avoids the delays that occur in half-duplex mode each time the direction of transmission is changed.

Asynchronous Transmission

In this transmission, each data word is accompanied by a start (0) bit and a stop (1) bit that identify the beginning and end of the word. When no information is being transmitted (the sender device is idle), the communication line is usually held high (binary 1), i.e., there is a continuous stream of 1s.
Advantage: Reliable, as the start and stop bits ensure that the sender and receiver remain in step with one another.
Disadvantage: Inefficient, as the extra start and stop bits slow down the data transmission when there is a huge volume of information to be transmitted.
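
The framing just described can be made concrete in a few lines: each 8-bit word is preceded by a start bit (0) and followed by a stop bit (1), and the idle line is held at 1. This is only an illustration; real UARTs send the least significant bit first, a detail ignored here for readability:

    # Asynchronous framing: start bit (0) + 8 data bits + stop bit (1).
    def frame(byte):
        return "0" + format(byte, "08b") + "1"

    line = "1111"                 # the idle line is a continuous stream of 1s
    for ch in "Hi":
        line += frame(ord(ch))
    print(line)                   # 1111 0 01001000 1 0 01101001 1 (no spaces)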

Saturday, February 21, 2009

Synchronous Transmission

In this transmission, bits are transmitted at a fixed rate. The transmitter and receiver both use the same clock signals for synchronisation.
• Allows characters to be sent down the line without start-stop bits.
• Allows data to be sent as multi-word blocks.
• Uses a group of synchronisation bits, placed at the beginning and at the end of each block, to maintain synchronisation.
• Timing is determined by a modem.
Advantage: Transmission is faster, because with the start and stop bits removed, many data words can be transmitted per second.
Disadvantage: The synchronous device is more expensive to build, as it must be smart enough to differentiate between the actual data and the special synchronisation characters.
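
The contrast with asynchronous framing is easy to see in code: a synchronous block carries no per-character start and stop bits; synchronisation characters bracket the whole block instead. A toy sketch (using the ASCII SYN control character, 0x16, as the synchronisation pattern; the block content is arbitrary):

    # Synchronous framing: sync characters bracket a multi-word block.
    SYN = format(0x16, "08b")     # ASCII SYN control character -> '00010110'

    def frame_block(text):
        body = "".join(format(ord(c), "08b") for c in text)
        return SYN + SYN + body + SYN   # sync bits at both ends of the block

    print(frame_block("HELLO"))   # one block, no start/stop bits per character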

Synchronous versus Asynchronous Transmission

Another aspect of data transmission is synchronization (relative timing) of the pulses when transmitted. When a computer sends the data bits and parity bit down the same communication channel, the data are grouped together in predetermined bit patterns for the receiving devices to recognize when each byte (character) has been transmitted. There are two basic ways of transmitting serial binary data: synchronous and asynchronous.

Serial versus Parallel Transmission

Data are transmitted along a communication channel either in serial or in parallel mode.
Serial Transmission: In serial transmission, the bits of each byte are sent along a single path one after another. An example is the serial port (RS-232) used for a mouse or modem.
Advantages of serial transmission are:
It is a cheap mode of transferring data.
It is suitable for transmitting data over long distances.
The disadvantage is:
This mode is not efficient (i.e., it is slow), as it transfers data in series.

Mesh network

In this structure, there is a random connection of nodes using communication links. In real life, however, network connections are not made randomly. Network lines are expensive to install and maintain. Therefore, links are planned very carefully, after serious thought, to minimise cost and maintain reliable and efficient traffic movement. A mesh network may be fully connected or connected with only partial links. In a fully interconnected topology, each node is connected by a dedicated point-to-point link to every other node. This means that there is no need of any routing function, as nodes are directly connected. The reliability is very high, as there are always alternate paths available if the direct link between two nodes is down or dysfunctional.
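
The price of full interconnection grows quickly with size: a fully connected mesh of n nodes needs n(n-1)/2 point-to-point links, a standard result that a few lines can verify:

    # Dedicated links needed for a fully connected mesh of n nodes.
    def mesh_links(n):
        return n * (n - 1) // 2   # each pair of nodes gets one link

    for n in (4, 10, 50):
        print(n, "nodes ->", mesh_links(n), "links")   # 6, 45, 1225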

Ring network

This is yet another structure for local area networks. In this topology, the network cable passes from one node to another until all nodes are connected in the form of a loop or ring. There is a direct point-to-point link between two neighboring nodes. These links are unidirectional which ensures that transmission by a node traverses the whole ring and comes back to the node, which made the transmission.

Friday, February 20, 2009

Bus network

This structure is very popular for local area networks. In this structure or topology, a single network cable runs through the building or campus, and all nodes are linked along this communication line, which has two endpoints; it is called the bus or backbone. The two ends of the cable are terminated with terminators.
Advantages:
• Reliable in very small networks as well as easy to use and understand.
• Requires the least amount of cable to connect the computers together and therefore is less expensive than other cabling arrangements.
• Is easy to extend. Two cables can be easily joined with a connector, making a longer cable for more computers to join the network.
• A repeater can also be used to extend a bus configuration.
Disadvantages:
• Heavy network traffic can slow a bus considerably, because any computer can transmit at any time and bus networks do not coordinate when information is sent. Computers interrupting each other can use a lot of bandwidth.

Star Network

Processing nodes in a star network interconnect directly with a central system. Each terminal, small computer or large mainframe can communicate only with the central site, and not with other nodes in the network. If it is desired to transmit information from one node to another, it can be done only by sending the details to the central node, which in turn sends them to the destination.
A star network is particularly appropriate for organisations that require a centralized data base or a centralized processing facility. For example, a star network may be used in banking for centralized record keeping in an on-line branch office environment.

NETWORK STRUCTURE OR TOPOLOGY

The geometrical arrangement of computer resources, remote devices and communication facilities is known as the network structure or network topology. A computer network is composed of nodes and links. A node is the end point of any branch in a computer network: a terminal device, a workstation or an interconnecting equipment facility. A link is a communication path between two nodes. The terms "circuit" and "channel" are frequently used as synonyms for link.

Remote Access Devices

Remote access devices are modem banks that serve as gateways to the Internet or to private corporate networks. Their function is to properly route all incoming and outgoing connections.

Protocol converters

Dissimilar devices cannot communicate with each other unless a strict set of communication standards is followed. Such standards are commonly referred to as protocols. A protocol is a set of rules required to initiate and maintain communication between a sender and a receiver device.
Because an organization's network has typically evolved over numerous years, it is often composed of a mixture of many types of computers, transmission channels, transmission modes and data codes. To enable diverse system components to communicate with one another and to operate as a functional unit, protocol conversion may be needed. For example, it may be necessary to convert from ASCII to EBCDIC. Protocol conversion can be accomplished via hardware, software, or a combination of hardware and software.

Wednesday, February 18, 2009

Front-end communication processors :

These are programmable devices which control the functions of the communication system. They support the operations of a mainframe computer by performing functions which it would otherwise be required to perform itself. These functions include code conversion, editing and verification of data, terminal recognition and control of transmission lines. The mainframe computer is then able to devote its time to data processing rather than data transmission.

Multiplexer :

This device enables several devices to share one communication line. The multiplexer scans each device to collect and transmit data on a single line to the CPU. It also directs transmissions from the CPU to the appropriate terminal linked to the multiplexer. The devices are polled and periodically asked whether there is any data to transmit. This function may be very complex, and on some systems there is a separate computer processor devoted to this activity, called a "front-end processor".
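
The polling behaviour can be pictured as a simple round-robin loop: the multiplexer visits each attached device in turn, asks whether it has data ready, and forwards what it collects onto the single shared line. A toy illustration (the device names and queued messages are invented):

    # Round-robin polling: the multiplexer visits each device in turn.
    devices = {
        "terminal-1": ["order#101"],
        "terminal-2": [],                      # nothing to send this cycle
        "terminal-3": ["order#102", "order#103"],
    }

    def poll_once(devices):
        """One polling cycle: collect at most one message per device."""
        collected = []
        for name, queue in devices.items():
            if queue:                          # device answers 'data ready'
                collected.append((name, queue.pop(0)))
        return collected

    # Everything collected goes down the one shared line to the CPU.
    print(poll_once(devices))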

Modem :

Data communication discussed above could be achieved due to the development of encoding/decoding devices. These units convert the code format of computers to those of communication channels for transmission, and then reverse the procedure when data are received. These communication channels include telephone lines, microwave links or satellite transmission. The encoding/decoding device is called a modem.

Bridges, repeaters and gateways:

Workstations in one network often need access to computer resources in another network or another part of a WAN. For example, an office manager using a local area network might want to access an information service that is offered by a VAN over the public phone system. In order to accommodate this type of need, bridges and routers are often necessary.
Bridges: The main task of a bridge computer is to receive and pass data from one LAN to another. In order to transmit this data successfully, the bridge amplifies the data transmission signal. This means that the bridge can act as a repeater as well as a link.

Hubs

A hub is a hardware device that provides a common wiring point in a LAN. Each node is connected to the hub by means of simple twisted-pair wires. The hub then provides a connection over a higher-speed link to other LANs, the company's WAN, or the Internet.

Tuesday, February 17, 2009

Switches and Routers

Switches and routers are hardware devices used to direct messages across a network. Switches create temporary point-to-point links between two nodes on a network and send all data along that link. Router computers are similar to bridges but have the added advantage of supplying the user with network management utilities. Routers help administer the data flow by such means as redirecting data traffic to various peripheral devices or other computers. In internetwork communication, routers not only pass on the data as necessary but also select appropriate routes in the event of possible network malfunctions or excessive use.

Network Interface Cards

Network interface cards (NICs) provide the connection for network cabling to servers and workstations. An NIC, first of all, provides the connector to attach the network cable to a server or a workstation. The on-board circuitry then provides the protocols and commands required to support this type of network. An NIC has additional memory for buffering incoming and outgoing data packets, thus improving the network throughput. A slot may also be available for a remote boot PROM, permitting the board to be mounted in a diskless workstation. Network interface cards are available in 8-bit bus or in faster 16-bit bus standards.

COMPONENTS OF A NETWORK

There are five basic components in any network (whether it is the Internet, a LAN, a WAN, or a MAN):
1. The sending device
2. The communications interface devices
3. The communications channel
4. The receiving device
5. Communications software.

Peer to peer

In peer-to-peer architecture, there are no dedicated servers. All computers are equal and are therefore termed peers. Normally, each of these machines functions both as a client and a server. This arrangement is suitable for environments with a limited number of users (usually ten or fewer), where the users are located in the same area, security is not an important issue and the network is envisaged to have only limited growth. At the same time, users need to freely access data and programs that reside on other computers across the network.

Client-Server

Client-server networks are composed of servers (typically powerful computers running advanced network operating systems) and user workstations (clients), which access data or run applications located on the servers. Servers can host e-mail, store common data files and serve powerful network applications such as Microsoft's SQL Server. As the centerpiece of the network, the server validates logins to the network and can deny access to both networking resources and client software. Servers are typically the center of all backup and power-protection schemes.
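
The division of labour between client and server can be sketched with the sockets in Python's standard library: the server listens and answers, while the client connects and asks. A minimal local sketch, not production code (the port number and the messages are arbitrary):

    # Minimal client-server exchange over a local TCP socket.
    import socket
    import threading
    import time

    HOST, PORT = "127.0.0.1", 50007            # arbitrary demo address

    def server():
        with socket.socket() as srv:           # the server side
            srv.bind((HOST, PORT))
            srv.listen(1)
            conn, _ = srv.accept()             # wait for one client
            with conn:
                request = conn.recv(1024)
                conn.sendall(b"reply to: " + request)

    threading.Thread(target=server, daemon=True).start()
    time.sleep(0.2)                            # give the server time to listen

    with socket.socket() as cli:               # the client side
        cli.connect((HOST, PORT))
        cli.sendall(b"login request")
        print(cli.recv(1024))                  # b'reply to: login request'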

Monday, February 16, 2009

Site-to-Site VPN

Through the use of dedicated equipment and large-scale encryption, a company can connect multiple fixed sites over a public network such as the Internet. Site-to-site VPNs can be one of two types:
• Intranet-based - If a company has one or more remote locations that they wish to join in a single private network, they can create an intranet VPN to connect LAN to LAN.
• Extranet-based - When a company has a close relationship with another company (for example, a partner, supplier or customer), they can build an extranet VPN that connects LAN to LAN, and that allows all of the various companies to work in a shared environment.

VPN

A VPN is a private network that uses a public network (usually the Internet) to connect remote sites or users together. Instead of using a dedicated, real-world connection such as a leased line, a VPN uses "virtual" connections routed through the Internet from the company's private network to the remote site or employee. There are two common types of VPN.

Wide Area Networks (WAN)

A WAN covers a large geographic area with various communication facilities such as long-distance telephone service, satellite transmission and under-sea cables. A WAN typically involves host computers and many different types of communication hardware and software. Examples of WANs are interstate banking networks and airline reservation systems. Wide area networks typically operate at lower link speeds (about 1 Mbps). Following are the salient features of WAN:
• Multiple user computers connected together.
• Machines are spread over a wide geographic region.
• Communications channels between the machines are usually furnished by a third party (for example, a telephone company, a public data network or a satellite carrier).
• Channels are of relatively low capacity (measuring throughput in kilobits per second, kbps).

Metropolitan Area Networks (MAN)

A metropolitan area network (MAN) is somewhere between a LAN and a WAN. The term MAN is sometimes used to refer to networks which connect systems or local area networks within a metropolitan area (roughly 40 km in length from one point to another). MANs are based on fiber optic transmission technology and provide high-speed (10 Mbps or so) interconnection between sites.
A MAN can support both data and voice; cable television networks are examples of MANs that distribute television signals. A MAN has just one or two cables and does not contain switching elements.

Sunday, February 15, 2009

Local Area Networks (LAN)

A LAN covers a limited area. This distinction, however, is changing as the scope of LAN coverage becomes increasingly broad. A typical LAN connects as many as a hundred or so microcomputers that are located in a relatively small area, such as a building or several adjacent buildings. Organizations have been attracted to LANs because they enable multiple users to share software, data and devices. Unlike WANs, which use point-to-point links between systems, LANs use a shared physical medium which is routed through the whole campus to connect various systems. LANs use high-speed media (1 Mbps to 30 Mbps or more) and are mostly privately owned and operated.

Benefits of using networks

As a business grows, good communication between employees is needed. Organisations can improve efficiency by sharing information such as common files, databases and business application software over a computer network.
With improvements in network capacity and the ability to work wirelessly or remotely, successful businesses should regularly re-evaluate their needs and their IT infrastructure.
(i) Organisations can improve communication by connecting their computers and working on standardised systems, so that:
• Staff, suppliers and customers are able to share information and get in touch more easily.
• More information sharing can make the business more efficient, e.g. networked access to a common database can avoid the same data being keyed multiple times, which would waste time and could result in errors.

Organization

A variety of network scheduling software is available that makes it possible to arrange meetings without constantly checking everyone's schedules. This software usually includes other helpful features, such as shared address books and to-do lists.

Communication and collaboration

It's hard for people to work together if no one knows what anyone else is doing. A network allows employees to share files, view other people's work, and exchange ideas more efficiently. In a larger office, one can use e-mail and instant messaging tools to communicate quickly and to store messages for future reference.

Internet Access and Security

When computers are connected via a network, they can share a common network connection to the Internet. This facilitates email, document transfer and access to the resources available on the World Wide Web. Various levels of Internet service are available, depending on your organization's requirements. These range from a single dial-up connection (as you might have from your home computer) to 128K ISDN to 768K DSL or up to high-volume T-1 service. A.I. Technology Group strongly recommends the use of a firewall to any organization with any type of broadband Internet connection.

Saturday, February 14, 2009

Fault Tolerance

Establishing fault tolerance is the process of making sure that there are several lines of defense against accidental data loss. An example of accidental data loss might be a hard drive failing, or someone deleting a file by mistake. Usually, the first line of defense is having redundant hardware, especially hard drives, so that if one fails, another can take its place without losing data. Tape backup should always be a secondary line of defense (never primary). While today's backup systems are good, they are not fail-safe. Additional measures include having the server attached to an uninterruptible power supply, so that power problems and blackouts do not unnecessarily harm the equipment.

Shared Databases

Shared databases are an important subset of file sharing. If the organization maintains an extensive database - for example, a membership, client, grants or financial accounting database - a network is the only effective way to make the database
available to multiple users at the same time. Sophisticated database server software ensures the integrity of the data while multiple users access it at the same time.

Remote Access

In our increasingly mobile world, staff often require access to their email, documents or other data from locations outside of the office. A highly desirable network function, remote access allows users to dial in to your organization's network via telephone and access all of the same network resources they can access when they're in the office. Through the use of Virtual Private Networking (VPN), which uses the Internet to provide remote access to your network, even the cost of long-distance telephone calls can be avoided.

Fax Sharing

Through the use of a shared modem(s) connected directly to the network server, fax sharing permits users to fax documents directly from their computers without ever having to print them out on paper. This reduces paper consumption and printer usage and is more convenient for staff. Network faxing applications can be integrated with email contact lists, and faxes can be sent to groups of recipients. Specialized hardware is available for high-volume faxing to large groups. Incoming faxes can also be handled by the network and forwarded directly to users' computers via email, again eliminating the need to print a hard copy of every fax - and leaving the fax machine free for jobs that require it.

E-Mail

Internal or "group" email enables staff in the office to communicate with each other quickly and effectively. Group email applications also provide capabilities for contact management, scheduling and task assignment. Designated contact lists can be shared by the whole organization instead of duplicated on each person's own rolodex; group events can be scheduled on shared calendars accessible by the entire staff or appropriate groups. Equally important is a network's ability to provide a simple organization-wide conduit for Internet email, so that the staff can send and receive email with recipients outside of the organization as easily as they do with fellow staff members. Where appropriate, attaching documents to Internet email is dramatically faster, cheaper and easier than faxing them.

Thursday, February 12, 2009

Print Sharing

When printers are made available over the network, multiple users can print to the same printer. This can reduce the number of printers the organization must purchase, maintain and supply. Network printers are often faster and more capable than those connected directly to individual workstations, and often have accessories such as envelope feeders or multiple paper trays.

File Sharing

File sharing is the most common function provided by networks and consists of grouping all data files together on a server or servers. When all data files in an organization are concentrated in one place, it is much easier for staff to share documents and other data. It is also an excellent way for the entire office to keep files organized according to a consistent scheme. Network operating systems such as Windows 2000 allow the administrator to grant or deny groups of users access to certain files.

COMPUTER NETWORKS

A computer network is a collection of computers and terminal devices connected together by a communication system. The set of computers may include large-scale computers, medium scale computers, mini computers and microprocessors. The set of terminal devices may include intelligent terminals, "dumb" terminals, workstations of various kinds and miscellaneous devices such as the commonly used telephone instruments.
Many computer people feel that a computer network must include more than one computer system; otherwise, it is an ordinary on-line system. Others feel that the use of telecommunication facilities is of primary importance. Thus, there is no specific definition of a computer network.

Interpretation and evaluation

The patterns identified by the system are interpreted into knowledge which can then be used to support human decision-making e.g. prediction and classification tasks, summarizing the contents of a database or explaining observed phenomena.

Data mining

This stage is concerned with the extraction of patterns from the data. A pattern can be defined as follows: given a set of facts (data) F, a language L, and some measure of certainty C, a pattern is a statement S in L that describes relationships among a subset Fs of F with a certainty c, such that S is simpler in some sense than the enumeration of all the facts in Fs.
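
The flavour of the definition is easy to demonstrate: a statement such as "bread and butter appear together in three quarters of all baskets" is simpler than enumerating the matching transactions. A toy sketch that measures how often an itemset occurs in invented basket data:

    # Toy pattern extraction: the support of an itemset over baskets.
    baskets = [
        {"bread", "butter", "milk"},
        {"bread", "butter"},
        {"milk"},
        {"bread", "butter", "jam"},
    ]

    def support(itemset, baskets):
        """Fraction of baskets that contain every item in the itemset."""
        hits = sum(1 for basket in baskets if itemset <= basket)
        return hits / len(baskets)

    print(support({"bread", "butter"}, baskets))   # 0.75: a candidate pattern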

Wednesday, February 11, 2009

Transformation

The data is not merely transferred across but transformed, in that overlays may be added, such as the demographic overlays commonly used in market research. The data is made usable and navigable.

Preprocessing

This is the data cleansing stage, where certain information that is deemed unnecessary and may slow down queries is removed; for example, it is unnecessary to note the sex of a patient when studying pregnancy. The data is also reconfigured to ensure a consistent format, as there is a possibility of inconsistent formats because the data is drawn from several sources: e.g. sex may be recorded as f or m and also as 1 or 0.
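
A small cleansing routine makes the step concrete: it unifies the inconsistent sex encodings mentioned above and can drop a field deemed unnecessary for a given study. The field names, and the mapping of 1 to F and 0 to M, are assumptions made purely for the sketch:

    # Toy preprocessing: normalise inconsistent codes; optionally drop fields.
    SEX_CODES = {"f": "F", "m": "M", "1": "F", "0": "M"}   # assumed mapping

    def preprocess(record, drop=()):
        rec = {k: v for k, v in record.items() if k not in drop}
        if "sex" in rec:              # unify 'f'/'m' and '1'/'0' encodings
            rec["sex"] = SEX_CODES[str(rec["sex"]).lower()]
        return rec

    print(preprocess({"name": "A", "sex": "f", "age": 30}))
    # For a pregnancy study, the sex field is unnecessary and is removed:
    print(preprocess({"name": "B", "sex": 0, "weeks": 12}, drop=("sex",)))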

Selection

Selecting or segmenting the data according to some criteria, e.g. all those people who own a car; in this way, subsets of the data can be determined.

DATA MINING

Data mining is concerned with the analysis of data and the use of software techniques for finding patterns and regularities in sets of data. It is the computer, which is responsible for finding the patterns by identifying the underlying rules and features in the data. The idea is that it is possible to strike gold in unexpected places as the data mining software extracts patterns not previously discernable or so obvious that no-one has noticed them before.
Data mining analysis tends to work from the data up and the best techniques are those developed with an orientation towards large volumes of data, making use of as much of the collected data as possible to arrive at reliable conclusions and decisions.

Concerns in using data warehouse

• Extracting, cleaning and loading data could be time consuming.
• Data warehousing project scope might increase.
• Problems with compatibility with systems already in place e.g. transaction processing system.
• Providing training to end-users, who end up not using the data warehouse.
• Security could develop into a serious issue, especially if the data warehouse is web accessible.
• A data warehouse is a HIGH maintenance system.

Tuesday, February 10, 2009

Advantages of using data warehouse

There are many advantages to using a data warehouse, some of them are:
• Enhances end-user access to a wide variety of data.
• Increases data consistency.
• Increases productivity and decreases computing costs.
• Is able to combine data from different sources, in one place.
• It provides an infrastructure that could support changes to data and replication of the changed data back into the operational systems.

Different methods of storing data in a data warehouse

All data warehouses store their data grouped together by subject areas that reflect the general usage of the data (Customer, Product, Finance, etc.). The general principle used in the majority of data warehouses is that data is stored at its most elemental level for use in reporting and information analysis. Within this generic intent, there are two primary approaches to organising the data in a data warehouse.
The first is using a "dimensional" approach. In this style, information is stored as "facts" which are numeric or text data that capture specific data about a single transaction or event, and "dimensions" which contain reference information that allows each transaction or event to be classified in various ways. As an example, a sales transaction would be broken up into facts such as the number of products ordered, and the price paid, and dimensions such as date, customer, product, geographical location and sales person.
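
The sales example maps naturally onto one fact table whose rows carry the numeric measures plus keys into the dimension tables. A schematic sketch with invented contents:

    # Dimensional model sketch: a fact row refers to dimensions by key.
    dim_product  = {1: {"name": "Widget", "category": "Hardware"}}
    dim_customer = {7: {"name": "Acme Ltd", "region": "North"}}
    dim_date     = {20090217: {"year": 2009, "month": 2, "day": 17}}

    fact_sales = [   # facts: measures of one transaction, plus dimension keys
        {"date_key": 20090217, "customer_key": 7, "product_key": 1,
         "quantity": 10, "price_paid": 250.00},
    ]

    # Classify transactions through a dimension, e.g. sales by region:
    for row in fact_sales:
        region = dim_customer[row["customer_key"]]["region"]
        print(region, row["quantity"], row["price_paid"])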

Optional Components

In addition, the following components also exist in some data warehouses:
1. Dependent Data Marts: A dependent data mart is a physical database (either on the same hardware as the data warehouse or on a separate hardware platform) that receives all its information from the data warehouse. The purpose of a Data Mart is to provide a sub-set of the data warehouse's data for a specific purpose or to a specific sub-group of the organisation.
2. Logical Data Marts: A logical data mart is a filtered view of the main data warehouse but does not physically exist as a separate data copy. This approach to data marts delivers the same benefits, but has the additional advantages of not requiring additional (costly) disk space and of always being as current as the main data warehouse.

Operations

Data warehouse operations comprise the processes of loading, manipulating and extracting data from the data warehouse. Operations also cover user management, security, capacity management and related functions.

Metadata

Metadata, or "data about data", is used to inform operators and users of the data warehouse about its status and the information held within the data warehouse. Examples of data warehouse metadata include the most recent data load date, the business meaning of a data item and the number of users currently logged in.

Thursday, February 5, 2009

Natural Language

It is difficult for a system to understand natural language due to its ambiguity in sentence structure, syntax, construction and meaning. Accordingly, it is not possible to design a fully general natural language interface. However, systems can be designed to understand a subset of a language, which implies the use of natural language in a restricted domain.

STRUCTURED QUERY LANGUAGE AND OTHER QUERY LANGUAGES

A query language is a set of commands to create, update and access data from a database, allowing users to raise ad hoc queries/questions interactively without the help of programmers. Structured Query Language (SQL) is a set of thirty (30) English-like commands which has since become an adopted standard. SQL syntax uses the same set of commands regardless of the database management system software, such as ‘Select’, ‘From’ and ‘Where’. For example, after ‘Select’ a user lists the fields; after ‘From’, the names of the files/groups of records containing these fields; and after ‘Where’, the conditions for the search of the records.
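
The Select / From / Where pattern can be tried directly with the SQLite engine bundled with Python's standard library. A minimal sketch (the table and its contents are invented for the illustration):

    # A Select ... From ... Where query against an in-memory database.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary INT)")
    con.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                    [("Asha", "Sales", 30000),
                     ("Ravi", "Accounts", 45000),
                     ("Meena", "Sales", 38000)])

    # Fields after Select, the table after From, the condition after Where.
    for row in con.execute(
            "SELECT name, salary FROM employees WHERE dept = 'Sales'"):
        print(row)                # ('Asha', 30000) then ('Meena', 38000)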

Friday, January 9, 2009

COMPUTER OUTPUT MICROFILM

Computer output microfilm (COM) is an output technique that records output from a computer as microscopic images on roll or sheet film. The images stored on COM are the same as the images which would be printed on paper. The COM recording process reduces characters to a size 24, 42 or 48 times smaller than would be produced by a printer. The information is then recorded on 16 mm or 35 mm microfilm, or on 105 mm microfiche. The data to be recorded on the microfilm can come directly from the computer (online) or from magnetic tape which is produced by the computer (off-line). The data is read into a recorder where, in most systems, it is displayed internally on a CRT. As the data is displayed on the CRT, a camera takes a picture of it and places it on the film. The film is then processed, either in the recorder unit or separately. After it is processed, it can be retrieved and viewed by the user.

Saturday, January 3, 2009

The Track Ball

A track ball is a pointing device that works like an upside-down mouse. The user rests his thumb on the exposed ball and his fingers on the buttons. To move the cursor around the screen, the ball is rolled with the thumb. Since the whole device is not moved, a track ball requires less space than a mouse. So, when space is limited, a track ball can be a boon. Track balls are particularly popular among users of notebook computers, and are built into Apple Computer's PowerBook and IBM ThinkPad notebooks.