Quick Search Box

Tuesday, March 17, 2009

Data Retrieval

For meaningful data retrieval, data that has been compiled from various sources and put together in a usable form is an essential prerequisite. A large number of databases exist on the Internet, put together by commercially run data providers as well as by individuals or groups with a special interest in particular areas. To retrieve such data, a user needs to know the addresses of the relevant Internet servers. Then, depending on the depth of information being sought, different databases have to be searched and the required information compiled. The work involved is similar to a search in a large library, except that this Internet "library" is immense, dynamic because of regular updating, and entirely electronic. While some skill is required for searching, the user is able to access, search and check a very large collection of servers.

Communication

Communication on the Internet can be online or offline. When several users connect to a single server or online service at the same time, they can communicate in an "online chat". This can be truly "many to many", as in a room full of people talking to each other on a peer-to-peer basis. Alternatively, users send e-mail to each other, which the receiver can read whenever he or she finds the time. This is offline communication, either "one to one" or "one to many". Similarly, users can get together electronically with those sharing common interests in "usenet" groups: users post messages to be read and answered by others at their convenience, and those replies can in turn be read and replied to by others, and so on.

Applications of Internet

The Internet's applications are many and depend on the innovation of the user. The common applications of the Internet can be classified into three primary types, namely: communication, data retrieval and data publishing.

Surfing on the Internet:

Many of the servers on the Internet provide information, specialising in a topic or subject, and there is a large number of such servers. When a user is looking for some information, it may be necessary to look for it on more than one server. The WWW links the computers on the Internet like a spider web, enabling users to go from one computer to another directly. When a user keeps hopping from one computer to another in this way, it is called "surfing".
The Internet facilitates "many to many" communication. Modern technology has so far made possible communication "one to many", as in broadcasting; "one to one", as in telephony; "a few to a few", as in telephone conferencing; and "many to one", as in polling. In addition, the WWW works with "multimedia": information can be accessed and transmitted as text, voice, sound and/or video. Graphics and interactive communication are two distinctive features of the Internet and the WWW.

Uniform Resource Locators

The format of a URL is: protocol/Internet address/Web page address.
The protocol that the Web uses to transfer HTML-coded Web pages is the HyperText Transfer Protocol (HTTP). For example, consider the Web page address: http://pages.prodigy.com/kdallas/index.htm.
The http:// specifies that HTTP will be used to move information to and from the Web server; pages.prodigy.com is the Web server's Internet address; and kdallas/index.htm is the address of the page on the server. Index.htm could have been omitted, because this is the default for the main page within a directory (i.e., kdallas in this example). Within HTML, there is the capability to display information in lists or tables and to create forms for users to send information to someone else. In addition, HTML provides the capability to specify graphic files to be displayed. These and other features let a user create complex Web pages.
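To see the three parts in practice, here is a minimal sketch that splits the example address above into its protocol, server address and page address using Python's standard urllib.parse module (the field names scheme, netloc and path are urllib's terminology for the three components):

```python
# Minimal sketch: splitting a URL into protocol, server address and
# page address with Python's standard library.
from urllib.parse import urlparse

url = "http://pages.prodigy.com/kdallas/index.htm"
parts = urlparse(url)

print(parts.scheme)  # 'http' - the protocol
print(parts.netloc)  # 'pages.prodigy.com' - the Web server's Internet address
print(parts.path)    # '/kdallas/index.htm' - the page address on the server
```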

Monday, March 16, 2009

World Wide Web

The World Wide Web or the Web is a component of the Internet that provides access to large amounts of information located on many different servers. The Web also provides access to many of the services available on the Internet.
The fundamental unit of the Web is the Web page. The Web page is a text document that contains links to other Web pages, graphic and audio files, and other Internet services such as file transfer protocol (FTP) and E-mail.
Web pages reside on servers that run special software that allows users to access Web pages and to activate links to other Web pages and to Internet services. Tens of thousands of Web servers are currently connected to the Internet. A user can directly access any Web page on one of these servers and then follow the links to other pages. This process creates a web of links around the world and, thus, the name World Wide Web.

ARPANET Objectives

♦ The network would continue to function even if one or many of the computers or connections in the network failed.
♦ The network had to be useable by vastly different hardware and software platforms.
♦ The network had to be able to automatically reroute traffic around non-functioning parts of the network.
♦ The network had to be a network of networks, rather than a network of computers (Hoffman, 1995).
It rapidly became evident that people wanted to share information between networks, and as a result, commercial networks were developed to meet consumer demand. The Internet became the umbrella name for this combination of networks in the late 1980s. Today's Internet is somewhat difficult to describe. Essentially, the Internet is a network of computers that offers access to information and people.

History and Background

The history of the Internet is closely tied to the U.S. Department of Defense. The military was a leader in the use of computer technology in the 1960’s and saw the need to create a network that could survive a single network computer’s malfunction. In the 1970’s, the Advanced Research Projects Agency (ARPA) developed a network that has evolved into today’s Internet. The network was named ARPANET and had many objectives that are still relevant today.
Access controls are a common form of control encountered in the boundary subsystem. They restrict the use of system resources to authorized users, limit the actions authorized users can take with these resources, and ensure that users obtain only authentic system resources.
Current systems are designed to allow users to share their resources. This is done by having a single system simulate the operations of several systems, where each simulated system works as a virtual machine, allowing more efficient use of resources by lowering the idle capacity of the real system. Here, a major design problem is to ensure that each virtual system operates as if it were totally unaware of the operations of the other virtual systems. In addition, increased scope exists for unintentional or deliberate damage to system resources through users' actions.

Threats and Vulnerabilities:

The threats to the security of systems assets can be broadly divided into nine categories:
(i) Fire,
(ii) Water,
(iii) Energy variations like voltage fluctuations, circuit breakage, etc.,
(iv) Structural damages,
(v) Pollution,
(vi) Intrusion, such as physical intrusion and eavesdropping, which can be eliminated or minimized by physical access controls, prevention of electromagnetic emissions and proper siting of the facilities,
(vii) Viruses and Worms (being discussed in detail later on),
(viii) Misuse of software, data and services which can be avoided by preparing an employees’ code of conduct and
(ix) Hackers, the expected loss from whose activities can be mitigated only by robust logical access controls.

Sunday, March 15, 2009

Level of Security:

The task of security administration in an organization is to conduct a security program, which is a series of ongoing, regular and periodic reviews of the controls exercised to ensure the safeguarding of assets and the maintenance of data integrity. A security program involves the following eight steps:
(i) Preparing a project plan for enforcing security,
(ii) Assets identification,
(iii) Assets valuation,
(iv) Threats identification,
(v) Threats probability of occurrence assessment,
(vi) Exposure analysis (see the sketch after this list),
(vii) Controls adjustment,
(viii) Report generation outlining the levels of security to be provided for individual systems, end users, etc.
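Steps (iii), (v) and (vi) are essentially arithmetic: exposure is commonly estimated as the asset's value multiplied by the probability that a threat materialises in a given period. A minimal illustrative sketch in Python follows; the asset values and probabilities are made-up numbers, not figures from the text:

```python
# Illustrative exposure analysis: expected annual loss =
# asset value x annual probability of the threat occurring.
# All figures below are hypothetical examples, not real data.
assets = {"file server": 50_000, "customer database": 200_000}
threats = {"fire": 0.01, "hacker intrusion": 0.05}

for asset, value in assets.items():
    for threat, probability in threats.items():
        expected_loss = value * probability
        print(f"{asset} / {threat}: expected annual loss = {expected_loss:,.0f}")
```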

Need for security:

The basic objective of providing network security is twofold:
(i) to safeguard assets and (ii) to ensure and maintain data integrity.
The boundary subsystem is an interface between the potential users of a system and the system itself. Controls in the boundary subsystem have the following purposes:
(i) to establish the system resources that the users desire to employ and (ii) to restrict the actions undertaken by the users who obtain those resources to an authorized set.
There are two types of systems security. Physical security is implemented to protect the physical system assets of an organization, such as personnel, hardware, facilities, supplies and documentation. Logical security is intended to control (i) malicious and non-malicious threats to physical security and (ii) malicious threats to logical security itself.

Business Continuity Planning (BCP)

Disaster events:
(i) They have the potential to significantly interrupt normal business processing,
(ii) They are often associated with natural disasters like earthquakes, floods, tornadoes, thunderstorms, fire, etc.,
(iii) Not all disruptions are disasters,
(iv) Disasters are disruptions causing the entire facility to be inoperative for a lengthy period of time (usually more than a day),
(v) Catastrophes are disruptions resulting from the destruction of the processing facility.
A Business Continuity Plan (BCP) is a documented description of the actions, resources and procedures to be followed before, during and after an event, so that the functions vital to continuing business operations are recovered and operational within an acceptable time frame.

Hot site:

An alternative facility that has the equipment and resources to recover the business functions that are affected by a disaster. Hot sites may vary in the type of facilities offered (such as data processing, communications, or any other critical business functions needing duplication). The location and size of the hot site must be proportional to the equipment and resources needed.

Warm site:

An alternate processing site that is only partially equipped, as compared to a hot site, which is fully equipped. It can be shared (shared server equipment) or dedicated (its own servers).

Saturday, March 14, 2009

Cold site:

An alternative facility that is devoid of any resources or equipment, except air conditioning and raised flooring. Equipment and resources must be installed in such a facility to duplicate the critical business functions of an organisation. Cold sites have many variations depending on their communication facilities.

Disaster recovery sites

Data centers need to be equipped with appropriate disaster recovery systems that minimize downtime for their customers. This means that every data center needs to invest in solutions such as power backup and remote management. Downtime can be minimized by having proper disaster recovery (DR) plans in place for mission-critical organisations, so as to be prepared when disaster strikes. Some of the larger IT organizations, which cannot tolerate much downtime, tend to set up their DR site as a hot site, where the primary and DR sites are kept in real-time synchronization at all times.

Challenges faced by the management

(i) Maintaining a skilled staff and the high infrastructure needed for daily data center operations: A company needs staff who are expert at network management and who have software/OS skills and hardware skills. The company has to employ a large number of such people, as they have to work in rotational shifts. The company would also need additional cover in case a person leaves.
(ii) Maximising uptime and performance: While establishing sufficient redundancy and maintaining watertight security, data centers have to maintain maximum uptime and system performance.
(iii) Technology selection: Another challenge that enterprise data centers face is technology selection, which is crucial to the operations of the facility, keeping business objectives in mind. A further problem is compensating for obsolescence.

Leveraging the best

In both enterprise/captive and public data centers, the systems and infrastructure need to be leveraged fully to maximize ROI. For companies that host their online applications with public data centers, in addition to the primary benefit of cost savings, perhaps the biggest advantage is the value-added services available. Enterprises usually prefer to select a service provider that can function as a one-stop solution provider and give them an end-to-end outsourcing experience.
Data centers need to strike a careful balance between utilization and spare infrastructure capacity. They need to be able to provide additional infrastructure to customers who wish to scale their existing contracts with little or no advance notice, so it is necessary that there be spare infrastructure at all times. This infrastructure could include bandwidth and connectivity, storage, server or security infrastructure (firewalls, etc.).

Constituents of a Data Centre

To keep equipment running reliably, even under the worst circumstances, the data center is built with the following carefully engineered support infrastructures:
• Network connectivity with various levels of physical (optical fibre and copper) and service (both last mile and international bandwidth) provider redundancy
• Dual DG sets and dual UPS
• HVAC systems for temperature control
• Fire extinguishing systems
• Physical security systems: swipe card/ biometric entry systems, CCTV, guards and so on.
• Raised flooring
• Network equipment
• Network management software
• Multiple optical fibre connectivity
• Network security: segregating the public and private network, installing firewalls and intrusion detection systems (IDS)

Friday, March 13, 2009

System monitoring and support

The data center should provide system monitoring and support, so that you can be assured that the servers are being monitored round the clock.
• 24x7x365 network monitoring
• Proactive customer notification
• Notification to customers for pre-determined events
• Monitoring of power supply, precision air conditioning system, fire and smoke detection systems, water detection systems, generators and uninterruptible power supply (UPS) systems.
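As a rough illustration of round-the-clock monitoring with notification for pre-determined events, the sketch below polls a service and raises an alert whenever a check fails. The host name and the notify function are placeholders invented for this sketch, not anything prescribed by the text:

```python
# Minimal monitoring loop: poll a service and notify on failure.
# HOST, PORT and notify() are hypothetical placeholders.
import socket
import time

HOST, PORT = "server.example.com", 80

def notify(message: str) -> None:
    print("ALERT:", message)  # stand-in for a mail/SMS/ticket notification

while True:
    try:
        socket.create_connection((HOST, PORT), timeout=5).close()
    except OSError as error:
        notify(f"{HOST}:{PORT} unreachable ({error})")
    time.sleep(60)  # re-check every minute, around the clock
```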
For a data center to be considered world-class, there can be no shortcuts in the commissioning of the facility. Connectivity, electrical supply and security are perhaps the three paramount requirements of any data center.

Security

Physical security and systems security are critical to operations. A data center should therefore provide both types of security measures to ensure the security of the equipment and data placed there.
(a) Physical security: It can be achieved through
• Security guards
• Proximity card and PIN for door access
• Biometrics access and PIN for door access
• 24 x 365 CCTV surveillance and recording
(b) Data security: Data security within a data center should be addressed at multiple levels.
• Perimeter security: This is to manage both internal and external threats. It consists of firewalls, intrusion detection and content inspection; host security; anti-virus; and access control and administrative tools.
• Access management: This is for both applications and operating systems that host these critical applications.

Electrical and power systems:

A data center should provide the highest power availability, with uninterruptible power supply (UPS) systems.

Availability of Data

The goal of a data center is to maximize the availability of data, and to minimize potential downtime. To do this, redundancy has to be built in to all the mission critical infrastructure of the data center, such as connectivity, electrical supply, security and surveillance, air conditioning and fire suppression.

Data Security

Another issue critical for data centers is the need to ensure maximum data security and 100 per cent availability. Data centers have to be protected against intruders by controlling access to the facility and by video surveillance. They should be able to withstand natural disasters and calamities, like fire and power failures. Recovery sites must be well maintained as it is here that everything in the data center is replicated for failure recovery.

Wednesday, March 11, 2009

Size

Data centers are characterized foremost by the size of their operations. A financially viable data center could contain from a hundred to several thousand servers. This would require a minimum area of around 5,000 to 30,000 square metres. Apart from this, the physical structure containing a data center should be able to withstand the sheer weight of the servers to be installed inside. Thus, there is a need for high quality construction.

Storage on demand:

• It provides the back-end infrastructure as well as the expertise, best practices and proven processes needed for a robust, easily managed and cost-effective storage strategy.
• It provides a data storage infrastructure that supports your ability to access information at any given moment – one that gives the security, reliability and availability needed to meet company demands.

What can they do?

(i) Database monitoring:
• This is done via a database agent, which enables the high availability of the database through comprehensive automated management.
(ii) Web monitoring:
• This is to assess and monitor website performance, availability, integrity and responsiveness from a site visitor's perspective (see the sketch after this list).
• It also reports on HTTP and FTP service status, monitors URL availability and round-trip response times, and verifies Web content accuracy and changes.
(iii) Backup and restore:
• It provides centralized multi-system management capabilities.
• It is also a comprehensive integrated management solution for enterprise data storage using specialized backup agents for the operating system, database, open files and application.
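As a sketch of the Web-monitoring idea in item (ii), the following checks a URL's availability and measures its round-trip response time with Python's standard library; the URL is a placeholder example:

```python
# Minimal Web monitoring sketch: check URL availability and measure
# round-trip response time. The URL is a hypothetical placeholder.
import time
from urllib.request import urlopen
from urllib.error import URLError

url = "http://www.example.com/"
start = time.monotonic()
try:
    with urlopen(url, timeout=10) as response:
        elapsed = time.monotonic() - start
        print(f"{url} -> HTTP {response.status}, {elapsed * 1000:.0f} ms round trip")
except URLError as error:
    print(f"{url} unavailable: {error}")
```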

Which sectors use them?

Any organization with a large volume of data that needs to be centralized, monitored and managed needs a data center. Of course, a data center is not mandatory for all organizations that have embraced IT; it depends on the size and criticality of the data. Data centers are extremely capital-intensive facilities. Commissioning costs run into millions of dollars, and the operational costs involved in maintaining levels of redundant connectivity, hardware and human resources can be stratospheric. The percentage of enterprises for which it makes business sense to commission and operate an enterprise data center is, consequently, extremely small.

Public data centers:

A public data center (also called an Internet data center) provides services ranging from equipment colocation to managed Web hosting. Clients typically access their data and applications via the Internet.
Typically, data centers can be classified in tiers, with tier 1 being the most basic and inexpensive, and tier 4 being the most robust and costly. The more 'mission critical' an application is, the more redundancy, robustness and security are required of the data center. A tier 1 data center does not necessarily need to have redundant power and cooling infrastructures. It only needs a lock for security and can tolerate up to 28.8 hours of downtime per year.
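The 28.8-hour figure follows directly from the commonly quoted tier availability ratings: a tier 1 facility is usually rated at about 99.671 per cent availability, and 0.329 per cent of a year is roughly 28.8 hours. A quick check (the ratings for the other tiers are the usual industry figures, stated here as an assumption):

```python
# Downtime per year implied by commonly quoted tier availability
# ratings. The percentages are the usual industry figures, assumed here.
HOURS_PER_YEAR = 24 * 365  # 8760

for tier, availability in [(1, 99.671), (2, 99.741), (3, 99.982), (4, 99.995)]:
    downtime_hours = HOURS_PER_YEAR * (100 - availability) / 100
    print(f"Tier {tier}: {downtime_hours:.1f} hours of downtime per year")
```

For tier 1 this works out to 8760 x 0.00329, which is approximately 28.8 hours, matching the figure above.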

Tuesday, March 10, 2009

Private Data Centre:

A private data center (also called an enterprise data center) is managed by the organization's own IT department, and it provides the applications, storage, Web-hosting and e-business functions needed to maintain full operations. If an organization prefers to outsource these IT functions, it turns to a public data center.

WHAT IS A DATA CENTRE?

A data center is a centralized repository for the storage, management and dissemination of data and information. Data centers can be defined as highly secure, fault-resistant facilities hosting customer equipment that connects to telecommunications networks. Often referred to as an Internet hotel, server farm, data farm, data warehouse, corporate data center, Internet service provider (ISP) or wireless application service provider (WASP), the purpose of a data center is to provide space and bandwidth connectivity for servers in a reliable, secure and scalable environment. These data centers are also referred to as public data centers because they are open to customers.

Change management:

Of course it is easy - and faster - to exchange a component on the server than to furnish numerous PCs with new program versions. To come back to our VAT example: it is quite easy to run the new version of a tax object in such a way that the clients automatically work with the version that is valid from the date on which it takes effect. It is, however, compulsory that interfaces remain stable and that old client versions are still compatible. In addition, such components require a high standard of quality control, because low-quality components can, at worst, endanger the functions of a whole set of client applications; at best, they will still irritate the systems operator.
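A minimal sketch of this idea: clients keep calling one stable interface while the server swaps in the date-appropriate component version behind it. The class names, dates and tax rates below are invented for illustration:

```python
# Stable interface, versioned implementations: clients always call
# calculate_vat(), while the server picks the version valid for the
# date. Dates and rates are hypothetical examples.
from datetime import date

class VatCalculator:
    """The stable interface that all client versions depend on."""
    def calculate_vat(self, net_amount: float) -> float:
        raise NotImplementedError

class Vat2008(VatCalculator):
    def calculate_vat(self, net_amount: float) -> float:
        return net_amount * 0.16

class Vat2009(VatCalculator):
    def calculate_vat(self, net_amount: float) -> float:
        return net_amount * 0.19

def current_vat_component(on: date) -> VatCalculator:
    # The server exchanges the component; old clients are unaffected
    # because the interface stays the same.
    return Vat2009() if on >= date(2009, 1, 1) else Vat2008()

print(current_vat_component(date.today()).calculate_vat(100.0))
```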

The advantages of 3-tier architecture:

As previously mentioned, 3-tier architecture solves a number of problems that are inherent to 2-tier architectures. Naturally it also causes new problems, but these are outweighed by the advantages.
Clear separation of user-interface control and data presentation from application logic: through this separation, more clients are able to have access to a wide variety of server applications. The two main advantages for client applications are clear: quicker development through the reuse of pre-built business-logic components, and a shorter test phase, because the server components have already been tested.
Dynamic load balancing: If bottlenecks in terms of performance occur, the server process can be moved to other servers at runtime.

Data-server-tier:

This tier is responsible for data storage. Besides the widespread relational database systems, existing legacy systems databases are often reused here.
It is important to note that the boundaries between tiers are logical. It is quite possible to run all three tiers on one and the same (physical) machine. What matters is that the system is neatly structured and that there is a well-planned definition of the software boundaries between the different tiers.

Monday, March 9, 2009

Application-server-tier:

This tier is new, i.e. it isn’t present in 2-tier architecture in this explicit form. Business-objects that implement the business rules "live" here, and are available to the client-tier. This level now forms the central key to solving 2-tier problems. This tier protects the data from direct access by the clients.
Object-oriented analysis (OOA), on which many books have been written, aims at this tier: it records and abstracts business processes into business objects. In this way it is possible to map out the application-server tier directly from the CASE tools that support OOA.

Client-tier:

It is responsible for the presentation of data, receiving user events and controlling the user interface. The actual business logic (e.g. calculating value-added tax) has been moved to an application server. Today, Java applets offer an alternative to traditionally written PC applications.
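To make the division of labour concrete, here is a toy sketch of the three tiers in a single file, in Python purely for brevity: the client tier only presents, the application-server tier holds the VAT business logic, and the data tier just stores records. All names and figures are invented for illustration:

```python
# Toy illustration of the three logical tiers in one file.
# All names and values are invented for illustration.

# Data-server tier: storage only.
PRICES = {"book": 20.0, "pen": 2.5}

# Application-server tier: the business logic lives here, not on
# the client (a hypothetical 19% VAT rate is assumed).
def price_with_vat(item: str, vat_rate: float = 0.19) -> float:
    return PRICES[item] * (1 + vat_rate)

# Client tier: presentation and user events only.
def show(item: str) -> None:
    print(f"{item}: {price_with_vat(item):.2f} (incl. VAT)")

show("book")
```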

What is 3-tier and n-tier architecture?

From here on we will only refer to 3-tier architecture, that is to say, at least 3-tier architecture.
A simplified reference-architecture diagram (not reproduced here) illustrates, in principle, all the possibilities.

Why 3-tier?

Unfortunately the 2-tier model shows striking weaknesses that make the development and maintenance of such applications much more expensive.
The complete development accumulates on the PC. The PC processes and presents information, which leads to monolithic applications that are expensive to maintain.
In a 2-tier architecture, business logic is implemented on the PC. Even though the business logic never makes direct use of the windowing system, programmers have to be trained for the complex API under Windows.
Windows 3.x and Mac systems have tough resource restrictions. For this reason application programmers also have to be well trained in systems technology, so that they can optimize scarce resources.

3-Tier and N-Tier Architecture

With the appearance of local area networks, PCs came out of their isolation and were soon being connected not only to each other but also to servers. Client/server computing was born.
Servers today are mainly file and database servers; application servers are the exception. However, database servers only offer data on the server; consequently, the application intelligence must be implemented on the PC (client). Since there are only the architecturally tiered data server and client, this is called 2-tier architecture. This model is still predominant today, and is actually the opposite of its popular terminal-based predecessor, which had its entire intelligence on the host system.

Network-Node Intrusion Detection

Network-node intrusion detection was developed to work around the inherent flaws in traditional NID. Network-node IDS pulls the packet-intercepting technology off the wire and puts it on the host. With NNID, the "packet-sniffer" is positioned so that it captures packets after they reach their final target, the destination host. The packet is then analyzed just as if it were traveling along the network through a conventional "packet-sniffer". This scheme came from a HID-centric assumption that each critical host would already be taking advantage of host-based technology. In this approach, network-node is simply another module that can attach to the HID agent. Network-node's major disadvantage is that it only evaluates packets addressed to the host on which it resides.

Hybrid Intrusion Detection

Hybrid intrusion detection systems offer management of and alert notification from both network and host-based intrusion detection devices. Hybrid solutions provide the logical complement to NID and HID - central intrusion detection management.

Host-based Intrusion Detection

Host-based intrusion detection systems are designed to monitor, detect, and respond to user and system activity and attacks on a given host. Some more robust tools also offer audit policy management and centralization, supply data forensics, statistical analysis and evidentiary support, and in certain instances provide some measure of access control. The difference between host-based and network-based intrusion detection is that NID deals with data transmitted from host to host while HID is concerned with what occurs on the hosts themselves.

Network Intrusion Detection

Network intrusion detection deals with information passing on the wire between hosts. Typically referred to as "packet-sniffers", network intrusion detection devices intercept packets traveling along various communication media and protocols, usually TCP/IP. Once captured, the packets are analyzed in a number of different ways. Some NID devices will simply compare the packet to a signature database consisting of known attacks and malicious packet "fingerprints", while others will look for anomalous packet activity that might indicate malicious behavior. In either case, network intrusion detection should be regarded primarily as a perimeter defense.
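As a toy version of the signature-comparison approach described above, the sketch below scans a captured payload against a small "fingerprint" database. The signatures are invented examples, and a real NID system captures packets from the wire rather than taking them as byte strings:

```python
# Toy signature matching: compare captured payload bytes against a
# database of known-malicious "fingerprints". Signatures are invented.
SIGNATURES = {
    b"/etc/passwd": "path traversal attempt",
    b"' OR '1'='1": "SQL injection probe",
}

def inspect(payload: bytes) -> list[str]:
    return [name for pattern, name in SIGNATURES.items() if pattern in payload]

packet = b"GET /../../etc/passwd HTTP/1.0"
for alert in inspect(packet):
    print("ALERT:", alert)
```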

IDS Components

The goal of intrusion detection is to monitor network assets to detect anomalous behavior and misuse. This concept has been around for nearly twenty years but only recently has it seen a dramatic rise in popularity and incorporation into the overall information security infrastructure.

Friday, March 6, 2009

Proxy server:

A proxy server is designed to restrict access to information on the Internet. If, for example, you do not want your users to have access to pornographic materials, a proxy server can be configured to refuse to pass the request along to the intended Internet server.
A proxy server operates on a list of rules given to it by a system administrator. Some proxy software uses a list of specific forbidden sites, while other proxy software examines the content of a page before it is served to the requester. If certain keywords are found in the requested page, access to it is denied by the proxy server.
Technologically, there’s no substantial difference between a caching server and a proxy server. The difference comes in the desired outcome of such a server’s use.
If you wish to reduce the overall amount of traffic exchanged between your network and the Internet, a caching server may be your best bet. On the other hand, if you wish to restrict or prohibit the flow of certain types of information to your network, a proxy server will allow you to do that. There are several different packages that will allow a System Administrator to set up a caching or proxy server. Additionally, you can buy any of a number of turn-key solutions to provide these services.
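A minimal sketch of the two rule styles just described: a forbidden-site list checked against the request, and a keyword check on the returned page. The site and keyword lists are placeholder examples:

```python
# Toy proxy rules: block listed sites outright, and refuse pages whose
# content contains forbidden keywords. Lists are placeholder examples.
FORBIDDEN_SITES = {"badsite.example.com"}
FORBIDDEN_KEYWORDS = {"forbidden-word"}

def allow_request(host: str) -> bool:
    # Rule style 1: refuse to pass requests for forbidden sites.
    return host not in FORBIDDEN_SITES

def allow_content(page_text: str) -> bool:
    # Rule style 2: examine page content before serving it.
    lowered = page_text.lower()
    return not any(word in lowered for word in FORBIDDEN_KEYWORDS)

print(allow_request("badsite.example.com"))  # False: request refused
print(allow_content("An ordinary page."))    # True: page served
```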

Caching server:

A caching server is employed when you want to restrict your number of accesses to the Internet. There are many reasons to consider doing this. Basically, a caching server sits between the client computer and the server that would normally fulfill a client’s request. Once the client’s request is sent, it is intercepted by the caching server. The caching server maintains a library of files that have been requested in the recent past by users on the network. If the caching server has the requested information in its cache, the server returns the information without going out to the Internet.
Storing often-used information locally is a good way to reduce overall traffic to and from the Internet. A caching server does not restrict information flow. Instead, it makes a copy of requested information, so that frequently requested items can be served locally, instead of from the original Internet source. Caching servers can also be connected in a hierarchy so if the local cache does not have the information, it can pass the request to nearby caching servers that might also contain the desired files.
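The core of the caching behaviour described above fits in a few lines: intercept the request, serve from the local library on a hit, and fetch and store on a miss. The fetch function here is a hypothetical stand-in for a real request to the origin server:

```python
# Minimal caching-server logic: serve from the local cache when
# possible, otherwise fetch from the Internet and keep a copy.
# fetch_from_origin() is a hypothetical stand-in.
cache: dict[str, bytes] = {}

def fetch_from_origin(url: str) -> bytes:
    print(f"going out to the Internet for {url}")
    return b"...page contents..."

def handle_request(url: str) -> bytes:
    if url not in cache:             # cache miss: fetch and store
        cache[url] = fetch_from_origin(url)
    return cache[url]                # cache hit: served locally

handle_request("http://www.example.com/")  # fetched once
handle_request("http://www.example.com/")  # served from the cache
```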

Chat server:

Some organizations choose to run a server that allows multiple users to have "real-time" discussions, called "chats", on the Internet. Some chat groups are moderated; most, however, are unmoderated public discussions. Further, most chat servers allow the creation of "private" chat rooms where participants can "meet" for private discussions. You can participate in chats on other servers without running a chat server yourself.
The popularity of chat rooms has grown dramatically over the past few years on the Internet; however, the ability to talk in small groups on the Internet is not new. "Chat" is a graphical form of an Internet service called IRC, or Internet Relay Chat. IRC was a replacement for a UNIX command called "talk". Using talk, and even IRC, can be cumbersome. Chat clients, on the other hand, are available for all platforms and are graphical in nature, opening up their utility to the majority of Internet users.

News server:

Usenet News is a worldwide discussion system consisting of thousands of newsgroups organized into hierarchies by subject. Users read and post articles to these newsgroups using client software. The "news" is held for distribution and access on the news server. Because newsgroups tend to generate large amounts of Internet traffic, you may wish to consider the method by which you intend to receive Usenet news.
There are two ways to accept Usenet News: as a "push" or "pull" feed. With a "push" feed, news articles are "pushed" onto your news server, whether or not your users read those articles. With a "pull" feed, your news server has all of the headers for the collection of Usenet News articles, but does not retrieve the article itself unless it is specifically requested by a user.

FTP server:

File Transfer Protocol (FTP) is an Internet-wide standard for the distribution of files from one computer to another. The computer that stores files and makes them available to others is the server. Client software is used to retrieve the files from the server. The two most common ways to transfer files are anonymous FTP, where anyone can retrieve files from or place files on a specific site, and logged file transfers, where an individual must log in to the FTP server with an ID and password.
For example, Merit Network, Inc. makes network configuration files such as Domain Name Registration templates available for anonymous FTP on ftp.merit.edu.
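Anonymous retrieval of this kind can be sketched with Python's standard ftplib module. The sketch below logs in anonymously to the server named in the text and lists a directory; whether that server still answers, and what its directory layout is, are assumptions for illustration:

```python
# Anonymous FTP sketch using Python's standard ftplib.
# The server is the one named in the text; the directory layout
# is an assumption for illustration.
from ftplib import FTP

with FTP("ftp.merit.edu") as ftp:
    ftp.login()          # no ID/password: anonymous FTP
    ftp.cwd("/")         # change to the top-level directory
    for name in ftp.nlst():
        print(name)      # list the files available for retrieval
```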

Thursday, March 5, 2009

Web server:

The World Wide Web (WWW) is a very popular Internet source of information. Web browsers present information to the user in hypertext format. When the user selects a word or phrase that a Web page’s author has established as a hypertext link, the Web browser queries another Web server or file to move to another Web page related to the link. For example, "Netscape" is a Web browser which queries Web servers on the Internet. Which Web server Netscape queries depends upon which hypertext link the user selects.

Gopher server:

Gopher is an Internet application that uses multiple Gopher servers to locate images, applications, and files stored on various servers on the Internet. Gopher offers menu choices to prompt users for information that interests them, and then establishes the necessary network connections to obtain the resource. For example, "Veronica" is a Gopher application that searches databases of the file contents of worldwide Gopher servers to help locate Gopher resources.

DNS server:

Domain Name Service is an Internet-wide distributed database system that documents and distributes network-specific information, such as the associated IP address for
a host name, and vice versa. The host that stores this database is a name server. The library routines that query the name server, interpret the response and return the information to the program that requested it are resolvers.
For example: To determine the location of a remote computer on the Internet, communications software applications (such as NCSA Telnet) use resolver library routines to query DNS for the remote computer's IP address.
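The forward and reverse lookups described above map directly onto the resolver routines exposed by Python's socket module; the host name below is a placeholder example:

```python
# Forward and reverse DNS lookups via the system's resolver routines.
# The host name is a placeholder example.
import socket

ip_address = socket.gethostbyname("www.example.com")  # name -> IP address
print(ip_address)

# IP address -> name; raises socket.herror if no reverse record exists.
host, aliases, addresses = socket.gethostbyaddr(ip_address)
print(host)
```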

Mail server:

A mail server is the most efficient way to receive and store electronic mail messages for a community of users. A central mail server runs 24 hours a day. The mail server can also provide a global email directory for all community and school users, as well as email gateway and relay services for all other mail servers in the district. In such a scenario, user email boxes would be maintained on a separate email server located at each school.

File server:

A file server, one of the simplest servers, manages requests from clients for files stored on the server’s local disk. A central file server permits groups and users to share and access data in multiple ways. Central file servers are backed up regularly and administrators may implement disk space quotas for each user or group of users.
For example: Using a certain software client, PC users are able to "mount" remote UNIX server file systems. As a result, the remote network file system appears to be a local hard drive on the user's PC.

Wednesday, March 4, 2009

What Does Transaction Server Do?

To understand what MTS is and what it does, we need to first make one very important point clear. This software should really have been named Microsoft Component Server, not Microsoft Transaction Server. MTS is all about managing the way applications use components, and not just about managing transactions. Yes, transactions are a big part of many applications we write and MTS can help to manage these—but MTS also provides a very useful service for applications that don’t use transactions at all. To be able to define MTS accurately, we first need to understand what goes on inside it in the most basic way.

Transaction Servers:

MTS or Microsoft Transaction Server is an integral part of Windows NT, and is installed by default as part of the operating system in NT5. It is a service in much the same way as Internet Information Server or the File and Print services that we now take for granted. In other words, it is part of the system that is available in the background whenever one of our applications requires it.
Control and configuration of MTS is via either a snap-in to the Microsoft Management Console, or through the HTML administration pages that are included with MTS. This is very similar to the interface provided for Internet Information Server 4, and gives an integrated management function that is useful when building and setting up distributed applications.

Print Servers:

Print servers provide shared access to printers. Most LAN operating systems provide print service. Print service can run on a file server or on one or more separate print server machines. Non-file server print servers can be dedicated to the task of print service, or they can be non-dedicated workstations.
The disadvantages of using workstations as print servers are similar to the disadvantages of using file servers as workstations. The workstation may run a little slower when print services are being used, a user could shut down the server without warning, or an application could lock up the server. The consequences of a lock-up or shut-down on a print server, however, are usually less severe than the consequences of locking up a file server. The time involved in dealing with these problems, however, can be costly.

Active Application Server:

This type of server supports and provides a rich environment for server-side logic expressed as objects, rules and components. These types of servers are most suitable for dealing with Web-based e-commerce and decision processing.

Component Servers:

The main purpose of these servers is to provide database access and transaction processing services to software components, including DLLs, CORBA components and JavaBeans. First, they provide an environment for server-side components. Second, they provide the components with access to database and other services. These types of servers are stateless. Examples include MTS (which provides an interface for DLLs), Sybase Jaguar and IBM Component Broker.

Tuesday, March 3, 2009

Web Information Servers:

This type of server employs HTML templates and scripts to generate pages that incorporate values from the database. These are stateless servers. Such servers include Netscape Server, HAHT, Allaire, Sybase and SilverStream.
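The template idea can be illustrated in a few lines: an HTML template with placeholders is filled with values retrieved from the database for each request. The template and the row of values below are invented examples:

```python
# An HTML template filled with values "from the database" per request.
# The template and the row of values are invented examples.
from string import Template

PAGE = Template(
    "<html><body><h1>Hello, $name</h1>"
    "<p>Balance: $balance</p></body></html>"
)

row = {"name": "A. Customer", "balance": "1,250.00"}  # stand-in for a DB row
print(PAGE.substitute(row))
```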

Management Console:

A single-point graphical management console for remotely monitoring clients and server clusters.

Load Balancing:

The capability to send requests to different servers depending on the load and availability of each server.
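A toy version of this dispatch rule: send each request to the least-loaded server that is currently available. The server names and load figures are invented:

```python
# Toy load balancing: route each request to the least-loaded
# available server. Names and load figures are invented.
servers = [
    {"name": "app1", "load": 0.72, "available": True},
    {"name": "app2", "load": 0.35, "available": True},
    {"name": "app3", "load": 0.10, "available": False},  # down
]

def pick_server(pool):
    candidates = [s for s in pool if s["available"]]
    return min(candidates, key=lambda s: s["load"])

print(pick_server(servers)["name"])  # -> app2
```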

Fault Tolerance:

The ability of the application server to run with no single point of failure, with defined policies for recovery and fail-over in case of the failure of one object or a group of objects.

Component Management:

Provides the manager with tools for handling all the components and runtime services, such as session management and synchronous/asynchronous client notifications, as well as for executing server business logic.

Monday, March 2, 2009

Application Servers

An application server is a server program that resides in the server (computer) and provides the business logic for the application program. The server can be part of the network, more precisely part of a distributed network. The server program provides its services to client programs that reside either on the same computer or on other computers connected through the network.
Application servers are mainly used in Web-based applications that have a 3-tier architecture.

Database Servers

Database management systems (DBMS) can be divided into three primary components: development tools, user interface, and database engine. The database engine does all the selecting, sorting and updating. Currently, most DBMS combine the interface and engine on each user's computer. Database servers split these two functions, allowing the user interface software to run on each user's PC (the client), and running the database engine on a separate machine (the database server) shared by all users. This approach can increase database performance as well as overall LAN performance, because only selected records are transmitted to the user's PC, not large blocks of files. However, because the database engine must handle multiple requests, the database server itself can become a bottleneck when a large number of requests are pending.

Network

The network hardware is the cabling, the communication cords, and the devices that link the server and the clients. The communication and data flow over the network is managed and maintained by network software. Network technology is not well understood by business people and end users, since it involves wiring in the wall and junction boxes that are usually in a closet.

Fat-client or Fat-server

Fat-client and fat-server are popular terms in the computer literature. These terms serve as vivid descriptions of the type of client/server system in place. In a fat-client system, more of the processing takes place on the client, as with a file server or database server. Fat-servers place more emphasis on the server and try to minimize the processing done by clients. Examples of fat-servers are transaction, groupware and Web servers. It is also common to hear fat-clients referred to as "2-tier" systems and fat-servers as "3-tier" systems.

Middleware

The network system implemented within client/server technology is commonly referred to by the computer industry as middleware. Middleware is all the distributed software needed to allow clients and servers to interact. General middleware allows for communication, directory services, queuing, distributed file sharing, and printing; service-specific middleware includes software such as ODBC. The middleware is typically composed of four layers: Service, Back-end Processing, Network Operating System, and Transport Stacks. The Service layer carries coded instructions and data from software applications to the Back-end Processing layer, which encapsulates network-routing instructions. Next, the Network Operating System adds additional instructions to ensure that the Transport layer transfers data packets to their designated receivers efficiently and correctly. During the early stages of middleware development, the transfer method was both slow and unreliable.

Sunday, March 1, 2009

Server

Servers await requests from clients and regulate access to shared resources. File servers make it possible to share files across a network by maintaining a shared library of documents, data, and images. Database servers use their processing power to execute Structured Query Language (SQL) requests from clients. Transaction servers execute a series of SQL commands as an online transaction-processing (OLTP) program, as opposed to database servers, which respond to a single client command. Web servers allow clients and servers to communicate with a universal language called HTTP.

Client

Clients, which are typically PCs, are the "users" of the services offered by the servers described above. There are basically three types of clients. Non-Graphical User Interface (non-GUI) clients require a minimum amount of human interaction; non-GUI clients include ATMs, cell phones, fax machines, and robots. Second, GUI clients are human-interaction models usually involving object/action models like the pull-down menus in Windows 3.x. Object-Oriented User Interface (OOUI) clients take GUI clients even further, with expanded visual formats, multiple workplaces, and object interaction rather than application interaction. Windows 95 is a common OOUI client.

Characteristics of Client / Server Technology

The following characteristics reflect the key features of a client/server system:
1. Client/server architecture consists of a client process and a server process that can be distinguished from each other.
2. The client portion and the server portions can operate on separate computer platforms.
3. Either the client platform or the server platform can be upgraded without having to upgrade the other platform.
4. The server is able to service multiple clients concurrently; in some client/server systems, clients can access multiple servers.
5. The client/server system includes some sort of networking capability.
6. A significant portion of the application logic resides at the client end.

Benefits of the Client /Server Technology

Client/server systems have been hailed as bringing tremendous benefits to new users, especially users of mainframe systems. Consequently, many businesses are currently in the process of changing, or will change in the near future, from mainframe (or PC) systems to client/server systems. Client/server has become the IT solution of choice among the country's largest corporations. In fact, the whole transition process that a change to client/server invokes can benefit a company's long-run strategy.
• People in the field of information systems can use client/server computing to make their jobs easier.
• Reduced total cost of ownership.
• Increased productivity.
• End-user productivity.

Implementation examples of client / server technology

• Online banking applications
• Internal call centre applications
• Applications for end users that are stored on the server
• E-commerce online shopping pages
• Intranet applications
• Financial and inventory applications based on client/server technology
• Telecommunication based on Internet technologies