Monday, December 2, 2013

LVM (Logical Volume Management)


  With LVM, we can create logical partitions that span one or more physical hard drives. First, the hard drives (or partitions) are initialized as physical volumes; those physical volumes are then combined into a volume group, and finally the logical volumes are created from the volume group.

How to Create LVM

fdisk /dev/sda

n                               (create 3 new partitions, e.g. /dev/sda8, /dev/sda9 and /dev/sda10)

t                               (change the partition type ID of each new partition to 8e, Linux LVM)

w                               (write the partition table and exit)

partprobe /dev/sda              (make the kernel re-read the new partition table)

pvcreate /dev/sda{8,9,10}                        (To create the physical volumes)
pvdisplay                                        (To get details of the physical volumes)
vgcreate vg00 /dev/sda{8,9,10}                   (To create the volume group)

vgdisplay                                        (To get details of the volume group)

lvcreate -L 100M -n lv01 vg00                    (To create the 1st logical volume)
lvcreate -L 100M -n lv02 vg00                    (To create the 2nd logical volume)
lvdisplay                                        (To get details of the logical volumes)

mke2fs -j /dev/vg00/lv01                         (To format the 1st logical volume as ext3)
mke2fs -j /dev/vg00/lv02                         (To format the 2nd logical volume as ext3)

mkdir xyz

mkdir xyz1

mount /dev/vg00/lv01 xyz                            (To mount 1st logical volume)   
mount /dev/vg00/lv02 xyz1                          (To mount 2nd logical volume)

vi /etc/fstab                                    (add entries so the volumes are mounted at boot; see the sample below)

mount -a                                         (mount everything listed in /etc/fstab to verify the entries)
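
A sample pair of /etc/fstab entries might look like the lines below. The mount points are the xyz and xyz1 directories created above, shown here as assumed absolute paths under /root; adjust them to wherever you actually created the directories:

/dev/vg00/lv01    /root/xyz     ext3    defaults    0 0
/dev/vg00/lv02    /root/xyz1    ext3    defaults    0 0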



Expand and Reduce LV

lvextend -L +50M /dev/vg00/lv01                  (To grow the LV by 50 MB)
lvdisplay /dev/vg00/lv01                         (To verify the new size)
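
Growing the LV does not grow the filesystem on it, and the heading also promises a reduce step. A minimal sketch of both, assuming the ext3 filesystem created with mke2fs -j above and the xyz mount point; the 100M target size is purely illustrative. Shrinking must be done with the volume unmounted, filesystem first, or data will be lost:

resize2fs /dev/vg00/lv01                         (grow the ext3 filesystem to fill the extended LV)

umount xyz                                       (unmount before shrinking)
e2fsck -f /dev/vg00/lv01                         (force a filesystem check)
resize2fs /dev/vg00/lv01 100M                    (shrink the filesystem to the target size first)
lvreduce -L 100M /dev/vg00/lv01                  (then shrink the LV to match)
mount /dev/vg00/lv01 xyz                         (remount)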



Remove LV, VG and PV partitions


umount xyz
umount xyz1
lvremove /dev/vg00/lv01
lvremove /dev/vg00/lv02
vgremove /dev/vg00
pvremove /dev/sda{8,9,10}
vi /etc/fstab                                   (To delete the fstab entries)

fdisk /dev/sda                                  (press d to delete the partitions, then w to save)

Step by Step Configuration of YUM Server

How to Configure YUM

Most beginners in Linux face a big problem when deploying a package: dependency failure. To resolve this issue, Red Hat introduced a technology called YUM.

YUM is a repository system that contains information about the packages and the files inside every RPM. Now don't worry about the Red Hat repository. Create your own YUM repository on a local system and enjoy.

Prerequisites of the YUM server

I assume that you have completed these prerequisites of the YUM server:

1st) A Linux system with hostname Server.example.com and with an IP address of 192.168.0.254

2nd) A dump of the RHEL disc at the /var/ftp/pub location

Once you have completed these prerequisites, follow this guide. Change directory to /var/ftp/pub/Server.

The yum and createrepo RPMs are required for the YUM server; install them.

Now install createrepo with the rpm command.
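
A minimal sketch of the install, assuming the createrepo RPM sits in the current directory (/var/ftp/pub/Server) and that the exact version matches whatever is in your RHEL dump:

rpm -ivh createrepo-*.rpm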

After installing the necessary package, change directory to /var/ftp/pub.

Now create the repository for the Server directory.
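
A sketch of the command, assuming the dump layout from the prerequisites:

createrepo -v /var/ftp/pub/Server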

The repository metadata for all the RPMs will be created in a few minutes.

Now create the repository for the VT (Virtualization) directory.
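
Same idea for the VT directory, assuming it also sits under the FTP root:

createrepo -v /var/ftp/pub/VT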

Wednesday, March 20, 2013

Linux log files location



Almost all log files are located under the /var/log directory (and its subdirectories). You can change to this directory using the cd command, but you need to be the root user. You can use the less, more, cat or tail commands to view the logs.


Go to the /var/log directory:

# cd /var/log

View the common log file /var/log/messages using any one of the following commands:



# tail -f /var/log/messages
# less /var/log/messages
# more -f /var/log/messages
# vi /var/log/messages


Output:

Jan 17 22:04:25 router  dnsprobe[276]: dns query failed
Jan 17 22:04:29 router last message repeated 2 times
Jan 17 22:04:29 router  dnsprobe[276]: Primary DNS server Is Down... Switching To Secondary DNS server
Jan 17 22:05:08 router  dnsprobe[276]: Switching Back To Primary DNS server
Jan 17 22:26:11 debian -- MARK --
Jan 17 22:46:11 debian -- MARK --
Jan 17 22:47:36 router  -- MARK --
Jan 17 22:47:36 router  dnsprobe[276]: dns query failed
Jan 17 22:47:38  debian kernel: rtc: lost some interrupts at 1024Hz.
Jan 17 22:47:39  debian kernel: IN=eth0 OUT= MAC=00:0f:ea:91:04:07:00:08:5c:00:00:01:08:00 SRC=61.4.218.24 DST=192.168.1.100 LEN=60 TOS=0x00 PREC=0x00 TTL=46 ID=21599 DF PROTO=TCP SPT=59297 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0


Common Linux log files name and usage

    /var/log/messages: General messages and system-related stuff
    /var/log/auth.log: Authentication logs
    /var/log/kern.log: Kernel logs
    /var/log/cron.log: Crond logs (cron job)
    /var/log/maillog: Mail server logs
    /var/log/qmail/ : Qmail log directory (more files inside this directory)
    /var/log/httpd/: Apache access and error logs directory
    /var/log/lighttpd: Lighttpd access and error logs directory
    /var/log/boot.log : System boot log
    /var/log/mysqld.log: MySQL database server log file
    /var/log/secure: Authentication log
    /var/log/utmp or /var/log/wtmp : Login records file
    /var/log/yum.log: Yum log files


In short, /var/log is the location where you should find all Linux log files. However, some applications such as httpd have a directory within /var/log/ for their own log files. You can rotate log files using the logrotate software and monitor log files using the logwatch software.
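
For example, a minimal logrotate stanza might look like the sketch below. The log path and rotation policy here are illustrative assumptions; real stanzas normally live under /etc/logrotate.d/:

/var/log/myapp.log {
    weekly                (rotate once a week)
    rotate 4              (keep four old copies)
    compress              (gzip the rotated logs)
    missingok             (don't complain if the log is absent)
    notifempty            (skip rotation when the log is empty)
}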

Tuesday, March 19, 2013

NFS Server


If you have two or more machines, you may wish to share disk space or devices, such as a CD drive, between them. For this there is the Network File System (NFS), the easiest way of sharing files and resources across a network.


NFS is a way of working with files on a remote machine as if they were on your local one. NFS is the de facto standard solution for accessing the same $HOME directories from any machine on the Net. Using it is also a good way to keep duplicated information from eating large amounts of disk space.

How to make NFS

The system should be set up as a YUM server or YUM client, so the packages below can be installed.

yum install nfs* -y                                    (package install)

yum install portmap* -y                           (package install)

mkdir /Data_Share                                  (make a directory to share)

vim /Data_Share/file1                              (create a test file)

vim /etc/exports
(Entry)
/Data_Share   *(rw,sync)
:wq!

service nfs restart                                  (Restart the Service)

service portmap restart                         (Restart the Service)

chkconfig nfs on

chkconfig portmap on
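
To publish and verify the export without restarting again, these standard commands can be used on the server (a quick sketch):

exportfs -r                                  (re-export everything listed in /etc/exports)
showmount -e localhost                       (list the exported directories)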



Configuration on Client Side   

mount -t nfs 192.168.1.1:/Data_Share  /mnt     (Considering the server IP as 192.168.1.1)

vim /etc/fstab                                             (To make Permanent Entry)

192.168.1.1:/Data_Share /mnt nfs defaults 0 0
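
After adding the entry, a quick way to test it (a sketch):

mount -a                                     (mount everything listed in /etc/fstab)
df -h /mnt                                   (confirm the NFS share is mounted)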

Saturday, March 16, 2013

Computer Communication Network Assignment


Computer Communication Network
(MCA)
Assignment A

1.a ) Explain Ethernet MAC sub layer protocol with frame formats and also analyze its performance.

Ans- The medium access control (MAC) mechanisms of these protocols are responsible for satisfying both the time-critical/real-time response requirement over the network and the quality and reliability of communication between devices on the network. The timing parameters, which will ultimately influence control applications, are affected by the network data rate, the period of messages, the data or message size of the information, and the communication protocol.

For each protocol, we studied the key performance parameters of the corresponding network when used in a control situation, including network utilization, magnitude of the expected time delay, and characteristics of time delays. Simulation results were presented for several different scenarios. The timing analyses and comparisons of message time delay given in this article will be useful for anyone designing a networked control system. The evaluation study described here is based on the characteristics and requirements of control systems. The characteristics are small data sizes and periodic messages, and the requirements include guaranteed transmission and bounded time delay. The suitability of network protocols for use in control systems is greatly influenced by these two criteria.

Although Ethernet has seen widespread use in many data transmission applications and can support high data rates up to 1 Gbps, it may not be suitable as the communication medium for some control systems when compared with deterministic network systems. However, because of its high data rate, Ethernet can be used for aperiodic/non-time-critical and large-data-size communication, such as communication between workstations or machine cells. For intra-workstation cell communication with controllers, sensors, and actuators, deterministic network systems are generally more suitable for meeting the characteristics and requirements of control systems. For control systems with short and/or prioritized messages, DeviceNet demonstrates better performance. The scheduled and unscheduled messaging capabilities in ControlNet make it suitable for time-critical and non-time-critical messages. ControlNet is also suitable for large-data-size message transmission.

Our future efforts will focus on controller design for networked control systems, which can differ significantly from the design of traditional centralized control systems. This work will include conducting experimental studies of control networks for control applications. We plan to use this analysis, along with the performance evaluation of candidate control networks presented here, as the basis for future message scheduling and control algorithm designs for networked control systems.


b) Given an IP address 192.168.0.5
i. What is the binary equivalent of second octet?
ii. What is the class of this address?
iii. What is the network address of the given IP address?

Ans i. 
  The first, and probably most important step, is to put down this row of values:

128    64    32    16    8    4    2    1
In order to remember these values, start with the number 1, go from right to left, and double that number seven times. For example, start with 1 on the right side. For your next number, double the 1 (1 x 2 = 2). So 2 is your next number (remembering to go from right to left). For your third number, double the 2 (2 x 2 = 4); to continue the sequence, double the 4 (4 x 2 = 8). Repeat this process until you've doubled your original number seven times. The key is that every single one of the values in that row is going to have either a 1 or a 0 assigned to it. To convert the IP address we take that string of values and work from left to right this time. For each value we ask this question: "Can I subtract this value from the decimal remaining?" If the answer is "NO" then you put a "0" under the binary value, and if the answer is "YES" then you put a "1" there.

 We take the IP address 192.168.0.5; the second octet is 168, and its binary equivalent is found as follows.
 Question: Can I subtract 128 from 168? Answer: YES. So we assign 1 to 128. That leaves a remainder of 40 (168 - 128 = 40).

128    64    32    16    8    4    2    1
1

a. Question: Can I subtract 64 from 40? Answer: NO. So we assign 0 to 64.  
128    64    32    16    8    4    2    1
1         0

b. Question: Can I subtract 32 from 40? Answer: YES. So we assign 1 to 32.  

128    64    32    16    8    4    2    1
1         0      1

That will give us a remainder of 8. (40-32=8).

c. Question: Can I subtract 16 from 8? Answer: NO. So we assign 0 to 16.  


128    64    32    16    8    4    2    1
1         0      1       0

d. Question: Can I subtract 8 from 8? Answer: YES. So we assign 1 to 8.

128    64    32    16    8    4    2    1
1         0      1       0     1

e. That will give us a remainder of 0. So for the rest of the values in our row, we can assign 0.  

128    64    32    16    8    4    2    1
1         0      1       0     1    0    0    0

f. So now we know that a decimal number 168 is 10101000 converted to binary form. To double check, we take the values assigned with 1 and add them together: 
128+32+8=168 
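
As a quick cross-check from a Linux shell, bc can do the same conversion (assuming bc is installed):

echo 'obase=2; 168' | bc                     (prints 10101000)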

Ans ii. Class C (the first octet, 192, is in the range 192-223).
Ans iii. The network address is 192.168.0.0 (applying the default class C mask 255.255.255.0).

2. Explain the various classifications of Computer Networks based on size, topology etc. Compare and contrast the advantages and disadvantages in each classification.

 A "computer network" means a collection of autonomous computers interconnected by a single technology. Two computers are said to be interconnected if they are able to exchange information. The old model of a single computer serving all of the organization's computational needs has been replaced by one in which a large number of separate but interconnected computers do the job. These systems are called computer networks.

Network topology is defined as the logical connection of the various computers in the network.
The six basic network topologies are: bus, ring, star, tree, mesh and hybrid.


1-Bus Topology:
In bus topology, all the computers are connected to a long cable called a bus. A node that wants to send data puts the data on the bus, which carries it to the destination node. In this topology any computer can send data over the bus at any time, since the bus is shared among all the computers. When two or more computers want to send at the same time, an arbitration mechanism is needed to prevent simultaneous access to the bus.
Bus Topology

Advantages  of  Bus Topology

# It is easy to set-up and extend bus network.
# Cable length required for this topology is the least compared to other networks.
# Linear Bus network is mostly used in small networks. Good for LAN.
Disadvantages of Bus Topology

# There is a limit on central cable length and number of nodes that can be connected.
# Dependency on central cable in this topology has its disadvantages. If the main cable (i.e. bus ) encounters some problem, whole network breaks down.
# It is difficult to detect and troubleshoot fault at individual station.
# Security is very low because all the computers receive the sent signal from the source

2-Ring Topology: In ring topology, all the nodes are connected to each other in such a way that they make a closed loop. Each workstation is connected to two other components on either side, and it communicates with these two adjacent neighbors. Data travels around the network in one direction. Sending and receiving of data takes place with the help of a TOKEN.

Token passing is a process where the token contains a piece of information which is sent along with the data by the source computer. The token then passes to the next node, which checks if the signal is intended for it. If yes, it receives it and passes an empty token into the network; otherwise it passes the token along with the data to the next node. This process continues until the signal reaches its intended destination.
Only the node holding the token is allowed to send data. Other nodes have to wait for an empty token to reach them. This type of network is usually found in offices, schools and small buildings.

Advantages of Ring Topology

# Token ring technology reduces the need for a server or central hub to manage the workstations
# High performance delivered
# All nodes have an equal opportunity to transmit the data

Disadvantages of Ring Topology

# Each packet of data must pass through all the computers between source and destination. This makes it slower than Star topology.
# If one workstation or port goes down, the entire network gets affected.
# Network is highly dependent on the wire which connects different components.
# MAUs and network cards are expensive compared to Ethernet cards and hubs

3-Star Topology:

In star topology, all the components of the network are connected to a central device, which may be a hub, a router or a switch. Unlike bus topology (discussed earlier), where nodes were connected to a central cable, here all the workstations are connected to the central device with a point-to-point connection. So it can be said that every computer is indirectly connected to every other node with the help of the central device.

All the data in the star topology passes through the central device before reaching the intended destination. The hub acts as a junction to connect the different nodes present in the star network, and at the same time it manages and controls the whole network. Depending on which central device is used, the "hub" can act as a repeater or signal booster. The central device can also communicate with the hubs of other networks. Unshielded Twisted Pair (UTP) Ethernet cable is used to connect workstations to the central node.

Advantages of Star Topology

# Easy to install and implement
# Faulty nodes can be easily removed without affecting the rest of the network
# Gives better performance, as messages don't pass through various nodes unlike in bus topology

Disadvantages of Star Topology
# If more nodes are to be added then more cable would be required, and this would increase the cost
# If the central hub fails then the whole network is disrupted.
# Data transfer and capability depends on capacity of the central hub.

4-Tree Topology: 

Among all the Network Topologies we can derive that the Tree Topology is a combination of the bus and the Star Topology. The tree like structure allows you to have many servers on the network and you can branch out the network in many ways. This is particularly helpful for colleges, universities and schools so that each of the branches can identify the relevant systems in their own network and yet connect to the big network in some way.
A Tree Structure suits best when the network is widely spread and vastly divided into many branches. Like any other topologies, the Tree Topology has its advantages and disadvantages. A Tree Network may not suit small networks and it may be a waste of cable to use it for small networks. Tree Topology has some limitations and the configuration should suit those limitations.
Tree topology integrates the characteristics of star and bus topology. Earlier we saw how in physical star network topology, computers (nodes) are connected to each other through a central hub. And we also saw how in bus topology, workstation devices are connected by a common cable called a bus. After understanding these two network configurations, we can understand tree topology better. In tree topology, a number of star networks are connected using a bus. This main cable looks like the main stem of a tree, with the star networks as its branches.

Advantages of a Tree Topology

# Point-to-point wiring for individual segments.
# Supported by several hardware and software vendors.

 Disadvantages of a Tree Topology

# Overall length of each segment is limited by the type of cabling used.
# If the backbone line breaks, the entire segment goes down.
# More difficult to configure and wire than other topologies.

5-Mesh Topology:

A mesh network is a network where all the nodes are connected to each other and is a complete network. In a mesh network every node is connected to the other nodes on the network through hops. Some are connected through single hops and some may be connected with more than one hop.

While the data is traveling on the Mesh Network it is automatically configured to reach the destination by taking the shortest route which means the least number of hops. Data travels by hopping from one node to another and then reaches the destination node in a Mesh Topology Network.
An example of a mesh network is the Mobile Ad-hoc Network, or MANet. The entire mesh network is continuously connected. Being completely connected does not mean that a mesh network is dependent on each and every node of the network. Even if one node fails, the mesh network finds an alternate route to transfer the data. This is called self-healing technology: data reaches its destination one way or the other.
The Mesh Network is based on a very sensible concept and has lesser chances of a network breakdown. There are so many possible combinations of routes and hops a data transfer can take that it will reach the destination one way or the other. It is highly unlikely that all the nodes in a single Mesh Network will break down at any given point of time.

Advantages of a Mesh Topology

# Eliminates the traffic problems of shared links.
# If one link becomes unusable, it does not incapacitate the entire system; this makes the topology robust.
# It has privacy and security.
# Point-to-point links make fault identification and fault isolation easy.
Disadvantages of a Mesh Topology

# Installation and reconnection are difficult.
# The hardware required to connect each link (I/O ports and cable) is expensive.
# It is generally too costly and complex for practical networks.

6-Hybrid Topology:

Hybrid, as the name suggests, is a mixture of two different things. Similarly, in this type of topology we integrate two or more different topologies to form a resultant topology which has the good points (as well as the weaknesses) of all the constituent basic topologies, rather than the characteristics of one specific topology. This combination of topologies is done according to the requirements of the organization.
Hybrid topology

For example, if there exists a ring topology in one office department and a bus topology in another, connecting these two will result in a hybrid topology. Remember that connecting two similar topologies cannot be termed a hybrid topology. Star-ring and star-bus networks are the most common examples of hybrid networks.

Let's see the benefits and drawbacks of this networking architecture.

Advantages of Hybrid Network Topology

# Reliable: Unlike other networks, fault detection and troubleshooting are easy in this type of topology. The part in which a fault is detected can be isolated from the rest of the network and the required corrective measures can be taken, WITHOUT affecting the functioning of the rest of the network.
# Scalable: It's easy to increase the size of the network by adding new components, without disturbing the existing architecture.
# Flexible: A hybrid network can be designed according to the requirements of the organization and by optimizing the available resources. Special care can be given to nodes where traffic is high as well as where the chances of fault are high.
# Effective: Hybrid topology is the combination of two or more topologies, so we can design it in such a way that the strengths of the constituent topologies are maximized while their weaknesses are neutralized. For example, we saw that ring topology has good data reliability (achieved by the use of tokens) and star topology has high fault tolerance (as each node is connected to the others not directly but through a central device), so these two can be used effectively in a hybrid star-ring topology.

Disadvantages of Hybrid Topology

# Complexity of Design: One of the biggest drawbacks of hybrid topology is its design. It is not easy to design this type of architecture and it is a tough job for designers. The configuration and installation process needs to be very efficient.
# Costly Hub: The hubs used to connect two distinct networks are very expensive. These hubs are different from usual hubs, as they need to be intelligent enough to work with different architectures and should function even if a part of the network is down.
# Costly Infrastructure: As hybrid architectures are usually larger in scale, they require a lot of cables, cooling systems, sophisticated network devices, etc.

3. What is the difference between asynchronous and synchronous transmission?

Ans- Data transfers occur in bursts of information, each composed of a certain number of bits. In order for a receiver to make sense of the data, it must know when to start and when to stop reading each burst. Synchronous and asynchronous transfers represent different methods of addressing this issue. The former involves a communication between the sender and receiver where the two agree upon the timing of the transfer. The latter relies on cues in the data itself to indicate to the receiver how to read the information.

Synchronous

In synchronous data transfers, the sender and receiver take some time to communicate before they make the exchange. This communication outlines the parameters of the data exchange. This usually involves establishing which end, sender or receiver, will be in control of the transfer. Here, the two parties also ensure they are using the same timing; that is, they know when each burst ends and another begins. They also set parameters for resetting their clocks during the transfer to make sure they don't drift away from the agreed-upon timing.

Asynchronous

In asynchronous, or "best effort" transfers, sender and receiver do not establish the parameters of the information exchange. Rather, the sender places extra bits of data before and after each burst that indicate when each burst begins and ends. It then sends the information, and it is up to the receiver to determine how to reset its clock to match the timing of the signal. Unlike synchronous transfers, the receiver does not take time to communicate to the sender information about what it received.

Benefits and Drawbacks

Asynchronous transfers are generally faster than synchronous transfers. This is because they do not take up time prior to the transfer to coordinate their efforts. However, because of this, more errors tend to occur in asynchronous transfers as opposed to synchronous transfers. If many errors occur, it can negate the time saved by eliminating the initial step of setting transfer parameters, because the receiver will have to take measures to correct its errors.

Uses

Asynchronous transfers work well in situations where the exchange occurs over a reliable physical medium, such as fiber optic and coaxial cabling. This helps minimize transmission errors so the time saved by forgoing establishing parameters actually results in a faster transfer from the end user's point of view. Synchronous transfers work well when using less reliable transfer media, such as electrical wires and radio signals. Here, it's worth taking the extra time to coordinate the details of the transfer as it compensates for mistakes made by the physical medium.

4. With the help of neat sequence topology diagrams, explain-- 

(i) Stop and wait protocol 

Ans-i-  Stop and Wait ARQ
Stop and Wait transmission is the simplest reliability technique and is adequate for a very simple communications protocol. A stop and wait protocol transmits a Protocol Data Unit (PDU) of information and then waits for a response. The receiver receives each PDU and sends an Acknowledgement (ACK) PDU if a data PDU is received correctly, and a Negative Acknowledgement (NACK) PDU if the data was not received. In practice, the receiver may not be able to reliably identify whether a PDU has been received, and the transmitter will usually also need to implement a timer to recover from the condition where the receiver does not respond.
Under normal transmission the sender will receive an ACK for the data and then commence transmission of the next data block. For a long delay link, the sender may have to wait an appreciable time for this response. While it is waiting the sender is said to be in the "idle" state and is unable to send further data.


Stop and Wait ARQ - Waiting for Acknowledgment (ACK) from the remote node.

The blue arrows show the sequence of data PDUs being sent across the link from the sender (top) to the receiver (bottom). A stop and wait protocol relies on two-way transmission (full duplex or half duplex) to allow the receiver at the remote node to return PDUs acknowledging the successful transmission. The acknowledgements are shown in green in the diagram and flow back to the original sender. A small processing delay may be introduced between reception of the last byte of a data PDU and generation of the corresponding ACK.
When PDUs are lost, the receiver will not normally be able to identify the loss (most receivers will not receive anything, not even an indication that something has been corrupted). The transmitter must then rely upon a timer to detect the lack of a response.



Stop and Wait ARQ - Retransmission due to timer expiry
In the diagram, the second PDU of data is corrupted during transmission. The receiver discards the corrupted data (by noting that it is followed by an invalid data checksum). The sender is unaware of this loss, but starts a timer after sending each PDU. Normally an ACK PDU is received before the timer expires. In this case no ACK is received, so the timer counts down to zero and triggers retransmission of the same PDU by the sender. The sender always starts a timer following transmission, but on the second transmission it receives an ACK PDU before the timer expires, finally indicating that the data has now been received by the remote node.
The state diagram (also showing the operation of NACK) is shown below:




State Diagram for a simple stop and wait protocol.

(ii) Go - back - N   ARQ sliding window protocol.

Ans- Go-Back-N uses the sliding window flow control protocol. If no errors occur, the operations are identical to sliding window.
 Operations:
-> A station may send multiple frames as allowed by the window size
-> The receiver sends a NAK i if frame i is in error. After that, the receiver discards all incoming frames until the frame in error has been correctly re-transmitted
-> If the sender receives a NAK i it will re-transmit frame i and all frames i+1, i+2, ... which have been sent but not yet acknowledged



5. Explain ISO – OSI , seven layer network architecture giving the functions of each layer.

Ans- In order for a computer to send information to another computer, and for that computer to receive and understand the information, there has to exist a set of rules or standard for this communication process. These standards ensure that varying devices and products can communicate with each other over any network. This set of standards is called a model. 
The International Standards Organization (ISO) has created an industry-wide model, or framework, for defining the rules networks should employ to ensure reliable communications. This network model is broken into layers, with each layer having a distinctive job in the communication process. 

Open Systems Interconnection(OSI) Model Overview

The Open Systems Interconnection (OSI) model began as a reference model, but has since been implemented. It was created by the International Organization for Standardization (ISO) to provide a logical framework for how data communication processes should interact across networks. Standards were created for the computer industry allowing different networks to work together efficiently. 

OSI Model Layers

The OSI Model consists of the following seven layers:
7. Application 
6. Presentation 
5. Session 
4. Transport 
3. Network 
2. Data Link 
1. Physical 

What do the 7 layers really do?

There are 7 layers in the OSI model. Each layer is responsible for a particular aspect of data communication. For example, one layer may be responsible for establishing connections between devices, while another layer may be responsible for error checking during transfer. 
The layers of the OSI model are divided into two groups: the upper layers and the lower layers. The upper layers focus on user applications and how files are represented on the computers prior to transport. For the most part, network engineers are more concerned with the lower layers. It's the lower layers that concentrate on how the communication across a network actually occurs. 

Functions of each layer.

Layer 7: The application layer-This is the layer at which communication partners are identified, quality of service is identified, user authentication and privacy are considered, and any constraints on data syntax are identified. (This layer is not the application itself, although some applications may perform application layer functions.)
Layer 6: The presentation layer- This is a layer, usually part of an operating system, that converts incoming and outgoing data from one presentation format to another (for example, from a text stream into a pop-up window with the newly arrived text). Sometimes called the syntax layer.
Layer 5: The session layer- This layer sets up, coordinates, and terminates conversations, exchanges, and dialogs between the applications at each end. It deals with session and connection coordination.
Layer 4: The transport layer- This layer manages the end-to-end control (for example, determining whether all packets have arrived) and error-checking. It ensures complete data transfer.
Layer 3: The network layer- This layer handles the routing of the data (sending it in the right direction to the right destination on outgoing transmissions and receiving incoming transmissions at the packet level). The network layer does routing and forwarding.
Layer 2: The data-link layer- This layer provides synchronization for the physical level and does bit-stuffing for strings of 1's in excess of 5. It furnishes transmission protocol knowledge and management.
Layer 1: The physical layer- This layer conveys the bit stream through the network at the electrical and mechanical level. It provides the hardware means of sending and receiving data on a carrier


Computer Communication Network
(MCA)
Assignment B

1. What is Routing, Explain Shortest Path Routing algorithm with example?

Ans- Routing is the process of selecting paths in a network along which to send network traffic. Routing is performed for many kinds of networks, including the telephone network (circuit switching), electronic data networks (such as the Internet), and transportation networks.

Shortest Path Routing
Here, the central question dealt with is: how do we determine the optimal path for routing? Various algorithms are used to determine the optimal routes with respect to some predetermined criteria. A network is represented as a graph, with its terminals as nodes and the links as edges. A 'length' is associated with each edge, which represents the cost of using the link for transmission. The lower the cost, the more suitable the link. The cost is determined depending upon the criteria to be optimized. Some of the important ways of determining the cost are: 

Minimum number of hops: If each link is given a unit cost, the shortest path is the one with minimum number of hops. Such a route is easily obtained by a breadth first search method. This is easy to implement but ignores load, link capacity etc. 
Transmission and Propagation Delays: If the cost is fixed as a function of transmission and propagation delays, it will reflect the link capacities and the geographical distances. However these costs are essentially static and do not consider the varying load conditions. 
Queuing Delays: If the cost of a link is determined through its queuing delays, it takes care of the varying load conditions, but not of the propagation delays. 
Ideally, the cost parameter should consider all the above mentioned factors, and it should be updated periodically to reflect the changes in the loading conditions. However, if the routes are changed according to the load, the load changes again. This feedback effect between routing and load can lead to undesirable oscillations and sudden swings. 

Routing Algorithms

As mentioned above, the shortest paths are calculated using suitable algorithms on the graph representations of the networks. Let the network be represented by a graph G(V, E) and let the number of nodes be N. For all the algorithms discussed below, the costs associated with the links are assumed to be positive. A node has zero cost w.r.t. itself. Further, all the links are assumed to be symmetric, i.e. if d(i,j) = cost of the link from node i to node j, then d(i,j) = d(j,i). The graph is assumed to be complete. If there exists no edge between two nodes, then a link of infinite cost is assumed. The algorithms given below find the costs of the paths from all nodes to a particular node; the problem is equivalent to finding the cost of paths from a source to all destinations.

Bellman-Ford Algorithm

This algorithm iterates on the number of edges in a path to obtain the shortest path. Since the number of hops possible is limited (cycles are implicitly not allowed), the algorithm terminates giving the shortest path. 

Notation: 
d(i,j)  = Length of the link between nodes i and j, indicating the cost of the link. 
h       = Number of hops. 
D[i,h]  = Shortest path length from node i to node 1, using up to h hops. 
D[1,h]  = 0 for all h. 

Algorithm : 

Initial condition : D[i,0] = infinity, for all i (i != 1) 
Iteration : D[i,h+1] = min over j of { d(i,j) + D[j,h] } 
Termination : The algorithm terminates when D[i,h] = D[i,h+1] for all i. 

Principle: 
For zero hops, the minimum length path has a length of infinity, for every node. For one hop, the shortest-path length associated with a node is equal to the length of the edge between that node and node 1. Hereafter, we increment the number of hops allowed (from h to h+1) and find out whether a shorter path exists through each of the other nodes. If it exists, say through node j, then its length must be the sum of the length between these two nodes (i.e. d(i,j)) and the shortest path between j and 1 obtainable in up to h hops. If such a path doesn't exist, then the path length remains the same. The algorithm is guaranteed to terminate, since there are at most N nodes, and so at most N-1 hops in a path. It has a time complexity of O(N^3).
Dijkstra's Algorithm

Notation:
D(i)   = Length of the shortest path from node i to node 1. 
d(i,j) = Length of the link between nodes i and j. 

Algorithm
Each node j is labeled with D(j), which is an estimate of the cost of the path from node j to node 1. Initially, let the estimates be infinity, indicating that nothing is known about the paths. We now iterate on the length of paths, each time revising our estimate to lower values as we obtain them. Actually, we divide the nodes into two groups; the first one, called set P, contains the nodes whose shortest distances have been found, and the other, Q, contains all the remaining nodes. Initially P contains only the node 1. At each step, we select the node that has the minimum cost path to node 1. This node is transferred to set P. At the first step, this corresponds to shifting the node closest to 1 into P. Its minimum cost to node 1 is now known. At the next step, select the next closest node from set Q and update the label corresponding to each node using: 

D(j) = min[ D(j), D(i) + d(i,j) ]

Finally, after N-1 iterations, the shortest paths for all nodes are known, and the algorithm terminates. 

Principle

Let the closest node to 1 at some step be i. Then i is shifted to P. Now, for each node j, the closest path to 1 either passes through i or it doesn't. In the first case D(j) remains the same. In the second case, the revised estimate of D(j) is the sum D(i) + d(i,j). So we take the minimum of these two cases and update D(j) accordingly. As each of the nodes gets transferred to set P, the estimates get closer to the lowest possible value. When a node is transferred, its shortest path length is known. So finally all the nodes are in P and the D(j)'s represent the minimum costs. The algorithm is guaranteed to terminate in N-1 iterations, and its complexity is O(N^2). 

The Floyd Warshall Algorithm

This algorithm iterates on the set of nodes that can be used as intermediate nodes on paths. This set grows from a single node ( say node 1 ) at start to finally all the nodes of the graph. At each iteration, we find the shortest path using given set of nodes as intermediate nodes, so that finally all the shortest paths are obtained. 

Notation
D(i,j)[n] = Length of the shortest path between nodes i and j using only the nodes 1,2,...,n as intermediate nodes. 

Initial Condition
D(i,j)[0] = d(i,j) for all nodes i,j. 

Algorithm
Initially, n = 0. At each iteration, add the next node to the intermediate set, i.e. for n = 0,1,2,...,N-1: 

D(i,j)[n+1] = min { D(i,j)[n] , D(i,n+1)[n] + D(n+1,j)[n] } 

Principle
Suppose the shortest path between i and j using nodes 1,2,...,n is known. Now, if node n+1 is allowed to be an intermediate node, then the shortest path under the new conditions either passes through node n+1 or it doesn't. If it does not pass through node n+1, then D(i,j)[n+1] is the same as D(i,j)[n]. Else, we find the cost of the new route, which is obtained from the sum D(i,n+1)[n] + D(n+1,j)[n]. So we take the minimum of these two cases at each step. After adding all the nodes to the set of intermediate nodes, we obtain the shortest paths between all pairs of nodes together. The complexity of the Floyd-Warshall algorithm is O(N^3). 
It is observed that all three algorithms mentioned above give comparable performance, depending upon the exact topology of the network.

2. When a network performs poorly, its users often complain to the administrators running it and demand improvements. To improve the performance, the operators must first determine exactly what is going on. To find out what is really happening, the operators must make some measurements. What are these measurements?

Ans- 
Measuring network performance and parameters has many potential pitfalls. Below I list a few of them. Any systematic attempt to measure network performance should be careful to avoid these.

Make Sure That the Sample Size Is Large Enough

Do not measure the time to send one TPDU, but repeat the measurement, say, one million times and take the average. Having a large sample will reduce the uncertainty in the measured mean and standard deviation. This uncertainty can be computed using standard statistical formulas.
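
As a simple shell illustration of averaging over many samples, ping already reports statistics over all of its probes (the address below is just the example server IP used earlier on this blog):

ping -c 100 192.168.0.254                    (the summary line shows min/avg/max round-trip times)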

Make Sure That the Samples Are Representative

Ideally, the whole sequence of one million measurements should be repeated at different times of the day and the week to see the effect of different system loads on the measured quantity. Measurements of congestion, for example, are of little use if they are made at a moment when there is no congestion. Sometimes the results may be counterintuitive at first, such as heavy congestion at 10, 11, 1, and 2 o'clock, but no congestion at noon (when all the users are away at lunch).

Be Careful When Using a Coarse-Grained Clock

Computer clocks work by incrementing some counter at regular intervals. For example, a millisecond timer adds 1 to a counter every 1 msec. Using such a timer to measure an event that takes less than 1 msec is possible, but requires some care. (Some computers have more accurate clocks, of course.) To measure the time to send a TPDU, for example, the system clock (say, in milliseconds) should be read out when the transport layer code is entered and again when it is exited. If the true TPDU send time is 300 μsec, the difference between the two readings will be either 0 or 1, both wrong. However, if the measurement is repeated one million times and the total of all measurements is added up and divided by one million, the mean time will be accurate to better than 1 μsec.

Be Sure That Nothing Unexpected Is Going On during Your Tests

Making measurements on a university system the day some major lab project has to be turned in may give different results than if made the next day. Likewise, if some researcher has decided to run a video conference over your network during your tests, you may get a biased result. It is best to run tests on an idle system and create the entire workload yourself. Even this approach has pitfalls though. While you might think nobody will be using the network at 3 A.M., that might be precisely when the automatic backup program begins copying all the disks to tape. Furthermore, there might be heavy traffic for your wonderful World Wide Web pages from distant time zones.

Caching Can Wreak Havoc with Measurements

The obvious way to measure file transfer times is to open a large file, read the whole thing, close it, and see how long it takes. Then repeat the measurement many more times to get a good average. The trouble is, the system may cache the file, so only the first measurement actually involves network traffic. The rest are just reads from the local cache. The results from such a measurement are essentially worthless (unless you want to measure cache performance). Often you can get around caching by simply overflowing the cache. For example, if the cache is 10 MB, the test loop could open, read, and close two 10-MB files on each pass, in an attempt to force the cache hit rate to 0. Still, caution is advised unless you are absolutely sure you understand the caching algorithm. Buffering can have a similar effect. One popular TCP/IP performance utility program has been known to report that UDP can achieve a performance substantially higher than the physical line allows. How does this occur? A call to UDP normally returns control as soon as the message has been accepted by the kernel and added to the transmission queue. If there is sufficient buffer space, timing 1000 UDP calls does not mean that all the data have been sent. Most of them may still be in the kernel, but the performance utility thinks they have all been transmitted.

Understand What You Are Measuring

When you measure the time to read a remote file, your measurements depend on the network, the operating systems on both the client and server, the particular hardware interface boards used, their drivers, and other factors. If the measurements are done carefully, you will ultimately discover the file transfer time for the configuration you are using. If your goal is to tune this particular configuration, these measurements are fine.
However, if you are making similar measurements on three different systems in order to choose which network interface board to buy, your results could be thrown off completely by the fact that one of the network drivers is truly awful and is only getting 10 percent of the performance of the board.

Be Careful about Extrapolating the Results

Suppose that you make measurements of something with simulated network loads running from 0 (idle) to 0.4 (40 percent of capacity), as shown by the data points and the solid line through them in the figure below. It may be tempting to extrapolate linearly, as shown by the dotted line. However, many queueing results involve a factor of 1/(1 - ρ), where ρ is the load, so the true values may look more like the dashed line, which rises much faster than linearly. 

Figure - Response as a function of load.


System Design for Better Performance

Measuring and tinkering can often improve performance considerably, but they cannot substitute for good design in the first place. A poorly-designed network can be improved only so much. Beyond that, it has to be redesigned from scratch. In this section, we will present some rules of thumb based on hard experience with many networks. These rules relate to system design, not just network design, since the software and operating system are often more important than the routers and interface boards. Most of these ideas have been common knowledge to network designers for years and have been passed on from generation to generation by word of mouth. They were first stated explicitly by Mogul (1993); our treatment largely follows his. Another relevant source is (Metcalfe, 1993).


Rule #1: CPU Speed Is More Important Than Network Speed

Long experience has shown that in nearly all networks, operating system and protocol overhead dominate actual time on the wire. For example, in theory, the minimum RPC time on an Ethernet is 102 μsec, corresponding to a minimum (64-byte) request followed by a minimum (64-byte) reply. In practice, overcoming the software overhead and getting the RPC time anywhere near there is a substantial achievement. Similarly, the biggest problem in running at 1 Gbps is getting the bits from the user's buffer out onto the fiber fast enough and having the receiving CPU process them as fast as they come in. In short, if you double the CPU speed, you often can come close to doubling the throughput. Doubling the network capacity often has no effect, since the bottleneck is generally in the hosts.

Rule #2: Reduce Packet Count to Reduce Software Overhead

Processing a TPDU has a certain amount of overhead per TPDU (e.g., header processing) and a certain amount of processing per byte (e.g., doing the checksum). When 1 million bytes are being sent, the per-byte overhead is the same no matter what the TPDU size is. However, using 128-byte TPDUs means 32 times as much per-TPDU overhead as using 4-KB TPDUs. This overhead adds up fast. In addition to the TPDU overhead, there is overhead in the lower layers to consider. Each arriving packet causes an interrupt. On a modern pipelined processor, each interrupt breaks the CPU pipeline, interferes with the cache, requires a change to the memory management context, and forces a substantial number of CPU registers to be saved. An n-fold reduction in TPDUs sent thus reduces the interrupt and packet overhead by a factor of n. This observation argues for collecting a substantial amount of data before transmission in order to reduce interrupts at the other side. Nagle's algorithm and Clark's solution to the silly window syndrome are attempts to do precisely this.

Rule #3: Minimize Context Switches

Context switches (e.g., from kernel mode to user mode) are deadly. They have the same bad properties as interrupts, the worst being a long series of initial cache misses. Context switches can be reduced by having the library procedure that sends data do internal buffering until it has a substantial amount of them. Similarly, on the receiving side, small incoming TPDUs should be collected together and passed to the user in one fell swoop instead of individually, to minimize context switches. In the best case, an incoming packet causes a context switch from the current user to the kernel, and then a switch to the receiving process to give it the newly-arrived data. Unfortunately, with many operating systems, additional context switches happen. For example, if the network manager runs as a special process in user space, a packet arrival is likely to cause a context switch from the current user to the kernel, then another one from the kernel to the network manager, followed by another one back to the kernel, and finally one from the kernel to the receiving process. 

Rule #4: Minimize Copying

Even worse than multiple context switches are multiple copies. It is not unusual for an incoming packet to be copied three or four times before the TPDU enclosed in it is delivered. After a packet is received by the network interface in a special on-board hardware buffer, it is typically copied to a kernel buffer. From there it is copied to a network layer buffer, then to a transport layer buffer, and finally to the receiving application process. A clever operating system will copy a word at a time, but it is not unusual to require about five instructions per word (a load, a store, incrementing an index register, a test for end-of-data, and a conditional branch). Making three copies of each packet at five instructions per 32-bit word copied requires 15/4 or about four instructions per byte copied. On a 500-MIPS CPU, an instruction takes 2 nsec, so each byte needs 8 nsec of processing time, or about 1 nsec per bit, giving a maximum rate of about 1 Gbps. When overhead for header processing, interrupt handling, and context switches is factored in, 500 Mbps might be achievable, and we have not even considered the actual processing of the data. Clearly, handling a 10-Gbps Ethernet running at full blast is out of the question. In fact, probably a 500-Mbps line cannot be handled at full speed either.

In the computation above, we have assumed that a 500-MIPS machine can execute any 500 million instructions/sec. In reality, machines can only run at such speeds if they are not referencing memory. Memory operations are often a factor of ten slower than register-register instructions (i.e., 20 nsec/instruction). If 20 percent of the instructions actually reference memory (i.e., are cache misses), which is likely when touching incoming packets, the average instruction execution time is 5.6 nsec (0.8 x 2 + 0.2 x 20). With four instructions/byte, we need 22.4 nsec/byte, or 2.8 nsec/bit, which gives about 357 Mbps. Factoring in 50 percent overhead gives us 178 Mbps. Note that hardware assistance will not help here. The problem is too much copying by the operating system.

Rule #5: You Can Buy More Bandwidth but Not Lower Delay

The next three rules deal with communication, rather than protocol processing. The first rule states that if you want more bandwidth, you can just buy it. Putting a second fiber next to the first one doubles the bandwidth but does nothing to reduce the delay. Making the delay shorter requires improving the protocol software, the operating system, or the network interface. Even if all of these improvements are made, the delay will not be reduced if the bottleneck is the transmission time. 

Rule #6: Avoiding Congestion Is Better Than Recovering from It

The old maxim that an ounce of prevention is worth a pound of cure certainly holds for network congestion. When a network is congested, packets are lost, bandwidth is wasted, useless delays are introduced, and more. Recovering from congestion takes time and patience. Not having it occur in the first place is better. Congestion avoidance is like getting your DTP vaccination: it hurts a little at the time you get it, but it prevents something that would hurt a lot more in the future.

Rule #7: Avoid Timeouts

Timers are necessary in networks, but they should be used sparingly and timeouts should be minimized. When a timer goes off, some action is generally repeated. If it is truly necessary to repeat the action, so be it, but repeating it unnecessarily is wasteful. The way to avoid extra work is to be careful that timers are set slightly on the conservative side. A timer that takes too long to expire adds a small amount of extra delay to one connection in the (unlikely) event of a TPDU being lost. A timer that goes off when it should not have uses up scarce CPU time, wastes bandwidth, and puts an extra load on routers for no good reason. 

3. Quality of service (QoS) is an inter-networking issue that is discussed more than defined. We can informally define quality of service as something we seek to attain. With such a backdrop answer following. 
                                                      
a) What are the different techniques to improve Quality of services (QoS)?

Ans- Technique to improve QoS:

 There are many techniques used to improve the quality of service. Some common methods are,
Scheduling: Packets from different flows arrive at a switch or router for processing. A good scheduling technique treats the different flows in a fair and appropriate manner. Some of the scheduling techniques used to improve QoS are:
FIFO Queuing: In this queuing technique, arriving packets are stored on a first-come, first-served basis. If the arrival rate is higher than the processing rate, the queue will fill up and newly arriving packets will find no space in the queue and get discarded.

Priority Queuing : In priority queuing packets are first assigned to a priority class. Each priority class has its own queue. The packets in the highest priority queue are processed first. The packets in the lowest priority queue are processed last. This process continues until the queue is empty.


Advantage:
 It provides better QoS for higher priority traffic such as multimedia that can reach the destination with less delay.
Disadvantage:
If the higher priority queue has a continuous packet flow, the lower priority queues never get a chance to be processed. This is called "starvation".

Weighted Fair Queuing – In Weighted Fair Queuing technique, the packets are still assigned to different classes and admitted to different queues. However, the queues are weighted based on the priority of the queues (higher priority means a higher weight). The system processes packets in each queue in a round robin fashion with the number of packets selected from each queue based on the corresponding weight.
Traffic Shaping: 
Traffic shaping is a mechanism to control the amount and the rate of the traffic sent to the network. There are two techniques under this mechanism.
  • Leaky Bucket – If the traffic consists of fixed-size packets, the process removes a fixed number of packets from the queue per tick. If the traffic consists of variable-length packets, the fixed output rate must be based on the number of bytes or bits.
  • Token Bucket – The leaky bucket algorithm outputs data at an average rate from bursty input, but it does not take into account the time when the host was idle.
But the token bucket algorithm allows idle hosts to accumulate credit for the future in the form of tokens. For each tick of the clock, the system adds n tokens to the bucket. The system removes one token for every cell (or byte) of data sent.
Admission Control:
 It is a mechanism used by networking devices such as routers and switches to accept or reject a flow based on predefined parameters called the flow specification. Before a router accepts a flow for processing, it checks the flow specification to see if its capacity and its previous commitments to other flows can handle the new flow.
Resource Reservation:
 A flow of data needs resources such as buffer, bandwidth, CPU time and so on. The QoS is improved if these resources are reserved beforehand.

Ques- b) How can you apply QoS in Frame Relay? Also find the relationship among the traffic control attributes in the figure given below.


Ans- b) User rate in relation to Bc and Bc + Be:

If the area is less than Bc, there is no discarding (DE = 0).
If the area is between Bc and Bc + Be, discarding is possible if there is congestion (DE = 1).
If the area is greater than Bc + Be, discarding occurs.


Service Class


Relationship of service class-



Apple iPhone 5






GENERAL
2G Network
GSM 850 / 900 / 1800 / 1900 - GSM A1428  

CDMA 800 / 1900 / 2100 - CDMA A1429
3G Network
HSDPA 850 / 900 / 1900 / 2100 - GSM A1428

CDMA2000 1xEV-DO - CDMA A1429
4G Network
LTE 700 MHz Class 17 / 1700 / 2100 - GSM A1428 or LTE 850 / 1800 / 2100 - GSM A1429                   

LTE 700 / 850 / 1800 / 1900 / 2100 - CDMA A1429
SIM
Nano-SIM
Announced
2012, September
Status
Available. Released 2012, September 
BODY
Dimensions
123.8 x 58.6 x 7.6 mm (4.87 x 2.31 x 0.30 in)                                                                                         
Weight
112 g (3.95 oz)
DISPLAY
Type
LED-backlit IPS LCD, capacitive touchscreen, 16M colors                                                                  
Size
640 x 1136 pixels, 4.0 inches (~326 ppi pixel density)
Multitouch
Yes
Protection
Corning Gorilla Glass, oleophobic coating
SOUND
Alert types
Vibration, proprietary ringtones                                                                                                      
Loudspeaker
Yes
3.5mm jack
Yes
MEMORY
Card slot
No                                                                                                                                                 
Internal
16/32/64 GB storage, 1 GB RAM 
DATA
GPRS
Yes 
EDGE
Yes
Speed
DC-HSDPA, 42 Mbps; HSDPA, 21 Mbps; HSUPA, 5.76 Mbps, LTE, 100 Mbps; EV-DO Rev. A, up to 3.1 Mbps   
WLAN
Wi-Fi 802.11 a/b/g/n, dual-band, Wi-Fi hotspot
Bluetooth
Yes, v4.0 with A2DP
USB
Yes, v2.0
CAMERA
Primary
8 MP, 3264x2448 pixels, autofocus, LED flash,
Features
Simultaneous HD video and image recording, touch focus, geo-tagging, face detection, panorama, HDR 
Video
Yes, 1080p@30fps, video stabilization,
Secondary
Yes, 1.2 MP, 720p@30fps, face detection, FaceTime over Wi-Fi or Cellular
FEATURES
OS
iOS 6, upgradable to iOS 6.1.2
Chipset
Apple A6
CPU
Dual-core 1.2 GHz
GPU
PowerVR SGX 543MP3 (triple-core graphics)                                                                                
Sensors
Accelerometer, gyro, proximity, compass
Messaging
iMessage, SMS (threaded view), MMS, Email, Push Email
Browser
HTML (Safari)
Radio
No
GPS
Yes, with A-GPS support and GLONASS
Java
No
Colors
Black/Slate, White/Silver

- Active noise cancellation with dedicated mic
- Siri natural language commands and dictation
- iCloud cloud service
- Twitter and Facebook integration
- TV-out
- Maps
- iBooks PDF reader
- Audio/video player/editor
- Organizer
- Document viewer
- Image viewer/editor
- Voice memo/dial/command
- Predictive text input
BATTERY

Non-removable Li-Po 1440 mAh battery (5.45 Wh)                                                                               
Stand-by
Up to 225 h (2G) / Up to 225 h (3G)
Talk time
Up to 8 h (2G) / Up to 8 h (3G)
Music play
Up to 40 h
MISC
SAR US
1.18 W/kg (head)     1.18 W/kg (body)                                                                                                         
SAR EU
0.95 W/kg (head)     0.90 W/kg (body)    


TESTS
Display
Contrast ratio: 1320:1 (nominal) / 3.997:1 (sunlight)                                                                             
Loudspeaker
Voice 66dB / Noise 66dB / Ring 67dB
Audio quality
Noise -91.3dB / Crosstalk -76.5dB
Camera
Photo / Video
Battery life
Endurance rating 51h

Source -http://www.gsmarena.com/apple_iphone_5-4910.php