\chapter{Physical network management} \label{ch:physical}

This chapter describes physical network management. Section~\ref{sec:performance} covers performance management, including performance metrics, management control and quality of service (QoS).

\section{Performance management} \label{sec:performance}

Performance management is used to evaluate the behavior of managed objects and the efficiency of communication activities. It collects statistical data, analyzes it and, where appropriate, predicts trends in the communication between open systems~\cite{b42}. A network management system must be able to report on the efficiency of the system and on its current and previous performance. For documentation purposes, reports on a daily, monthly or annual basis are required. Performance management also collects quality of service (QoS) data in order to improve QoS.

\subsection{Performance metrics}

Network performance metrics are generally classified into two areas: network-centric metrics and end-to-end measurements~\cite{b43}. Network-centric metrics include:
\begin{itemize}
\item Robustness of the network elements. These metrics are important in designing network components and in tracking the long-term robustness of the network:
\begin{itemize}
\item uptime,
\item mean time between failures (MTBF),
\item mean time to repair (MTTR), etc.
\end{itemize}
\item Router and switch metrics. These metrics deal with the operation of routers and switches: queuing packets, processing them and placing them on the appropriate outbound link queue:
\begin{itemize}
\item offered load,
\item dropped traffic and retransmissions,
\item average queue lengths (to assess the queuing delays and potential dropped packets at a router).
\end{itemize}
\item Link metrics. These metrics describe the network capacity:
\begin{itemize}
\item bandwidth,
\item utilization, etc.
\end{itemize}
\item Metrics for the routing subsystem. These metrics describe the impact of routing traffic and route fluctuations on network performance:
\begin{itemize}
\item the rate of route change (characterizes the stability of the network system).
\end{itemize}
\end{itemize}

End-to-end metrics include:
\begin{itemize}
\item end-to-end latency and jitter,
\item effective throughput, usually measured as a function of the packet size and the window size,
\item packet loss.
\end{itemize}

The tools and techniques required to measure performance characteristics in these two categories differ. In Internet performance analysis it has been difficult to define metrics that reflect both perspectives.

\subsection{Performance management control}

Performance management controls traffic and analyzes its statistical distribution. Statistical analysis also gives hints of malfunctions in the network. Performance management is related to configuration management as well: it is easier for the operator to plan modifications of the network when its usage is known. The operator finds out which connections and services are used and which are needed. Traffic monitoring requires resources: usually a separate control station is needed to collect and analyze traffic statistics, and there must be a way to connect the control station to the network so that traffic information can be collected. Monitoring the traffic is, however, a reliable and cost-effective way to detect problems before they become serious. Traffic monitoring can be limited to the most important parts of the network~\cite{b2}.

\subsection{Analysis}

Analysis becomes more important as networks grow in size and complexity. Analyzers are used for traffic monitoring, protocol analysis, and statistics collection and interpretation. Analyzers are usually understood as troubleshooting tools, but they should be used as proactive indicators as well. The tools available today typically gather and display volumes of detailed data rather than interpret and highlight the meaning of that data. Many of these tools look at a single element rather than the network as a whole.
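As a minimal illustration of the network-centric metrics listed above, the following Python sketch computes element availability from MTBF and MTTR, and link utilization from a byte counter. All numeric values are hypothetical examples, not measurements.

```python
# Availability and link utilization, two of the network-centric metrics
# discussed above. All input values are hypothetical examples.

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: fraction of time the element is up."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def utilization(bytes_sent: int, interval_s: float, link_bps: float) -> float:
    """Fraction of link capacity used over a measurement interval."""
    return (bytes_sent * 8) / (interval_s * link_bps)

a = availability(mtbf_hours=2000.0, mttr_hours=4.0)
u = utilization(bytes_sent=75_000_000, interval_s=60.0, link_bps=100e6)
print(f"availability = {a:.4f}")   # 0.9980
print(f"utilization  = {u:.2%}")   # 10.00%
```

Tracking these two values over time supports exactly the long-term robustness and capacity observations the metrics are meant for.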
Data becomes information when it is organized, correlated and presented in ways that make its meaning clear, helping the network manager make the best decisions~\cite{b16}. It is important to keep the analysis of the network up to date. Managers must also predict future trends based on historical network trends and business information. Before expert systems, alarms and events were checked manually by operators. A stream of detailed data means little by itself; effective information is compact. The network management problem thus becomes an information management problem, and expert systems are now being tested to automate this process. Several artificial intelligence techniques can be used in network management, such as rule-based reasoning (RBR), Bayesian networks (BN), neural networks (NN), case-based reasoning (CBR), qualitative reasoning (QR) and model-based reasoning (MBR)~\cite{b1}.

\subsection{QoS in the Internet}

Over the Internet, QoS can be quantified using four major parameters: throughput, reliability, delay and jitter. Throughput is the maximum data transmission rate that can be sustained over a particular link. Reliability is a measure of transmission errors and packet loss within a time interval. Delay is the time taken by a packet to travel from source to destination. Jitter is the variation in end-to-end delay. QoS is also defined as the ability to differentiate between customer and application types through level guarantees of one or more of these metrics~\cite{b44}.

Many components in the network affect the QoS being delivered. For some network components only limited amounts of data have been available, and the measured values often describe the QoS only indirectly. For example, Internet service providers have used ping and traceroute to detect and diagnose network problems.
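The delay and jitter parameters defined above can be estimated from per-packet delay samples. A minimal Python sketch, with hypothetical sample values and jitter taken as the mean absolute difference between consecutive delay samples (one common operational definition):

```python
# Mean delay and jitter from one-way delay samples.
# The sample values below are hypothetical, not measured.

def mean_delay(delays_ms):
    """Average one-way delay over the sample window."""
    return sum(delays_ms) / len(delays_ms)

def jitter(delays_ms):
    """Mean absolute difference between consecutive delay samples."""
    diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    return sum(diffs) / len(diffs)

samples = [20.1, 22.4, 19.8, 25.0, 21.2]  # milliseconds, hypothetical
print(f"delay  = {mean_delay(samples):.2f} ms")
print(f"jitter = {jitter(samples):.2f} ms")
```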
Ping and traceroute give an indication of changes in network characteristics, but these tools are not sufficient to quantify the impact of those changes on the availability and performance of customer-visible services.

The Internet offers only a very simple quality of service: point-to-point best-effort data delivery, in which all packets receive the same service. Models such as the integrated services model (IntServ) and the differentiated services model (DiffServ) are being deployed in the Internet. The underlying problem is that different classes of applications require different services and resources (e.g., with respect to data loss, bandwidth and timing). Web document transfers and financial applications accept no data loss, i.e., they require fully reliable data transfer. Multimedia applications, such as real-time or stored audio/video, are loss tolerant and can tolerate some amount of data loss. Interactive real-time applications, such as Internet telephony, virtual environments and teleconferencing, require tight timing constraints on data delivery in order to be effective~\cite{b45}. The problem with differentiating services is that service providers do not have end-to-end information about the value of the service to the client organization~\cite{b31}. The decentralized architecture of the Internet also makes homogeneous management and control mechanisms difficult to implement~\cite{b44}.

The Integrated Services working group in the IETF (Internet Engineering Task Force) has developed an \textit{integrated service model} for the Internet architecture. The services specified so far are best-effort service for elastic applications, guaranteed service for rigid, intolerant applications, and controlled-load service for adaptive, tolerant applications. The model also includes controlled link sharing~\cite{b5}. \textit{The Resource ReSerVation Protocol (RSVP)} was developed to allow certain data streams higher priority than others.
RSVP enables a host or an application to signal the QoS requirements of its packet flow to the network. Resource reservation requires the enforcement of policy and administrative controls. Policy control determines whether the user has administrative permission to make the reservation. Admission control keeps track of the system resources and determines whether sufficient resources are available to supply the requested QoS. This leads to two kinds of authentication requirements: authentication of users who make reservation requests, and authentication of packets that use the reserved resources~\cite{b5}.

RSVP is considered a complex protocol that suffers from scalability limitations. RSVP also lacks aggregation mechanisms and creates a separate reservation for each flow. For this reason the IETF has developed a simpler mechanism, the differentiated service model. Ideally there should be a single service model for the Internet; otherwise it is difficult to make end-to-end service quality statements.

The \textit{differentiated service model} offers a small number of classes of service (CoS). It guarantees that packets with higher precedence get better service than packets with lower precedence; within each class, packets get best-effort service. Each packet carries an identifier specifying the requested service class: the IP protocol specification provides a type of service (ToS) field in the IP header, whose three-bit precedence subfield can carry this identifier. Packets are scheduled based on this identifier. There is no admission control, so there is no mechanism to prevent classes from becoming overloaded~\cite{b7}.
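On most hosts an application can set this identifier per socket by writing the ToS byte. A minimal sketch, assuming a Linux-style sockets API; the precedence value is chosen purely for illustration:

```python
import socket

# Mark outgoing packets of a UDP socket by writing the IP ToS byte.
# The precedence subfield occupies the top three bits of the byte,
# so precedence 5 (0b101) is shifted into bits 7..5.
PRECEDENCE = 5
TOS_VALUE = PRECEDENCE << 5   # 0b1010_0000 = 160

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# Read the option back to confirm the kernel accepted the value.
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))
sock.close()
```

Because DiffServ performs no admission control, marking a packet this way only requests better relative treatment; nothing prevents a high-precedence class from becoming overloaded, as noted above.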