So let's have a look at what happens when you place a call from your mobile phone to someone who has their phone at home. The information is first sent over the air to the closest base station, where it is converted to a different format and sent over copper wires to a switch. The switch is designed to find the routing pattern that will send the information to the final destination. The switch will send the information over what is most likely going to be an optical fiber channel to the global telephone network. The telephone network will route your information to the central office closest to the person you're calling. The central office will then send the same information, in yet a different format, over a coax cable to the switch that is closest to the telephone that is being called. And finally, from the closest switch to the phone in the house, there is what is called the last mile, which is a longish piece of copper wire. So you see, at every change of channel, many, many things can happen. The signal can be converted to digital again and then back to analog. The modulation schemes and the signal formats that we will have to use on these different stretches of the channel will have to adapt to the physical characteristics of the medium.

Every analog channel has two inescapable limits that we have to reckon with. The first is a bandwidth constraint: the signals that we can send over an analog channel will have to be limited to a certain frequency band. The second limit is the fact that we cannot use arbitrary power over that band; there will be limits on the power of the signal that we can send. The maximum amount of information that we will be able to send over the channel, given these constraints, is called the capacity of the channel. We will see a remarkable result of information theory later on that exactly quantifies the capacity of a channel given its signal-to-noise ratio and its bandwidth.

As communication system engineers, we are given the specifications of a channel and we want to design a system that sends as much information as possible over this channel, and as reliably as possible, given this inescapable capacity constraint. Amount of information and reliability are concepts that are still a little fuzzy for the time being. They will become clearer later on, but we can certainly look at the intuition behind this problem.

For instance, if we look at the relationship between bandwidth and capacity, we can do a very simple thought experiment. Suppose we are going to transmit information encoded as a sequence of digital samples over a continuous-time channel. So what we do is take the samples and interpolate them with a certain sampling period Ts. Now, if we make Ts very small, it means that we can send more samples per second. But if we make Ts small, we know that the bandwidth will grow as the reciprocal of Ts. Do you remember the formula for the interpolated signal in the sampling theorem? It says that the analog spectrum will be zero outside of a band that goes from minus ω_N to ω_N, and ω_N is π over Ts. If I make Ts small, the bandwidth will grow with 1 over Ts. So we see that the bandwidth and the amount of information that we can send per second are related in some way.

Similarly, the relationship between the power constraint and capacity can be appreciated because we can never do away with noise. So when we send a sequence of integers, for instance, at the receiver we will have to guess what has been sent after it has been corrupted by noise.
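Before developing the noise side of the argument, here is a minimal Python sketch of the bandwidth side. It assumes the Shannon capacity formula C = B log2(1 + SNR), which is the information-theoretic result we will see later on, and an arbitrary 30 dB signal-to-noise ratio chosen purely for illustration; the point is only that halving Ts doubles both the available bandwidth and the attainable rate.

```python
import numpy as np

def nyquist_bandwidth_hz(Ts):
    """Bandwidth in hertz occupied by a signal obtained by interpolating
    samples spaced Ts seconds apart: the spectrum is zero outside of
    [-pi/Ts, pi/Ts] rad/s, i.e. outside of [-1/(2 Ts), 1/(2 Ts)] Hz."""
    return 1.0 / (2.0 * Ts)

def capacity_bps(bandwidth_hz, snr):
    """Shannon capacity C = B log2(1 + SNR) of an ideal band-limited
    channel with additive white Gaussian noise."""
    return bandwidth_hz * np.log2(1.0 + snr)

snr = 1000.0  # about 30 dB; an arbitrary, illustrative value
for Ts in (1e-3, 0.5e-3, 0.25e-3):   # halving the sampling period each time
    B = nyquist_bandwidth_hz(Ts)
    print(f"Ts = {Ts*1e3:4.2f} ms  samples/s = {1/Ts:6.0f}  "
          f"B = {B:6.0f} Hz  C = {capacity_bps(B, snr)/1e3:5.1f} kbit/s")
```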
So suppose we have a channel that introduces a noise variance of 1, and suppose we are transmitting the integers between 0 and 10. If the variance is one, lots of transmitted integers will have an error large enough to send them very close to the next integer in line. So suppose I send a 1, but because of the noise it arrives as 1.75, for instance. Now I'm not really sure whether what was sent was a 1 or a 2. Another strategy is to say, okay, let's transmit only the odd numbers. So instead of every integer, I will not transmit 0 but I will transmit 1; I will not transmit 2, but I will transmit 3; and so on. I am increasing the gap between possible symbols, and so the noise that before would probably have made me mis-guess the transmitted value is now much more likely to be small enough to bring me back to the original symbol. Now, it is rather intuitive that, all other things being equal, a signal with a wider range will have a larger power. So if I want to keep the power constant, I will still have to send symbols between 0 and 10. But there are only half as many odd integers between 0 and 10 as there are integers, and so the amount of information that I can send per unit of time will be halved (a small simulation of this trade-off appears a little further on).

Let's now look at some common communication channels and see what their power and bandwidth constraints are. Maybe the simplest communication channel that we're still familiar with is the AM radio channel. AM stands for amplitude modulation, and indeed the radio transmitter is very simple. We take an analog signal, which can be voice or music, we do a low-pass filtering operation to limit its bandwidth, and then we do a very, very simple sinusoidal modulation with a cosine at a given carrier frequency. The resulting modulated signal is simply fed to an antenna, and it will propagate in the radio spectrum.

The radio spectrum is a very scarce resource: there's only one radio spectrum and everybody has to share it. Therefore, every frequency band in the spectrum is strictly regulated by law. In the case of AM, the band goes from 530 kilohertz to 1.7 megahertz. This is divided into 8-kilohertz-wide channels, and each radio station gets allocated a specific channel. The power is limited by law for a variety of reasons. The first is that the propagation patterns of AM waves are very different during the day and during the night. In particular, at night AM radio waves travel much farther than during the day, and so they can create all sorts of interference in distant places if their power is not limited. Also, you don't want radio stations to use too much power because it would not be healthy for people living in the vicinity of the transmitter.

And then there is the channel we're all very familiar with: the telephone channel. The telephone network is more properly called a switched telephone network because, instead of taking the combinatorial approach and having each phone connected to every other phone in the world, what happens is that, when you call another phone, your phone is connected to the central office, and the central office determines which parts of the network have to be connected together so that your call can be routed to the destination phone. So the piece of wire that connects you to the central office is, say, up to a couple of kilometers long, and it's called the last mile. The central office today is a bunch of digital switches; in the old days it was a set of mechanical rotary switches. The network itself can be anything from optical fiber to satellite links to anything else in between.
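Before following the signal to the other side of the network, here is the small simulation of the integer thought experiment promised above. It is only a sketch: the two alphabets, the unit noise variance and the nearest-symbol decoding rule are choices that simply mirror the numbers we used.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000        # number of transmitted symbols
sigma = 1.0        # noise standard deviation (variance 1, as in the example)

def error_rate(alphabet):
    """Send random symbols from `alphabet` over an additive Gaussian noise
    channel and decode each received value as the nearest alphabet member."""
    sent = rng.choice(alphabet, size=n)
    received = sent + sigma * rng.standard_normal(n)
    decoded = alphabet[np.argmin(np.abs(received[:, None] - alphabet[None, :]), axis=1)]
    return np.mean(decoded != sent)

all_integers = np.arange(0, 11)       # 0, 1, 2, ..., 10: spacing 1, 11 symbols
odd_integers = np.arange(1, 11, 2)    # 1, 3, 5, 7, 9:    spacing 2, 5 symbols

print("all integers:", error_rate(all_integers))   # frequent decoding errors
print("odd integers:", error_rate(odd_integers))   # noticeably fewer errors, but half the symbols
```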
On the other side of the network there is the symmetric part of this chain, which gets you to the destination phone. The telephone channel is conventionally limited to the band from 300 Hz to 3000 Hz. These are historical limits that depend on the kind of hardware that was used, in the old days, in the central office and in the network. Today these limits are a historical artifact, but they are kept because voice communication is perfectly intelligible within this band anyway, and with a reduced band you can multiplex, namely put together very many conversations on a wider channel. The power that you can send on a telephone wire is limited to between 0.2 and 0.7 volts root mean square, and this is a strictly enforced limit to make sure that you don't send signals that could burn the equipment at the central office. And the signal-to-noise ratio is rather good, because the analog part of the telephone network operates at baseband and there is not a lot of interference at low frequencies.
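As a closing sketch, we can plug these telephone figures into the capacity formula mentioned at the beginning, C = B log2(1 + SNR). Since we only said that the signal-to-noise ratio is rather good, the SNR values below are assumptions, used purely for illustration.

```python
import numpy as np

bandwidth_hz = 3000 - 300           # the conventional telephone band described above
for snr_db in (20, 30, 40):         # assumed signal-to-noise ratios, for illustration only
    snr = 10 ** (snr_db / 10)
    capacity = bandwidth_hz * np.log2(1 + snr)
    print(f"SNR = {snr_db} dB  ->  C = {capacity / 1000:.1f} kbit/s")
```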