Today we're going to talk about words and mappings. This is an appropriate area in which to finish our studies, because it ties together many of the things we've talked about before. This is the last lecture of the second half of the class. What we're talking about today is mostly labeled objects, so we'll be working with exponential generating functions. And, as I've been saying each week, there are many more examples in the book than we're able to do in lecture, but we will consider some techniques from analytic combinatorics and applications in the analysis of algorithms.

To start out, I'm going to talk about what we mean by a word in the combinatorial sense. To do that, let's review a few of the related combinatorial classes that we've looked at. For example, how many binary strings are there with N bits? It seems we talk about this almost every lecture. A string is a sequence of zero bits and one bits, and with the symbolic method, using ordinary generating functions, we showed that the answer is 2^N. That extends to strings drawn from an M-character alphabet: in that case a string is a sequence of characters, each one of M possibilities, so the OGF is 1/(1 - Mz), which again gives M^N as the number of N-character strings drawn from an alphabet that has M characters. How is that related to words? Well, we'll get there.

Now let's look at labeled objects, and ask: how many labeled sets of size n are there? There's exactly one labeled set of size two, the one that's got both objects in it, and in fact, for any size n, there's only one labeled set. This is just a little review of what we mean by labeled objects: since we don't consider the order significant, there's only one set containing all n labeled objects. Now, if you take ordered pairs of labeled sets of n objects, what do you get? For two objects, you can have a 1 and then a 2, or a 2 and then a 1, or an empty set and then both objects, or both objects and then an empty set. In this case, the ordering of the sets is significant, and the labeled objects can form sets in these different ways. For three objects, there are eight different ordered pairs of labeled sets. So you can see the answer is 2^n, the same as the number of bit strings, and we'll see how that relationship comes through in just a minute.

What about, instead of pairs, a sequence of M sets? How many sequences of length M of labeled sets holding n objects are there? As you can see, we want to talk about a sequence of things that can hold labeled objects. A classical way to think of it is balls and urns: what we're really counting is the number of different ways to throw n balls into M urns. If there's one ball and two urns, the ball can go into either one of them. If there are two balls, both can go in the first urn, both can go in the second, or they can split between the two urns in two different ways. The labels on the balls are significant, but not the order of the labels within an urn: we write them in the order they went in, but that order is not significant. And again, there are eight ways to throw three balls into two urns. That's the balls-and-urns way of looking at the same combinatorial structure, so again, 2^n. Now let's look at how we get these counting results from the symbolic method. This is just a review of what we talked about in lecture 5.
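To make the balls-and-urns picture concrete, here is a small sketch (not from the lecture) that enumerates every way to throw n labeled balls into m urns and checks the 2^n count for three balls and two urns; the function name `words` is just an illustrative choice:

```python
from itertools import product

def words(n, m):
    """All ways to throw balls 1..n into urns 1..m.

    A configuration is a tuple of m sets (a word): within an urn
    only membership matters, not the order the balls went in.
    """
    configs = []
    for assignment in product(range(m), repeat=n):   # urn chosen for each ball
        urns = [set() for _ in range(m)]
        for ball, urn in enumerate(assignment, start=1):
            urns[urn].add(ball)
        configs.append(tuple(frozenset(u) for u in urns))
    return configs

# 3 balls into 2 urns: 2^3 = 8 configurations, matching the 8 bitstrings.
assert len(words(3, 2)) == 8
```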
If we have combinatorial classes of labeled objects, then we have basic operations that we can perform on the classes: taking disjoint copies, and taking ordered pairs while relabeling in all consistent ways (the star product). The corresponding operations on the exponential generating functions give us a symbolic transfer from the construction right to the generating function. For labeled objects, we extend that to sequences of length k, with EGF A(z)^k, and sequences of any length, with EGF 1/(1 - A(z)); to sets, where we divide by k!, so the EGF for a set of objects from a class A is e^{A(z)}; and also to cycles, with EGF ln(1/(1 - A(z))). These are the basic operations that we use for labeled objects, and they're the ones we're going to use now to look at problems on words.

Combinatorially, we're going to define a word to be a sequence of M urns that hold N objects: it's a configuration of throwing N balls into M urns. If we want to know how many different words there are, we use generating functions. The class is parametrized by M, the number of urns in the sequence. The generating function is the sum, over all possible objects (that is, configurations, the ways the balls could fall), of z^{|w|}/|w|!, and as usual that collects together to give us the number of ways to throw N balls into M urns.

Again, we have all these different ways of looking at it. Here's 9 balls thrown into 5 urns, and maybe this is one way it could come out. That corresponds to a sequence of five subsets of nine things, or you could just write out the subsets. Those are all different ways of representing the same combinatorial object. But in terms of the symbolic method, finding the generating function is simple: a word is a sequence of length M of sets of objects, and it's nothing more than that. So immediately the generating function equation is W(z) = (e^z)^M = e^{Mz}, and the number of different configurations is N! times the coefficient of z^N in that, which is just M^N.

Now that M^N, again, is the same as the number of strings of length N from an alphabet with M characters. And indeed, there's a one-to-one correspondence between words and strings. A string, as we talked about last time, is a sequence of N characters drawn from an M-character alphabet, and since for each of the N characters in the string there are M different possibilities, there are M^N different strings. A word is a sequence of M labeled sets with a total of N objects, and there are M^N words. So what's the correspondence? It's quite simple: the second set, for example, corresponds to the positions in the string where the second character appears, and it's as simple as that. There are no 3s, so the third set is empty; the 1 is in position seven, so the first set has 7 in it; the 5s are at positions five, six, and nine; and the 4s are at positions two and four. It's a one-to-one correspondence, where the word tells us the positions in the string where each character happens. That is, in the word, for every i in the kth set, the ith character in the string is k, and vice versa.
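Here's a quick sketch of that correspondence in Python (again, not from the lecture); the nine-character string below is my reconstruction from the positions just described, so take it as illustrative:

```python
def string_to_word(s, alphabet):
    """The kth set holds the (1-based) positions in s where the
    kth character of the alphabet appears."""
    return [{i for i, c in enumerate(s, start=1) if c == a} for a in alphabet]

def word_to_string(word, alphabet):
    """Invert the correspondence: position i gets character k
    whenever i is in the kth set."""
    out = [None] * sum(len(positions) for positions in word)
    for a, positions in zip(alphabet, word):
        for i in positions:
            out[i - 1] = a
    return ''.join(out)

# 9 characters over the alphabet {1,...,5}: no 3s, the 1 at position 7,
# 5s at positions 5, 6, 9, and 4s at positions 2 and 4 (2s fill the rest).
s = "242455125"
word = string_to_word(s, "12345")
assert word == [{7}, {1, 3, 8}, set(), {2, 4}, {5, 6, 9}]
assert word_to_string(word, "12345") == s
```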
If you look at the string, and the ith character in the string is k, then you put i into the kth set in the word. So the word is just the sets of indices where each letter appears in the string. That's the correspondence. We're very familiar with strings, since we've worked with them before; now we're going to be working with words, and we have this one-to-one correspondence between them. Here's just another way to look at it, for binary strings. Looking at the eight binary strings of length three as words, the first one says that all three characters are 0s and there are no 1s, the last one says there are no 0s and all three characters are 1s, and so forth.

So what's the difference? There's no difference; it's only the point of view. Last lecture, when we were looking at strings, we were concerned with the sequence of characters and with the relationship between one character and the next in the sequence, looking for patterns in the string and so forth. This time, we're interested in applications where the sets of indices are important: where it matters how many 0s there are, how many 1s there are, and so forth. But really, it's the same object. Strings we used OGFs to enumerate; words we're going to enumerate with EGFs. So it's a difference in point of view, and a difference in the techniques that we use to analyze variations.

So again, here's just a summary. For strings, which we considered last time, we treated them as unlabeled objects and used an OGF to enumerate them. A typical string is just a sequence of characters, so the OGF is 1/(1 - Mz) if there are M characters, and that gives M^N of them. For words, we consider them as labeled objects and use an EGF, and we have all these different representations. And now, when we build the sequence, we're doing the star product, where we relabel in all possible ways, and the EGF for the number of different words is e^{Mz}, but we get the same result, M^N. We're going to be focusing on this kind of representation in this lecture. So that's a brief introduction to what a word is.
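As a final check on the EGF derivation, here is a short sketch (assuming the sympy library is available) that verifies N! [z^N] e^{Mz} = M^N for one choice of M and N:

```python
from sympy import symbols, exp, factorial, series

z = symbols('z')
M, N = 5, 9

# EGF for words over M urns: a sequence of M sets is (e^z)^M = e^(Mz).
W = exp(M * z)
coeff = series(W, z, 0, N + 1).removeO().coeff(z, N)   # [z^N] e^{Mz}
assert factorial(N) * coeff == M ** N                  # = 5^9 = 1953125
```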