What does “memory” mean to me?

This sounds like a bizarre question. Memory is a pretty fundamental word we all use in our daily lives. But it’s a word that carries a lot of significance in the field of computer science – in particular the field of artificial intelligence. I use it. Often. And usually pretty haphazardly. At the encouragement of others within my department, I’ve been convinced to read a bit more about the psychology / neurology roots of the word. Actually … I should restate that. I have no idea what the etymology of “memory” is. I’m guessing it’s rooted in psychology. And then neurology. So I include those. But I’m sure there are more fields to look at when studying the definition of the word. I digress ….

I thought that, before I dove into looking up how these fields view “memory,” I should document what I mean when I use the word.

I admit that my usage is likely fairly naive. When I refer to “memory,” it’s generally in the context of a reinforcement learning agent. And it usually, loosely, means information from the past that is available to the agent at the current time step. So previous observations are memory. Traces are memory. Anything that encapsulates past information is “memory.” Hmm. That’s about it. But I realize that’s a poor definition. By that definition, a value function is “memory.” It encapsulates information from the past and makes it available to the agent at the current time step. But that’s clearly not what I mean. So I need to go further. “Memory,” to me, is more raw than a general value function. It’s close to an original observation from the past, made available at the current time. But that’s not quite right either. I leave a little bit of wiggle room to massage the original observation into a representation available at the current step (but not massaged all the way into a value function).

So there you have it. An incredibly lackluster and flawed definition. I’ll intentionally leave it at that for this post. (I think the point of studying the etymology of “memory” is not only to avoid annoying psychologists and neurologists with my usage of the term, but to inspire and motivate work within AI.)

Incremental things

In the startup world, calling an idea “incremental” is somewhat of an insult. Startup founders are constantly fed clichés about “going big or going home,” “disrupting industries,” “building monopolies,” and “creating unicorns.” According to the script, incremental is boring. It’s unimaginative. It goes against the spirit of innovation. And it’s certainly not something you want to be accused of if you’re a tech entrepreneur, especially one raising venture capital.

This anti-incremental world, right or wrong, has been the world I’ve lived in for the last 10 years. But today, in a supervisor meeting talking about my thesis work, it was suggested that my current plans are potentially too complicated for a Masters degree. I should save that idea for my PhD, and focus on something incremental instead. I was taken a bit aback by this. In the past I’ve become instinctively dismissive of anything but completely novel approaches (note – I’m not trying to pretend I’m Elon Musk here, creating missions to Mars … I’ve started my fair share of fart apps in the past decade. But I do so always with a bit of shame). I need to think about it a bit more, but I think an incremental approach – at least to a masters thesis – makes good sense.

Again – I need to do a bit more navel gazing – but the problem with a completely novel approach within scientific research (in my case, I am considering a new algorithm for discovering a cluster of beneficial general value functions within a reinforcement learning agent) is a matter of scope. Coming up with the algorithm, running a few experiments, and demonstrating results is perhaps the easy part. Explaining and justifying each decision point of the algorithm, and comparing against competing algorithms, is the hard part. Not to mention that in a wide-open research topic such as discovery (of features), the related work is immense. Each related paper and idea should be thoughtfully considered. For all these reasons, the scope explodes and perhaps exceeds that of a Masters thesis. I shouldn’t make the blanket statement that one can’t invent a new algorithm or architecture within the scope of a masters thesis. But I do believe it to be more likely appropriate for a PhD thesis.

Again, this idea that creating something new is too grand in scope is foreign in startups. Sure, there is the lean startup manifesto, which guides its followers to build things in small increments. But these increments are all intended to add up to something truly disruptive and novel. In the startup world, you could invent a new algorithm or service. It either works or it doesn’t (based on engagement). But in the scientific world, whether something “works” or not isn’t measured by user engagement. More thought needs to be dedicated to addressing each decision point, and to comparisons with other approaches. Note that achieving either (user engagement vs. a comprehensive description of the thought process and scientific steps taken to achieve a result) can be difficult. In the case of the former, it’s more of a dice roll. Like catching lightning in a bottle. You can get lucky and create something delightful for users in a couple of months and “be done.” The idea that you need to go beyond creating something, and define and defend each decision point, is something I’m still getting acclimatized to.

The nice thing about doing something more incremental during a masters thesis is that much of the groundwork has been laid for you. For example, the DeepMind UNREAL paper has drawn a lot of attention from us at the University of Alberta (because of our interest in GVFs and our many ties with DeepMind). It’s a fascinating body of work. It challenges the idea of a predictive feature representation – instead using auxiliary tasks to sculpt the feature representation directly (by finding generally useful features across tasks). But many scientific questions arise from this work. How would auxiliary tasks compare with a predictive representation in an environment like compass world or cycle world? What is the sensitivity to the parameters of the auxiliary tasks? What types of environments do auxiliary tasks work best in? These are just a few of the questions that could be thought of as incremental. They’re not creating anything new. But they would contribute meaningful insights and knowledge to the field. And they could form the basis of a good masters thesis.

That said, thinking about this has only increased my desire to contribute something completely novel to the field. Perhaps the appropriate path to do so is in a PhD, once some of the foundation has been laid within a Masters thesis. The work from a masters lends credibility to a PhD author creating something truly novel, not to mention it directly informs the work done towards a PhD.

 

Forward vs. Backward view of memory

I’ve worked with a few reinforcement learning environments that are partially observable. The observations in these environments lack the information needed to identify the current state of the environment. In these situations, different states are aliased, since many of them look exactly the same.

What is a partially observable environment? Imagine standing in an extreme snow storm in the middle of an empty field. No matter where you look, no matter how you stand, all you see is white. All you feel is cold. Each state you are in is hard to differentiate from the next. In this environment, however, consider that there is a bench in the middle of the field. When you are right next to the bench, you can see it and you can touch it. But as soon as you take a few steps away from it, it is no longer visible. There’s no way to sense it.

In this world, if you rely directly on your immediate observations (what you see and what you feel), all states look exactly the same, with the exception of the state where you’re directly beside the bench. But as soon as you take one step away from the bench, you’re back to a state that is aliased with almost all the others.

There are two approaches that come to mind for creating a feature representation that may overcome this state-aliasing problem in such environments. Both involve some element of “memory,” so that the feature representation is not composed solely of what the agent currently sees.

  1. A recurrent, memory-based approach. In this approach, the feature representation of each state consists of the observation at the current time step PLUS the observations from the previous n time steps. For example, if all I see is white snow, but I also know that I saw a bench at the last time step, I know quite precisely where I am (one step away from the bench).
  2. A predictive approach. In this approach, the feature representation of each state consists of the observation at the current time step PLUS the predictions from the previous time step. For example, if all I see is white snow, but at the previous time step I predicted that I was two steps away from the bench if I kept moving forward, I would now know where I am (again, one step away from the bench).

Both approaches seem to incorporate some form of memory (apologies for using “memory” quite loosely). The latter approach has a forward view of memory. Instead of looking back at the previous n time steps and summarizing what did happen, it looks ahead to the next n time steps and summarizes what will happen. I wonder how these two approaches would compare.
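To make the contrast concrete, here is a minimal sketch of the two feature constructions. Everything in it is my own illustration (the observation stacking, the zero padding, and the assumption that the previous predictions come from something like GVF outputs), not a prescription from any particular paper.

```python
import numpy as np

def history_features(obs, past_obs, history_len=3):
    """Backward view: current observation stacked with the last n observations."""
    recent = list(past_obs[-history_len:])
    while len(recent) < history_len:
        recent.insert(0, np.zeros_like(obs))   # pad early time steps with zeros
    return np.concatenate([obs] + recent)

def predictive_features(obs, prev_predictions):
    """Forward view: current observation stacked with the predictions made at the
    previous time step (e.g. 'steps to the bench if I keep walking forward')."""
    return np.concatenate([obs, prev_predictions])
```

In both cases the feature vector contains more than the aliased, snow-white observation; the difference is whether the extra components summarize the past or the predicted future.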

One thing that comes to mind is that a forward view of memory might generalize better. In other words, regardless of how I got to a given state, if the predictions of the future are the same, wouldn’t you want those states to generalize the same way?

Behavior policy for multi GVF (Horde) learning

One of my favorite papers I’ve read is “Horde: A Scalable Real-time Architecture for Learning Knowledge from Unsupervised Sensorimotor Interaction.”

TL;DR – As the name suggests, it presents a reinforcement learning agent architecture that is able to learn, and make, tens of thousands of predictions at every time step. These predictions are grounded in low-level sensorimotor observations. Each of these predictions is made via a general value function, some of which can be off-policy predictions. In other words (for those not as familiar with RL terminology), these are predictions about what would happen IF the agent were to behave in a certain manner, not necessarily the same as the way it is currently behaving. For example, while driving home from work, I could start to learn (and in the future predict) how long it would take me to get to my local grocery store. I don’t intend to go into the implementation details, but the high-level intuition is that the agent can make this prediction (how long to drive to the grocery store) while doing something different (driving home) because there is behavioral overlap between the two behaviors. Where there is overlap, the agent can learn.

This off-policy learning seems to be powerful. An agent is able to learn about a target policy that is not necessarily the same as the behavior it is currently following. This is especially powerful when the agent does not directly control the behavior. However, there is some downside. The learning algorithm is more complicated. With GTD(λ), for example, a second weight vector needs to be stored and updated, and importance sampling ratios (ρ) need to be calculated. The extra weight vector also increases storage / memory cost. So the off-policy advantage comes at a cost.
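To make that cost concrete, here is a rough sketch of a per-step GTD(λ) update for a single GVF with linear function approximation, following my understanding of the algorithm (Maei 2011). The class and variable names are my own, and a real Horde implementation handles state-dependent γ and cumulants with more care.

```python
import numpy as np

class GTDLambdaGVF:
    """Rough sketch of one off-policy GVF learner using GTD(lambda)."""

    def __init__(self, n_features, alpha, beta, lam, gamma):
        self.w = np.zeros(n_features)   # primary weight vector (the prediction)
        self.h = np.zeros(n_features)   # secondary weight vector – the extra storage cost
        self.e = np.zeros(n_features)   # eligibility trace
        self.alpha, self.beta, self.lam, self.gamma = alpha, beta, lam, gamma

    def update(self, x, cumulant, x_next, rho):
        # rho = pi(a|s) / b(a|s): importance sampling ratio between the GVF's
        # target policy and the behavior policy – the other extra cost.
        delta = cumulant + self.gamma * self.w.dot(x_next) - self.w.dot(x)
        self.e = rho * (self.gamma * self.lam * self.e + x)
        self.w += self.alpha * (delta * self.e
                                - self.gamma * (1 - self.lam) * self.e.dot(self.h) * x_next)
        self.h += self.beta * (delta * self.e - self.h.dot(x) * x)
        return delta
```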

I wonder, in cases like Horde where the agent controls the behavior policy directly and the purpose is to learn (rather than to maximize some reward), how another approach would work. Perhaps each GVF could be implemented on-policy. Rather than GTD(λ), a simple Sarsa algorithm could be used, and that GVF’s policy would be followed for a certain number of time steps. Because all of the experience would be relevant, learning would, in theory, happen faster. Furthermore, there would be less computational overhead, so the slack that was generated could be used for planning or some other task. Another approach: rather than simply cycling through GVFs to determine the behavior policy, perhaps you could choose the GVF that needs learning the most, follow its policy for a while, and then move on to the next. “Needing learning the most” … what would that mean? I suppose it could be based on the current average TD error, combined with the number of times the GVF has been followed. This approach would be similar to some curiosity-inspired exploration.
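As a thought experiment, here is what such a scheduler might look like. None of this comes from the Horde paper; the running TD-error averages and follow counts are bookkeeping I’m assuming the agent would maintain, and the UCB-style bonus for neglected GVFs is just one plausible choice.

```python
import numpy as np

def choose_gvf_to_follow(recent_td_errors, follow_counts, c=1.0):
    """Pick the index of the GVF whose policy to follow next.

    recent_td_errors[i]: running average of |TD error| for GVF i ("needs learning").
    follow_counts[i]: number of steps GVF i's policy has already been followed.
    """
    errors = np.asarray(recent_td_errors, dtype=float)
    counts = np.asarray(follow_counts, dtype=float)
    total = counts.sum() + 1.0
    neglect_bonus = c * np.sqrt(np.log(total) / (counts + 1.0))  # favor rarely-followed GVFs
    return int(np.argmax(errors + neglect_bonus))
```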

This idea of comparing behavioral approaches in Horde (what would the performance measure be for comparing the different approaches?) seems like it could be interesting.

The maths

I must admit that at times I’ve been frustrated by a lack of mathematical competency during my graduate studies in reinforcement learning. Other times, though, I feel like my current level of mathematical competency is perfectly fine. I generally have enough intuition to understand the papers I’m reading. Part of me feels that when I allow myself to be frustrated, I’m letting a certain type of author win – the type of researcher who dresses up fairly underwhelming results in overly complex mathematics to make them look more impressive than they really are. Often, complex math is needed … but other times it feels like I’m being shown dancing girls by a biz dev expert.

That said, the frustration is real.

I finished my undergraduate degree in 2002 (15 years ago now) with a major in Computer Science. I took the required mathematics courses (two calculus courses, a linear algebra course, statistics, and a numerical analysis course). I thrived in these courses (I’d have to look, but I’d guess my average was around 95% where the course average was 60%) and generally enjoyed them. I’m not exactly sure why I didn’t pursue more. That said, the knowledge I gained from these courses is both distant and lacking compared to what is required to easily understand the papers I read. For example, I was reading a paper the other day that talked about Fourier transforms. A simple enough concept that I’ve studied before, but distant enough that I couldn’t quite remember the details I needed to parse the paper. The week before, it was “LSI systems.” In both cases, googling the term unearthed 10 more terms and mathematical concepts I was unfamiliar with. Each term may require a week’s dedication, if not more, to truly understand. It felt like peeling an onion.

I’ve thought about possible solutions. In my first year (last year) I audited a few classes – a linear algebra course and a statistics course. I should probably do more of that during my grad studies. But as always, 3 hours of lectures a week, plus the time getting to and from campus, adds up and is a pretty expensive investment. Not to mention the fact that to truly get the most out of these courses, one must not just attend classes, but do the assignments, prepare for exams, etc. So this bottom-up approach (reading analysis and probability textbooks, taking MOOCs) is problematic because of its time cost when you have conferences to attend, pressure to publish, other classes to take for credit, etc. Furthermore, the signal-to-noise ratio of this approach is pretty low. That statement isn’t meant to minimize the value of understanding the entire content of a mathematics textbook. But for the utility of applying it to research directly, a lot of pages simply aren’t relevant. However, the only way to unearth the valuable bits might be to read the entire content. In a perfect world, there’d be a system in place that would give me just enough content to understand the concept I was after. This system would understand my current level of knowledge. I could query “Langevin flow” and be returned a list of pointers to articles, MOOCs, etc., given my current level of knowledge. Google can’t do this.

I’ve thought about a more top-down approach – googling every term that’s unfamiliar – but as I stated before, that seems like exponential tree expansion.

Another, more extreme solution would be spending a semester or two doing nothing but studying the appropriate math courses. This would mean taking the actual courses (maybe for credit, or at least doing the assignments, etc.). But obviously, as stated before, grad students have deadlines and should really publish and advance research … not just take background courses. This would have to be framed as a pseudo-sabbatical.

All this said, maybe it’s much ado about nothing. I have a general, intuitive understanding of most concepts I’ve encountered. And for those that required a more intimate understanding, I’ve been able to pick it up. Perhaps I could even argue that a beginner’s understanding of mathematical concepts could be an advantage over some more mathematically minded researchers, who treat their mastery of equations and proofs as a hammer seeking a nail, solving problems that don’t really matter. My lack of mathematical know-how hasn’t narrowed my research interests into purely application-based areas (although I suppose that wouldn’t be a problem if it did. There’s nothing wrong with applied research … as opposed to more theoretical work).

Nonetheless, the frustration is real. I desire to understand a bit more. And I don’t think I’m merely being tricked by dancing girls and flashy math equations. A better mathematical foundation would help. I’ll continue to use a more top-down approach (researching terms and ideas ad hoc, on demand) while sprinkling in a bit more bottom-up work – listening to math lectures when I have the chance. Doing the latter is easier said than done though, as “down time” is something that doesn’t occur that often. Perhaps I’ll dedicate myself to a MOOC to force the issue (the question of which content / MOOC notwithstanding).

I’d love to hear other solutions to this frustration though ….

Perception vs. Knowledge in artificial intelligence

“Perception” and “knowledge.” I believe I’ve been guilty of using these words in the past to communicate research ideas without being clear about what they mean to me. When borrowing words such as these (and ones such as “memory”) from other disciplines, one should use them as consistently with their original meaning as possible. But I don’t believe there is anything wrong with using words outside their original context, so long as you clearly articulate the meaning of the words you’re choosing.

“Perception” and “knowledge” were two such words I was using today when talking with a fellow graduate student at the University of Alberta. Part way through our conversation, it became apparent that I wasn’t being clear about how I was defining either word. We were discussing the idea of “perception as prediction,” an idea I have blogged about briefly in the past. The basic premise of the idea is that what we “perceive” to be true are actually predictions that we believe would hold true if we were to take certain actions. A baby perceives that it’s in the presence of its mother’s breast because it predicts that if it were to take certain actions, it would receive sensorimotor stimuli (breast milk). A golfer perceives that she is standing on the green because she predicts that if she were to strike the ball, it would roll evenly over the ground. This theory of perception could be taken further to suggest that I perceive that Donald Trump is the current President of the United States because I predict that if I were to enter his name into Google, I would discover that he was indeed the president.

The first two examples (the baby with breast milk, and the golfer) seem to fit nicely into a “perception” framework. To me, perception seems to be grounded in making sense of the immediate sensorimotor stream of data. “Making sense” perhaps means predicting what the implications for the sensorimotor stream would be if certain actions were taken now. However, the example of perceiving that Donald Trump is the president seems to state something beyond my immediate sensorimotor stream of data. It’s less about “perceiving” what these observations mean, and more like “knowledge” that can be drawn upon to make other predictions. Yet knowledge, too, seems to be based on making sense of what WOULD happen to my sensorimotor stream if I took certain actions now. Is this differentiation arbitrary? Is perception just a special case of knowledge – knowledge grounded in the immediate senses rather than future senses? I believe most people separate knowledge from perception in this way. But how do we “draw upon” knowledge when it’s needed? Is this knowledge hidden somehow in the sensorimotor data? Or is it part of a recurrent layer in a network? Or perhaps it really is no different than “perception,” in that this knowledge regarding Donald Trump could be one of millions of general value functions in a network of general value functions. If this is the case, then couldn’t you, in addition to saying “perception as prediction,” also say “knowledge as prediction”? I believe this is actually exactly what researchers have argued in papers such as “Representing Knowledge as Predictions.”

To me, perception refers to an ability to make further, more abstract sense of my immediate senses. To make “further, more abstract sense” means to be able to know that I’m “in a car” if I see street signs and hear the hum of a motor. This more abstract sense of being “in a car” could be modeled as a set of predictions that are true given the current sensorimotor input. Knowledge, on the other hand, can be modeled in exactly the same way: through a set of predictions grounded in immediate senses. The difference, perhaps, is that perception deals with more subjective information. It’s about making more abstract sense of my immediate senses. But the mechanism to compute perception and knowledge could be the same. If that is the case, and these computations take the same input, and their output is used in the same way, then what is the difference? Perhaps nothing.
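One way to see the “same mechanism” point is to write both kinds of question in the standard GVF form: a target policy, a cumulant, and a continuation function. The sketch below is purely illustrative; the specific questions, field names, and observation keys are my own assumptions, not from any paper.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class GVFQuestion:
    target_policy: Callable   # "if I behaved like this ..."
    cumulant: Callable        # "... how much of this signal would I accumulate ..."
    continuation: Callable    # "... over this (possibly state-dependent) horizon?"
    description: str = ""

# A "perceptual" question, grounded in the immediate sensorimotor stream:
on_the_green = GVFQuestion(
    target_policy=lambda obs: "strike_ball",
    cumulant=lambda obs: float(obs.get("ball_rolled_evenly", 0.0)),
    continuation=lambda obs: 0.0,      # essentially a one-step prediction
    description="If I struck the ball now, would it roll evenly?",
)

# A "knowledge" question, posed with exactly the same ingredients but a longer horizon:
trump_is_president = GVFQuestion(
    target_policy=lambda obs: "search_google_for_current_president",
    cumulant=lambda obs: float(obs.get("result_names_trump", 0.0)),
    continuation=lambda obs: 0.9,      # an extended, multi-step horizon
    description="If I searched, would I find that Trump is president?",
)
```

Both questions are answered by the same prediction machinery; only the horizon and the grounding differ, which is roughly the distinction the conversation kept circling.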

 

Startup ideas are products of environment

I used to fashion myself as an “idea” guy … in terms of ideas of the startup variety. I felt like I could, at any point, rattle off 4 or 5 startup ideas … and not ALL of them would be laughed at by an investor. Somehow I believed there was something innate in this ability. However, since coming back for grad school at the University of Alberta, my perspective on this “ability” has changed.

It’s been over a year since I started my research in reinforcement learning. During this time, I can’t say I’ve had a single startup idea (not quite true, but close). You’d think I could say that it was because I didn’t give any thought to it … but that’s not entirely true. There have been moments where I’ve tried to think of ideas. And each time I’ve grasped at straws. Maybe the lack of ideas stems from the fact that I approach the search for ideas armed with my reinforcement learning hammer, seeking a nail. This approach (trying to find a problem your technology expertise can solve) could be argued to be an anti-pattern for coming up with good ideas. Furthermore, it seems that reinforcement learning is in its infancy when it comes to being applied to real-world problems, and it still struggles to find traction because of the lack of data (or, to be more precise, the time it takes to acquire real-life data). I blogged about this challenge before in a post called “Scaling horizontally in reinforcement learning.” But I think the root of it might be environmental. My time at the water cooler is spent talking about how to optimize algorithms, or how to leverage GVFs to form predictive state representations. It’s not spent talking about creating an app for that, as it was when I spent my days in the Bay Area startup scene. For the record … I’m not complaining about this change.

The effect seems intuitive. Obviously someone who is immersed in an environment where everyone and their dog are talking about startup ideas is going to have a few ideas of their own. But until I was immersed in the academic environment, immediately after spending several years working in startups in SOMA, I didn’t realize how much of an effect the environment has on the type of problems a person attempts to solve.

 

Perhaps time really does move faster as you age …

When I was a kid, summer holidays (those two magical months of July and August) seemed like an eternity. Now, however, as a 37-year-old, it seems like it was just yesterday that I was launching fireworks to celebrate the Labor Day weekend (we really did. And it was amazing!). That was over 2 months ago. I’m sure most “adults” can relate to this feeling of time moving faster the older you get. But perhaps there’s a reasonable explanation for this effect.

Time is conventionally thought of as continuous. I’m sure quantum physicists much more intelligent than I am have postulated a discrete time domain, but for now, I perceive time as continuous. However, I have recently been implementing reinforcement learning algorithms in robotic domains, where time is discretized.

In such domains, the robot agent “wakes up” at a certain frequency for computation. Each time the agent “wakes up,” it must choose an action, take the action, observe the environment, and finally learn.

[Figure: RL.png]

In the robot environments I have worked with, I, the designer, have defined this rate – how often the agent “wakes up,” observes its most recent view of the environment, and takes an action. To such an agent, it is easy to imagine that the only definition of “elapsed time” is the number of learning cycles it has processed. It has no concept of what happened, let alone how long it took, between these learning cycles.

It is natural to believe that a young child has a much more active brain, processing at a higher frequency than an older senior. Imagine if a young child “learns” 1000 times per second, and a senior learns only 100 times per second. To the child, a year represents 10x as many learning cycles, so it quite literally feels 10 times as long. There is a similar intuition about the perception of elapsed time (or lack thereof) when people wake up from a night’s sleep, or from a comatose state.
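Purely as an illustration of the discretized loop described above (and not any real robot code), here is a toy sketch in which the agent’s only notion of elapsed time is its cycle counter. The `choose_action`, `step`, and `learn` callables are hypothetical stand-ins.

```python
import time

def run_agent(wake_hz, wall_clock_seconds, choose_action, step, learn):
    """Run the discretized agent loop for a fixed amount of wall-clock time."""
    cycles = 0
    deadline = time.time() + wall_clock_seconds
    obs = None
    while time.time() < deadline:
        action = choose_action(obs)      # choose an action
        obs = step(action)               # take it and observe the environment
        learn(obs, action)               # learn from the new observation
        cycles += 1                      # the agent's only measure of elapsed time
        time.sleep(1.0 / wake_hz)        # sleep until the next "wake up"
    return cycles  # the same wall-clock interval, measured in the agent's subjective units
```

An agent run at 1000 Hz returns ten times the cycle count of one run at 100 Hz over the same interval, which is the whole intuition in miniature.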

I am making huge generalizations when comparing the human brain to this simple agent / environment framework. I barely have a basic understanding of neurology. But I suspect that the brain doesn’t just operate on a single discrete observation set at certain frequencies. So this comparison to the simple RL environment is somewhat naive. However, at some level, one could imagine that the computational frequency of the human brain slows down with age. If that is the case, and if you believe that is the only metric we have to perceive the passage of time, it only seems natural that time does indeed speed up as we age.

Dynamic Horde of General Value functions

Just finished documenting my work on creating an architecture for a dynamic Horde of general value functions.

General value functions (GVFs) have proven to be effective in answering predictive questions about the future. However, simply answering a single predictive question has limited utility. Others have demonstrated further utility by using these GVFs to dynamically compose more abstract questions, or to optimize control. In other words, to feed the prediction back into the system. But these demonstrations have relied on a static set of GVFs, handcrafted by a human designer …

https://github.com/dquail/RLGenerateAndTest

Dynamic Horde

I’m just finishing up a research project about an architecture for a dynamic set of general value functions. I’ll look to share the documentation and source at http://github.com/dquail as per usual. But in the meantime, I wanted to share the abstract.

General value functions (GVFs) have proven to be effective in answering predictive questions about the future. However, simply answering a single predictive question has limited utility. Others have demonstrated further utility by using these GVFs to dynamically compose more abstract questions (Ring 2017), or to optimize control (Modayil & Sutton 2014). In other words, to feed the prediction back into the system. But these demonstrations have relied on a static set of GVFs, handcrafted by a human designer.

In this paper, we look to extend the Horde architecture (Sutton et al. 2011) to not only feed the GVFs back into the system, but to do so dynamically. In doing so, we explore ways to control the lifecycle of GVFs contained in a Horde – mainly to create, test, cull, and recreate GVFs, in an attempt to maximize some objective.
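To make the lifecycle in that last sentence concrete, here is a rough sketch of what a create / test / cull / recreate loop could look like. The `horde` interface, `make_random_gvf`, and the `usefulness` measure are all hypothetical stand-ins of mine; the actual architecture lives in the RLGenerateAndTest repository linked earlier.

```python
def dynamic_horde_loop(horde, make_random_gvf, usefulness,
                       total_steps=1_000_000, max_gvfs=100,
                       cull_every=10_000, cull_fraction=0.2):
    """Illustrative generate-and-test lifecycle for a dynamic Horde."""
    # Create: start with a full population of randomly generated GVFs.
    while len(horde.gvfs) < max_gvfs:
        horde.add(make_random_gvf())

    for t in range(1, total_steps + 1):
        horde.step()  # all GVFs learn in parallel, off-policy, from the same experience
        if t % cull_every == 0:
            # Test: rank GVFs by some usefulness measure (e.g. prediction accuracy,
            # or how much their outputs contribute to the agent's representation).
            ranked = sorted(horde.gvfs, key=usefulness)
            n_cull = int(len(ranked) * cull_fraction)
            # Cull the least useful GVFs ...
            for gvf in ranked[:n_cull]:
                horde.remove(gvf)
            # ... and recreate new candidates to take their place.
            while len(horde.gvfs) < max_gvfs:
                horde.add(make_random_gvf())
```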