Reinforcement learning startups?

I spent some time mingling with some CTOs of iNovia portfolio companies today. It was fun getting a bit back into the startup world, especially with the obvious AI slant to the meeting. One subject I discussed a lot with some of them was the apparent lack of startups leveraging reinforcement learning.

I’m assuming that this perception is true, but admit that I have no data on this other than the personal and anecdotal. Based on the press clippings I read and the startup founders I know firsthand, the number of startups leveraging deep learning significantly outnumbers those using RL. So I believe there is something to this, which raised the natural question: why?

I’ve blogged about this before, but the discussions today went a bit beyond that. No one, obviously including myself, claimed to have the answer. That said, I believe there are several reasons for the apparent lack of RL startups, most of them pivoting around the availability of data (or lack thereof).

Data in RL is temporal. Unless you’re simulating games on the Atari, it is difficult to come by, especially because of its often “real world,” action-based nature. You can’t just use Amazon Mechanical Turk and label 500,000 images of cats to train your network. You have to generate a real stream of experience. That’s more difficult to do.

More specific challenges around acquiring this data might be:

  • The cost of experimentation – Exploration is key in creating data used for reinforcement learning. When simulating games of Go, one can experiment and make a move that is thought to be poor. The worst that can happen is that your agent’s self-esteem takes a hit because it loses another game! The stakes are low. But for a real-time stock-trading agent, the reinforcement learner suffers a significant cost when experimenting. This exploration cost isn’t suffered in supervised, classification-type approaches.
  • Delayed value – Any value derived from a reinforcement learning solution doesn’t occur until after the agent has been trained. It’s hard to convince an organization to adopt your product if they have to wait a month for it to learn before providing value. Because of the lack of simulators, these agents must learn on the fly, so when they’re deployed they don’t provide immediate value.
  • Temporal nature of data – I’ve blogged about this before, but the nature of reinforcement learning data is temporal, often rooted in “real time.” Agriculture data, for example: it takes a full calendar year to acquire results on how a crop performs, rather than 0.001 seconds to simulate a game of Go. The same holds, to a degree, for acquiring data from a manufacturing plant. We’re at the early stages of RL adoption, so this data just isn’t there yet, and beginning to acquire it is difficult for the reason above – why acquire the data in the first place if there’s no immediate payoff?
  • Infancy of the technology – I don’t buy this one, but it was mentioned that RL is a more novel approach than supervised learning. RL, supervised, unsupervised – they’ve all been “around” a long time. Computational power and the availability of data are what gave rise to supervised learning. I’d conclude by saying that for RL we now have the computational power, but we still lack the data.

“Hard” questions

Not to be confused with “The Hard Thing About Hard Things!” … which … incidentally is a fantastic read. The TL;DR of which is that nothing in business (and in life?) follows any sort of recipe. You’ll be faced with countless difficult situations in leadership, for which there are no silver bullets. So Ben Horowitz shares a whole bunch of stories about hard situations, and how they were handled (not always well).

Ok … so enough about the book plug. Back to “hard questions,” this time with respect to the “questions” that a general value function (GVF) is looking to answer.

I’ve thought about this a bit recently. In a family of GVFs (questions) that an agent could ask about its data stream, some questions seem intuitively easy to answer, while other types of questions seem hopeless. I hadn’t given it much thought until I heard another grad student make a similar reference during a really interesting talk about general value functions today. Her presentation compared different algorithms used by a GVF in a robot setting. The comment in particular was that the algorithms all performed similarly if the question was “easy,” but when the question was “hard,” the algorithms performed differently.

This reminded me again to think about what a “hard question” really is. What do we mean when we say a prediction, or GVF, is “hard”?

In a robot setting, a GVF answers a question about the robot’s sensorimotor stream. Given a policy, “how much light will I see in the near future?” and “how many steps will I take before I hit a wall?” are examples of GVF questions. Some of these seem difficult to answer.
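For reference (my paraphrase of the standard formulation from the Horde line of work, not anything specific to the talk), a GVF is defined by a policy $\pi$, a cumulant signal $C$, and a continuation function $\gamma$, and its answer is the expected discounted sum of the cumulant:

$$
v_{\pi,\gamma,C}(s) \;=\; \mathbb{E}_{\pi}\!\left[\;\sum_{k=0}^{\infty}\left(\prod_{j=1}^{k}\gamma(S_{t+j})\right) C_{t+k+1} \;\middle|\; S_t = s\right]
$$

The light question corresponds roughly to choosing $C$ as the light-sensor reading with $\gamma$ near 1; the wall question corresponds to $C = 1$ per step with $\gamma$ dropping to 0 on the step the wall is hit.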

I think it’s fair to say that a GVF being “hard” to learn means it is difficult to come up with an accurate prediction.

So the next question becomes: what makes it difficult to form an accurate prediction? I can see two reasons a prediction may be difficult to approximate.

  • There isn’t enough relevant experience to learn from. Predicting how long it would take to get from Edmonton to Calgary would clearly be “hard” if you lived in Regina and rarely travelled to Alberta.
  • Your feature representation is insufficient to answer this type of question. I’m less clear about what I mean by this, but it seems as though the features required to answer one question would be quite different from the features required to answer another type of question. “What color will I see in the near future?” quite clearly requires a different feature representation than “What sound will I hear in the near future?” If the robot agent only represents its state with visual sensors, the latter type of question will clearly be incredibly hard to answer. This question, to be fair, might be illogical: how can you ask a question about what you’d hear if you were never given audio senses? So perhaps a better example is a question like “What is the chance I will see an ambulance pass me on the highway?” If my representation includes audio signals, this question may be easy to answer: as soon as I “hear” a siren, I can quite accurately predict that I’ll soon see the ambulance. Without that audio signal, however, the question suddenly becomes much more difficult, if not impossible (if you also remove my rear-view mirrors). A tiny sketch after this list makes the point concrete.
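Here is that sketch, a contrived toy example (the feature names and numbers are made up purely for illustration): once the audio feature is dropped from the representation, the two underlying situations produce identical feature vectors, so no function learned over those features can tell them apart.

```python
# Hypothetical highway features: [car_ahead_visible, lane_position, siren_audible]
state_ambulance_approaching = [1.0, 0.5, 1.0]   # a siren is audible behind me
state_no_ambulance          = [1.0, 0.5, 0.0]   # quiet road, otherwise identical

def visual_only(features):
    """A representation that keeps only the visual features."""
    return features[:2]

# With audio the two situations are distinguishable; without it they are aliased.
print(visual_only(state_ambulance_approaching) == visual_only(state_no_ambulance))  # True
print(state_ambulance_approaching == state_no_ambulance)                            # False
```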

Clearly there’s more to a “hard” question than this. But it seems these two attributes are a good starting place.

Is there a universal “knowledge”?

The “knowledge” is in the data.

In other words, predictions are made directly from this data stream, and these predictions are all that you need to represent “knowledge.”

That said, each agent (person) is exposed to a different subset of this data.

Therefore, everyone’s “knowledge” is personal. No?

What does this say about any sort of ground truth about knowledge? One’s “knowledge” may be in direct conflict with another’s. And both may be correct?

So, knowledge can conflict. Well, actually, since knowledge is completely subjective, there is really no universal ground truth to it. It’s personal.

What then about “facts”? Aren’t facts thought to be bits of knowledge that are “true”? Wait … Oh no … ALTERNATIVE FACTS!!!

Mobile applications as RL test environments

Atari is a popular environment for testing reinforcement learning algorithms. This got me thinking about the possibility of using mobile applications for the same purpose. There might be some practical advantages to doing so – both from a researcher’s and a commercializer’s perspective.

For researchers, the ability to be exposed to multiple different sensors might provide value. Not only would you have access to pixel and audio data from the app itself, you might also have microphone data from the surrounding environment, and accelerometer data could be included as well.

One of the biggest technical challenges in instrumenting an application for this kind of testing might be accessing the pixel stream efficiently. From what I remember, there’s no way to efficiently access a UIView’s current pixel data. Rendering the pixel data from the UIView is computationally expensive (TBD by how much), which may be problematic since it would need to be done at every time step. This is especially problematic if one were to provide RL tools / services to application developers (more on this later), since your solution would introduce latency – something to be avoided like the plague in mobile applications.

There would be obvious commercial appeal to an RL SDK + service that was easily integrated into an application. Using senses common to any app (pixel data, sound), rather than needing access to internal application state, such a service might be able to make predictions about user behavior. Those predictions would be valuable to an application developer. For example, the SDK could trigger a notification that there was a good chance the user would quit the app within 5 seconds. (Yes, enabling application developers to suck users into spending even more time staring at their phones may be a particular type of evil.)
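As a rough sketch of how that prediction might be framed (my own framing, not any existing SDK): treat “the user quits” as a cumulant, pick a discount so the prediction horizon works out to roughly five seconds, and learn it with a simple TD(0) update over whatever features the app’s observable stream provides. All of the names and numbers below are assumptions for illustration.

```python
import numpy as np

NUM_FEATURES = 64   # assumed size of the features built from pixel/audio data
GAMMA = 0.98        # at ~10 steps/sec, a horizon of about 1/(1-GAMMA) = 50 steps, i.e. ~5 s
ALPHA = 0.01        # learning step size

w = np.zeros(NUM_FEATURES)  # linear weights for the "will the user quit soon?" prediction

def td_update(x, x_next, quit_signal):
    """One TD(0) update for the quit prediction.

    x, x_next   -- feature vectors for the current and next time steps
    quit_signal -- 1.0 on the step the user quits the app, 0.0 otherwise
    """
    global w
    prediction = w @ x
    # Terminate the prediction (gamma -> 0) on the step the quit actually happens.
    target = quit_signal + GAMMA * (1.0 - quit_signal) * (w @ x_next)
    w += ALPHA * (target - prediction) * x

def quit_likely(x, threshold=0.5):
    """The hypothetical trigger for notifying the app developer."""
    return float(w @ x) > threshold
```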

Another technical challenge is that a couple of test applications would be needed to develop such an SDK / service. Perhaps a larger company with a popular app would allow you to integrate with their application to perform research. Or, perhaps better yet, in an Android environment this pixel data may be available, so you could experiment with such a solution even without the cooperation of the app developer.

What does “memory” mean to me?

This sounds like a bizarre question. Memory is a pretty fundamental word we all use in our daily lives. But it’s a word that carries a lot of significance in the field of computer science – in particular, artificial intelligence. I use it. Often. And usually pretty haphazardly. At the encouragement of others within my department, I’ve been convinced to read a bit more about the psychology / neurology roots of the word. Actually … I should restate that. I have no idea what the etymology of “memory” is. I’m guessing it’s rooted in psychology, and then neurology, so I include those. But I’m sure there are more fields to look at when studying the definition of the word. I digress ….

I thought before I dove into looking up how these fields view “memory” I should document what I refer to when using the word.

I admit that my usage is likely fairly naive. When I refer to “memory” it’s generally in the context of a reinforcement learning agent. And it usually, loosely, means information from the past that is available to the agent at the current time step. So previous observations are memory. Traces are memory. Anything that encapsulates past information is “memory.” Hmm. That’s about it. But I realize that is a poor definition. By that definition, a value function is “memory”: it encapsulates information from the past and makes it available to the agent at the current time step. But that’s clearly not what I mean. So I need to go further. “Memory,” to me, is more raw than a general value function. It’s close to an original observation from the past, available at the current time. But that’s not quite right either. I leave a little bit of wiggle room to massage the original observation into a representation available for the current step (but not massaged into a value function).
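A minimal sketch of the kind of thing I mean, with made-up shapes and decay values: both of these keep past information around at the current step, but in a raw-ish form (stacked observations, an exponentially decaying trace) rather than as a learned summary the way a value function would.

```python
from collections import deque
import numpy as np

class StackedObservationMemory:
    """Keep the last n raw observations and expose them as features at the current step."""
    def __init__(self, n, obs_size):
        self.buffer = deque([np.zeros(obs_size)] * n, maxlen=n)

    def update(self, obs):
        self.buffer.append(np.asarray(obs, dtype=float))

    def features(self):
        # Past observations, only lightly massaged (concatenated) for the current step.
        return np.concatenate(list(self.buffer))

class TraceMemory:
    """An exponentially decaying trace of past observations."""
    def __init__(self, obs_size, decay=0.9):
        self.trace = np.zeros(obs_size)
        self.decay = decay

    def update(self, obs):
        self.trace = self.decay * self.trace + np.asarray(obs, dtype=float)

    def features(self):
        return self.trace
```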

So there you have it. An incredibly lackluster and flawed definition. I’ll intentionally leave it at that for this post. (I think the point of studying the etymology of “memory” is not only to avoid annoying psychologists/neurologists with my usage of the term – but to inspire and motivate work within AI.)

Incremental things

In the startup world, calling an idea “incremental” is somewhat of an insult. Startup founders are constantly filled with cliches of “going big or going home,” “disrupting industries,” “building monopolies” and “creating unicorns.” According to the script, incremental is boring. It’s unimaginative. It goes against the spirit of innovation. And it’s certainly not something you want to be accused of – if you’re a tech entrepreneur, especially one raising venture capital.

This world of anti-incremental, right or wrong, has been the world I’ve lived in for the last 10 years. But today, in a supervisor meeting about my thesis work, it was suggested that my current plans are potentially too complicated for a Master’s degree. I should save that idea for my PhD and focus on something incremental instead. I was taken a bit aback by this. I’ve become instinctively dismissive of anything but completely novel approaches (note – I’m not trying to pretend I’m Elon Musk here, creating missions to Mars … I’ve started my fair share of fart apps in the past decade, but I do so always with a bit of shame). I need to think about it a bit more, but I think an incremental approach – at least for a Master’s thesis – makes good sense.

Again – I need to do a bit more navel gazing – but the problem with a completely novel approach within scientific research (in my case, I am considering a new algorithm for discovering a cluster of beneficial general value functions within a reinforcement learning agent) is a matter of scope. Coming up with the algorithm, running a few experiments, and demonstrating results is perhaps the easy part. Explaining and justifying each decision point of the algorithm, and comparing it against competing algorithms, is the hard part. Not to mention that in a wide-open research topic such as discovery (of features), the related work is immense, and each related paper and idea should be thoughtfully considered. For all these reasons, the scope explodes and perhaps exceeds that of a Master’s thesis. I shouldn’t make the blanket statement that one can’t invent a new algorithm / architecture within the scope of a Master’s thesis, but I do believe it to be more appropriate for a PhD thesis.

Again, this idea that creating something new is too grand in scope is foreign in startups. Sure, there is the lean startup manifesto, which guides its followers to build things in small increments. But those increments are all intended to add up to something truly disruptive and novel. In the startup world, you could invent a new algorithm / service, and it either works or it doesn’t (based on engagement). But in the scientific world, whether something “works” or not isn’t measured by user engagement. More thought needs to be dedicated to addressing each decision point, and to comparisons with other approaches. Note that achieving either (user engagement vs. a comprehensive description of the thought process and scientific steps taken to achieve a result) can be difficult. In the case of the former, it’s more of a dice roll – like catching lightning in a bottle. You can get lucky, create something delightful for users in a couple of months, and “be done.” The idea that you need to go beyond creating something, and define and defend each decision point, is something I’m still getting acclimatized to.

The nice thing about doing something more incremental during a Master’s thesis is that much of the groundwork has been laid for you. For example, the DeepMind UNREAL paper has drawn a lot of attention from us at the University of Alberta (because of our interest in GVFs and our many ties with DeepMind). It’s a fascinating body of work. It challenges the idea of a predictive feature representation – instead using auxiliary tasks to sculpt the feature representation directly (by finding generally useful features across tasks). But many scientific questions arise from this work. How would auxiliary tasks compare with a predictive representation in an environment like the compass world or cycle world? What is the sensitivity to the parameters in the auxiliary tasks? What types of environments do auxiliary tasks work best in? These are just a few of the questions that could be thought of as incremental. They’re not creating anything new, but they would contribute many meaningful insights to the field, and could form the basis of a good Master’s thesis.

That said, thinking about this has only increased my desire to contribute something completely novel to the field. Perhaps the appropriate path to do so is in a PhD, once some of the foundation has been laid within a Master’s thesis. The work from a Master’s lends credibility to a PhD author creating something truly novel, not to mention that it directly informs the work done towards the PhD.

 

Forward vs. Backward view of memory

I’ve worked with a few reinforcement learning environments that are partially observable. The observations seen in these environments lack the information needed to identify the current state within the environment. In these situations, different states are aliased – many of them look exactly the same.

What is a partially observable environment? Imagine standing in an extreme snowstorm in the middle of an empty field. No matter where you look, no matter how you stand, all you see is white. All you feel is cold. Each state you are in is hard to differentiate from the next. In this environment, however, consider that there is a bench in the middle of the field. When you are right next to the bench, you can see it and you can touch it. But as soon as you take a few steps away from it, it is no longer visible. There’s no way to sense it.

In this world, if you rely directly on your immediate observations (what you see and what you feel), all states look exactly the same, with the exception of the state where you’re directly beside the bench. But as soon as you take one step away from the bench, you’re back to a state that is aliased with almost all the others.

There are two approaches that come to mind for creating a feature representation that may get over this state-aliasing problem in such environments. Both involve some element of “memory,” so that the feature representation is not comprised solely of what the agent currently sees.

  1. A recurrent, memory-based approach. In this approach, the feature representation of each state consists of the observation of the current time step PLUS the observations from the previous n time steps. For example, if all I see is white snow, but I also know that I saw a bench at the last time step, I know quite precisely where I am (one step away from a bench).
  2. A predictive approach. In this approach, the feature representation of each state consists of the observation of the current time step PLUS the predictions from the previous time step. For example, if all I see is white snow, but at the previous time step I predicted that I was two steps away from the bench if I kept moving forward, I would now know where I am (again, one step away from a bench). A minimal sketch of both constructions follows below.
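Here is that sketch, with made-up observation and prediction sizes (the previous predictions would come from whatever GVF learner is actually being used; everything here is illustrative):

```python
import numpy as np

def recurrent_features(current_obs, past_observations):
    """Approach 1: the current observation plus the previous n raw observations."""
    return np.concatenate([current_obs] + list(past_observations))

def predictive_features(current_obs, previous_predictions):
    """Approach 2: the current observation plus the predictions made at the previous step."""
    return np.concatenate([current_obs, previous_predictions])

# Toy snowfield example with a 2-bit observation: [see_bench, feel_bench].
obs_t      = np.array([0.0, 0.0])    # all I see right now is white snow
past       = [np.array([1.0, 1.0])]  # but last step I saw and touched the bench
preds_prev = np.array([0.9])         # or: last step I predicted "bench within 1 step ahead"

backward_view = recurrent_features(obs_t, past)         # [0., 0., 1., 1.]
forward_view  = predictive_features(obs_t, preds_prev)  # [0., 0., 0.9]
```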

Both approaches seem to incorporate some form of memory (apologies for using “memory” quite loosely). The latter approach has a forward view of memory: instead of looking back at the previous n time steps and summarizing what did happen, it looks ahead into the next n time steps and summarizes what will happen. I wonder how these two approaches would compare.

One thing that comes to mind is that a forward view of memory might generalize better. In other words, regardless of how I got to a given state, if the predictions of the future are the same, wouldn’t you want each of those states to generalize the same?