Are we online-only learning agents? Is the best agent one that only learns online?

Do humans learn fully online? I’ll define “online” learning as learning that leverages only the data incoming from the current observation: the agent learns only from this new data. In such a world, batch processing and experience replay are not “online.” Is that how humans work?

I suspect not. I think there’s a balance between online learning and planning in humans. Furthermore, I think we shift that balance between the two depending on the situation. When a human is faced with an intense situation – for example, a professional hockey player in the middle of their shift – all of the data processing is in the foreground. They take the immediate observation and act upon it. There is no background planning taking place. At the other extreme is the same athlete back home, daydreaming just before they fall asleep (or while they are actually sleeping). At this time, the human is not doing much, if any, foreground processing. The observations they are receiving are minimal – they’re lying in a dark room. And their action decisions are even more minimal. But in such a state of dreaming, their mind is free to plan (with the model of the world that has been learned).

This brings me to an ideal agent. I think an ideal agent will maximize its computational power. There is no idle time for such an agent. It learns as much as it can at every moment. There are no sleep()s! Therefore, when such an agent is in a situation where immediate action is required (like that of an athlete), its computational power must be dedicated completely to foreground processing. But later, when the same agent is in a safe place, it has the opportunity to plan using the model of the world it has learned. I usually hate suggesting that humans are the default inspiration for designing the best agent (i.e. just accepting that neural nets are the de facto best function approximator because humans implement them) … but in this case, I think we have it right.
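To make this concrete, here is a minimal sketch of what I mean, in the spirit of Dyna: one step of foreground learning from real experience every timestep, then whatever compute budget remains goes to background planning with the learned model. The toy environment, the busy_fraction knob, and every name here are my own illustrative inventions, not any particular system.

```python
import random

# A toy, tabular, Dyna-style agent: every timestep it does one step of
# "foreground" learning from the real experience, then spends whatever
# compute budget remains on "background" planning with its learned model.

N_STATES, ACTIONS = 10, (0, 1)                 # tiny corridor world (illustrative)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
model = {}                                     # (s, a) -> (r, s') learned model
alpha, gamma, epsilon = 0.1, 0.95, 0.1

def env_step(s, a):
    """Hypothetical environment: move left/right, reward at the right end."""
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    return (1.0 if s2 == N_STATES - 1 else 0.0), s2

s = 0
for t in range(10_000):
    # Foreground: act on the immediate observation and learn from it.
    if random.random() < epsilon:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda act: Q[(s, act)])
    r, s2 = env_step(s, a)
    Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, act)] for act in ACTIONS) - Q[(s, a)])
    model[(s, a)] = (r, s2)

    # Background: plan with the learned model, but only when time allows.
    # busy_fraction stands in for how demanding the situation is: an athlete
    # mid-shift would be near 1.0, a daydreamer near 0.0.
    busy_fraction = 0.2
    for _ in range(int((1.0 - busy_fraction) * 10)):
        (ps, pa), (pr, ps2) = random.choice(list(model.items()))
        Q[(ps, pa)] += alpha * (pr + gamma * max(Q[(ps2, act)] for act in ACTIONS) - Q[(ps, pa)])

    s = 0 if s2 == N_STATES - 1 else s2
```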

History of state indexing

The trajectory of an MDP is given as:

S0, A0, R1, S1, A1, R2, S2, A2, R3, …

An agent is given a state, takes an action, receives a reward and the next state … and the cycle continues. Why, by convention, is the indexing of states, actions, and rewards done in this way?

I think it’s important to note that this is convention, and convention only. R1 above could just as easily have been denoted R0. I think the choice of indexing comes down to which fits our mental model of how things work. You could argue both sides.

One mental model is that, since a given state, action, and resulting reward are each required at each learning step in most algorithms, it would make sense that these states, actions, and rewards share the same index. In such a mental model, the above would be organized as S0, A0, R0. This would seem reasonable.

But I think of the stream of states, actions, and rewards from a more temporally consistent, data-stream perspective. From this view, the state and reward enter the agent at the exact same temporal step. The indexes have temporal significance, so the state and reward should share the same index. Therefore, the indexing sequence above makes the most sense to me.
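To make my preferred convention concrete, here is a minimal sketch of the agent-environment loop where the reward and the state that arrive together share the same index. The env and agent interfaces are hypothetical stand-ins, not any particular library’s API.

```python
def run_episode(env, agent, max_steps=1000):
    S = [env.reset()]        # S[0]: the first state the agent is given
    A = []                   # A[t]: the action chosen from S[t]
    R = [None]               # there is no R[0]; the first reward is R[1]
    for t in range(max_steps):
        A.append(agent.act(S[t]))
        reward, next_state, done = env.step(A[t])
        R.append(reward)                 # arrives as R[t+1]...
        S.append(next_state)             # ...together with S[t+1]
        agent.learn(S[t], A[t], R[t + 1], S[t + 1])
        if done:
            break
    return S, A, R
```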

Driving behavior to learn abstract predictive state and knowledge

As humans born into the world, we are exposed to nothing but a stream of raw sensorimotor data – visual, audio, touch, smell. Using this data alone, we begin to form increasingly abstract knowledge.

One approach for artificially intelligent agents to represent such knowledge is to do so through predictions. “Perception as prediction” is something I’ve written about before. One such approach would be to use the General Value Function (GVF). Given such an approach, a GVF could be created which predicts some raw value in the observation stream. The value of such a prediction could then be used in the question of a higher-level prediction. On and on it goes, using GVFs at each layer, until more abstract knowledge is captured. (I sketch what such layered GVFs might look like in code after the list below.) But for a system attempting to learn this abstract knowledge, how would such an agent drive behavior? There are several challenges which I’m encountering in my current research in this area:

  • The GVFs at higher levels require the GVFs at the lower levels to already have been learned. Therefore, it’s most efficient to follow behavior policies that optimize learning the early layers. Perhaps the agent should, in sequence, follow behaviors that foster learning these low-level GVFs – in other words, follow the behavior policies from the GVFs directly, with some minimal exploration. Then, and only then, would the agent be able to learn the higher-level GVFs. In a way, this would be sequential on-policy learning, rather than parallel off-policy learning.
  • The later layers depend on input from the earlier layers. So until these earlier layers are learned, the later layers are being handed incorrect data to learn from. This ties up unnecessary computation time and initializes the weight vectors of these later GVFs to incorrect values – often very incorrect values.
  • If this were the approach, however, the agent could not completely stop following the policies that learn the lower-level GVFs. In other words, the policy for a first-layer GVF must, on occasion, still be followed. Because we are using function approximation rather than a tabular representation, if the policy is never followed, the predictive value for that GVF will gradually be eroded. It’s like a human who learns that if they stand at the edge of a lake and extend their foot, they will feel the sensation of “wetness.” Once they have learned that, they must, on occasion, experience the feeling of wetness by extending their foot. This is because there are many other similar states in which the agent will NOT experience this sensation. Because of function approximation, each time the agent does not experience this sensation, the value of the GVF predicting wetness will move closer to NOT experiencing it. So the agent needs to be reminded occasionally how it feels to experience this sensation, and which states lead to it. This raises the question: when should the agent revisit and relearn the values and behavior policies of these earlier GVFs?
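Here is the sketch of layered GVFs I promised above: a low-level GVF predicts a raw cumulant from the observation, and a higher-level GVF uses the low-level GVF’s prediction as its own cumulant. The GVF class, the random feature stream, and the choice of cumulant are all illustrative assumptions, not taken from any particular paper or codebase.

```python
import numpy as np

class GVF:
    """A minimal linear GVF learned with on-policy TD(0)."""

    def __init__(self, n_features, gamma=0.9, alpha=0.1):
        self.w = np.zeros(n_features)
        self.gamma, self.alpha = gamma, alpha

    def predict(self, x):
        return float(np.dot(self.w, x))

    def update(self, x, cumulant, x_next):
        # TD(0) update toward the discounted sum of the cumulant.
        td_error = cumulant + self.gamma * self.predict(x_next) - self.predict(x)
        self.w += self.alpha * td_error * x

n_features = 8
gvf1 = GVF(n_features)   # low-level question, answered from the raw stream
gvf2 = GVF(n_features)   # higher-level question, built on top of gvf1

rng = np.random.default_rng(0)
x = rng.random(n_features)
for t in range(5000):
    x_next = rng.random(n_features)
    raw_cumulant = x_next[0]                 # some raw signal in the stream
    gvf1.update(x, raw_cumulant, x_next)
    # The higher-level GVF's cumulant is the lower-level GVF's prediction,
    # so it only becomes meaningful once gvf1 has (roughly) converged.
    gvf2.update(x, gvf1.predict(x_next), x_next)
    x = x_next
```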


Additionally, is there a place for an agent that is motivated purely to learn this knowledge? Or should it be following behavior which accomplishes some other goal? Perhaps, while doing planning, the goal of learning such knowledge would be more appropriate, since this knowledge allows for accomplishing goals across multiple tasks.


Predictive vs. Historical features

Several of my experiments in reinforcement learning have involved generating predictive state representations (whereby part, or some, of the features are predictive in nature). These learned predictive features look at the current observation, make predictions about some future cumulant, and incorporate this prediction into the current state representation. In the Cycle World setting, I’ve demonstrated how this can be useful. In a way, predictive state representations look at an aspect of the future data stream and add this information to the current state representation.

In contrast, another way to construct a state representation would be from a historical perspective. The agent would encapsulate previous observations and incorporate them into the current state representation.
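Here is a small sketch of the two constructions side by side. Both functions are my own illustrative assumptions: the historical one stacks recent observations, while the predictive one appends the outputs of GVFs assumed to be learned elsewhere (any object with a predict() method would do).

```python
import numpy as np

def historical_state(obs_history, k=4):
    """Stack the last k observations (zero-padded early in an episode)."""
    pad = [np.zeros_like(obs_history[-1])] * max(0, k - len(obs_history))
    return np.concatenate(pad + list(obs_history[-k:]))

def predictive_state(obs, gvfs):
    """Append each learned GVF's prediction to the current observation."""
    predictions = np.array([gvf.predict(obs) for gvf in gvfs])
    return np.concatenate([obs, predictions])
```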

Given these two perspectives, it got me thinking: is there REALLY a place for predictive state representations? What is that place? Can they just be replaced by historical ones? What follows invokes more questions than answers … but is my way of fleshing out some thoughts regarding this topic.

First off, with respect to predictive features, I’m going to define “useful” as being able to provide information that allows the agent to identify the state. A useful feature would allow the agent to generalize across similar states and, likewise, to differentiate unlike states.

For both predictive and historical features, I think they’re really only useful when the observation stream is sparse in information. In other words, it lacks the information at each timestep which would allow the agent to uniquely identify the current state. The tabular grid world, where the current state is uniquely identified, is the opposite of such an information-lacking observation stream. In contrast to the tabular world, in information-sparse / partially observable settings, historical or predictive features can fill the gaps where the current observation is lacking information. If informative data exists in the past data stream, then historical features become useful. Likewise, if informative data exists in the future data stream, then predictive features become more useful.

The B MDP, where there is unique information at the very beginning of each episode which directly informs the agent which action to take in the future, is somewhat of a special case where all of the useful information in the stream exists only in the past (the first state). Therefore, predictive features are of no use there. But in continuing (real-life) tasks, I don’t think this would ever be the case. Informative data “should”? always be observable in the future, if the agent takes the correct steps. So if that’s the case (as long as the agent can navigate back to these states), then predictive features can do whatever the historical ones can. It’s only when the agent can’t get back that historical features are required. But these cases seem somewhat contrived.
In either case though … maybe this is stating the obvious … but historical and predictive features seem to share something in common: they are useful when the current observation is starving for more informative data. Historical features help when there is informative data in the rear-view mirror. Predictive features help when there’s informative data ahead, such as in the following MDP.

[Figure: the Cycle World MDP]
So which is more useful (predictive or historical) really depends on what the data stream looks like … and where you are within it.


Reinforcement learning startups?

I spent some time mingling with some CTOs of iNovia portfolio companies today. It was fun getting a bit back into the startup world, especially with the obvious AI slant to the meeting today. One subject that I discussed a lot with some of them was the apparent lack of startups leveraging Reinforcement Learning.

I’m assuming that this perception is true, but admit that I have no data on this other than personal anecdote. The number of startups leveraging deep learning significantly outnumbers those using RL – based on the press clippings I read and the startup founders I know first hand. So I believe there is something to this, which begs the natural question: why?

I’ve blogged about this before, but the discussions today went a bit beyond that. No one, obviously including myself, claimed to have the answer. That said, I believe there are several reasons for the apparent lack of RL startups, most of them pivoting around the availability of data (or lack thereof).

Data in RL is temporally based. Unless you’re simulating games on the Atari, it is difficult to come by, especially because of its often “real world,” action-based nature. You can’t just use Amazon Mechanical Turk to label 500,000 images of cats to train your network. You have to generate a real stream of experience. That’s more difficult to do.

More specific challenges in the acquisition of data might be:

  • The cost of experimentation – Exploration is key in creating data used for reinforcement learning. When simulating games of Go, one can experiment and make a move that is thought to be poor. The worst that can happen is that your agent’s self-esteem takes a hit because it loses another game! The stakes are low. But a real-time stock-trading agent suffers a significant cost when experimenting. This exploration cost isn’t suffered in supervised, classification-type approaches.
  • Delayed value – Any value derived from a reinforcement learning solution doesn’t occur until after the agent has been trained. It’s hard to convince an organization to adopt your product if they have to wait a month for it to learn before providing value. Because of the lack of simulators, these agents must learn on the fly, so when they’re deployed they don’t provide immediate value.
  • Temporal nature of data – I’ve blogged about this before, but the nature of reinforcement learning data is temporal, often rooted in “real time.” Agriculture data, for example: it takes a full calendar year to acquire results of how a crop may perform, rather than the 0.001 seconds it takes to simulate a game of Go. The same is true, to a degree, for acquiring data from a manufacturing plant. We’re at the early stages of RL adoption, so this data just isn’t there yet, and beginning to acquire it is difficult because of the reason above – why acquire the data in the first place if there’s no immediate payoff?
  • Infancy of the technology – I don’t buy this one, but it was mentioned that RL is a more novel approach than supervised learning. RL, supervised, unsupervised: they’ve all been “around” a long time. The computational power and availability of data are what have given rise to supervised learning. I’d conclude by saying that for RL we now have the computational power, but we still lack the data.

“Hard” questions

Not to be confused with “The Hard Thing About Hard Things”! … which, incidentally, is a fantastic read. The TL;DR of which is that nothing in business (and in life?) follows any sort of recipe. You’ll be faced with countless difficult situations in leadership, for which there are no silver bullets. So Ben Horowitz shares a whole bunch of stories about hard situations, and how they were handled (not always well).

OK … so enough about the book plug. Back to “hard questions,” this time with respect to the “questions” that a General Value Function is looking to answer.

I’ve thought about this a bit recently. In a family of GVFs (questions) that an agent could ask about its data stream, some questions seem intuitively easy to answer, while other types of questions seem hopeless. I hadn’t given it much thought until I heard another grad student make a similar reference during a really interesting talk about general value functions today. Her presentation compared different algorithms used by a GVF in a robot setting. The comment in particular was that the algorithms all performed similarly if the question was “easy,” but when the question was “hard,” the different algorithms performed differently.

This obviously reminded me again about what a “hard question” really is. What do we mean when we say a prediction, or GVF, is “hard”?

In a robot setting, a GVF answers a question about the robot’s sensorimotor stream. Given a policy, “How much light will I see in the near future?” or “How many steps will I take before I hit a wall?” are examples of GVF questions. Some of these seem difficult to answer.
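For concreteness, the standard GVF formulation (as I understand it; take my exact notation with a grain of salt) poses such a question as the expected discounted sum of a cumulant C under a policy π and a continuation function γ:

v_{\pi,\gamma,C}(s) \;=\; \mathbb{E}_{\pi}\!\left[\,\sum_{k=0}^{\infty}\Big(\prod_{j=1}^{k}\gamma(S_{t+j})\Big)\,C_{t+k+1}\;\middle|\;S_t = s\right]

Choosing the cumulant (a light sensor value, a “bumped into a wall” bit), the policy, and the continuation function is what turns this template into one question or another.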

I think it’s fair to say that when we say that a GVF is hard to learn, it means that it is difficult to come up with an accurate prediction.

So the next question then becomes, what makes it difficult to form an accurate prediction? I can see two reasons that a prediction may be difficult to approximate.

  • There isn’t enough relevant experience to learn from. A prediction of how long it would take to get from Edmonton to Calgary would clearly be “hard” to learn if you lived in Regina and rarely travelled to Alberta.
  • Your feature representation is insufficient to answer this type of question. I’m less clear about what I mean by this, but it seems as though the features required to answer one question would be quite different from the features required to answer another type of question. “What color will I see in the near future?” quite clearly requires a different feature representation than “What sound will I hear in the near future?” If the robot agent only represents its state with visual sensors, the latter type of question will clearly be incredibly hard to answer. This question, to be fair, might be illogical: how can you ask a question about what you’d hear if you were never given audio senses? So perhaps a better example might be a question like “What is the chance I will see an ambulance pass me on the highway?” If my representation includes audio signals, this question may be easy to answer, because as soon as I “hear” a siren, I can quite accurately predict that I’ll be able to see the ambulance. Without this audio signal, however, suddenly this question becomes much more difficult, if not impossible (if you remove my rear-view mirrors).

Clearly there’s more to a “hard” question than this. But it seems these two attributes are a good starting place.

Is there a universal “knowledge”?

The “knowledge” is in the data.

In other words, predictions are made directly from this data stream, and these predictions are all that you need to represent “knowledge.”

That said, each agent (person) is exposed to a different subset of this data.

Therefore, everyone’s “knowledge” is personal. No?

What does this say about any sort of ground truth about knowledge? One’s “knowledge” may be in direct conflict with another’s. And both may be correct?

So, knowledge can conflict. Well, actually, since knowledge is completely subjective, there is really no universal ground truth to it. It’s personal.

What then about “facts?” Aren’t facts, thought to be bits of knowledge that are “true?” Wait … Oh no … ALTERNATIVE FACTS!!!