Collaborative AI Learning Environments: Show Me What I’m Saying

Open-ended responses require synthesis and are uncomfortable territory for students early in the learning process.

Humans are quite good at synthesis. Sometimes, though, we have trouble retrieving the right information at the right time, which can prevent us from responding as thoroughly as we otherwise could.

What are some ways we can fight this retrieval barrier? Perhaps with a collaborative agent. In a 2016 study, Eugster et al. created a recommendation engine based on neural activity. This kind of observational recommendation has existed as long as search keywords have been around (and even before).

But what about in creative learning environments, where the goal is not merely interest and identifying well-worn paths from the current location? Furthermore, how might we avoid surfacing information that the learner is tasked with retrieving?

Setting up these guidelines is critical, as we know that learning depends on intervals of information retrieval and synthesis. So the strategy shouldn’t lean into revelation territory.

The Power of Reflection and Suggestion

The famous ELIZA project, created between 1964 and 1966 at MIT’s Artificial Intelligence Laboratory, showed us that reflective responses (where the machine collaborator responds by restating the human collaborator’s statements) are powerful. At the very least, ELIZA convinced its human collaborators that it was intelligent.

Of course, context must be taken into account. ELIZA was built specifically as a parody, and few people had been exposed to any sort of interactive computing at that point in history, so relative to that backdrop, it probably wasn’t terribly difficult to convince people that the computer was quite intelligent.

However, ELIZA (and many examples since) illustrates the power of a relatively static collaborator in opening up a person’s thinking.

As a programmer, I use “rubber duck debugging” on a regular basis – a process of forced, methodical articulation. When given a static collaborator (the duck), I am able to direct my attention to employing the method more completely.

Intelligent Collaborative Reflection

So how might we use collaborative reflection intelligently to help learners through an open response paradigm of engagement?

One way might be to create an intelligent collaborative reflection of what the learner is saying.

This might require a “processing” section – a scratch pad where a user might draw or type their thoughts as they process through them. With the encouragement that this scratchpad area is not being scrutinized, a student might use this area to explore information they are able to retrieve, visualize how the pieces connect, and formulate their ultimate synthesis that will end up in the “official” answer.

So how does our intelligent collaborative agent act?

Some ideas:

  • Use some basic NLP to identify object tokens, and visualize those in a parallel screen. For example, “three dots” might show three dots of equal size in the parallel screen. The learner might be able to manipulate placement using text or direct input. This makes the text more than a human readable string; instead, it connects the words to a representational model.
  • Provide a common vocabulary for generating visualizations or drawing relevance across a scratched narrative. For example, when showing a linear equation, provide the visual representation of that equation with a simple written syntax. Allow named variables so abstraction happens in the formalization step.
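As a minimal sketch of the first idea, here is what a toy “object token” extractor might look like. This assumes a hard-coded vocabulary of number words and shapes in place of a real NLP pipeline; the names `NUMBER_WORDS`, `SHAPES`, and `extract_objects` are all hypothetical:

```python
import re

# Toy vocabulary; a real system would use a proper NLP pipeline
# (part-of-speech tagging, dependency parsing) instead of lookups.
NUMBER_WORDS = {
    "one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
    "six": 6, "seven": 7, "eight": 8, "nine": 9, "ten": 10,
}

SHAPES = {"dot", "dots", "circle", "circles", "square", "squares"}

def extract_objects(text):
    """Scan free text for 'count shape' phrases like 'three dots'
    and return (count, shape) tuples a parallel screen could draw."""
    tokens = re.findall(r"[a-z]+", text.lower())
    objects = []
    for word, nxt in zip(tokens, tokens[1:]):
        if word in NUMBER_WORDS and nxt in SHAPES:
            objects.append((NUMBER_WORDS[word], nxt.rstrip("s")))
    return objects
```

A phrase like “I see three dots and two circles” would yield `(3, "dot")` and `(2, "circle")`, which the visualization layer could then render and let the learner manipulate.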

Of course, returning to our previous discussion, we want to avoid revealing information to the learner that they are trying to instill in their own minds; therefore, it would be necessary to provide the guardrails for what types of information may be reflected in a given circumstance. If the student is learning about 18th century European history, it’s probably not going to help their learning process if they can easily query “timeline of the Seven Years’ War” and get a visual representation of that.

Inferring what a user needs to see based on some set of guides could make a learner’s mental processing of information much more powerful, allowing them to draw synthesis with a higher degree of depth. Educators may do well to provide better tools to the learner for the process of synthesis. In the same way that pencil and paper may create an externalization of ideas, perhaps it is time for a maturation of learning tools to extend our ability to articulate raw information for analysis and synthesis.

Come Down Off the Ledge

It seems that we have more opportunities to delete things than we realize.

Why is “Delete Facebook” even a thing, anyway? We suddenly realize this little chat window, status update thing… that it’s bigger than we thought. It makes money, and it eats people’s time. It gives us back something… Perhaps it’s chewing up our time and spitting it back in our face. Maybe that’s why we want to delete it.

Perhaps this is the way all infiltration occurs – not like a SWAT team storming in, but more like a slow insider spy force moving in one person at a time. This little toy of a website or app that at one point was pretty much just Farmville and poking becomes something more, quite stealthily at that.

So many things we use, though! So many that we don’t even remember until we take direct and specific account. I have over fifty devices connecting to my router. Fifty. I remember clearly when my Internet was one-at-a-time – someone calling our home phone would “bump” me from my AIM chat sessions.

I deleted Slack from my phone and my computer for this reason – to subtract. A direct accounting of how my time is chewed up and spit back out at me, one emoji at a time.

“There’s a person on the other side of that thing, you know.” Yeah, of course there is. But my messages are chewing up their time just as quickly. The compression of that medium is more salient than ever.

When you fight over chat, you may be the only one seeing the fight, for example. Ever accidentally misread someone’s tone through the frame of your computer screen, or your phone screen? Ever make a mistake in an email because you felt safe behind your computer?

I use Freedom and Circle to self-moderate. I’m no special case, no elite soldier. I fall prey to this elaborate waste as well. And quickly, too. A snap of a command-T+twi… off to the races, I see the silliest thing under the Moments tab on Twitter and down the hole we go.

So I choose my barriers when I’m thinking clearly.

Coming down off the ledge means realizing that you don’t have to do this. You don’t have to sacrifice the only thing that binds us all: your limited time.

When you are thinking clearly and someone asks you, “would you like to have notifications that overwhelm you at any hour you are awake?” – few people would really want to opt in to this. No one says, “man, I really hope I don’t get much sleep tonight because I was staring at a screen too late.”

When I ask you about your life’s ambition, you wouldn’t say “to spend 12 hours on a screen every day keeping up with the latest gossip and news and hijinks to pacify my awareness of my limited existence.” That’s an uncomfortable, commonplace reality.

We think access is equivalent to convenience. We think access is actually a one-way street, too. That we have access to all the things we use – all the information we want. Every review of every restaurant so we find the BEST coffee at the perfect time of day, roasted seconds before we walk in the door so our majestic entrance is met with perfectly timed personalized service. I can hear the trumpets sounding now – a welcome to our kingly appearance of perfected modernism.

Oh, and don’t forget just how important that coffee is to your personal mission to save the world. And you’re going to do it, just as soon as you send this Tweet, check that email, and fire off a quick Slack message.

Come down. Come down off the ledge. You don’t have to do this.

But, as our friend Marshall McLuhan (or someone) once said, “We shape our tools, and thereafter our tools shape us.”

We have access to our tools, but they also have access to us. We have access to Facebook, to Slack. To iMessages and email. We have access to every TV show, virtually every book we can imagine, every piece of media. We can tour the world through our screens, leaving our bodies in the dust to rot in their inactivity.

We have all of this access, and we believe we are observers, travelers, the ones doing the impacting. And yet, the opposite is true.

We allow our email to have access to us. We allow our screens to impede our ability to see clearly. We let our endless pursuit of the perfect coffee eliminate serendipity. Our thirst for modernity and connectedness leaves us terribly alone, docile. Our time chewed up, our “deleting” becoming an act of valor.

Come down off the ledge. You don’t have to do this. You don’t have to give everything access to you. You can be whole, and you can leave all of that behind. You choose where your time goes – you choose who gets your attention. You choose, with your rational mind, how many steps to take in a day. You choose to create over consume.

But first, you have to come down off the ledge.

Stop believing that this is the inevitable way.

Stop pouring your time into endless buckets of nothingness, giving away your passions in exchange for pacification. Stop trading your sanity and your soul for safety and satiation.

The ledge feels safe, doesn’t it? Jumping into the water, off the bridge like everyone else. It feels safe, because it’s common – it’s the way everyone else is going. We’ve come so far to not learn this lesson – that the crowd is a terrible thing to follow.

Come down off the ledge.

Steal these iWatch App Ideas

I want you to steal my ideas.

I’ve said it before, and nothing has changed. Ideas are important, but they aren’t proprietary. I want these things to exist, so hopefully with this post I can inspire someone to make them, even if that person is me.

The iWatch (or whatever it is going to be called) is announced this week, and that’s exciting for entrepreneurs and developers for a lot of reasons. The ideas I present below are sort of like predictions; because I don’t know the features of the watch, I’m making a lot of assumptions.

As always, the ideas presented below are in no way my property. Think of them as money left on the sidewalk. In an envelope labeled “take me!”. The only thing I ask is that you contact me by emailing me or reaching out to me on Twitter (@JCutrell).

1. Viral Crowdsourced Fashion

Very simple: allow people to make watch faces and share them. Make them available like a YouTube video. This idea is as old as desktop backgrounds, but it has a brand new twist: fashion.

Why I think it will work: Fashion is obviously a part of our lives. However, technology and fashion have not fused completely yet. The only company truly bridging the gap between personal technology and fashion is – you guessed it – Apple. Apple’s products provide social status and an expression of “personality.” The iWatch will own a large part of this market, and will mark the true fusion of technology and fashion.

Bonus points: Make the faces sellable and take a small commission. (Not sure how this would work with in-app purchases, but that’s where you can get creative, right?)

2. Access-oriented Applications

Everyone is talking about the mobile payment industry as a game changer for the iWatch, but let’s back up for a second and talk about the fundamental difference in payment from your watch versus from your wallet.

Wallets are no more “mobile” than watches. The cognitive change the iWatch introduces is a new sense of access. And along with that comes a new category in application development that hasn’t really taken off with mobile phones.

This is my personal theory, but I think the failure of mobile device access applications (case in point: Passbook) is at least partially due to the fact that the phone is not directly connected to you at all times. A wallet works partially because we were born into it (our parents carried one), and partially because we already understood carrying cash.

With a watch, we have a newfound freedom to directly identify the device with the wearer. I wouldn’t be surprised if payment apps eventually take vital signs into account for fraud protection.

For these and many other reasons, it’s time to take these apps a step further. It’s time to start using the bio-connectedness that these devices will provide to grant access to restricted places requiring identity. This ranges from more secure login services and better, more accurate TSA screenings to check-ins and social physical-presence applications.

3. Quantification, Meet The Informed Self

The iWatch will most likely mainstream the quantified self movement. We’ve seen this getting a large share of the media attention.

But for those getting into the game now, it’s time to start thinking past our current place in the quantified self movement, and toward the next step. I’m making a prediction: the next step will be making sense of the data.

Anyone who has worked with infographics for long will tell you that the single most important part of their job is finding and choosing the best metrics and clearest visualizations of those metrics. There is some science to this, but there’s also some gut.

What does this mean? It means that our quantifications don’t really provide us with anything other than raw data, regardless of how pretty it is. We need to take this raw data and turn it into something meaningful. Look at strong correlations, and suggest potential causation. Compare seemingly unrelated things, like lines of code versus number of steps walked per day. Give users a framework for making decisions, and then comes the fun part: using big data to come to conclusions about trending correlations in day-to-day behaviors of the population.

But the first step is to take the quantifiable numbers and show some kind of derived qualitative information.
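To make that concrete, here is a minimal sketch of the kind of derivation step I mean: computing a plain Pearson correlation between two daily metrics and surfacing it only when it is strong. The data, the variable names, and the 0.7 threshold are all invented for illustration:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical week of data: lines of code written vs. steps walked.
loc = [120, 300, 250, 80, 400, 150, 60]
steps = [9000, 4000, 5000, 11000, 3000, 8000, 12000]

r = pearson(loc, steps)
if abs(r) > 0.7:  # arbitrary cutoff for "worth showing"
    print(f"Strong correlation (r = {r:.2f}) worth surfacing to the user")
```

The point is not the statistics – it is that the app translates raw numbers into a qualitative statement (“the more you code, the less you walk”) that a user can act on.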

4. The ultimate timer

Seriously, we’re still making time tracking applications? Yes. Because all of them suck. Well, maybe they don’t suck, but I haven’t found one that is natural. I end up watching a clock. Or my watch.

A unique opportunity for productivity apps related to time: use the watch paradigm. This one seems obvious, but whoever wins this one will win big.

I’m thinking something like setting up a few behaviors that the watch can sense and infer with, but then make it very simple – tap, swipe to client, tap again to start timer, tap again to stop timer. On my watch, not my phone. Certainly not on my computer. I use my computer to create invoices. I use my timekeeper to time things.

The iWatch presents a lot more opportunities than I’ve even discussed here. I’d love to hear what you think about these ideas, and I’d love for you to share yours with me. Or, by all means, feel free to take mine and build it.

Hit me up on Twitter (@jcutrell) if you want to continue this conversation.