Mind boggling

Talk about "mind=blown." Imagine a world where you can manipulate objects solely with your mind. It's here, it's now, and it's incredible. Consider every industry. Consider every skill. Consider every human action. What are the uses for this technology? Then take it to the next logical level - that this technology will be implanted in our heads. How can it... will it... change your life?

Transcript:

Up until now, our communication with machines has always been limited to conscious and direct forms. Whether it's something simple like turning on the lights with a switch, or something as complex as programming robotics, we have always had to give a machine a command, or even a series of commands, in order for it to do something for us. Communication between people, on the other hand, is far more complex and a lot more interesting, because we take into account so much more than what is explicitly expressed. We observe facial expressions and body language, and we can intuit feelings and emotions from our dialogue with one another. This actually forms a large part of our decision-making process. Our vision is to introduce this whole new realm of human interaction into human-computer interaction, so that computers can understand not only what you direct them to do, but can also respond to your facial expressions and emotional experiences. And what better way to do this than by interpreting the signals naturally produced by our brain, our center for control and experience.

Well, it sounds like a pretty good idea, but this task, as Bruno mentioned, isn't an easy one for two main reasons: First, the detection algorithms. Our brain is made up of billions of active neurons, with around 170,000 km of combined axon length. When these neurons interact, the chemical reaction emits an electrical impulse, which can be measured. The majority of our functional brain is distributed over the outer surface layer of the brain. And to increase the area that's available for mental capacity, the brain surface is highly folded. Now this cortical folding presents a significant challenge for interpreting surface electrical impulses. Each individual's cortex is folded differently, very much like a fingerprint. So even though a signal may come from the same functional part of the brain, by the time the structure is folded, its physical location differs greatly between individuals, even identical twins. There is no longer any consistency in the surface signals.

Our breakthrough was to create an algorithm that unfolds the cortex, so that we can map the signals closer to their source, making the system capable of working across a mass population. The second challenge is the actual device for observing brainwaves. EEG measurements typically involve a hairnet with an array of sensors, like the one that you can see here in the photo. A technician puts the electrodes onto the scalp using a conductive gel or paste, usually after preparing the scalp with light abrasion. Now this is quite time-consuming and isn't the most comfortable process. And on top of that, these systems can cost in the tens of thousands of dollars.
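
The talk does not disclose the actual unfolding algorithm, but the general idea can be sketched: map each individual's surface signals through a per-user transform into a shared source space, so one set of detections can serve a mass population. Everything below is an illustrative assumption, not Emotiv's implementation:

```python
# Illustrative sketch only -- the talk does not disclose the actual
# unfolding algorithm. A per-user linear transform maps sensor-space
# EEG into a shared "unfolded" source space, so that signals from the
# same functional area line up across differently folded cortices.
import numpy as np

def unfold_to_source_space(sensor_signals, user_transform):
    """Map sensor-space EEG (channels x samples) into a shared source
    space using a transform estimated from the user's cortical geometry."""
    return user_transform @ sensor_signals

rng = np.random.default_rng(0)
signals = rng.standard_normal((14, 128))    # 14 channels, 1 s at 128 Hz
transform = rng.standard_normal((32, 14))   # hypothetical per-user mapping
source = unfold_to_source_space(signals, transform)
print(source.shape)                         # (32, 128)
```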

So with that, I'd like to invite onstage Evan Grant, who is one of last year's speakers, who's kindly agreed to help me to demonstrate what we've been able to develop.

So the device that you see is a 14-channel, high-fidelity EEG acquisition system. It doesn't require any scalp preparation, no conductive gel or paste. It only takes a few minutes to put on and for the signals to settle. It's also wireless, so it gives you the freedom to move around. And compared to the tens of thousands of dollars for a traditional EEG system, this headset only costs a few hundred dollars. Now on to the detection algorithms. The facial expression and emotional experience detections, as I mentioned before, are designed to work out of the box, with some sensitivity adjustments available for personalization. But with the limited time we have available, I'd like to show you the cognitive suite, which is the ability for you to basically move virtual objects with your mind.
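
For a sense of what acquiring data from a headset like this might look like in software, here is a minimal stand-in; the Headset class is hypothetical rather than the vendor's actual SDK, and the channel names and sample rate are plausible assumptions:

```python
# Hypothetical acquisition loop for a 14-channel wireless EEG headset.
# The Headset class is a stand-in, not a real vendor SDK; channel names
# are standard 10-20 positions such a headset might cover.
import numpy as np

CHANNELS = ["AF3", "F7", "F3", "FC5", "T7", "P7", "O1",
            "O2", "P8", "T8", "FC6", "F4", "F8", "AF4"]
SAMPLE_RATE = 128  # Hz; an assumed rate for a consumer device

class Headset:
    """Stand-in device that returns simulated samples."""
    def read_window(self, seconds):
        n = int(seconds * SAMPLE_RATE)
        return np.random.standard_normal((len(CHANNELS), n))

headset = Headset()
window = headset.read_window(1.0)   # one second of data
print(window.shape)                 # (14, 128)
```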

Now, Evan is new to this system, so what we have to do first is create a new profile for him. He's obviously not Joanne -- so we'll "add user." Evan. Okay. So the first thing we need to do with the cognitive suite is to train a neutral signal. With neutral, there's nothing in particular that Evan needs to do. He just hangs out. He's relaxed. And the idea is to establish a baseline, or normal state, for his brain, because every brain is different. It takes eight seconds to do this. And now that that's done, we can choose a movement-based action. So Evan, choose something that you can visualize clearly in your mind.
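
The neutral-training step described here can be sketched simply: record eight seconds of relaxed data and summarize it as a per-channel baseline that later detections are compared against. The power feature and the simulated recording below are assumptions for illustration:

```python
# Sketch of the neutral-training step: eight seconds of relaxed data,
# summarized as a per-channel baseline. The power feature and the
# simulated recording are assumptions for illustration.
import numpy as np

SAMPLE_RATE, N_CHANNELS = 128, 14

def band_power(window):
    """Crude per-channel feature: mean squared amplitude."""
    return (window ** 2).mean(axis=1)

def record(seconds, rng):
    """Stand-in for reading from the headset."""
    return rng.standard_normal((N_CHANNELS, int(seconds * SAMPLE_RATE)))

rng = np.random.default_rng(1)
neutral_window = record(8.0, rng)      # Evan "just hangs out" for 8 s
baseline = band_power(neutral_window)  # his brain's normal state
print(baseline.round(2))
```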

Evan Grant: Let's do "pull."

Tan Le: Okay. So let's choose "pull." So the idea here now is that Evan needs to imagine the object coming forward into the screen. And there's a progress bar that will scroll across the screen while he's doing that. The first time, nothing will happen, because the system has no idea how he thinks about "pull." But maintain that thought for the entire duration of the eight seconds. So: one, two, three, go. Okay. So once we accept this, the cube is live. So let's see if Evan can actually try and imagine pulling. Ah, good job! (Applause) That's pretty amazing.
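
A minimal sketch of what training and detecting a single action like "pull" could look like, assuming a simple nearest-template comparison against the neutral baseline; the real system's classifier is not disclosed in the talk:

```python
# Sketch of training "pull" from one eight-second example and then
# detecting it live with a nearest-template comparison. The offset
# crudely simulates a distinguishable mental state.
import numpy as np

SAMPLE_RATE, N_CHANNELS = 128, 14

def band_power(window):
    return (window ** 2).mean(axis=1)

def record(seconds, rng, offset=0.0):
    n = int(seconds * SAMPLE_RATE)
    return offset + rng.standard_normal((N_CHANNELS, n))

rng = np.random.default_rng(2)
templates = {
    "neutral": band_power(record(8.0, rng)),            # baseline
    "pull": band_power(record(8.0, rng, offset=0.5)),   # training trial
}

def detect(window):
    """Return the template nearest to the window's features."""
    feats = band_power(window)
    return min(templates, key=lambda k: np.linalg.norm(feats - templates[k]))

print(detect(record(1.0, rng, offset=0.5)))   # expected: "pull"
```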

So we have a little bit of time available, so I'm going to ask Evan to do a really difficult task. And this one is difficult because it's all about being able to visualize something that doesn't exist in our physical world. This is "disappear." With movement-based actions, we do that all the time, so we can visualize them. But with "disappear," there are really no analogies. So Evan, what you want to do here is to imagine the cube slowly fading out, okay? Same sort of drill. So: one, two, three, go. Okay. Let's try that. Oh, my goodness. He's just too good. Let's try that again.

EG: Losing concentration.

TL: But we can see that it actually works, even though you can only hold it for a little bit of time. As I said, it's a very difficult process to imagine this. And the great thing about it is that we've only given the software one instance of how he thinks about "disappear." As there is a machine learning algorithm in this --

Thank you. Good job. Good job.

Thank you, Evan, you're a wonderful, wonderful example of the technology.

So as you saw before, there is a leveling system built into this software, so that as Evan, or any user, becomes more familiar with the system, they can continue to add more and more detections, and the system begins to differentiate between distinct thoughts. And once the detections are trained, these thoughts can be assigned or mapped to any computing platform, application, or device.
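
That final mapping step is conceptually simple: each trained detection is just a label, and the label can be bound to any action on any platform. A minimal sketch, with hypothetical placeholder handlers:

```python
# Sketch of binding trained detections to actions: each distinct
# thought is just a label routed through a lookup table. The handlers
# here are hypothetical placeholders.
ACTIONS = {
    "pull": lambda: print("dragging object toward viewer"),
    "push": lambda: print("pushing object away"),
    "disappear": lambda: print("fading object out"),
}

def dispatch(detection):
    """Route a detected mental command to whatever it is mapped to."""
    handler = ACTIONS.get(detection)
    if handler is not None:
        handler()

dispatch("pull")
```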

So I'd like to show you a few examples, because there are many possible applications for this new interface. In games and virtual worlds, for example, your facial expressions can naturally and intuitively be used to control an avatar or virtual character. Obviously, you can experience the fantasy of magic and control the world with your mind. And colors, lighting, sound, and effects can dynamically respond to your emotional state to heighten the experience you're having, in real time. And moving on to some applications developed by researchers around the world: with robots and simple machines, for example -- in this case, flying a toy helicopter simply by thinking "lift" with your mind.
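
The helicopter example boils down to mapping a trained "lift" detection onto a throttle value; one plausible sketch, assuming the detection reports a strength between 0 and 1:

```python
# Sketch of the toy-helicopter idea: the strength of a trained "lift"
# detection (assumed to be a value between 0 and 1) drives the throttle.
def throttle_from_lift(lift_strength, threshold=0.3):
    """Idle below the threshold; scale throttle up above it."""
    if lift_strength < threshold:
        return 0.0
    return (lift_strength - threshold) / (1.0 - threshold)

for strength in (0.1, 0.5, 0.9):
    print(strength, "->", round(throttle_from_lift(strength), 2))
```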

The technology can also be applied to real-world applications -- in this example, a smart home: from the user interface of the control system to opening or closing the curtains, and of course to the lighting -- turning it on or off. And finally, to real life-changing applications, such as being able to control an electric wheelchair. In this example, facial expressions are mapped to the movement commands.

Man: Now blink right to go right. Now blink left to turn back left. Now smile to go straight.
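
The mapping the narration describes is essentially a lookup from facial-expression detections to motion commands; a minimal sketch, with assumed detection labels:

```python
# Sketch of the wheelchair mapping the narration describes: facial
# expression detections bound to movement commands. Detection labels
# are assumptions.
EXPRESSION_TO_MOTION = {
    "blink_right": "turn right",
    "blink_left": "turn left",
    "smile": "go straight",
}

def drive(expression):
    command = EXPRESSION_TO_MOTION.get(expression, "stop")
    print(expression, "->", command)

for e in ("blink_right", "blink_left", "smile"):
    drive(e)
```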

TL: We really -- Thank you.

We are really only scratching the surface of what is possible today. And with the community's input, and also with the involvement of developers and researchers from around the world, we hope you can help us to shape where the technology goes from here. Thank you so much.

TEDTalks - Tan Le: A headset that reads your brainwaves

DISCUSS!

Original posting by Braincrave Second Life staff on Dec 20, 2010 at http://www.braincrave.com/viewblog.php?id=410


About braincrave


We all admire beauty, but the mind ultimately must be stimulated for maximum arousal. Longevity in relationships cannot occur without a meeting of the minds. And that is what Braincrave is: a dating venue where minds meet. Learn about the thoughts of your potential match on deeper topics... topics that spawn your own insights around what you think, the choices you make, and the actions you take.

We are a community of men and women who seek beauty and stimulation through our minds. We find ideas, education, and self-improvement sexy. We think intelligence is hot. But Braincrave is more than brains and I.Q. alone. We are curious. We have common sense. We value and offer wisdom. We experiment. We have great imaginations. We devour literature. We are intellectually honest. We support and encourage each other to be better.

You might be lonely but you aren't alone.

Sep 2017 update: Although Braincrave resulted in two confirmed marriages, the venture didn't meet financial targets. Rather than updating our outdated code base, we've removed all previous dating profiles and retained the articles that continue to generate interest. Our move to valME.io's platform still supports dating profiles (which you are welcome to post) but doesn't allow typical date-matching functionality (e.g., location proximity, attribute similarity).

The Braincrave.com discussion group on Second Life held twice-daily intellectual discussions, typically at 12:00 PM SLT (PST) and 7:00 PM SLT. The discussions took place in Second Life group chat but are no longer formally scheduled or managed. The daily articles were used to encourage the discussions.
