OK, this might require a new thread about brain-machine interfaces or applications of AI or whatever.
Anyway, remember Firefox and 'think in Russian'? Here is mind-reading brain-machine interfacing that uses AI to reconstruct perceptions from brainwaves.
The audio sounds like it’s being played underwater. Still, it’s a first step toward creating more expressive devices to assist people who can’t speak.
www.nytimes.com
A thing that Peter Watts has pointed out (remember him from another thread?) is that consciousness is the Dilbert 'pointy-haired boss' of the brain - it takes the credit for initiatives that have already been set in motion. A true human-machine fusion would require more than someone reading information off an instrument panel and pushing buttons in response - think of Ripley using the power loader to fight the Xenomorph queen at the climax of Aliens. In practice, an intuitive 'preconscious' reading of intentions by the computer controlling the mechanical augmentation would be far more efficient. There's plenty of data (Libet's readiness-potential experiments, for instance) showing that the brain instructs an arm to reach for a glass of water before we consciously 'decide' to do so.
Now we have machines that can, to a limited degree, read minds in an experimental setting. Practical applications would be exoskeletons that synchronise with cues from their wearers' nervous systems, and weapons systems that respond to mental cues - a toy sketch of what that decoding step looks like follows below.
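To make the 'synchronise with nervous-system cues' idea concrete, here is a minimal Python sketch of the decoding loop. It's entirely hypothetical: the synthetic signal, the log-band-power features, and the 'actuator' print-out all stand in for real electrode recordings and hardware. But the shape of the pipeline - window the signal, extract features, classify, act only on confident predictions - is the standard one for this kind of BMI work.

# Toy sketch: classify short windows of neural signal as "reach" vs
# "rest" and fire an actuator command on a confident "reach" call.
# All data here is synthetic; a real pipeline would use recorded
# EEG/ECoG, serious preprocessing, and much better features.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

N_CHANNELS, N_SAMPLES = 8, 250   # e.g. 8 electrodes, 1 s at 250 Hz

def synth_window(is_reach: bool) -> np.ndarray:
    # Fake a window of multichannel signal; 'reach' windows get a
    # slight power boost on the first channels, standing in for a
    # real motor-cortex signature.
    x = rng.normal(size=(N_CHANNELS, N_SAMPLES))
    if is_reach:
        x[:4] *= 1.5
    return x

def features(x: np.ndarray) -> np.ndarray:
    # Log band power per channel - a common, very crude EEG feature.
    return np.log(np.var(x, axis=1))

# Build a labelled training set of synthetic windows.
labels = rng.integers(0, 2, size=400)
X = np.array([features(synth_window(bool(y))) for y in labels])

clf = LinearDiscriminantAnalysis()
clf.fit(X, labels)

def on_new_window(x: np.ndarray) -> None:
    # Decode one incoming window; only act on a confident prediction.
    p_reach = clf.predict_proba(features(x)[None, :])[0, 1]
    if p_reach > 0.9:
        print(f"actuator: extend arm (p={p_reach:.2f})")  # hypothetical command
    else:
        print(f"actuator: hold (p={p_reach:.2f})")

on_new_window(synth_window(True))
on_new_window(synth_window(False))

The interesting design point for Watts's argument is the last function: the classifier acts on the signal directly, before anything like a conscious 'decision' needs to be reported to the user at all.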