Loss of voluntary muscular control while cognitive functions are preserved is a common symptom of neuromuscular diseases and leads to a variety of functional deficits, including the inability to operate conventional interfaces such as the mouse, keyboard, or touchscreen. As a result, affected individuals are marginalized and unable to keep up with the rest of society in a digitized world. The MAMEM project develops novel interfaces that can be controlled through eye movements and mental commands, using eye-tracking and EEG devices. More specifically, we have implemented the GazeTheWeb browser, a custom-made browser that builds upon the Chromium framework and replaces basic browser functionalities with interaction elements that can be operated through eye gaze, EEG, and GSR signals. Moreover, it enables dynamic modification of the way web page content is displayed, augmenting pages so that they can be operated through the eyes. GazeTheWeb is capable of augmenting many different kinds of websites, supporting browsing, photo editing, social media sharing, messaging, and multimedia playback.

Furthermore, GazeTheWeb has been coupled with the 'Lab Streaming Layer', which not only collects the signals coming from the eye tracker, the EEG recorder, and the GSR device simultaneously, but also synchronizes them with the events taking place on the user's computer screen. When combining the different modalities under a paradigm of multimodal interaction, the EEG and GSR signals were primarily used to compensate for the shortcomings of the eye tracker, either by using brain waves to switch between reading and navigation mode, or by performing automatic error correction in a gaze-based keyboard. To introduce a novel assistive device into the everyday life of our end users, we have used persuasion strategies for engagement.
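The synchronization of sensor streams with on-screen events described above can be illustrated with a minimal sketch. This is plain Python, not the actual Lab Streaming Layer API: it simply assumes that every signal sample and every screen event carries a timestamp on a shared clock, and aligns each event to the index of the nearest sample.

```python
from bisect import bisect_left

def align_events(sample_times, events):
    """Map each (timestamp, label) screen event to the index of the
    nearest signal sample, so signals and events share one timeline.
    `sample_times` must be sorted in ascending order."""
    aligned = []
    for t, label in events:
        i = bisect_left(sample_times, t)
        # pick whichever neighbouring sample is closer in time
        if i > 0 and (i == len(sample_times)
                      or t - sample_times[i - 1] <= sample_times[i] - t):
            i -= 1
        aligned.append((i, label))
    return aligned

# hypothetical example: a signal sampled at 4 Hz, two browser events
sample_times = [0.00, 0.25, 0.50, 0.75, 1.00]
events = [(0.26, "page_load"), (0.90, "click")]
print(align_events(sample_times, events))  # [(1, 'page_load'), (4, 'click')]
```

In the real system the Lab Streaming Layer performs this alignment across devices with differing sampling rates and clock offsets; the sketch only conveys the core idea of a common timeline.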
More specifically, we have gamified the process of training the end users to operate our system, breaking it down into levels of varying difficulty so as to accommodate both users without prior experience and users who have used assistive technologies in the past. Users could only proceed to the next level after they had acquired a certain skill, which was ensured by prompting them to redo a level until they reached a sufficient score based on completion time and failures. Further gamification elements were integrated into the training experience of our users, including trophies, a way to recognize player accomplishments within a social group; scoreboards, comparing performance against previous runs; leaderboards, comparing performance between players; and assignments, short-term objectives shaping the gameplay narrative.

In phase I of the MAMEM trials our goal was to have our end users test the developed algorithms and interfaces. Three cohorts (Parkinson's disease, neuromuscular disorders, and spinal cord injury) were put to the task, consisting of 6 patients and 6 able-bodied subjects matched in terms of gender and age, executing an experimental protocol of approximately 5 to 8 hours. The first part of the experimental process was for the users to learn how to use the GazeTheWeb browser with only their eyes. This part was gamified according to persuasive design principles, making the experience enjoyable for our subjects. In the second part of the process, the users were asked to perform two EEG-related exercises while wearing the EEG cap. Initially, each user was tasked with typing some phrases on the GazeTheWeb keyboard using the eyes. The EEG cap logged the brain waves emitted when a typing error occurred, so that the system could apply auto-correction measures based on the detection of the brain signals stimulated by the error, the so-called 'Error-Related Potentials'.
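How Error-Related Potentials can drive auto-correction in a gaze-based keyboard can be sketched as follows. The sketch is a simplification under stated assumptions: `errp_flags` stands in for the output of the real EEG classifier (which in the actual system analyses the brain waves recorded after each keystroke), and the correction policy is simply to delete the keystroke the brain flagged as wrong.

```python
def type_with_errp_correction(keystrokes, errp_flags):
    """Simulate a gaze keyboard where an ErrP detected right after a
    keystroke triggers automatic deletion of that keystroke.
    `errp_flags[i]` is True when the (hypothetical) EEG classifier
    reports an error potential following keystrokes[i]."""
    text = []
    for key, errp in zip(keystrokes, errp_flags):
        text.append(key)
        if errp:  # the brain signalled "that was wrong": undo it
            text.pop()
    return "".join(text)

# the user meant to type "cat" but gaze selected "q" by mistake;
# the ErrP classifier flags the erroneous keystroke
print(type_with_errp_correction(list("cqat"), [False, True, False, False]))
# → "cat"
```

The value of this design is that the correction requires no extra dwell time or explicit backspace selection from the user: the error signal is produced involuntarily, so the system can react to it without any additional gaze effort.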
The second EEG-related exercise was intended to train the end users in issuing mental commands for switching between the reading and navigation modes of GazeTheWeb. More specifically, the users were asked to move the left or right hand in the form of a fist clench, using both real and imagined movements, while a bar provided feedback about the actual result of the mental command. In the third part of the process, the subjects were asked to perform a number of ordinary tasks, such as sending an email, editing a picture, tweeting a message, and playing a video on YouTube. While these tasks were undertaken, the conventional web pages were dynamically enhanced through GazeTheWeb to facilitate smooth interaction with the user's eyes. During the execution of these tasks the EEG and GSR sensors remained operational, capturing the generated signals for future analysis. The last part of the process consisted of repeating the EEG-related exercises with a lightweight version of the EEG and eye-tracking sensors, simulating the conditions of home use.

Throughout the whole process the subjects answered a number of questions asking them to assess the effectiveness of the persuasion strategies, the system's usability and user satisfaction, as well as the commercialization potential of our system. Overall, the whole experimental process went smoothly, as participants from various demographics were intrigued by the GazeTheWeb browser and how its use tackles interface obstacles they encounter in their daily lives. They were also positively surprised at how easily they could perform social media actions through ordinary web applications enhanced by GazeTheWeb, and they showed remarkable interest and eagerness to take part in the EEG-related exercises.
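The feedback loop of the mode-switching exercise can be sketched in simplified form. This is not the project's implementation: the per-window confidence values stand in for the output of a hypothetical motor-imagery classifier, the feedback bar is reduced to a text rendering, and the requirement that the command be held for several consecutive windows (to avoid spurious switches) is an illustrative design assumption.

```python
def update_mode(confidences, threshold=0.7, hold=3):
    """Toggle between 'reading' and 'navigation' mode from a stream of
    classifier confidences in [0, 1].  A switch fires only after the
    confidence stays at or above `threshold` for `hold` consecutive
    windows; each window also drives the on-screen feedback bar."""
    mode = "reading"
    streak = 0
    for c in confidences:
        bar = "#" * int(c * 10)  # crude feedback bar, would be drawn on screen
        streak = streak + 1 if c >= threshold else 0
        if streak >= hold:
            mode = "navigation" if mode == "reading" else "reading"
            streak = 0  # require a fresh hold before switching back
    return mode

# three consecutive confident windows trigger one mode switch
print(update_mode([0.2, 0.8, 0.9, 0.75, 0.4]))  # → "navigation"
```

The continuous bar gives the user immediate feedback for training, while the hold requirement trades a little latency for robustness against single misclassified windows.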