Prosthetic Limbs and Machine Learning
I worked on this project collaboratively with my close friend and colleague, Grant Pausanos, with whom I share equal contribution.
My partner and I started working on this project actively at the start of March 2019 and finished and presented it at the end of April 2019. This is an old project completed under a short time frame, so please excuse some of the poor design choices, code, and presentation. Nonetheless, I am proud of this project and grateful for it as a learning opportunity.
1. Summary and Introduction
A prevalent issue in modern society is the development of medical technology that helps amputees attain more convenient day-to-day lifestyles. Especially at a time when war and fighting are rampant throughout many parts of the world, the need for an affordable bionic alternative is pressing. A 3D-printed bionic arm modeled from scratch will be tested on a variety of tasks a typical hand would handle, such as holding everyday items, pouring safe substances, and making other normal hand gestures. Moreover, an electromyogram (EMG) will act as a biological input device that detects electrical contractions in muscle, so that control by the user is as intuitive as possible. These contractions serve as input for controlling servo motors that move the arm in various ways. The mechanical component consists of a programmed Arduino with several servo motors and strings that allow the flexion of the proximal, intermediate, and distal phalanges. On the electrical side, an EMG sensor is connected to a circuit that controls the motion of the hand.
A closer look at the EMG portion of the design reveals why machine learning is needed in this project. Like much biological signal data, the wave data is messy and noisy, and individual finger contractions are hard to distinguish within it. Technically, only a binary mode of detection is needed, so that once the muscle contracts the hand closes; however, this proves useful only in select cases. A more accurate method is needed to classify the wave data into labels based on the available features. This is where machine learning comes into play: identifying individual finger contractions from noisy wave data.
2. Procedure of Mechanical Aspect
Initially, the hand was sketched with reference to one's right hand and forearm. These measurements must be precise in order for the artificial hand to achieve realistic dimensions. This major draft included the proximal, intermediate, and distal phalanges of the fingers and thumb. Looking at a top view of one's hand, the width of the phalanges does not need to be recorded, because each phalanx is built around screws of a specific length. Be extremely careful with the palm, as its sketch will eventually form an irregular polygon with specific angles accounted for. For the forearm, record only the length, as its width may vary due to the five servo motors that take up a good portion of its volume.
In Autodesk Fusion 360, each piece of the hand gets its own "component" in the software, making it easy to transfer or modify individual parts of the hand. Moreover, it is recommended to sketch each part in two dimensions first and then extrude the object, rather than modeling directly in three dimensions: one can develop a variety of unique shapes by controlling what the profile looks like before extruding, and this project requires the irregular polygon of the palm to be extruded.
3. Electrical Aspect and Machine Learning
In terms of Arduino code, all that was done was to import the appropriate servo libraries so the Arduino could read the sensor and assign servo values based off the raw data received. The integrated circuit also rectifies the data at this stage. Once this is done, the reading is tested against a threshold value; if the threshold is crossed, the arm contracts. Generally speaking, this is very simplistic code and does not come near tackling the more complex tasks and issues that a regular arm should be capable of. The schematic for the five-plus servos was not anything difficult, simply an arrangement of servos on a breadboard. The plan was to use a servo shield; however, we encountered some issues, and due to the time constraint the shield was dropped.
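The decision logic on the board amounts to a single comparison. Here is a rough Python rendering of it (the threshold and servo angles are illustrative stand-ins, not the values tuned on the actual Arduino):

```python
# NOTE: threshold and angles are illustrative, not the tuned values from the board
THRESHOLD = 512                     # placeholder cutoff (midpoint of a 10-bit ADC)
OPEN_ANGLE, CLOSED_ANGLE = 0, 180   # servo positions in degrees

def hand_angle(emg_reading):
    """Map one rectified EMG sample to a servo angle: contract past the threshold."""
    return CLOSED_ANGLE if emg_reading > THRESHOLD else OPEN_ANGLE

readings = [120, 300, 700, 650, 200]        # example rectified samples
angles = [hand_angle(r) for r in readings]  # -> [0, 0, 180, 180, 0]
```

The real sketch does the same thing with the Servo library's `write()` calls inside `loop()`; the binary open/closed behavior is exactly the limitation discussed above.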
The machine learning was the most difficult aspect of this project, since it required learning a completely new skill set involving higher-level mathematics than I knew at the time. Additionally, building machine learning models with specific application to biological systems (the broader AI-healthcare field) was still a growing area. We did not have the time to incorporate the following machine learning models, but I believe it was a valiant effort and an invaluable learning moment.
I followed a straightforward data pipeline to analyze and attempt to classify the signal data: movement, movement detection, feature/label extraction, dimensionality reduction, and finally classification.
The patient who is controlling the arm flexes their muscles or does some other biologically controlled mechanical movement.
The Myoware EMG sensor picks up the change in action potential and records it. Some data rectification is done here, in combination with the next step; however, more computationally intense processing (e.g. a Fourier transform) cannot be done here. The sensor takes the difference of two inputs (gain = 110), which is then amplified using a differential amplifier (gain = -15). The amplified signal is passed through a filter (gain = -1, fc = 106.1 Hz) that takes any negative values to be positive. Lastly, the data is smoothed (gain = -1, fc = 1.975 Hz) and inverted (gain = -20, amplified again) before being sent to the Arduino.
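On the board this chain is all analog op-amp stages, but the rectify-and-smooth portion can be mimicked digitally. A minimal numpy sketch, where the sampling rate and test signal are assumptions made up for illustration (only the 1.975 Hz smoothing cutoff comes from the actual circuit):

```python
import numpy as np

def envelope(signal, fs, fc):
    """Full-wave rectify, then single-pole low-pass to form an EMG envelope."""
    rectified = np.abs(signal)              # analog stage: negative values flipped positive
    rc = 1.0 / (2.0 * np.pi * fc)           # RC constant for the cutoff frequency
    alpha = (1.0 / fs) / (rc + 1.0 / fs)    # discrete single-pole smoothing factor
    out = np.empty_like(rectified)
    acc = rectified[0]
    for i, x in enumerate(rectified):
        acc += alpha * (x - acc)            # exponential moving average
        out[i] = acc
    return out

fs = 1000.0                                  # assumed sampling rate, Hz
t = np.arange(0, 1, 1 / fs)
burst = np.sin(2 * np.pi * 60 * t) * (t > 0.5)  # fake 60 Hz burst in the second half
env = envelope(burst, fs, fc=1.975)          # envelope rises only after the burst starts
```

This is only a digital analogue of what the op-amps do continuously; the point is that the smoothed envelope, not the raw oscillation, is what reaches the Arduino.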
Feature and Label Extraction
While most of this was technically done on the circuit board itself, any remaining data manipulation can be done here (other than one-hot encoding, which comes later). Fourier transforms were not used, since the signal as a function of frequency was not a feature we needed (and they were also really hard, considering I was taking Calc I at the time).
There were not many features to reduce, other than some feedback the servos were giving us since we were not using pulse-width modulation (PWM). It follows that the raw data was pretty much fed directly to the classifier. However, at this stage I did do some one-hot encoding to make the labels easier for the classifier to work with.
import numpy as np

def extract_data(filename):
    # read comma-separated EMG samples; last column is the finger label
    fvecs, labels = [], []
    for line in open(filename):
        row = line.split(',')
        fvecs.append([float(x) for x in row[:-1]])
        labels.append(int(row[-1]))
    fvecs_np = np.array(fvecs).astype(np.float32)
    labels_np = np.array(labels).astype(np.uint8)
    # one-hot encode the five finger classes
    labels_onehot = (np.arange(5) == labels_np[:, None]).astype(np.float32)
    return fvecs_np, labels_onehot

if __name__ == '__main__':
    run = extract_data(emg)
Since I was unable to deploy any models for the actual arm to use, this stage was only developed, never reached. I decided to use an Extreme Learning Machine, hoping that different finger contractions would give slightly different signal data. I read several papers and watched many videos accomplishing what I had set out to do; however, the papers I read used more than a two-channel EMG (i.e., more than two electrodes on the arm).
class ELM:
    def __init__(self, n_hidden_units):
        self.n_hidden_units = n_hidden_units

    def fit(self, X, labels):
        # append a bias column, then project through fixed random weights
        X = np.column_stack([X, np.ones([X.shape[0], 1])])
        self.random_weights = np.random.randn(X.shape[1], self.n_hidden_units)
        G = np.tanh(X.dot(self.random_weights))
        # solve for output weights with the Moore-Penrose pseudoinverse
        self.w_elm = np.linalg.pinv(G).dot(labels)

    def predict(self, X):
        X = np.column_stack([X, np.ones([X.shape[0], 1])])
        G = np.tanh(X.dot(self.random_weights))
        return G.dot(self.w_elm)
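As a sanity check that the approach runs end to end, here is a self-contained smoke test on entirely synthetic data (the class is restated compactly, and the feature counts and labels are made up; real EMG windows would replace the random matrix):

```python
import numpy as np

class ELM:
    # compact restatement of the extreme learning machine above
    def __init__(self, n_hidden_units):
        self.n_hidden_units = n_hidden_units

    def fit(self, X, labels):
        X = np.column_stack([X, np.ones([X.shape[0], 1])])  # bias column
        self.random_weights = np.random.randn(X.shape[1], self.n_hidden_units)
        G = np.tanh(X.dot(self.random_weights))
        self.w_elm = np.linalg.pinv(G).dot(labels)          # least-squares output weights

    def predict(self, X):
        X = np.column_stack([X, np.ones([X.shape[0], 1])])
        return np.tanh(X.dot(self.random_weights)).dot(self.w_elm)

# hypothetical data: 200 windows, 8 features each, labels for 5 fingers
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))
y = rng.integers(0, 5, size=200)
Y = (np.arange(5) == y[:, None]).astype(np.float64)  # one-hot targets

elm = ELM(n_hidden_units=64)
elm.fit(X, Y)
pred = elm.predict(X).argmax(axis=1)  # predicted finger index per window
```

Note that the hidden weights are never trained; only the linear readout is fit, which is what makes an ELM cheap enough to consider for an embedded pipeline.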
I also prepared some other models, but these were not tested either.
## Linear Discriminant Analysis
lda = LinearDiscriminantAnalysis(n_discriminants=2)
lda.fit(X, y)
X_lda = lda.transform(X)

## Spectral Regression Discriminant Analysis
srda = mlpy.Srda()
srda.weights(X_train, y_train)  # compute weights on training data
srda.predict(X_train)           # predict srda model on train data
srda.realpred                   # real-valued predictions
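mlpy has long been unmaintained; for reference, the same kind of LDA reduction in scikit-learn looks like the sketch below (the data is made up for illustration; scikit-learn caps the number of kept components at one less than the number of classes, so 2 is valid for 5 fingers):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# hypothetical data: 100 windows, 8 features, labels for 5 fingers
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8))
y = rng.integers(0, 5, size=100)

# project onto the 2 most discriminative directions
lda = LinearDiscriminantAnalysis(n_components=2)
X_lda = lda.fit_transform(X, y)  # shape (100, 2)
```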
Three tests were done to gauge the efficacy of the arm as a practical piece of technology, using an EOS lip balm, a plastic cup, and a roll of electrical tape. The EOS lip balm, being spherical, tested the grip of the distal phalanges; the hand was safely able to hold the object in place. By ascertaining the hand's ability to grip spherical objects, one gains a better understanding of how it will perform with other objects. Next was the plastic cup: the hand initially had trouble grabbing the cup due to the inherent flexibility of the plastic, but once it had a secure grip, it was able to hold it. This introduces a more practical aspect of using the hand in an everyday setting, specifically using the cup to drink water. Lastly, the hand attempted to grab and hold a roll of electrical tape. This test indicated how well the hand could lift heavier objects, since the tape was heavier than the other objects; the hand failed to hold the tape for a prolonged amount of time. I failed to incorporate the machine learning pipeline into the arm in time for the presentation, so it was dropped.
The initial goal of this project was to introduce a novel design for a biomechanical hand so that amputees and others in a healthcare setting could apply such technology to enhance quality of life and productivity. Another goal was to develop a low-cost 3D-printed myoelectric prosthetic arm for developing countries where such technology is needed. The final design yielded moderate to good performance and exhibits characteristics of a better design in that many of its features are easily extendable. Over the course of testing, the system proved to require minimal maintenance and, for the most part, to be an excellent amateur design when compared against other non-academic designs.
Overall, the Myoware muscle sensor proved insufficient for finger classification but adequate for acquiring EMG signals from muscle contractions. Better sensors could have been deployed, but then cost became a concern. One way to avoid the cost issue is to build an EMG entirely from scratch using differential amplifiers with varying gains, either on a breadboard or an integrated circuit. The results of the machine learning algorithms and their integration with the mechanical hand were inconclusive and not useful for the project, so the entire machine learning portion had to be put on hold for further investigation. To conclude, the myoelectric prosthetic arm was a success: it clearly functions, in the most basic sense of the word, as a human hand.
The project was such a success that my high school invited Grant and me to speak at our school's award ceremony to showcase our project and encourage future engineers. It was truly an amazing journey, one in which I learned a lot and gained plenty of skills that have transferred to other projects.