Recent Posts

Interpretable neural networks are currently nothing more than a pipe dream. However, if we could make these machines even partially interpretable, maybe we could learn more about how they work. One way to look at interpretability is through embeddings. Not embeddings like word2vec or autoencoders (both of which at some point provide a fixed-dimensional representation of data), but visual embeddings. Methods like t-SNE and PCA are common ways to do this, providing an intuitive 2-D or 3-D representation of some data.


Projects

Intel Innovate FPGA

Semi-finalist in Innovate FPGA. Accelerated a neural network with an FPGA (DE10-Nano) for detection of bicyclists at intersections.

Green Raccoon

A React Native application that takes a picture and tells you whether something is recyclable or not. react-native, google-cv-api, snack.expo

Deep Margins

Attempting to increase decision margins in neural networks through techniques similar to margin maximization in SVMs. Mentored by Prof. Nicholas Ruozzi, UTD. Python, TensorFlow, OpenCV, and a lot of Bash.

PhysVR

Developed an unobtrusive method to virtually display a physical object for collision avoidance in virtual reality. Work done in the Future Immersive Virtual Environments Lab (mentored by Dr. Ryan P. McMahan) through the Clark Summer Research Program. Unity3D, C#, and the HTC Vive headset.

MyUTD

Cross-platform mobile application that tracks and displays the locations of on-campus transportation systems in real time. Qt, QML, JavaScript, and C++.