About Me

The current stage of AI is impressive from a human point of view, but still far from strong enough: it requires efficient and powerful infrastructure to support its brain (the LLM); it requires a clearer understanding of what the physical world is; and it needs a solid foundation to manage all of its components. That is why my research expertise focuses on GPU programming, machine learning systems, AI robotics systems, and operating systems, supported by a strong foundation in artificial intelligence (AI) and cognitive science. I am dedicated to advancing intelligent systems through rigorous scientific investigation. I have advanced proficiency in C++ and Python, leveraging libraries such as PyTorch, NumPy, and Pandas for data analysis and machine learning, alongside expertise in CUDA and ROCm/CANN for GPU/NPU-accelerated heterogeneous processing.

My technical contributions encompass the development of advanced LLM/ML applications and the optimization of high-performance parallel algorithms in robotics systems. Notable projects include implementing a context-aware generative pretrained transformer (GPT) application using the RAG framework LangChain and FastAPI, as well as designing a Pre-LayerNorm transformer architecture in Apple MLX inspired by the seminal work "Attention Is All You Need." I have also contributed to autonomous data management and perception systems for the Edinburgh University Formula Student (EUFS) initiative, using my skills in ROS2 and Linux to enhance real-time robustness in the perception pipeline and in parallel processing frameworks (eufs_pcl_parallel).
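To give a flavor of the Pre-LayerNorm design mentioned above, here is a minimal, illustrative single-head sketch in NumPy (not the actual MLX project code): in a Pre-LN block, LayerNorm is applied *before* each sublayer and the residual is added afterward, which tends to stabilize training of deep transformers. All names (`pre_ln_block`, the parameter layout) are made up for this example.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize over the last (feature) axis; scale/shift omitted for brevity.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def softmax(x):
    # Numerically stable softmax over the last axis.
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(x, wq, wk, wv, wo):
    # Single-head scaled dot-product attention (no masking, for clarity).
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v @ wo

def pre_ln_block(x, params):
    # Pre-LN: normalize *before* each sublayer, then add the residual.
    x = x + self_attention(layer_norm(x), *params["attn"])
    h = layer_norm(x) @ params["w1"]
    x = x + np.maximum(h, 0.0) @ params["w2"]  # ReLU feed-forward
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    seq_len, d_model, d_ff = 4, 8, 16
    x = rng.normal(size=(seq_len, d_model))
    params = {
        "attn": [0.1 * rng.normal(size=(d_model, d_model)) for _ in range(4)],
        "w1": 0.1 * rng.normal(size=(d_model, d_ff)),
        "w2": 0.1 * rng.normal(size=(d_ff, d_model)),
    }
    print(pre_ln_block(x, params).shape)  # (4, 8)
```

Contrast this with the original Post-LN arrangement from "Attention Is All You Need," where LayerNorm is applied *after* the residual addition; the Pre-LN ordering keeps the residual path free of normalization.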

Outside of my technical pursuits, I stay active with regular gym sessions, running, and playing tennis and badminton. In my downtime, I enjoy reading books on productivity, psychology, and science fiction to inspire new ideas and refresh my perspective on the world. Additionally, my Steam account is CyberAlbertCamus.

Skills
