
My First SO100 Repo

This was one of the first projects I took up. There was the adrenaline of working in a fresh repository codebase, alongside the joy of being new to even git and GitHub. Sometimes a big leap, a huge step, or just a decision to work on something today ends up teaching us plenty, and that is exactly what this project did for me.

When I think back to the sim2real PPO on the SO100, I go back to the many times we calibrated the camera to match the sim, or tried to figure out the dynamics of the sim-to-real system, right before we finally got the hang of it.

How did we start? It was a mentorship program, SRA at Eklavya. (The best thing.) We had a 3D-printed SO100 arm, and because the open-source ecosystem is so damn good, we had everything we needed to make it work; we just had to put the pieces in place. We read, we understood, we learnt ACT and ALOHA, the algorithms that made sense to us, and we tried our hands at MuJoCo for the first time.

This is basically what we did: repo link. We loaded an XML model into MuJoCo, tried out forward kinematics and inverse kinematics, and then went on to enforce the boundaries of the workspace. It was a fun two weeks. It was my first time dealing with actual robotics, even if only in simulation. That was July 2025, and I have loved being here ever since.
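For a flavour of what that involved, here is a minimal sketch using the MuJoCo Python bindings. The file name so100.xml, the body name gripper, and the workspace bounds are placeholders I made up, not the actual values from the repo.

```python
import numpy as np
import mujoco

# Placeholder file and body names, for illustration only.
model = mujoco.MjModel.from_xml_path("so100.xml")
data = mujoco.MjData(model)

# Forward kinematics: set joint angles, propagate, read the end-effector pose.
data.qpos[:5] = np.deg2rad([0, -45, 60, 30, 0])
mujoco.mj_forward(model, data)
ee_pos = data.body("gripper").xpos.copy()
print("end-effector position:", ee_pos)

# Workspace boundary: clamp a target point into an axis-aligned box
# before asking inverse kinematics to reach it.
WORKSPACE_LO = np.array([0.10, -0.25, 0.02])   # metres, made-up bounds
WORKSPACE_HI = np.array([0.35,  0.25, 0.30])

def clamp_to_workspace(target_xyz):
    return np.clip(target_xyz, WORKSPACE_LO, WORKSPACE_HI)
```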

The next part was hardware, and hardware is never as good as you want it to be. We worked our way from blinking motors to a heated-up motor driver, not because the parts were not available, but because you learn a lot even while just putting things together.

The first thing that comes to my mind about learning today is implementation. Even though the parts and resources were all available, they took a considerable time to put together, because we were learning as we went, and that trajectory was beautiful.

The LeRobot sim2real

Here came another whole set of lessons. Debugging hardware is the worst thing to do, but the dopamine it triggers is pure surreal. Transferring sim2real means the training happens entirely in software: we do not make the real robot do the task and record it. Instead, we build a simulator whose feed has an overlay of your greenscreen capture. The robot is calibrated through the maximum and minimum angles of each joint.
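A minimal sketch of that style of joint calibration, assuming the servos report raw tick counts and we record each joint's extremes by hand; all names and numbers here are illustrative, not from the actual codebase.

```python
# Hypothetical calibration sketch: convert raw servo ticks to joint angles
# using the recorded per-joint minimum and maximum positions.

def calibrate_joint(raw_min, raw_max, angle_min_deg, angle_max_deg):
    """Return a converter from raw ticks to degrees for a single joint."""
    def ticks_to_degrees(raw):
        frac = (raw - raw_min) / (raw_max - raw_min)
        return angle_min_deg + frac * (angle_max_deg - angle_min_deg)
    return ticks_to_degrees

# Example: a joint that reads 512 ticks at -90 degrees and 3584 at +90 degrees.
shoulder = calibrate_joint(raw_min=512, raw_max=3584,
                           angle_min_deg=-90.0, angle_max_deg=90.0)
print(shoulder(2048))  # ~0 degrees at mid-range
```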

The visual calibration here, for me, consisted of a greenscreen image of the real background onto which the simulated robot was then pasted, and synthetic training data was generated inside the physics simulator.
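The compositing step can be pictured roughly like this, assuming the simulator renders the robot over a flat green backdrop and we have one photo of the real workspace at the same resolution; the HSV thresholds are only indicative.

```python
import cv2
import numpy as np

# Illustrative greenscreen composite: replace the simulator's flat green
# backdrop with a photo of the real workspace, so the rendered robot
# appears to sit in the real scene. Assumes both images share a resolution.
sim_bgr = cv2.imread("sim_render.png")       # robot rendered over green
real_bg = cv2.imread("real_background.png")  # photo of the real table

hsv = cv2.cvtColor(sim_bgr, cv2.COLOR_BGR2HSV)
green_mask = cv2.inRange(hsv, (40, 80, 80), (85, 255, 255))  # rough green range

composite = np.where(green_mask[..., None] > 0, real_bg, sim_bgr)
cv2.imwrite("composite.png", composite)
```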

We ended up with a working sim2real pipeline: the cube spawned in the given workspace, and the robot picked it up. It succeeded in simulation as well as in real life. True fun.
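For what "the cube spawned in the given workspace" amounts to in practice, here is a rough MuJoCo-flavoured sketch; the scene file, joint name, and spawn bounds are placeholders of mine, not the pipeline's actual values.

```python
import numpy as np
import mujoco

# Illustrative cube spawn: sample a position inside the reachable workspace
# and write it into the cube's free joint before an episode starts.
# The scene file and joint name ("scene.xml", "cube_freejoint") are placeholders.
model = mujoco.MjModel.from_xml_path("scene.xml")
data = mujoco.MjData(model)

SPAWN_LO = np.array([0.15, -0.15])   # x, y bounds in metres (made up)
SPAWN_HI = np.array([0.30,  0.15])

def spawn_cube(data, rng):
    xy = rng.uniform(SPAWN_LO, SPAWN_HI)
    adr = model.joint("cube_freejoint").qposadr[0]
    data.qpos[adr:adr + 3] = [xy[0], xy[1], 0.02]      # cube position
    data.qpos[adr + 3:adr + 7] = [1.0, 0.0, 0.0, 0.0]  # identity orientation
    mujoco.mj_forward(model, data)

spawn_cube(data, np.random.default_rng(0))
```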