softservo's comments | Hacker News

step.parts is an open source directory of 12,000+ STEP parts for your next CAD project.

Many commercial CAD tools, like Autodesk's, ship huge closed libraries of standard mechanical and electrical parts.

The goal is to build an equivalent catalog for the open source CAD community (that’s easy for humans and agents to use).

The directory is seeded from dozens of existing open source catalogs and generators, then organised by family, standard, size, etc.

The directory also includes an API, an llms.txt file and a skill that make it easy for agents to download relevant STEP files inside generative CAD tools. I’ve added the skill to my text-to-cad repository:

https://www.cadskills.xyz
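For agents, fetching a part via the API could look roughly like this. Note the endpoint path and query parameters below are guesses for illustration only; check the site's llms.txt for the real interface:

```python
import urllib.parse
import urllib.request

BASE_URL = "https://step.parts/api/parts"  # hypothetical endpoint


def build_search_url(family: str, standard: str, size: str) -> str:
    """Build a query URL for a standard part, e.g. an ISO 4762 M3x10 screw."""
    query = urllib.parse.urlencode(
        {"family": family, "standard": standard, "size": size}
    )
    return f"{BASE_URL}?{query}"


def download_step(url: str, dest: str) -> None:
    """Save the STEP file to disk (no retries or auth handling for brevity)."""
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as f:
        f.write(resp.read())
```

An agent would build the URL from the user's request, download the STEP file, and import it into the CAD session alongside generated parts.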

I’m hoping many can contribute to the repo and expand the directory to hundreds of thousands of standard parts!

Happy building



Not intentionally deceptive, the prompts are just too big to include on the home page!

I actually used GPT 5.5 Pro to generate the prompts from simpler one sentence prompts, so hypothetically it’s just an extra step in the harness for an agent to unpack / add detail to a prompt based on the user’s goal.


This issue is fixed by the way!

Hi all, repo author here, appreciate the kind words and feedback!

I'm brushing up on robotics after spending the last 10 years working in software land. After being humbled by modern CAD tools like Onshape, I built this harness / skill to help me generate some basic CAD models for a 7-DOF robot arm I'm designing.

It ended up working much better than I expected, particularly on the latest GPT 5.5 and Opus 4.7 models. It's been a lot of fun to work on. I've learned a lot about how STEP files work (OpenCascade, B-reps, etc.) as well as 3D rendering tools like three.js.

I don't have much intention of turning this into a business; it's really just a fun open source tool that I'll continue to maintain as long as I and others find it useful. Very open to ideas and contributions.

P.S. I just pushed a major update that improves the workflow and scripts/tools for the CAD skill. I also added some basic benchmarks to start measuring performance over time.



Proud of you

The purpose of this repo (harness and skills) is really just to give the models more direct tools to generate and inspect STEP files. It basically generates a topology sidecar for every STEP file that can be used to quickly read the B-rep (faces/edges/vertices) without loading the full STEP.
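A minimal sketch of the sidecar idea: STEP (ISO 10303-21) files declare B-rep topology as plain-text entity instances like `#12 = ADVANCED_FACE(...)`, so you can get a cheap topology summary by scanning for those declarations, without loading a geometry kernel. The function names and JSON layout here are illustrative, not the repo's actual format:

```python
import json
import re

# Topological entity keywords from the STEP schema; counting their
# declarations summarizes the B-rep without parsing the geometry.
ENTITY_RE = re.compile(r"#\d+\s*=\s*(ADVANCED_FACE|EDGE_CURVE|VERTEX_POINT)\b")


def summarize_step_text(step_text: str) -> dict:
    """Count faces/edges/vertices declared in a STEP file's DATA section."""
    counts = {"ADVANCED_FACE": 0, "EDGE_CURVE": 0, "VERTEX_POINT": 0}
    for match in ENTITY_RE.finditer(step_text):
        counts[match.group(1)] += 1
    return {
        "faces": counts["ADVANCED_FACE"],
        "edges": counts["EDGE_CURVE"],
        "vertices": counts["VERTEX_POINT"],
    }


def write_sidecar(step_path: str) -> str:
    """Write a <name>.topo.json sidecar next to the STEP file."""
    with open(step_path) as f:
        summary = summarize_step_text(f.read())
    sidecar_path = step_path + ".topo.json"
    with open(sidecar_path, "w") as f:
        json.dump(summary, f, indent=2)
    return sidecar_path
```

A real sidecar would also record stable IDs and bounding data per face so the agent can reference geometry precisely, but even raw counts let it sanity-check a part in a single small file read.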

There's also a bunch of work going into the SKILL.md to plan for more complex parts (this is mostly a stopgap while the models still lack strong spatial reasoning).


I appreciate that effort. Seeing Claude start to prototype physical objects that can be mass-produced is unbelievable, but wow, it burns through tokens like crazy.

I'm using Opus 4.7 with the 1M-context option on the vibrating mesh nebulizer repo and have hit compaction pretty often, which is a restart-the-conversation flag for me, even on relatively small (10-40 kB) OpenSCAD files like the adapters and enclosures here: https://github.com/dmchaledev/VibratingMeshNebulizerControll...



Working on benchmarks at the moment! Always open to feedback / PRs.

I'm definitely working on benchmarks for how my own general harness improves task performance vs. the same model in a commodity setup. It's hard to do!

I will say that my current harness (https://github.com/cartazio/oh-punkin-pi) is a testbed for a bunch of second-gen harness tech, largely optimized for reasoning LLMs only. The next one after this harness is gonna be epicccc.


This is text-to-cad, an open source tool for generating 3D models in Codex / Claude Code!

Use it to prompt and edit complex 3D models. Export STEP, STL, GLB, DXF and URDF files. Built for CAD newbies. Link to GitHub below.

CAD is hard. As a software engineer getting back into robotics, I’ve been humbled by new tools like Onshape. Struggling to kick old habits, I started prompting Codex to generate 3D models and had some limited success. After a few iterations I found a recipe that actually works:

1. Generate a Python script for every STEP file. The agent can easily edit each part’s source without touching the raw STEP file. Use build123d over cadquery.

2. Reference specific faces and edges in prompts for precise edits. I built a basic local UI to inspect / cache STEP B-reps to make this easier.

3. Maintain markdown explaining important part features in plain English so the model can index on project context quickly.

4. Verify results with screenshots and geometry. Models don’t have great spatial awareness, but they can interpret images and verify constraints very well.
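As a rough illustration of the geometry-verification step, a post-generation check might compare the part's measured bounding box against plain-number constraints from the prompt, instead of trusting the model's spatial reasoning. The constraint format and helper below are invented for this example; the repo's actual checks differ:

```python
def check_bounding_box(vertices, max_dims, tol=1e-6):
    """Verify a part fits an envelope.

    vertices: iterable of (x, y, z) points sampled from the model, in mm.
    max_dims: (dx, dy, dz) size limits in mm.
    Returns a list of human-readable failures; empty means the part fits.
    """
    xs, ys, zs = zip(*vertices)
    dims = (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))
    return [
        f"axis {axis}: {actual:.3f} mm exceeds limit {limit:.3f} mm"
        for axis, actual, limit in zip("xyz", dims, max_dims)
        if actual > limit + tol
    ]
```

Feeding failures like these back to the agent as plain text gives it something concrete to fix, which works far better than asking it to eyeball dimensions.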

For the best results I’ve been using GPT 5.4 xhigh / Opus 4.6+. Fair warning, this will burn through tokens, I recommend the Pro/Max plans if you’re planning on building anything serious. PRs welcome!


Backstory: About one month ago while visiting us from overseas, my wife’s parents were in a terrible car accident. Everyone involved is alive and going to be okay. But what followed was a series of emotional, physical and logistical challenges that pushed my wife and her family to their limits.

For the first week we were practically glued to our phones, contacting family members, insurance companies and air ambulance services. I found myself obsessively checking my phone for updates, sending empty messages and mindlessly scrolling feeds. My screen time reached all-time highs. I was spending 12 of my 16 waking hours staring at a screen instead of being there for my wife and her parents. It felt like I was hiding on my phone.

I don’t have a particularly addictive personality, but I am undeniably addicted to my phone. And this was the week I finally needed to deal with it.

I tried Apple Screen Time and a few popular screen time management apps, but found the blocks were too easy to bypass. I also realised that most apps (e.g. YouTube) were as useful as they were distracting depending on the context. I didn’t necessarily want to use my phone less: it’s an incredibly useful tool, and the distractions were sometimes helpful.

What I really needed was intentional stretches of time spent away from my phone. I built touchgrass.fm as a simple way to record and incentivize those stretches of time. It’s not quite finished (built in a few hours of downtime), but it helped me stay present during hospital visits, meals and important conversations.

I decided to share it on the off chance it helps others get some control back and be a bit more present in their day to day lives!

Link: https://www.touchgrass.fm

What Are You Working On (June 2025): https://news.ycombinator.com/item?id=44416093#44427955


Thanks for sharing, I will give this a try today. What is the rationale for requiring the app to be open and leaving the screen on (draining battery)? Was this a technical limitation?


Thank you, feedback greatly appreciated!

I tried a few different implementations (e.g. using background video/audio), but ultimately the device’s unpredictable management of background apps made it impossible to distinguish between navigating away from the app and locking the phone. I ended up going with the “dumb” solution.

It actually works quite nicely (especially as a PWA on iOS) because it makes the stretch feel more intentional, and it discourages “gaming” the system by recording long stretches while sleeping.

I’ve found the battery drain is also pretty negligible over 2-3 hour stretches; the dark screen / mostly black pixels seem to help a lot.

