via Hacker News AI

Rigor: Open-Source Proxy to Combat AI 'Enshittification'

A new open-source project called Rigor aims to prevent AI services from degrading over time. It acts as a proxy for OpenCode and Claude Code, grounding projects in an epistemic graph with LLM-based evaluation.

A developer has created Rigor, an anti-enshittification proxy designed to maintain the quality of AI agent subscriptions. The tool grounds projects in an epistemic graph and uses LLMs as judges to evaluate outputs, aiming for consistent performance without locking users into expensive subscriptions. The project was developed during the GrowthX OpenCode hackathon and is currently in a waitlist phase ahead of being open-sourced.
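The "LLM-as-judge" gating described above can be sketched in a few lines. This is a hypothetical illustration, not Rigor's actual design (which has not yet been released): a proxy scores each agent answer with a pluggable judge and forwards it only if the score clears a threshold. All names (`JudgeProxy`, `Verdict`, `stub_judge`) and the threshold value are assumptions for demonstration.

```python
# Hypothetical sketch of an LLM-as-judge gating proxy; Rigor's real
# implementation is not public, so all names here are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    score: float      # 0.0-1.0 quality score assigned by the judge
    rationale: str    # judge's explanation, kept for transparency

# A judge is any callable mapping (prompt, answer) -> Verdict.
# In practice this would wrap an LLM API call; here it is pluggable.
Judge = Callable[[str, str], Verdict]

class JudgeProxy:
    """Forward an agent's answer only if the judge's score clears a bar."""
    def __init__(self, judge: Judge, threshold: float = 0.7):
        self.judge = judge
        self.threshold = threshold

    def forward(self, prompt: str, answer: str) -> tuple[bool, Verdict]:
        # Rejected answers could be retried or escalated by the caller.
        verdict = self.judge(prompt, answer)
        return verdict.score >= self.threshold, verdict

# Stub judge for demonstration only: rewards non-empty answers that
# echo the prompt's leading keyword. A real judge would query an LLM.
def stub_judge(prompt: str, answer: str) -> Verdict:
    ok = bool(answer.strip()) and prompt.split()[0].lower() in answer.lower()
    return Verdict(score=0.9 if ok else 0.2,
                   rationale="addresses prompt" if ok else "off-topic or empty")

proxy = JudgeProxy(stub_judge, threshold=0.7)
accepted, verdict = proxy.forward("Summarize the report",
                                  "Summarize: the report says X.")
```

The key design point this illustrates is that the judge is decoupled from the transport: the proxy can sit between an agent client (e.g. OpenCode or Claude Code) and the model API, and the evaluation policy can be swapped without changing either side.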

Rigor addresses a growing concern in the AI community about the degradation of services over time, often referred to as 'enshittification.' By providing a transparent and evaluative framework, Rigor aims to keep AI agents reliable and effective. This is particularly relevant as more users rely on AI for critical tasks and projects, making the stability of these services paramount.

The project is set to be open-sourced after an initial waitlist phase, indicating a community-driven approach to combating AI service degradation; the developer plans to notify those on the waitlist by email once the core is ready. This initiative could set a precedent for other open-source projects focused on maintaining the integrity of AI tools, potentially leading to broader industry standards for quality and transparency.

#ai #open-source #proxy #enshittification #agents #epistemic-graph