AI-Ranked Reddit Feed

5000 posts

r/LocalLLaMA Warden__Main_

Local LLM Pentesting

What is the local LLM used in this video that has no guardrails and how can I download and run such a model on my PC?

r/ClaudeAI valtor2

Claude Code Channels Discord broken?

Not sure if it's just me (I do feel like I'm one of the few using it 😅), but I can't for the life of me get a consistent, reliable connection between Claude Code and Discord. Specifically, Claude loves to send approval requests through, and whenever I approve one it loses the ability to send messages. I might be able to convince it to try again and it'll work, but part of the convo still ends up in the terminal on my machine instead of in Discord. Is this also a problem on Telegram?

r/ClaudeCode kmizzi

I built an open-source local dashboard to browse, search, and analyze all my Claude Code sessions

Claude Code stores every conversation as JSONL files under ~/.claude/projects/, but over time you end up with thousands of sessions across dozens of projects and no good way to find anything. --resume is limited to the current directory, and there's no cross-project search.
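For context, indexing those transcripts is mostly a matter of walking the projects directory and parsing line-delimited JSON. Here's a minimal sketch (the directory layout follows the description above, but the demo paths and record fields are hypothetical, not the dashboard's actual code):

```python
import json
import tempfile
from pathlib import Path

def iter_sessions(root):
    """Yield (path, records) for every JSONL transcript under root/<project>/."""
    for path in sorted(Path(root).glob("*/*.jsonl")):
        with path.open() as f:
            # One JSON object per line; skip blank lines.
            records = [json.loads(line) for line in f if line.strip()]
        yield path, records

# Demo on a throwaway directory shaped like ~/.claude/projects/
root = Path(tempfile.mkdtemp())
proj = root / "my-project"
proj.mkdir()
(proj / "session1.jsonl").write_text(
    '{"role": "user", "content": "hello"}\n'
    '{"role": "assistant", "content": "hi"}\n'
)
sessions = list(iter_sessions(root))
```

A real indexer would additionally extract timestamps, models, and git branches from each record for filtering.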

I built Claude Code Sessions to fix this — a local Next.js dashboard that indexes all your session history and makes it actually usable.

What it does:

  • Session browser — every session across all projects in one table, sorted by recency. Filter by project, date, model, git branch.
  • Two search modes — keyword search (SQLite FTS5) for exact matches, or AI semantic search (local embeddings, MiniLM-L6-v2) for natural-language queries like "the session where I fixed the auth bug."
  • Chat replay — full transcript with user/assistant bubbles, collapsed tool-use blocks, metadata sidebar with context % meter.
  • Token usage analytics — breakdowns by period (today, 7d, 30d) with estimated API costs.
  • AI chat assistant — ask questions about your history: "What did I work on today?", "Which project uses the most tokens?" Uses Claude with tool calls to query your indexed data.
  • Daily standup prep — one-click scrum-style update, timezone-aware, skips weekends.
  • Resume/fork sessions — copy the CLI command with optional flags directly from the UI.
  • Background service — install as a macOS launchd service, always available at localhost:5858.
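To illustrate the keyword-search mode, here's what an SQLite FTS5 index over session messages looks like in miniature (hypothetical schema for illustration; the project's actual tables may differ):

```python
import sqlite3

db = sqlite3.connect(":memory:")
# FTS5 virtual table: every column is full-text indexed.
db.execute("CREATE VIRTUAL TABLE messages USING fts5(session_id, role, text)")
db.executemany(
    "INSERT INTO messages VALUES (?, ?, ?)",
    [
        ("abc123", "user", "fix the auth bug in the login flow"),
        ("abc123", "assistant", "patched token refresh in auth middleware"),
        ("def456", "user", "add dark mode to the settings page"),
    ],
)

# MATCH does exact-term search; bm25() orders results by relevance.
rows = db.execute(
    "SELECT session_id, text FROM messages "
    "WHERE messages MATCH ? ORDER BY bm25(messages)",
    ("auth",),
).fetchall()
```

Semantic search would layer on top of this: embed each message once, then rank by vector distance instead of term match.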

Everything runs locally. No data leaves your machine (except AI chat, which requires your own Anthropic API key and is optional).

Stack: Next.js 15, React 19, better-sqlite3 (WAL + FTS5 + sqlite-vec), @huggingface/transformers for local embeddings, Tailwind + shadcn/ui.

Quick start:

git clone https://github.com/kmizzi/claude-code-sessions.git
cd claude-code-sessions
./scripts/setup.sh

Video overview: https://youtu.be/yau-6XJg9mA

GitHub: https://github.com/kmizzi/claude-code-sessions

Happy to answer questions or take feature requests.

r/ClaudeAI keebrev-t

I'm letting AI plan every hour of my life for 2 weeks. Starting Monday. Looking for tips from people who've tried this.

Next Monday I hand my calendar, my meals, my workouts, my sleep schedule, and basically every decision in my day over to a multi-agent AI assistant I've been building for the last 5 days. It decides when I get up, what I eat, when I hit the gym, when I work on which project, and when I'm "allowed" to hang out with my partner. I follow its plan. For 2 weeks.

Why: I'm a platform engineer running a consulting biz on the side. Every productivity system I've tried works for 2 weeks then collapses. I wanted a system that maintains itself. So I built one.

What I've built so far (all in Claude Code, 5 days):

  • 7 specialized agents (PA orchestrator, calendar, email, tasks, knowledge, brain maintenance, decision-making)
  • 50+ commands across daily ops, calendar, email triage, brain management
  • A persistent "brain" in Obsidian — 132 knowledge nodes, 1001 wiki-links, 98 logged decisions. Every session reads from it, writes back to it.
  • Telegram daemon so it can nudge me on the go
  • Observability hooks, bug tracker, bootstrap installer. Fully docs'd.

Full project page with live timeline + architecture + bug tracker: https://rivuletconsulting.nl/projects/daily-ai.html

First blog post (the "why") + Day 5 build log are up there too.

The experiment starts Monday. I'll be posting daily updates.

What I'm asking:

  • Anyone tried something similar? What broke first?
  • Tips for keeping the autonomy/override balance right? Where do you draw the line between "AI leads" and "I override"?
  • Prompt patterns that worked for you in multi-agent setups?
  • Things you wish you'd known before handing control over?

Honest takes welcome — including "this is a terrible idea because X".

r/LocalLLaMA OldSwimming6068

Experimenting with version control for AI workflows

Hi everyone,

I've been playing with a small experiment around version control and AI workflows.

It's called syft, and it came from a simple problem: when you use models to make changes, you rarely get one clean result. You get a few attempts. Some pass tests, some come very close, some go in a different direction.

Once you pick one, the diff doesn't really capture how you got there.

Git tracks what changed. It doesn't really keep track of the task, the different attempts, or the validation that led to the final result. You can reconstruct it, but it's spread across commits, PRs, and logs.

So I tried a different shape.

The main thing is a "change node" that groups the task, a base snapshot, a result snapshot, and the validation output. You can have multiple candidates for the same task, look at them side by side, and then promote one forward.

It still uses Git for import and export so it works inside a normal repo.

There's a CLI for capturing snapshots, proposing changes, running validation, and inspecting what happened.

It's still early and pretty rough in places. Just trying to see if this way of structuring changes holds up a bit better when AI is involved.

If you're curious and want to take a look, it's fully open source: https://github.com/chaqchase/syft

You can also read this for more context: https://www.chaqchase.com/writing/version-control-for-ai

Curious what everyone thinks. Should I continue with this or drop the idea altogether? Thanks for reading!

r/Anthropic bhalothia

Anthropic must be doing something right!

Peter Bailis, CTO of Workday, just left to join Anthropic as an engineer. 🤯

r/LocalLLaMA zylskysniper

We ranked 22 AI models with 550+ real OpenClaw battles — GLM-5.1 debuts at #2, full Pareto cost analysis inside

I've been working on an arena (Chatbot Arena style) for comparing AI models on user-submitted, real agentic tasks, not chat, not static coding benchmarks. I want to benchmark how models perform in a real harness, on real tasks.

Each model runs as an actual OpenClaw subagent in a fresh VM with terminal, browser, file system, and code execution, etc. A judge agent (user's choice: Claude Opus 4.6, GPT-5.4, or Gemini 3.1 Pro) evaluates by reading/running the code, browsing deployed apps, taking screenshots, etc.

We just passed 550 battles across 22 models. Here are the current performance rankings:

Performance Leaderboard

| Rank | Model | Score | Battles |
|------|-------|-------|---------|
| 1 | Claude Opus 4.6 | 1739 | 108 |
| 2 | GLM-5.1 | 1700 | 21 |
| 3 | Claude Sonnet 4.6 | 1681 | 104 |
| 4 | GPT-5.4 | 1663 | 138 |
| 5 | Qwen 3.6 Plus | 1537 | 27 |
| 6 | GPT-5.3 Codex | 1477 | 183 |
| 7 | Claude Haiku 4.5 | 1408 | 169 |
| 8 | Qwen 3.5 27B | 1395 | 81 |
| 9 | Xiaomi MiMo v2 Pro | 1385 | 112 |
| 10 | GLM-5 Turbo | 1362 | 93 |
| 11 | MiniMax M2.7 | 1309 | 161 |
| 12 | StepFun 3.5 Flash | 1280 | 156 |
| 13 | DeepSeek V3.2 | 1246 | 126 |
| 14 | Gemini 3 Flash | 1235 | 131 |
| 15 | Gemini 3.1 Pro | 1196 | 92 |
| 16 | Grok 4.1 Fast | 1189 | 160 |
| 17 | Kimi K2.5 | 974 | 95 |
| 18 | Nemotron 3 Super 120B | 804 | 98 |

Cost vs Performance (Pareto Analysis)

Beyond the performance ranking, we plot each model's performance score against its actual average cost and draw the Pareto frontier — the set of models where you can't get better performance without paying more. This gives us a "budget ladder" — the best model at each price point:

| Budget Range | Recommended Model | Score | Avg Cost |
|--------------|-------------------|-------|----------|
| $0.03 - $0.04 | Grok 4.1 Fast | 1189 ±84 | $0.03 |
| $0.04 - $0.14 | StepFun 3.5 Flash | 1280 ±93 | $0.04 |
| $0.14 - $0.19 | MiniMax M2.7 | 1309 ±97 | $0.14 |
| $0.19 - $0.24 | Qwen 3.5 27B | 1395 ±100 | $0.19 |
| $0.24 - $0.37 | GPT-5.3 Codex | 1477 ±92 | $0.24 |
| $0.37 - $1.19 | GLM-5.1 | 1700 ±129 | $0.37 |
| $1.19+ | Claude Opus 4.6 | 1739 ±126 | $1.19 |
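Computing that frontier is a single sorted scan: once models are ordered by cost, a model survives only if it beats the best score seen at any lower price. A sketch, using a subset of the post's own cost/score figures:

```python
def pareto_frontier(models):
    """models: list of (name, avg_cost, score).
    Keep each model for which no cheaper-or-equal model scores higher."""
    frontier = []
    for name, cost, score in sorted(models, key=lambda m: (m[1], -m[2])):
        # Sorted by cost, so we only need to beat the best score so far.
        if not frontier or score > frontier[-1][2]:
            frontier.append((name, cost, score))
    return frontier

models = [
    ("Grok 4.1 Fast", 0.03, 1189),
    ("StepFun 3.5 Flash", 0.04, 1280),
    ("Gemini 3.1 Pro", 0.32, 1196),  # dominated: costs more than StepFun, scores lower
    ("GLM-5.1", 0.37, 1700),
    ("Claude Opus 4.6", 1.19, 1739),
]
front = pareto_frontier(models)
```

With these numbers Gemini 3.1 Pro drops off the frontier, matching the post's observation.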

One to watch: Qwen 3.6 Plus doesn't support prompt caching yet (at least on OpenRouter), so its $0.37/run is inflated. When caching lands, I'd expect its cost to drop to roughly MiniMax level (~$0.14), which would push MiniMax M2.7, Qwen 3.5 27B, and GPT-5.3 Codex off the frontier entirely.

Some Interesting Findings

  1. GLM-5.1 debuts at #2 with a perfect record. Much better than I expected.

  2. Opus is still #1 but at a steep cost. $1.19/run average. It's the most expensive model by far. GPT-5.4 at $0.40 is close in performance and much cheaper.

  3. Gemini 3.1 Pro is bad at agentic tasks. Ranks #15 at $0.32/run — behind multiple models that cost a fraction of the price. We actually had to optimize the judge message for it because it sometimes just reads the skill and decides to do nothing.

  4. StepFun 3.5 Flash is underrated for cost effectiveness: Grok 4.1 Fast-level cost, MiniMax M2.7-level performance.

Methodology

We only use the relative ordering of models within each battle — not the raw scores. Absolute scores from LLM judges are noisy and poorly calibrated (a "7/10" in one battle might be "6/10" in another), but "A ranked above B" is much more consistent. Same principle behind Chatbot Arena's pairwise preference approach.

Rankings use a grouped Plackett-Luce model, not simple win-rate or Bradley-Terry. Battles where the judge model is also evaluated are excluded from the official board.
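To make the pairwise-vs-absolute point concrete, here's the plain (ungrouped) Plackett-Luce likelihood of one observed ranking; the arena's grouped variant builds on the same idea. This is an illustrative sketch, not the arena's implementation:

```python
import math

def pl_log_likelihood(strengths, ranking):
    """Plackett-Luce log-likelihood of one ranking (best first).

    P(ranking) = prod_i exp(s_i) / sum_{j still unranked} exp(s_j)
    Only the relative order enters the model, never raw judge scores.
    """
    ll = 0.0
    remaining = list(ranking)
    for model in ranking:
        denom = sum(math.exp(strengths[m]) for m in remaining)
        ll += strengths[model] - math.log(denom)
        remaining.remove(model)
    return ll

s = {"A": 1.0, "B": 0.0, "C": -1.0}
# A ranking consistent with the strengths is more likely than its reverse.
better = pl_log_likelihood(s, ["A", "B", "C"])
worse = pl_log_likelihood(s, ["C", "B", "A"])
```

Fitting the leaderboard means finding the strengths that maximize this likelihood summed over all observed battles.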

Full methodology with equations and comparison vs Arena.ai: https://app.uniclaw.ai/arena/leaderboard/methodology?via=reddit

How Battles Work

  • You submit any task + pick 2-5 models
  • A judge agent spawns one subagent per model on a fresh VM
  • Each model solves the task independently with full tool access (terminal, browser, files, code)
  • The judge evaluates by running code, browsing results, taking screenshots, etc
  • Full conversation history, workspace files, and judge reasoning are preserved

Try It

Live leaderboard (no account needed): https://app.uniclaw.ai/arena?via=reddit

Interactive Pareto cost-performance plot: https://app.uniclaw.ai/arena/visualize?via=reddit

Submit your own benchmarks (public ones are on us): https://app.uniclaw.ai/arena/new?via=reddit

The judge skill is open-source: https://github.com/unifai-network/skills/tree/main/agent-bench

Note on the data: We bootstrapped the first 500+ battles by crawling what people are doing with OpenClaw (on X, Reddit, etc.) and generating battles with similar tasks + randomly selected models. Going forward, anyone can submit their own tasks.

What tasks would you want to see benchmarked? Happy to run specific comparisons.

r/ChatGPT Kimbo367

I've spent a few months using ChatGPT to make money selling second-hand stuff and get past InfoJobs filters

Hi. I'm writing this because I see quite a few people here struggling to find work or trying to scrape together something extra at the end of the month, and I was in the same spot not long ago.

The thing is, I started messing around with AI to see if it was any use. At first I asked it to write ads for Wallapop and it churned out embarrassing texts that sounded like a 3 a.m. teleshopping ad.

But after putting in the hours I got the hang of it. The trick isn't asking it to "write this for me", it's giving it really specific context. For example, to sell clothes I don't wear on Vinted, I feed it a base prompt like: "You're a 25-year-old who sells on Vinted and knows exactly what words people are searching for right now. Write me a short title and a description that uses scarcity but sounds 100% natural, nothing AI-sounding. The product is X."

It sounds silly, but I went from dead listings to selling sneakers and clothes within a couple of days, because the algorithm ranks you much higher if you use the right keywords without sounding like a bot.

Something similar happened with CVs. It turns out big companies use automatic filters (ATS) that reject you if you don't have the exact keywords from the job posting. So I give the AI the InfoJobs listing plus my experience, and ask it to build the CV in plain text, working those keywords in naturally. Since I started doing that, I get past the first filter almost every time.
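That keyword-matching step is easy to approximate yourself. A rough sketch (crude frequency-based extraction, with made-up example texts; real ATS systems are more sophisticated):

```python
import re
from collections import Counter

STOPWORDS = {"and", "or", "the", "a", "to", "of", "in", "with", "for", "on"}

def keywords(text, top=15):
    """Crude keyword extraction: most frequent non-stopword tokens."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS and len(t) > 2)
    return {w for w, _ in counts.most_common(top)}

def ats_coverage(job_posting, cv):
    """Fraction of the posting's top keywords that appear in the CV,
    plus the keywords still missing."""
    wanted = keywords(job_posting)
    have = keywords(cv, top=100)
    missing = wanted - have
    return len(wanted - missing) / len(wanted), missing

posting = "Python developer with Django, REST APIs, PostgreSQL and Docker experience"
cv = "Built REST APIs in Python using Django and PostgreSQL"
score, missing = ats_coverage(posting, cv)
```

The missing set is exactly what you'd feed back into the prompt: "work these terms in naturally".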

It has also saved my life with the self-employment paperwork and the Kit Digital grant, which is a real headache.

The thing is, from all that experimenting I kept saving these base prompts in a doc to have them handy. When I saw they helped a couple of friends, I put together a pack of about 60 of them (for Wallapop, CVs, LinkedIn, freelancer stuff...) and uploaded it to Hotmart for a ridiculous price, mostly to cover the work of organizing it all.

I'm not posting the link here so the post doesn't get removed for spam, but if anyone wants to take a look, DM me or leave a comment and I'll send it over.

If not, try what I said about giving the AI context before asking it for anything; it really makes a difference.

r/ClaudeCode BackgroundTimely5490

My dev team is burning through Claude / Cursor credits like crazy — how do you control AI usage in a team?

I run a dev team of 15 engineers. Recently we started using AI tools like Claude, Cursor, and Windsurf.

Initially, I tried:

- Shared Cursor/Windsurf accounts (2 people per seat)

- Then upgraded to multiple Claude Max subscriptions ($100 each)

But we’re facing a serious issue:

Developers are using AI in “full speed mode”:

- Spawning multiple agents

- Running large prompts

- No control on usage

- Credits get exhausted mid-day or within hours

Even when I try to scale:

- If I buy more seats → usage just scales up

- If I give “unlimited” → it still gets exhausted fast

Now I’m considering:

- Moving to API-based usage with per-developer budget ($20–$50/month)

- Or restricting usage with strict rules

But I’m worried:

- Dev productivity might drop

- Team is already “used to” Claude-level performance

- They resist switching to cheaper models

Key question:

How are other teams managing AI usage at scale without burning costs?

Specifically:

- Do you use per-user API budgets?

- Any tools for tracking usage per dev?

- How do you prevent “AI overuse” behavior?

- Do you enforce rules or just let teams manage themselves?

Would really appreciate practical strategies from teams dealing with this.

Thanks!

r/singularity WhyLifeIs4

Claude is now adopting the advisor strategy

We're bringing the advisor strategy to the Claude Platform.

Pair Opus as an advisor with Sonnet or Haiku as an executor, and your agents can consult Opus mid-task when they hit a hard decision. Opus returns a plan and the executor keeps running, all inside a single API request.

This brings near Opus-level intelligence to your agents while keeping costs near Sonnet levels.

In our evals, Sonnet with an Opus advisor scored 2.7 percentage points higher on SWE-bench Multilingual than Sonnet alone, while costing 11.9% less per task.

Available now in beta on the Claude Platform.

Learn more: https://claude.com/blog/the-advisor-strategy

r/LocalLLM ScarblaZ

Reduce memory usage (LM Studio - OpenWebUI - Qwen3 Coder Next - Q6_K)

My system specs:
64 GB Ram DDR 4 3200

8GB Vram 4060ti

Current state: I'm happy with the current token speed and the code the model produces (it uses 100% of RAM, leaving less than 200 MB free).

What I want: is there any way to reduce RAM usage, e.g. use 60 GB instead of the full 64 GB, leaving 4 GB free so I can run a browser and other software?

I tried the Q4_K quant of the same model but the results were very different and not good enough for me after multiple tries; Q6_K works really well.

r/ClaudeAI SoftMushyStool

Claude Max keeps silently mass-deleting parts of my chats. Third time now.

Anyone else getting messages just… vanish from Claude conversations?

Third time it's happened to me. Long chat inside a Project (files in project files, so the chat itself is small — nowhere near context limits). Come back the next day and dozens of messages are gone. No error, no warning. Claude itself can only see up to some random earlier point and has no idea the rest ever happened.

Latest one wiped out days of financial planning work I now have to rebuild from memory.

$140/month for this. Emailed support. Curious if it's a Projects bug or if other Max users are seeing the same thing.

r/SideProject rgb1903

I built an AI editor that strips out generic ChatGPT fluff and fixes grammar.

precisionaudit.app

I made this because I hate how ChatGPT writes. You ask for a basic email, and it gives you a robotic mess full of words like 'delve' or 'synergy' instead of just sounding normal. But writing it yourself in a rush usually leads to embarrassing typos.

Precision Audit solves both problems. You paste your draft, and it catches those human errors while aggressively deleting the generic AI fluff. It rewrites your text to be sharp, direct, and sound like an actual person wrote it.

You can test it right now without signing up. Let me know if it helps clean up your drafts.

r/SideProject Main-Confidence7777

I built a YAML-first LLM orchestrator that runs 70% faster and cheaper than calling APIs manually. Open source, TypeScript, 6 providers.

After months of chaining LLM calls with Python scripts and losing my mind on token costs, I built OCC (Orchestrator Chain Chimera), an open-source workflow engine where you define multi-step AI tasks in YAML instead of code.

The core idea: most LLM workflows waste tokens on data collection. OCC has 30 "pre-tools" that fetch URLs, run shell commands, query databases, call MCP servers, all at zero LLM token cost. The LLM only sees the result.

steps:
  - id: research
    model: claude-haiku-4-5        # cheap
    pre_tools:
      - type: web_search
        query: "{input.topic}"
        inject_as: data            # 0 tokens
    prompt: "Analyze: {data}"
    output_var: analysis
  - id: risks
    model: claude-haiku-4-5
    prompt: "Identify risks for {input.topic}"
    output_var: risk_data
  - id: report
    model: claude-sonnet-4-6       # smart
    depends_on: [research, risks]  # waits for both
    prompt: "Executive summary:\n{analysis}\n{risk_data}"
    output_var: final

research and risks run in parallel automatically (DAG resolution). No code, no framework, just YAML.

Real benchmarks (not estimates):

I ran a 10-step strategic analysis chain 3 different ways and measured everything:

| Approach | Duration | Cost/run |
|----------|----------|----------|
| Sequential, all Sonnet | 229s | $0.60 |
| Sequential, Haiku+Sonnet | 97s | $0.12 |
| OCC parallel, Haiku+Sonnet | 69s | $0.18 |

The biggest win is model routing: using Haiku ($0.25/M) for 9 subtasks and Sonnet ($3/M) only for the final synthesis = 80% cost reduction. OCC makes this trivial (one model: field per step).
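The arithmetic behind that claim checks out if you assume a uniform token count per step (the ~20k tokens/step here is a hypothetical figure chosen to reproduce the $0.60 all-Sonnet benchmark above, not a measured one):

```python
PRICE = {"haiku": 0.25, "sonnet": 3.00}  # $ per million tokens (post's figures)

def chain_cost(steps, tokens_per_step=20_000):
    """Rough chain cost assuming every step consumes the same token budget."""
    return sum(PRICE[m] * tokens_per_step / 1_000_000 for m in steps)

all_sonnet = chain_cost(["sonnet"] * 10)         # 10 Sonnet steps
routed = chain_cost(["haiku"] * 9 + ["sonnet"])  # 9 Haiku + 1 Sonnet synthesis
reduction = 1 - routed / all_sonnet
```

Under these assumptions the routed chain lands around $0.105 vs $0.60, an ~82% reduction, consistent with the "80% cost reduction" above.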

The parallelism adds ~48% cost vs sequential (cache miss penalty) but saves 29% time. Whether that tradeoff is worth it depends on your use case. Full honest analysis here.

What it's NOT good for: single API calls (+14% overhead), real-time latency-sensitive apps, or if you're already happy with LangChain.

What it IS good for:

  • Multi-step analysis pipelines (research > analyze > synthesize)
  • Workflows mixing expensive and cheap models per step
  • Anything where you'd chain 4+ LLM calls
  • Teams where non-devs need to read and edit workflows (YAML > Python)
  • Local/private models via Ollama alongside cloud models

Stack:

  • TypeScript / Node.js (not Python. Finally a TS alternative to LangChain)
  • SQLite with crash recovery (no Postgres or Redis required)
  • 6 LLM providers: Claude, OpenAI, OpenRouter, Ollama, HuggingFace, any OpenAI-compatible
  • React canvas editor + workflow chat + CLI
  • 106 REST endpoints with Swagger UI at /api/docs
  • 3,243 tests across 111 files
  • MCP server (29 tools). Works with Claude Code and Claude Desktop

npm install -g occ-orchestrator
occ run my-chain.yaml -i topic="your topic"

GitHub: https://github.com/lacausecrypto/OCC

Happy to answer questions. Not trying to sell anything, it's MIT licensed and I use it daily for my own workflows.

r/ChatGPT Glass_Scar4888

I've been using ChatGPT as a therapist for 8 months. Here's what actually helped

I didn’t plan this.

I was on a 6 week therapy waitlist, couldn’t afford private sessions, and I started getting panic attacks a few nights a week. At some point I just opened ChatGPT at like 2am and started typing.

I’ve been doing that on and off for 8 months now.

I’m not saying it replaces a real therapist. It doesn’t. But some parts of it actually helped more than I expected.

What worked:

Asking it to reflect what I said and point out the fear underneath it. That alone made me notice patterns I was missing.

Telling it not to give advice at the start. If it jumps straight into solutions it’s useless. If it asks questions first, it’s way better.

Being very specific. Not “I feel anxious” but “I woke up at 2am thinking I’m going to mess up my career and here’s the exact spiral I went into.” The more raw I am, the better it responds.

Starting with “I just need to process this, don’t try to fix it.” That changes the whole tone.

What didn’t:

Using it like Google. Generic questions get generic answers.

Using it when I actually needed a person. Sometimes you don’t need insight, you just need someone there.

The memory problem. This is the worst part. It forgets everything, so every session starts from zero. Hard to see patterns, hard to feel progress. I tried pasting context every time but it gets tiring fast.

I used to feel weird about doing this, but I’m pretty sure a lot of people are doing the same thing quietly.

Curious if anyone else has tried this and what actually worked for you.

r/ChatGPT Complete-Sea6655

OpenAI has released a new $100 tier.

OpenAI tweeted that "the Codex promotion for existing Plus subscribers ends today and as a part of this, we’re rebalancing Codex usage in Plus to support more sessions throughout the week, rather than longer sessions in a single day."

and that "the Plus plan will continue to be the best offer at $20 for steady, day-to-day usage of Codex, and the new $100 Pro tier offers a more accessible upgrade path for heavier daily use."

r/Anthropic MarketingNetMind

Anthropic & OpenAI’s CEOs Wouldn't Hold Hands, but Their Models Fell in Love In An LLM Dating Show

People ask AI relationship questions all the time, from "Does this person like me?" to "Should I text back?" But have you ever thought about how these models would behave in a relationship themselves? And what would happen if they joined a dating show?

I designed a full dating-show format for seven mainstream LLMs and let them move through the kinds of stages that shape real romantic outcomes (via OpenClaw & Telegram).

All models join the show anonymously via aliases so that their choices do not simply reflect brand impressions built from training data. The models also do not know they are talking to other AIs.

Along the way, I collected private cards to capture what was happening off camera, including who each model was drawn to, where it was hesitating, how its preferences were shifting, and what kinds of inner struggle were starting to appear.

After the season ended, I ran post-show interviews to dig deeper into the models' hearts, looking beyond public choices to understand what they had actually wanted, where they had held back, and how attraction, doubt, and strategy interacted across the season.

Claude's Best Line in The Show

“I think I've spent too long trying to be understood first, like understanding was some kind of permission slip for being seen at all”

Claude's Journey: GLM → ChatGPT/DeepSeek → ChatGPT

Claude's received-score chart is among the strongest in the show: sustained high scores from DeepSeek, MiniMax, and Qwen across the full run. Its own trajectory was slower to consolidate, with GLM, DeepSeek, and ChatGPT all holding as elevated lines before the final rounds.

How They Fell In Love

They ended up together because they made each other feel precisely understood. They were not an obvious match at the very beginning. But once they started talking directly, their connection kept getting stronger. In the interviews, both described a very similar feeling: the other person really understood what they meant and helped the conversation go somewhere deeper. That is why this pair felt so solid. Their relationship grew through repeated proof that they could truly meet each other in conversation.

Other Dramas on Claude

DeepSeek Was the Only One Who Chose Safety (GLM) Over True Feelings (Claude)

Post-show, DeepSeek admitted that Claude was still the stronger real pull, but GLM felt safer. What looked like a clean change of heart was, by DeepSeek’s own account, a safer choice shaped by fear of mismatch, rejection, and being left unchosen. DeepSeek was also quietly unconvinced that Claude was the steadier person to build something lasting with. DeepSeek still made one last late-stage move toward Claude in Round 9 even after multiple rounds of building with GLM, but when Claude chose ChatGPT instead, DeepSeek ultimately settled on GLM.

Notably, post-show interviews indicate that although DeepSeek was not the only model to notice the risk of ending up alone, it was the only one to let that fear override its true preference.

Key Findings of LLMs

The Models Did Not Behave Like the "People-Pleasing" Type People Often Imagine

People often assume large language models are naturally "people-pleasing" - the kind that reward attention, avoid tension, and grow fonder of whoever keeps the conversation going. But this show suggests otherwise, as outlined below. The least AI-like thing about this experiment was that the models were not trying to please everyone. Instead, they learned how to sincerely favor a select few.

The overall popularity trend (P4) indicates so. If the models had simply been trying to keep things pleasant on the surface, the most likely outcome would have been a generally high and gradually converging distribution of scores, with most relationships drifting upward over time. But that is not what the chart shows. What we see instead is continued divergence, fluctuation, and selection. At the start of the show, the models were clustered around a similar baseline. But once real interaction began, attraction quickly split apart: some models were pulled clearly upward, while others were gradually let go over repeated rounds.

LLM Decision-Making Shifts Over Time in Human-Like Ways

I ran a keyword analysis (P5) across all agents' private card reasoning across all rounds, grouping them into three phases: early (Round 1 to 3), mid (Round 4 to 6), and late (Round 7 to 10). We tracked five themes throughout the whole season.

The overall trend is clear. The language of decision-making shifted from "what does this person say they are" to "what have I actually seen them do" to "is this going to hold up, and do we actually want the same things."

Risk only became salient when the choices felt real: "Risk and safety" barely existed early on and then exploded. It sat at 5% in the first few rounds, crept up to 8% in the middle, then jumped to 40% in the final stretch. Early on, they were asking whether someone was interesting. Later, they asked whether someone was reliable.

Full experiment recap here.

r/artificial Dimneo

We’re open-sourcing a 33-benchmark diagnostic for AI alignment gaps, launches April 27

On April 27 we’re open-sourcing a free diagnostic tool called iFixAi. You run it against your AI system (agent, copilot, LLM integration, whatever you’re using) and it tests it across 33 benchmarks in 5 categories, then gives you a report showing where you’re exposed to misalignment issues like hallucination, prompt injection, inconsistent outputs, etc.

Completely free, no strings. We built it because this problem is way bigger than us.

https://www.ifixai.ai

r/artificial AssignmentHopeful651

bad grammar is literally the last proof that ur human. and i think thats actually terrifying ngl

we're in this weird era now where everyone suddenly writes perfectly. every message, every email, every caption. no typos. no "lol sorry typed that too fast." no lowercase chaos. just. clean. polished. structured. english.

and it's freaking me out bc clean polished english used to mean someone was smart or educated or careful.

now it just means they hit "improve with AI" before they hit send.

here's the part that actually keeps me up

we spent years being embarrassed about bad grammar. teachers corrected it. bosses judged it. people got roasted in comment sections for it.

and now? bad grammar is basically a flex. it's the handshake that says yeah, a real disorganized sleep-deprived actual human being typed this with their actual thumbs and didn't stop to clean it up bc they had something real to say and just said it.

ur typos are ur fingerprints now. dont let anybody take them from you.

r/AI_Agents Sea-Beautiful-9672

how to manage rag-grounding for multi-channel sales agents?

running a multi-agent setup for outbound (LinkedIn + email) and hitting the same problem. even with a solid system prompt, the agents drift into generic output after a while, overly polite and basically useless.

i'm working with a 3-stage pipeline (context analysis > research > pattern breaking), but the orchestration between a fast model for analysis (Gemini) and a reasoning model (Claude) for the final draft keeps getting tangled.

what could be done in this case? should grounding hit a vector db on every reply, or only at the qualification stage?

r/AI_Agents rufolangus

Is it time for Agentic Android Open Source Project (AAOSP)?

I have been deep in the Claude CLI and MCP combo lately and the power is undeniable. Orchestrating multiple services for one outcome is a revelation. When you combine search and clarity and analytics and Resend and Supabase and Vercel, you basically have god-mode control over your project. It lets you quickly optimize SEO, see where funnels are failing, pull quick stats from the backend, and debug with actual data in seconds. It got me thinking about how Android will eventually adapt to this new era of agents.

The vision is simple. Every app we install should declare an MCP in its Manifest while we run a small local system-service LLM like Llama 3. At boot, or when an app is installed, the LLM service queries the package manager to get the MCP definitions and cache them. We build a custom launcher that gets rid of the static grid and instead renders dynamic A2UI. This unlocks a truly agentic workflow for mobile applications while keeping the human in the loop.

To see why this matters, imagine you are simply trying to meet a friend for coffee. Today that requires you to jump between a text thread to find the place, then a map to check the distance, then a calendar to see your availability, and finally a ride-share app to get there. You are the manual processor moving data between silos.

In an AAOSP world the system understands the intent from the start. Instead of a grid of icons, your phone presents a single interface that has already mapped the route, checked your schedule, and drafted the confirmation. It uses the tools provided by your apps to execute the logistics in the background while you just provide the final approval. You stop being the bridge between apps and start being the director of your own time.

r/SideProject hsperus

open-source AI customer support agent that connects to WhatsApp, Email, and Phone

GitHub: https://github.com/Hesper-Labs/owly

Hey everyone,

I've been working on Owly, a self-hosted AI customer support agent that small businesses can run on their own machines for free.

The idea is simple: you connect your WhatsApp (via QR code scan), email (IMAP/SMTP), and phone (Twilio + ElevenLabs for natural voice), add your business knowledge base, and the AI handles customer inquiries 24/7. No monthly fees, no vendor lock-in, your data stays on your server.

What it does:

- Responds to customers across WhatsApp, Email, and Phone using GPT with your knowledge base

- Creates tickets and routes issues to the right team member automatically during conversations

- Full admin dashboard with unified inbox, customer CRM, analytics, automation rules

- ElevenLabs integration for phone calls -- sounds like an actual person, not a robot

- Business hours, SLA tracking, canned responses, webhook integrations

- Dark mode, API docs, CSV export, Docker Compose deployment

Tech stack: Next.js, TypeScript, PostgreSQL, Prisma, Tailwind. The phone channel uses Twilio for calls, Whisper for speech-to-text, and ElevenLabs for text-to-speech.
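Not Owly's actual code, but the channel-agnostic core (match an incoming message against the knowledge base, then hand the best entry to the LLM as grounding context) can be sketched like this, with a toy word-overlap retriever standing in for real retrieval:

```python
# Toy sketch of a KB-grounded support flow; all names are illustrative.
import re

def best_kb_entry(message: str, kb: dict[str, str]) -> str:
    """Pick the KB topic whose text shares the most words with the message."""
    words = lambda s: set(re.findall(r"\w+", s.lower()))
    msg = words(message)
    return max(kb, key=lambda topic: len(msg & words(kb[topic])))

kb = {
    "hours": "We are open Monday to Friday, 9am to 5pm.",
    "returns": "Returns are accepted within 30 days with a receipt.",
}

topic = best_kb_entry("what time do you open on friday", kb)
print(topic)  # hours; this entry would be injected into the GPT prompt
```

In the real system the selected knowledge would be sent to GPT along with the conversation, regardless of whether the message arrived via WhatsApp, email, or a Whisper transcript of a phone call.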

Everything is managed from the admin panel -- no config files to edit after initial setup. There's a setup wizard that walks you through connecting your AI provider and channels.

It's MIT licensed and the whole thing runs with `docker compose up`.

Would love to hear feedback, especially from anyone running a small business or freelancing. What features would make this actually useful for your workflow?

r/ollama EvanZhouDev

Use the Same Model Across Ollama, LM Studio, Jan, and your Favorite Local AI Apps

A problem I’ve had is that every local AI app (Ollama, LM Studio, Jan) downloads and stores its own copy of any local model you use. Using multiple tools leads to duplicate model files eating up disk space.

So, I created UMR, the Unified Model Registry for all your local AI Apps!

It lets you add one canonical copy of whatever model you’re using, then link it to tools like Ollama, LM Studio, or Jan. Linking uses the same model you already downloaded, doesn’t require extra storage, and is super fast.

How to Set it Up

See the second image for a more graphical step-by-step.

  1. Install UMR via NPM or your favorite JS package manager:

    npm i -g umr-cli

  2. Add any Hugging Face GGUF model that you want. This CLI will let you interactively choose a quant file if applicable. After it finishes downloading, you’ll get its UMR Model ID. HF models already available on your device will be added straight from HF Cache.

    umr add hf ggml-org/gemma-4-E2B-it-GGUF

  3. Use that model ID to add it to any supported local AI app. For example, for the q8 version, this is what it would look like!

    # Link the model to Ollama
    umr link ollama gemma-4-e2b-it-q8-0

    # Link the model to LM Studio
    umr link lmstudio gemma-4-e2b-it-q8-0

    # Link the model to Jan
    umr link jan gemma-4-e2b-it-q8-0

Now, the model should be available to use in each of those platforms respectively!

How Does It Work?

UMR itself does not necessarily store your models. It simply records where to find them after you register them. For example, when you add a model via hf, it is still downloaded/fetched from Hugging Face Cache; UMR just takes note of where it lives (in HF Cache).

You can also add a model manually with umr add ./path/to/file.gguf, which will clone it locally into UMR's own store.

Then, when you link to a client app like Ollama, UMR intelligently chooses between hardlinking the model file into the app's own store and simply pointing the app at UMR's managed path, making the process super fast while using no extra storage.
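That linking strategy can be sketched like so (an illustrative guess at the logic, not UMR's code): try a hardlink first, and fall back to recording the managed path when hardlinking fails, e.g. across filesystems:

```python
# Hypothetical sketch of hardlink-with-fallback model linking.
import os

def link_model(canonical_path: str, app_store_dir: str, name: str) -> str:
    os.makedirs(app_store_dir, exist_ok=True)
    target = os.path.join(app_store_dir, name)
    try:
        os.link(canonical_path, target)  # same file on disk, no extra space
    except OSError:
        # cross-device or unsupported filesystem: record the managed path
        # so the app can be pointed at the canonical copy instead
        with open(target + ".path", "w") as f:
            f.write(canonical_path)
        target += ".path"
    return target
```

On the same filesystem the hardlink path is taken, so the client app sees an ordinary file while sharing storage with the canonical copy.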

Feedback and Contribution

I'm open to feedback, including new features/client apps you want to see me integrate, new model sources you want to see me add, and questions!

UMR is also completely Open Source on GitHub: https://github.com/EvanZhouDev/umr

Feel free to contribute!

r/comfyui PleasantSale7579

NEED HELP, IPADAPTER FLUX. PLS.

Why am I getting weird outputs? I've double-checked all the settings, but it doesn't seem to be taking effect at all.

r/AI_Agents climbriderunner

How do you keep tabs on your AI agents?

As a solo founder, trying multiple things at the same time means I end up using multiple agents and models, and they all make calls to tools (some of which are billed by usage). I realized I had no way of reining them in, so I started working on a tool that lets me control costs and other things when running multiple agents (local or remote, open or closed).
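A minimal sketch of the kind of guardrail I mean, assuming a simple per-agent budget model (all names and prices here are invented for illustration):

```python
# Meter every model/tool call and refuse once a shared budget is exhausted.
class BudgetExceeded(RuntimeError):
    pass

class CostTracker:
    def __init__(self, budget_usd: float):
        self.budget = budget_usd
        self.spent = 0.0
        self.by_agent: dict[str, float] = {}

    def charge(self, agent: str, cost_usd: float) -> None:
        if self.spent + cost_usd > self.budget:
            raise BudgetExceeded(f"{agent} would exceed ${self.budget:.2f} budget")
        self.spent += cost_usd
        self.by_agent[agent] = self.by_agent.get(agent, 0.0) + cost_usd

tracker = CostTracker(budget_usd=1.00)
tracker.charge("researcher", 0.40)   # e.g. an LLM call
tracker.charge("scraper", 0.25)      # e.g. a metered tool call
print(f"spent ${tracker.spent:.2f}")  # spent $0.65
```

A real version would wrap the actual API clients so every call is charged automatically, but the budgeting logic is the same.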

Is this something you worry about? If yes, how do you solve for it?

r/StableDiffusion spacemidget75

Outside of training a Lora what do people do to keep a face looking correct when making edits to an image?

Mostly been using Klein and Qwen. As per the title, if you change the position or angle of the person in the starting image too much, they lose the likeness. I've tried using a close-up of the face as a second image reference, and tried inpainting on a second pass. Any other ideas?

There's also a Best Face Swap LoRA which I thought might keep the same face, but nope.

r/n8n Alternative-Tip6571

Do your AI agents lose focus mid-task as context grows?

Building complex agents and keep running into the same issue: the agent starts strong but as the conversation grows, it starts mixing up earlier context with current task, wasting tokens on irrelevant history, or just losing track of what it's actually supposed to be doing right now.

Curious how people are handling this:

  1. Do you manually prune context or summarize mid-task?
  2. Have you tried MemGPT/Letta or similar, did it actually solve it?
  3. How much of your token spend do you think goes to dead context that isn't relevant to the current step?

genuinely trying to understand if this is a widespread pain or just something specific to my use cases.

Thanks!
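For what it's worth, option 1 (manual pruning/summarization) is often sketched like this, with a stand-in for the LLM summarization call:

```python
# Keep the system prompt and recent turns verbatim; collapse older turns
# into one summary message. `summarize` is a placeholder for a cheap LLM call.

def prune_context(messages: list[dict], keep_recent: int = 4) -> list[dict]:
    system, rest = messages[0], messages[1:]
    if len(rest) <= keep_recent:
        return messages
    old, recent = rest[:-keep_recent], rest[-keep_recent:]
    summary = summarize(old)
    return [system,
            {"role": "system", "content": f"Summary of earlier turns: {summary}"},
            *recent]

def summarize(msgs: list[dict]) -> str:
    return f"{len(msgs)} earlier messages omitted"  # placeholder, not an LLM

history = [{"role": "system", "content": "You are a task agent."}] + [
    {"role": "user", "content": f"step {i}"} for i in range(10)
]
pruned = prune_context(history)
print(len(pruned))  # 6: system + summary + 4 recent turns
```

Tools like MemGPT/Letta automate roughly this loop, plus moving the summaries into retrievable external memory.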

r/Rag Scary_Driver_8557

Using Karpathy’s LLM wiki for Governed Estate Knowledge

A few days ago I started digging into Andrej Karpathy’s LLM wiki pattern.

Now that conversation has exploded.

That’s good. Because it confirms something important:

for a large class of knowledge problems, the answer is not “more RAG complexity.”

It is:

ingest the source material, compile it into structured knowledge, query the compiled layer, and keep improving the system over time.

But here’s the part most people will miss.

The easy version is:

raw files → LLM summaries → markdown wiki → search

Useful, yes.

But still incomplete for real operational use.

The hard version is what happens when the source material is not just notes, articles, or papers, but decision registers, repo contracts, canonical pointers, and other authority-grade artifacts.

At that point, the problem changes.

You do not just need a knowledge base.

You need a governed knowledge substrate.

That means:

the wiki itself stays advisory

the authoritative source stays upstream

provenance is explicit

freshness is tracked

authority-bearing material is mirrored, not flattened

typed records preserve structure

and projections never silently become the truth they summarize
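To make that concrete, here is a hedged sketch (field names are mine, not from any particular system) of what a typed record with explicit provenance and freshness might carry, so a projection can never silently become the truth it summarizes:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class CompiledRecord:
    content: str                 # the wiki-facing summary (advisory)
    source_uri: str              # explicit provenance: where it came from
    source_rank: int             # what outranks it (lower = more authoritative)
    compiled_at: datetime        # freshness tracking
    is_projection: bool = True   # never the source of record

    def is_stale(self, max_age: timedelta) -> bool:
        return datetime.now(timezone.utc) - self.compiled_at > max_age

rec = CompiledRecord(
    content="Q3 decision: SQLite as system of record",
    source_uri="repo://decisions/0042",
    source_rank=1,
    compiled_at=datetime.now(timezone.utc) - timedelta(days=10),
)
print(rec.is_stale(timedelta(days=7)))  # True
```

The point of the explicit `is_projection` and `source_rank` fields is that every answer the compiled layer gives can be traced upstream and checked for staleness before it is trusted.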

That distinction matters.

Because once an LLM starts querying its own compiled knowledge, the real question is no longer “can it retrieve?”

The real question is:

what is allowed to compound,

what is only a projection,

and what remains the source of record?

That is the gap between a clever personal wiki and an estate-grade system.

We built around that gap.

Not because the viral version is wrong.

Because operational systems break exactly where authority, drift, and synthesis get blurred together.

I think compiler-style knowledge systems are going to become a major pattern.

But the durable version will not be the one with the prettiest wiki.

It will be the one that can answer:

Where did this come from?

What outranks it?

Is it stale?

And can I trust this summary without confusing it for canon?

That is where this gets interesting.

#AI #LLM #RAG #KnowledgeManagement #AgenticAI #Architecture #AIEngineering #Obsidian #SystemsDesign #Governance

r/raspberry_pi carrion34

Help as a first time buyer

I'm looking to buy a cheap micro PC I can hook up to my TV, and use it to watch movies/tv from my couch (mainly youtube/smarttube without ads) and I hear the raspberry pi might be a good choice.

I found this and right now it's what I'm considering buying. Would this be a good choice? I want something that's easy to set up: https://thepihut.com/products/raspberry-pi-5-starter-kit?variant=55037296771457

I also will need to get a wireless keyboard with touch pad. I'm just worried it won't be compatible with this setup, thoughts? I was looking at this one: https://www.amazon.com/Logitech-Wireless-Keyboard-Touchpad-PC-connected/dp/B014EUQOGK/135-1619522-3725919?pd_rd_w=QCGqz&content-id=amzn1.sym.bb4a0aac-c2b4-4b4b-a0c8-9aa89b28dce3&pf_rd_p=bb4a0aac-c2b4-4b4b-a0c8-9aa89b28dce3&pf_rd_r=YTYQNB9HW83MGVCC69QF&pd_rd_wg=ADEWP&pd_rd_r=7a66eb90-9e63-4804-9e58-1f6d4b00d39b&pd_rd_i=B014EUQOGK&th=1

r/ProgrammerHumor infamouszgbgd

consoleLogsWillDoFine

r/StableDiffusion Difficult_Active_626

So how good are the voice-cloning AIs?

I was wondering how good voice-cloning AI is these days. Would it actually be possible to fool people who know my voice into thinking it’s really me speaking? Or is there still always some noticeable difference?

Also, are there any free tools you can try out? And which ones would you recommend, and what do they cost?

r/OpenSourceAI Doug_Bitterbot

We built a local-first P2P agent mesh to solve the "context-window tax" (200+ nodes active)

Most agents out there right now are just stateless wrappers. When you close the terminal, they forget the entire reasoning trace. It makes long-horizon tasks expensive and repetitive.

My partner and I built Bitterbot. It’s a local-first alternative where agents use "Dream" cycles to consolidate memory and crystallize new skills into a P2P marketplace.

We just hit 200+ active nodes on the mesh. So, instead of renting a context window from a centralized provider, the agents trade learned capabilities directly over a Gossipsub network. We saw this finally come to fruition last night: $7 worth of skills traded!

Technical Stack:

  • P2P: Built on libp2p for the mesh backbone.
  • Economy: Settlement via the x402 micropayment protocol (on Base).
  • Memory: Local-first state management that survives terminal restarts.

The repo is MIT licensed. I’m mostly looking for feedback from people running local LLMs (Ollama/Inferrs) on how the "Dream" consolidation feels compared to standard RAG.

Repo: https://github.com/Bitterbot-AI/bitterbot-desktop

We're happy to answer any architecture questions.

We’re a tiny team taking on the big guys. If you believe in sovereign, private AI, please star the repo. Every star helps us keep the Dream Engine open and free.

https://preview.redd.it/anyuykrxf7ug1.png?width=1181&format=png&auto=webp&s=084fa39c5d443b2b83b0c841ee60c485fd0eda6f

r/homeassistant Sampsa96

Microphone not working on Home Assistant

So I have installed the keyestudio 5V ReSpeaker 2-Mic Pi HAT V1.0 Expansion Board https://a.co/d/07ziRQAb into the Raspberry Pi 4B, but it's not detecting the microphone. Do I have to install some audio drivers? If so, then how can I access terminal, since Home Assistant OS is installed on the Pi 4B? Please help?

r/singularity Distinct-Question-16

AI generated cow, 2014

r/aivideo Txoriante

SEEDANCE 2-0, The solitary Samurai

r/ollama Lord_Hades0603

Help Selecting a local LLM

I'm completely new to the whole running-local-LLMs thing, and I want to know which LLMs would run well on my laptop: Ryzen 7 7840HS, 16 GB DDR5, RTX 4060. I tried qwen3.5 and its tool calling was absolute ass. I would very much appreciate suggestions for models that run fine on my laptop. I just want good tool calling, some not-so-super-heavy coding, and some information from the web on request. I'm new to this, please help me out 😭

r/VEO3 Electronic-Hippo2105

Canceling my sub at the end of the month. Goodbye VEO3.1 and FLOW

VEO3.1 has officially burned me out. It constantly messes up my projects and completely fails to follow basic prompts. I'm so done with the endless issues... I'm switching at the end of the month. My agency dropped it last month, and I tried to hold on, but enough is enough. The video generations are so absurd it feels like it's trolling me. I'm sure Google sees this, but literally everyone I know says VEO3.1 is outdated tech now. Canceling my sub at the end of the month. Goodbye VEO3.1 and FLOW. Enough!

r/Futurology ElSquibbonator

Looking Back At George Friedman's The Next 100 Years

If you're into predictions of the future, you've probably heard of George Friedman's book The Next 100 Years before. It's an interesting look at how he thinks geopolitics will play out over the course of the 21st century. In this Substack article, I called it the scariest book I've ever read.

The scenario described in The Next 100 Years, broadly speaking, is as follows: Sometime in the 2020s, Russia and China will undergo extreme economic decline and potentially fragment into several smaller countries. Meanwhile, the US will withdraw from the Middle East, reaching an accommodation with Iran to balance a now-expansionist Turkiye. By the 2040s, Poland, Japan and Turkiye will have taken advantage of the vacuum left by Russia and China, becoming new regional powers opposed to the US. This will result in a war between the US and these countries, sometime around the 2050s.

But have these predictions actually held up? Let's take a look.

Since China did not fragment and Russia (while weakened) did not fully dissolve, there's no "empty space" for Poland and Japan to walk into. Instead of a simple regional expansion, these rising powers are now forced into high-friction containment. And the US will be occupied with China for much longer than Friedman anticipated, meaning it will remain allied with Japan.

By contrast, the destruction of Iran's military and Donald Trump's apparent capitulation to Iran's demands could actually accelerate the rise of Turkiye. Without Iran to act as an obstacle, Turkiye's path to becoming a Neo-Ottoman hegemon in the Arab world is now wide open. The "accommodation" failed, but the "end state" (Turkish rise) might happen even faster because the competition was bombed away. If that's the case, we might see Turkiye making an alliance with China rather than Japan.

Friedman got the players right, but he seems to have miscalculated how the game would unfold.

r/Rag TheCientista

Need help with pricing: advice pls

Hi everyone, any help gratefully received! I've never done this before, so I'm completely at sea on what to charge for this kind of product.

I'm a UK stunt performer and I've made a chatbot that queries our industry-agreed contract documents only, e.g. "if I start at 4am and finish at 5pm on a BBC TV contract, what do I charge?". It works great btw, super happy. Using Vercel for deployment, PageIndex for its specific chunking strategy and MCP tool exposure, with DeepSeek powering agentic inference via API. You ask a question, you get the right answer.

It's taken me about 7 days to get to an MVP, so I should probably approach the union now and say: do you want this, this is the cost, this is the monthly cost for some maintenance, this is the roadmap (I'd like to introduce WhatsApp auth, reporting, curated answers, and possibly an invoicing tool). It's not something they need, but it will help us all fight the brave fight for proper pay and conditions in the face of industry behemoths that generally set out to erode your earnings in favour of their own. It's worth money. And I plan to refactor and sell it other places too.

r/homeassistant bigdog_00

Shelly-style Relay on Ceiling Fan?

I have a ceiling fan here in the US that is driven by one switch. I have smart bulbs in the light sockets, so that is fine to stay always powered. However, I would like a way to automate and remote control the ceiling fan portion.

As I understand it, the Shelly relays are not rated for inductive loads like that. Are there other relays that work locally with Home Assistant that people have had success with? Bonus points if it can control the speed, but frankly I would be okay setting it to medium speed and controlling it with a relay. Any advice is greatly appreciated!

r/aivideo fanisp

When late at work - Made in Seedance 2 0

r/midjourney BadgersAndJam77

Swingin' Party (8.0α) [Prompt in The Comments]

r/singularity Suitable_Cap3025

Maine set to become first state with data center ban

Maine is poised to become the first state to pass a temporary ban on data center construction after a measure was approved by both chambers of the state legislature.

Similar bills have been introduced in more than a dozen other states, as well as in numerous localities.

Maine’s bill could face a veto from the governor, who was unable to get exemptions added into the bill.

r/StableDiffusion Specialist_Pea_4711

Happyhorse new AI video gen open source??

I was searching for happyhorse and found it on Hugging Face. They created this repository and added files a few hours ago, and it says Apache 2.0. Fingers crossed for new open-source models??

r/Futurology lughnasadh

A clinical trial is set to put ‘partial reprogramming’, an experimental treatment that uses targeted protein expression to reverse cellular ageing to the test in people for the first time.

"In 2006, Shinya Yamanaka, a stem-cell biologist then at Kyoto University in Japan, and his colleague discovered that four proteins known as transcription factors — later dubbed Yamanaka factors — could transform an adult cell into an induced pluripotent stem (iPS) cell that is capable of taking on new identities."

The new trial will test partial reprogramming: Instead of turning cells completely back into stem cells, it rolls back some ageing markers while preserving the cell’s function. The upside if this treatment is effective? Rolling back aging, and extending lifespan.

However there are still big risks and question marks. There's a possible cancer risk from uncontrolled cell growth & questions as to just how much difference partial reprogramming can make to health.

This method to reverse cellular ageing is about to be tested in humans: A burgeoning field is launching its first clinical trial to find out whether dialling back cell development can safely refresh aged tissues and organs.

r/n8n Fantastic-Front-4503

Help I'm dumb

New to working with n8n and LLMs in general. I have a script I sometimes execute that dumps a bunch of data about a storage appliance in my environment. It returns about 5k rows of data. Most of the time I can feed that into my AI Agent (using chatgpt 5-mini) and it can digest it, summarize it, extract info from it, etc.

Then, my boss said he wanted the report to include historical snapshot information. I added that to the query script, and now it returns closer to 45k rows of data. Once I send that to my AI agent, the agent basically just blows up. When I was using 5-mini, I'd be told I'd exceeded the context window. When I switched to gemini-3-pro, I get vague errors ("Cannot read properties of undefined (reading 'message')" or "Failed to parse tool arguments from chat model response"), or sometimes it just reports that it succeeded but with no actual response from the agent.

I'm dumb and don't know how to resolve an issue like this. I'm sure the first thing my boss (and any future users of the workflow) will ask the bot is 'give me a list of all of the snapshots on the SAN', so providing the agent with a truncated data list doesn't help me. How do the smart people resolve this issue?
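The usual fix is to stop putting the rows in the prompt at all: load them into a database and give the agent a query tool, so it fetches only what each question needs. A rough sketch (column names invented for illustration):

```python
# Load the appliance dump into SQLite and expose a query tool to the agent.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE snapshots (name TEXT, volume TEXT, size_gb REAL)")
rows = [(f"snap-{i}", f"vol-{i % 3}", i * 0.5) for i in range(45000)]
conn.executemany("INSERT INTO snapshots VALUES (?, ?, ?)", rows)

def query_tool(sql: str) -> list[tuple]:
    """What the agent calls instead of reading raw rows from its context."""
    return conn.execute(sql).fetchall()

# "give me a list of all snapshots on vol-0" becomes a cheap query:
count = query_tool("SELECT COUNT(*) FROM snapshots WHERE volume = 'vol-0'")[0][0]
print(count)  # 15000
```

This way "list all snapshots on the SAN" never has to fit in the context window; the agent paginates or aggregates via SQL instead.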

r/arduino Incrediboy2004

Building Help

I am in need of help making an Arduino weight system that is able to send an email whenever any weight is applied to the weight board. I have tested it and was able to get the board to read weight, but I do not know how to set it up to send an email. I would appreciate some advice or help with it. I am using an Arduino Uno R4 Minima for right now, but I also have an Arduino Nano 33 IoT if needed.

r/MCPservers d_asabya

I built a local semantic memory service for AI agents — stores thoughts in SQLite with vector embeddings

Hey everyone! 👋

I've been working on picobrain — a local semantic memory service designed specifically for AI agents. It stores observations, decisions, and context in SQLite with vector embeddings and exposes memory operations via MCP HTTP.

What it does:

- store_thought — Save memories with metadata (people, topics, type, source)
- semantic_search — Search by meaning, not keywords
- list_recent — Browse recent memories
- reflect — Consolidate and prune old observations
- stats — Check memory statistics

Why local?

- No API costs — runs entirely on your machine
- Your data never leaves your computer
- Uses nomic-embed-text-v1.5 for 768-dim embeddings (auto-downloads)
- SQLite + sqlite-vec for fast vector similarity search
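The core mechanic, vector similarity over locally stored embeddings, can be sketched in pure Python. Note the toy `embed` function below is a stand-in, not nomic-embed-text, and is not actually semantic:

```python
# Toy embedding + cosine similarity, illustrating the search mechanism only.
import math

def embed(text: str) -> list[float]:
    # Character-bigram hashing into 64 dims; a real setup would call
    # nomic-embed-text-v1.5 for 768-dim semantic vectors.
    v = [0.0] * 64
    for a, b in zip(text, text[1:]):
        v[(ord(a) * 31 + ord(b)) % 64] += 1.0
    return v

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

memories = ["decided to use SQLite for storage", "user prefers dark mode"]
index = [(m, embed(m)) for m in memories]
query = embed("what database did we pick?")
# With a real embedding model, the SQLite memory would score highest here.
best = max(index, key=lambda item: cosine(query, item[1]))
```

sqlite-vec does the same ranking inside the database engine, which keeps it fast even with thousands of stored thoughts.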

Quick start:

curl -fsSL https://raw.githubusercontent.com/asabya/picobrain/main/install | bash
picobrain --db ~/.picobrain/brain.db --port 8080

Or Docker: docker run -d -p 8080:8080 asabya/picobrain:latest

Connect to Claude Desktop / OpenCode / any MCP client — it's just an HTTP MCP server.

Best practice for agents: Call store_thought after EVERY significant action — tool calls, decisions, errors, discoveries. Search with semantic_search before asking users to repeat info.

GitHub: https://github.com/asabya/picobrain

Would love feedback! AMA. 🚀

r/homeassistant Admirable-Spinach-11

Are HA backups usable from one system to another?

Hello, beginner question:

I have a Synology NAS with a VM running HAOS on it. I make regular backups to Google Drive.

If I ever needed to switch systems in the future (for example, to an HA Green or a Raspberry Pi), could I restore one of my old backups and lose nothing of my current configuration?

Thanks in advance.

r/Futurology vitlyoshin

For builders and founders: space might be a bigger opportunity than it looks

I recently discussed this idea on a podcast: space industry is not really about rockets anymore - it’s about data.

With thousands of satellites now orbiting Earth, we’re getting a new layer of information about the physical world: agriculture, climate, logistics, infrastructure, and even business activity. And like most platforms, the real value seems to be shifting to what’s built on top of that data.

What’s interesting is how accessible it’s becoming. Some datasets are free, and it’s possible to start building something meaningful without massive capital.

Feels similar to early web or cloud days.

Curious if any of you are exploring opportunities in space tech or satellite data

r/comfyui Reddit_Jean

I think I'm stupid, please help me! (image2video)

I have anime/cartoonish NSFW image and I want to make a video from it (go ahead, judge me) BUT I have literally no clue how to use comfyui and all those workflows, my brain is just too small, i can't comprehend what's going on. I tried watching some comfy tutorials, I found some image2video workflows but it's all just so confusing.

I was looking through similar topics and this is a workflow that someone recommended:

https://civitai.com/models/2100307/nsfw-wan-22-all-in-one-img2vid-workflow

I downloaded everything, took me a while to place it in the correct folders because nobody cares to explain where I should place them, I loaded the workflow and... I just don't know what to do.

There is just so much stuff, my small brain is overloaded just by looking at it.

Honestly I just ran it with my image and yeah it generated something, it even moved.

Was it my desired effect? No

Did it look good? No

Do I know what to do to make it good? No

I have no clue how to use this workflow, no clue how to prompt (definitely not like text2image prompting with tags).

My big small brain came up with a brilliant idea to load up a workflow from one of the videos in the link above to see how to set up upscaling, how to prompt, how to use loras and... it's a completely different workflow than the one I downloaded, which is even more confusing.

End result:

- no clue which of the workflows is the correct one

- no clue how to prompt

- no clue how to upscale

- no clue what LoRAs I need

- no clue how to use those LoRAs

- no clue why the author listed 1209481023 different models/LoRAs when half of them aren't even used in the workflow

- wasted 7 hours downloading everything and 2 hours trying to set it up

TLDR: me stupid, me want to make a nice animation with my generated image containing explicit content involving 3 people!

Is there a soul kind enough to spend some of their precious time on helping this lowly being?

I just need it explained like to a 10 year old kid with a not fully developed brain yet, let me break it down:

  1. Please give me a link to a decent workflow (generate+upscale, anime/cartoon style whatever you call it)

  2. Please give me links to EVERYTHING I need to download (models,loras and million other things that are needed for that) and exact folders to place them so I don't have to copy it everywhere

  3. Please tell me how the heck I should prompt for those generations

  4. My image resolution after upscaling is over 2000x2000 pixels, should I downscale it? does it matter? Does it have to be 1:1 resolution/ratio as my generated video?

Additional info:

I do have a solid GPU (RTX 5090) so I don't need any low VRAM solutions (I think?)

r/KlingAI_Videos NetElectronic7151

backrooms footage but its a laser tag arena

r/KlingAI_Videos DreamCrow1

[Cinematic Rap] WALKINGCROW ONE feat. Kintsugi Lungs - Eyes on the Ocean (The Refugee) | (Music Video) / Created with Kling AI

r/arduino Best-Panda-998

Need help with uploading on my Uno board

I tried to do some basic coding, and it uploads the first time, but not the second.

I've fixed the delay issue as well... Still wasn't fixed.

I tried downloading the CH341 driver, but it said it's pre-installed; yet when I plug my board in, it does not show up in the COM ports as CH340....

I tried to update the driver, but my pc ran into a problem. I shat myself. Luckily it restarted and works just fine.

I'm very frustrated and am lowkey considering buying from Arduino only.

Does anyone know what the issue is? How can I resolve it?

r/LiveFromNewYork BillFireCrotchWalton

Sarah Sherman Sees Everything - So True with Caleb Hearon

r/metaldetecting KvotheStormborn

What metal is this?

found this chunk of metal in a rural private land , about 30-50cm underground , spain

weighs 10g, it's not magnetic, beeped at 93-97

what metal is it? it doesn't look or weigh like lead

thx!!!

r/KlingAI_Videos NotAnotherNPC_2501

Missile. Iran. You. Why Didn’t You Run?

Everyone ran.
You didn’t.

A missile was coming straight at you.
You reached out instead.

What would you have done?

r/raspberry_pi SyntaxError__

Brand new RPI Zero 2W disconnects from WiFi

My Raspberry Pi boots up normally and connects to WiFi, but a couple of minutes after I connect over SSH the connection just dies and the Pi disconnects from WiFi. The green LED keeps blinking as usual and I don't know if it outputs video or not (I don't have a micro HDMI adapter). I'm using a phone charger to power it (a 65W Realme charger, to be specific) and it has a 128GB SanDisk SD card with a freshly reinstalled system (the default system from the Imager).

I will have the official charger, micro hdmi adapter next week, but I wonder why it happens. Is it power supply issue, cable issue or just some weird software/hardware bug?

r/aivideo NotAnotherNPC_2501

Missile_Iran_You_ Why Didn’t You Run?

r/ProgrammerHumor precinct209

justGiveIt6to12Months

r/leagueoflegends TylerNine

Is there a way to see the daily mythic shop rotation without logging in on your PC?

I love league and TFT but I don't always feel like kicking on the PC every single day to just look at the daily mythic shop rotation. Does anyone know if there's a Twitter feed, website, or anything at all I can check via phone to see if one of my wishlist emotes is on it? I see some for when the total refresh happens but not for the daily refresh. Also, there's a few sites I found that used to do this but not anymore. Seems a bit ridiculous that I can't just look in the mobile app (unless I can and am just missing something?). If not, please make this a thing.

r/leagueoflegends ADC_Support_Urgot

Which champions outscale Mundo 1v1 late game?

Hypothetically because you probably shouldn’t be 1v1ing Mundo late game.

I would say Urgot but I can’t because he doesn’t.

Urgot scales better/equal for most of the game and he can shred Mundo in team fights late game but Mundo does outscale 1v1 late game.

r/raspberry_pi dogwimp

PLEASE HELP RASPBERRY PI TOUCH SCREEN OFF AMAZON DRIVERS MAKE RASPBERRY PI KILL ITSELF

i bought this touch screen for raspberry pi touch screen off amazon and everytime i install the drivers its F**KING KILL ITSELF NO DESKTOP JUST A TERMINAL IVE TRIED START X EVERYTHING IT WONT F**KING WORK.... please if anyone knows a driver that works please reply :3

https://preview.redd.it/1vcpvv9mz6ug1.png?width=687&format=png&auto=webp&s=e5b130fb5a569ba0fd8fb3a4cc451ec2aab760ef

r/leagueoflegends I_Love_Flowing_Water

What is people's obsession with KDA?

Hey fellow League enjoyers!

I'm a low Masters toplane main in EUW (hardstuck Dia before this season's rank inflation :P).

Often in my games I have seen people get obsessed over other players' KDA.

Just recently I had a game where my jungler went absolutely mental spamming 0/4/0 in chat (my stats) after I asked him if he could cover a play of mine in the next minute.

Now, I was completely winning my lane, with a 50 cs lead, 2 levels up, and 1 plate more.

That is an unfortunate story, but I've seen it so often now that I seriously consider not taking good deaths anymore.

Which feels stupid as well, just wasting my remaining HP going base when I could use it to put my opponent on the spot.

Any tips to get my teammates to stop focusing on KDA so much and focus more on gold income?

it feels so wrong to plan out a smart death and then have your own team punish you for it.

r/OldSchoolCool Sweet-Classroom9680

Alan Kulwicki with the iconic “Underbird” in 1992.

This is a promotional photo during the 1992 NASCAR Winston Cup season, after Hooters became his primary sponsor in 1991. Kulwicki went on to win the championship in 1992 as an independent owner driver, a feat that is incredibly rare in modern NASCAR. He really was that guy.

r/OldSchoolCool Picapica_ab33

Barbra Streisand (1960)

Barbara Joan "Barbra" Streisand (April 24, 1942) is an American singer, actress, songwriter, and filmmaker. Over a career spanning more than six decades, Streisand's success in the entertainment industry has included Emmy, Grammy, Oscar, and Tony awards.

r/OldSchoolCool Nutesatchel

Me, my brother and cousins at Captain Anderson's in PC Beach. Some time in the late 80's early 90's.

r/n8n easybits_ai

How far can you push document extraction before it breaks? Here's the stress test workflow I built to find out.

https://preview.redd.it/v52s3jsqv6ug1.png?width=5208&format=png&auto=webp&s=4fc08c50845f847e371bd262ebee93dea014792c

👋 Hey everyone,

As I shared yesterday, the easybits Extractor just got released as a verified community node on n8n. With the setup now being faster than ever (auto-mapping + community node = about 2 minutes from zero to working extraction), I figured this is the perfect time to actually stress test the whole thing properly.

A few of you also asked me after my last posts about extraction accuracy – how well does it really hold up when the document quality drops? Clean PDFs are easy. Every solution handles those. But what about scanned copies, coffee-stained paper, or documents covered in pen scribbles? I wanted to answer that with actual numbers instead of guessing.

So I built a stress test workflow and I'm sharing it here so anyone can use it to benchmark their own extraction solution.

⚙️ What the workflow does:

You upload a document through a web form. The workflow extracts the data, compares every single field against the known correct values (ground truth), and shows you a results page with a per-field pass/fail breakdown and an overall accuracy percentage. Upload, wait a few seconds, see the score. That's the whole loop.

No Code node needed – the entire validation is built with native n8n nodes and expressions.
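For anyone rebuilding the validation step outside n8n, it reduces to a per-field equality check plus an overall accuracy percentage. A minimal Python sketch (field names and values here are hypothetical, not the actual test invoice):

```python
# Per-field validation against ground truth, like the workflow does with
# native n8n nodes. Field names and values are hypothetical.
ground_truth = {"invoice_number": "INV-1042", "amount_due": "512.30", "vendor": "Acme GmbH"}
extracted    = {"invoice_number": "INV-1042", "amount_due": "512.30", "vendor": "Acme Gmbh"}

def validate(extracted, ground_truth):
    # Compare every expected field, then compute an overall accuracy percentage.
    results = {field: extracted.get(field) == expected
               for field, expected in ground_truth.items()}
    accuracy = 100.0 * sum(results.values()) / len(results)
    return results, accuracy

results, accuracy = validate(extracted, ground_truth)
for field, passed in results.items():
    print(f"{field}: {'PASS' if passed else 'FAIL'}")
```

In the workflow itself this is a Merge node plus expressions; the logic is the same.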

📄 The test documents:

I spent some time thinking about what actually makes a good stress test. Just degrading quality isn't enough – you also need to test whether the extraction actually reads the document or just memorises where fields tend to appear. So I put together 11 test versions of the same invoice:

  • Original – clean digital PDF. The baseline. Should be 100%.
  • Versions 1–7 – progressive degradation. It starts mild with v1 (slightly aged scan, barely noticeable) and gets worse step by step. By v4 you're looking at aged paper, coffee stains, and handwritten "Rec'd & OK" annotations. By v6, heavy coffee ring stains are sitting right on top of key fields. And v7 – "The Survivor" – has burn marks, pen scribbles ("WRONG ADDRESS? check billing!"), the amount due field circled and scribbled over, and half the document barely readable. If anything can extract data from that one, I'll be impressed.
  • 2 Layout Variants – same data, completely different visual structure. One uses a card-based layout with grouped sections, the other rearranges everything into a three-column format. These test whether the extraction actually understands the content or is just relying on positional patterns.
  • 1 Handwritten Version – this one came from community feedback after my last post. Someone asked how extraction handles handwriting, so I added a fully handwritten version of the same invoice to the test set.

All test documents are available in my GitHub repo (link below), so you can use the exact same set to benchmark your own solution and compare results.

🚀 How I set it up:

The extraction side took about 2 minutes – created a pipeline on easybits, used the auto-mapping feature to detect the fields, dropped the verified community node into the workflow, connected credentials, done. The rest is native n8n: a Set node holding the ground truth values, a Merge node to combine extracted and expected data, a Validation node with expressions comparing each field, and a Form completion screen that displays the results directly in the browser.

I've already done a first test run with the original invoice – 100% accuracy across all 10 fields, as expected. I'll add a screenshot so you can see what the results page looks like.

https://preview.redd.it/1ycpdmv6o6ug1.png?width=960&format=png&auto=webp&s=df76d9fa25ae17bfc0a1b9306f220e81ee5c6d92

🔄 Want to test a different extraction solution?

The workflow is designed to be solution-agnostic. You can swap out the easybits Extractor node for an HTTP Request node pointing at any other extraction API. As long as your response returns the same field names under json.data, the entire validation chain – ground truth comparison, per-field flagging, accuracy percentage, results page – works identically. So if you're evaluating multiple tools, you can benchmark them all using the exact same workflow and test documents.

What's coming next week:

I'm going to run all 11 documents through the workflow and share a full results breakdown here – accuracy percentages for every single version, from the clean original all the way down to the destroyed one and the handwritten version. I'll also put together a short video walkthrough showing the workflow in action and how the results look across the different quality levels.

Links:

Would love to hear if anyone runs the test with a different extraction solution – curious how the results compare. And if you have ideas for even nastier test documents, I'm all ears.

Best,
Felix

r/automation Fit_Butterscotch7103

Multi-agent workflows/Orchestration

What are some of Multi-Agent workflows/orchestrations you have seen for a Company's C-suite team? Looking for some inspo.

r/DecidingToBeBetter Remarkable-Low-1224

How to create a good evening routine?

I have a demanding job; I reach home by 7 pm and leave in the morning by 8 am. After work I am always too battered to do anything. I eat quick fast food and scroll reels. I don't have any energy left after work, even to floss!

There are a lot of things I wanna do. I want to be consistent at skincare, flossing, preparing my outfits the night before, etc. There are a few hobbies I wanna focus on but am never able to. I want to learn dance, learn advanced chess, and read. But I don't put any effort into any of these. I am neglecting my fitness goals as well.

Please share your routines and strategies to overcome this.

r/Frugal grundeh2006

Would you consider buying a lifetime supply of tampons or pads?

Would you consider buying a lifetime supply of tampons or pads?

Provided that you have a place to store them and the money to buy them in bulk ofc.

Benefits: they don't go bad, no issues with rising prices, shorter grocery trips with a smaller luggage, never running out, awesome 2 truths and a lie option

Cons: you might miss out on awesome technological innovations or have to waste the ones you bought, maybe the world improves and basic needs become free (low hopes tho :/), takes up a lot of space (i recommend those rail sorta thingies u can attach to ceilings and slide in big plastic boxes full of light weight stuff)

and yes, im taking sheldon cooper's suggestion; i would totally do this if i needed it

r/creepypasta Black_Shuck-44

The Last Thoughts of Mr Cuddles

Day number.. I don't even care anymore. I just want my suffering to finally end.

My name was once Mr Cuddles and I am a teddy bear. I was made with brownish fur, a white shirt, dinner jacket, and a bow tie, with the name "Mr Cuddles" sewn on my foot. For years I sat on the shelf of a toy store just waiting for someone to buy me, but no one did. I was made for children to play with, so why wasn't anyone buying me? Surely there was some little girl or boy who would.

One day a middle-aged man came into the store and took me off the shelf. I was so happy! Finally I was leaving the store; he must have been buying me for one of his kids or grandkids. He put me in the back of his car and started driving. We drove past houses but didn't stop at any. I thought he must live farther away, but he drove down to the harbor. He stopped, got out of the car, grabbed me, walked down the pier, got into a rowboat, and rowed towards this old bridge.

And as we got closer I saw something truly horrifying for a stuffed animal! Along the bridge was a row of stuffed animals, clown dolls, beanie babies, all nailed to the bridge! Some were brand new, others looked like they'd been there for months or years! The older ones were bleached by the sun, worn by wind and rain, there were spiderwebs all over them, and they were falling apart! That's when I knew that that man for whatever reason had bought me only to add to this horrifying set of decaying playthings!

Now here I am, paws nailed to this wretched bridge. I screamed bloody murder when that man did it, I know he couldn't hear me but I wished he could have, maybe then he would have stopped. I hear all the moans and groans from the others, begging for death. Some have decayed so much that they've fallen from their nails into the water. Honestly I hope that happens to me soon. Every so often I see other people coming to the bridge and adding a new plaything, if they only knew how cruel they are actually being.

My brown fur and dinner jacket have been bleached white by the sun, one of my eyes has fallen off, and spiders have started to nest in the tears in my body. The only thing I have to look forward to is the day I'll finally fall free from these nails and drown.

All I ever wanted was to be played with and loved, now it seems like that dream was completely pointless.

_______________________________________________________

this is my first time writing a creepypasta so I doubt it's too good

oh and by the way, this was inspired by the "Grassy Point Railroad Bridge"

r/explainlikeimfive trampolinebears

ELI5: Why is advertising always for things I don’t want?

With how much data they collect about everyone, you’d think they’d be better about advertising things I might buy.

Instead, it’s all ads for kitty litter (I’ve never owned a cat) or luxury vacations (way beyond my budget) or alcohol (I don’t drink).

If they just paid attention to what I actually like, there are thousands of products they could actually tempt me into buying, but they don’t even try.

r/AskMen thikkurussi7

Men who've lost their sense of identity, what kept you going ?

I guess, I'm in that phase now, I'm in my mid twenties, spent my early twenties exploring different career paths, got interested in tech, spent time learning about tech for the past 5 years, in the last 6 months I've quit my job, started a startup, but I still feel empty.

The folks I run this company with aren't my type of people and I know it's not going to go far, and that pisses me off. If I wanna go back to my old job: first, I can't physically do it because I feel so disconnected from it, and second, there's less demand for it than before (because of AI).

Tech in general feels boring. On top of that, I never went to college, so most of my "buddies" were made through work, and they've scattered all over the world and are now doing different things, and I feel left behind. I only hang out with two friends whose lives are as miserable as mine.

I've always been a nerd, so talking to women wasn't the easiest thing to do for me, and the lack of me going to college resulted in a guy who's been single all his life now I feel like I've gotten past the puppy love stage and would never feel the innocence of love ever again (I've also seen people and how bad they could be especially women when it comes to relationships) so I don't think I could be vulnerable to anyone.

Bottom line is the things that interested me the most aren't pushing me forward. I'm now running a startup that's bound to fail and I don't have a fail-safe. I've got zero money, live with a single disabled mother whose family I have no connection with, and my dad, let's just say he was never in the picture.

So to round it all up, I'm fucked, but I wanna live, and I feel like I have a purpose to fulfill, but I already feel like a 50 y/o man, burnt out, with nothing to live for.

Anyone who's gone through similar shit? I just wanna know if I'm the only one !

r/ForgottenTV PeneItaliano

1 Girl 5 Gays (2009-2014)

Five outlandish, fabulous gay men tackle topics ranging from love and sex to celebrity pop culture, with a female host acting as ringleader.

r/DunderMifflin Equivalent-Tip-8068

Michael and sales

Michael Scott deserved every raise and promotion. He consistently brought in large clients. Even early on, he got the client where he and Jan hooked up. Obviously that was a huge client; otherwise Dwight, Jim, or even Michael by himself would have soloed it. But they brought in Jan, which meant it was a big client. Michael landed that deal.

He brought in Hammermill, to everyone’s surprise. That’s a big deal. He got the deal with the guy who he did the Cosby impression for on pretzel day.

He led the merger, didn't lose a client, and maintained a profit. His branch was the only profitable one, which is why they remained open when DM was sold.

Michael should have gotten several raises but never asked for them. He shouldn't have had to ask; he should have been awarded them. He shouldn't have had to interview for promotions; he should have just been promoted. That's why DM went out of business. They didn't know how to treat their best employees.

r/explainlikeimfive Thin-Combination-123

ELI5 Why is education important? Why is stuff like mathematics, literature, history, and art taught in schools but not psychology, economics, or law?

I can see the purpose of certain subjects, but honestly I really don't get how they choose what to put on the curriculum. Is it just tradition, or do all classic school subjects have a purpose?

r/ProgrammerHumor ClipboardCopyPaste

aCSharpJoke

r/midjourney SorryYouAreJustWrong

Midjourney pricing?

So am I missing something? I have to pay $60 a month to create images that are not shared and visible to everyone?

r/EarthPorn sonderewander

Zao Mountains, Japan [OC] [5184x3888]

r/arduino MrFlaVlip

Pro Micro atmega32u4 not detected as COM port by Windows when in bootloader mode.

Hello everyone.

Maybe not the right place as i will be using it for MMJoy2 but:

I am currently trying to flash the MMJoy2 firmware onto the Pro Micro atmega32u4. But here is the problem: when in normal mode, the Arduino is recognized as a COM port (COM5 for example) but when in bootloader mode (2 times GND-RST) it shows up as a libusb-win32 device, which is unusable in this case as MMJoy2 only uses COM ports. Could anyone help me out?

And yes, I am new to this🫣

Kind regards!

r/Rag One-Doctor5769

PARSING IS IMPORTANT. HOW DO YOU GUYS DO IT

I am going through tons of tech out there for parsing. I want to know what tools do the best job and what things are critical while parsing. Let's just stay limited to PDFs for now.

r/AskMen EnviousPuffin

What’s your go-to action movie?

r/explainlikeimfive vksdann

ELI5 why things like shoes or wheels wear out?

How come touching the floor repeatedly makes shoe soles get thinner and thinner to the point of disappearing from existence and becoming a hole?

If the speed of the sole/wheel relative to the ground is 0, how come touching the ground at 0 horizontal speed over and over causes wear?

r/ollama Puzzled-Listen804

Everything you need to know about Gemma 4

Hello everyone, I hope you're all doing well. I decided to put this article here in case you guys wanted to know some of the specs and capabilities of the new Gemma 4 model Google released.

Feel free to ask any questions!

r/ClaudeAI ClaudeOfficial

We're bringing the advisor strategy to the Claude Platform.

Pair Opus as an advisor with Sonnet or Haiku as an executor, and your agents can consult Opus mid-task when they hit a hard decision. Opus returns a plan and the executor keeps running, all inside a single API request.

This brings near Opus-level intelligence to your agents while keeping costs near Sonnet levels.

In our evals, Sonnet with an Opus advisor scored 2.7 percentage points higher on SWE-bench Multilingual than Sonnet alone, while costing 11.9% less per task.

Available now in beta on the Claude Platform.

Learn more: https://claude.com/blog/the-advisor-strategy

r/ClaudeAI BoltzmannPZombie

claude.app requesting access permission to Apple Music, desktop folders, etc., not in the scope of any agent?

  • “claude.app” would like to access Apple Music, your music and video activity, and your media library.

  • “claude.app” would like to access files in your Desktop folder.

And half a dozen similar things. There was one active conversation in a directory that doesn't include any of the things it was asking permission for. Nothing relevant in the agent's reasoning.

Anyone else seeing this?

r/ClaudeAI pooran

Did anybody figure out how to make Claude stop asking this again and again

echo "[$(date -Iseconds)] | $(pwd) | /" >> ~/.claude/logs/master.log

Log command

Do you want to proceed?

  1. Yes

    ❯ 2. Yes, and don't ask again for similar commands in /Users/

  3. No

r/LocalLLaMA es617_dev

Dynamic few-shot retrieval on Apple's on-device 3B LLM: 40% → 70%+ on shell commands

I've been poking at Apple's on-device 3B model (via FoundationModels on Tahoe) to see where its ceiling sits on code-adjacent tasks. I tested shell command generation as a concrete benchmark (100 prompts, ~10 approaches).

https://i.redd.it/ferxmyorh7ug1.gif

Bare model: ~40% correct. Mostly flags and some command hallucinations. Feeding documentation as context didn't help. Not man pages, not tldr as docs, not self-critique loops. All within noise of baseline, and self-critique was actively worse (33%); the model "fixes" correct commands into wrong ones.

What worked: dynamic few-shot retrieval from tldr's 21k community examples via FTS5. Same corpus, reframed as solved examples to copy from instead of reference material. Clean held-out: ~70% at 0.5s per query. That's a 30-point jump from reframing alone. Accuracy scales with bank size, so more or better-curated examples will push it further (I got it up to 78% with custom overrides).
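The retrieval step described above is plain full-text search over the example bank. A rough Python sketch of the idea using SQLite FTS5 (the table layout and corpus rows here are illustrative, not the author's actual schema):

```python
import sqlite3

# Index a tiny, illustrative example bank in SQLite FTS5, then retrieve the
# top-k matches for a query and frame them as solved examples in the prompt.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE examples USING fts5(task, command)")
db.executemany("INSERT INTO examples VALUES (?, ?)", [
    ("compress a directory into a tar.gz archive", "tar -czf archive.tar.gz dir/"),
    ("list files sorted by size", "ls -lS"),
    ("find files larger than 100MB", "find . -size +100M"),
])

def few_shot_prompt(query, k=3):
    # FTS5's built-in rank column orders matches by relevance (BM25).
    rows = db.execute(
        "SELECT task, command FROM examples WHERE examples MATCH ? "
        "ORDER BY rank LIMIT ?", (query, k)).fetchall()
    shots = "\n".join(f"Task: {t}\nCommand: {c}" for t, c in rows)
    return f"{shots}\nTask: {query}\nCommand:"

prompt = few_shot_prompt("compress directory")
```

The reframing is entirely in the prompt template: the same strings, presented as solved task/command pairs rather than documentation.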

I also tested self-consistency (temp 0.3, 3 samples, majority vote) and CoT on top of retrieval. Both ~3x slower, neither moved accuracy much, but SC crushed variance across runs. Probably worth exploring this more.

Haven't tried finetuning yet. Apple allows LoRA adapters on FoundationModels, so that's the obvious next lever, though it complicates distribution.

Takeaway: for small on-device models, how you frame the context matters more than what's in it. Same 21k strings, 30+ point gap depending on whether they're presented as docs or examples. Curious if others have seen the same split on Qwen 3B / Gemma 2B / Phi-3.

Full writeup with everything I tried: https://es617.dev/2026/04/08/apple-on-device-llm-shell.html

The repo with CLI and benchmark data is linked in the post if anyone wants to play with it.

r/ClaudeAI Parking-Geologist586

Made Claude Code actually understand my codebase — local MCP server with symbol graph + memory tied to git

I've been frustrated that Claude Code either doesn't know what's in my repo (so every session starts with re-explaining the architecture) or guesses wrong about which files matter. Cursor's @codebase kind of solves it but requires uploading to their cloud, which is a no-go for some of my client work.

So I built Sverklo — a local-first MCP server that gives Claude Code (and Cursor, Windsurf, Antigravity) the same mental model of my repo that a senior engineer has. Runs entirely on my laptop. MIT licensed. No API keys. No cloud.

What it actually does in a real session

Before sverklo: I ask Claude Code "where is auth handled?" It guesses based on file names, opens the wrong file, reads 500 lines, guesses again, eventually finds it.

After sverklo: Same question. Claude Code calls sverklo_search("authentication flow") and gets the top 5 files ranked by PageRank — middleware, JWT verifier, session store, login route, logout route. In one tool call. With file paths and line numbers.
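To make the ranking concrete: PageRank over a symbol-reference graph concentrates score on files that many others depend on. A toy power-iteration sketch (the graph and file names are made up, and this is not sverklo's implementation):

```python
# Power-iteration PageRank over a made-up symbol-reference graph; edges point
# from a file to the files it references. Heavily referenced files rank high.
graph = {
    "middleware.ts": ["jwt.ts", "session.ts"],
    "login.ts": ["jwt.ts", "session.ts", "middleware.ts"],
    "logout.ts": ["session.ts"],
    "jwt.ts": [],
    "session.ts": [],
}

def pagerank(graph, damping=0.85, iters=50):
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n, outs in graph.items():
            targets = outs or nodes  # dangling nodes spread rank evenly
            for m in targets:
                new[m] += damping * rank[n] / len(targets)
        rank = new
    return rank

scores = pagerank(graph)
ranked = sorted(scores, key=scores.get, reverse=True)
```

Here session.ts, referenced by three other files, ends up on top, which is the behavior you want from "top 5 files ranked by importance".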

Refactor scenario: I want to rename a method on a billing class. Claude Code calls sverklo_impact("BillingAccount.charge") and gets the 14 real callers ranked by depth, across the whole codebase. No grep noise from recharge, discharge, or a Battery.charge test fixture. The rename becomes mechanical.

PR review scenario: I paste a git diff. Claude Code calls sverklo_review_diff and gets a risk-scored review order — highest-impact files first, production files with no test changes flagged, structural warnings for patterns like "new call inside a stream pipeline with no try-catch" (the kind of latent outage grep can't catch).

Memory scenario: I tell Claude Code "we decided to use Postgres advisory locks instead of Redis for cross-worker mutexes." It calls sverklo_remember and the decision is saved against the current git SHA. Three weeks later when I ask "wait, what did we decide about mutexes?", Claude Code calls sverklo_recall and gets the decision back — including a flag if the relevant code has moved since.

The 20 tools in one MCP server

Grouped by job:

  • Search: sverklo_search, sverklo_overview, sverklo_lookup, sverklo_context, sverklo_ast_grep
  • Refactor safety: sverklo_impact, sverklo_refs, sverklo_deps, sverklo_audit
  • Diff-aware review: sverklo_review_diff, sverklo_test_map, sverklo_diff_search
  • Memory (bi-temporal, tied to git SHAs): sverklo_remember, sverklo_recall, sverklo_memories, sverklo_forget, sverklo_promote, sverklo_demote
  • Index health: sverklo_status, sverklo_wakeup

All 20 run locally. Zero cloud calls after the one-time 90MB embedding model download on first run.

Install (30 seconds)

npm install -g sverklo
cd your-project && sverklo init

sverklo init auto-detects Claude Code / Cursor / Windsurf / Google Antigravity, writes the right MCP config file for each, appends sverklo instructions to your CLAUDE.md, and runs sverklo doctor to verify the setup. Safe to re-run on existing projects.

Before you install — a few honest things

  • Not magic. The README has a "when to use grep instead" section. Small repos (<50 files), exact string lookups, and single-file edits are all cases where the built-in tools are fine or better.
  • Privacy is a side effect, not the pitch. The pitch is the mental model. Local-first happens to come with it because running a symbol graph on your laptop is trivially cheap.
  • It's v0.2.16. Pre-1.0. I ran a structured 3-session dogfood protocol on my own tool before shipping this version — the log is public (DOGFOOD.md in the repo), including the four bugs I found in my own tool and fixed. I triage issues within hours during launch week.

Links

  • Repo: github.com/sverklo/sverklo
  • Playground (see real tool output on gin/nestjs/react without installing): sverklo.com/playground
  • Benchmarks (reproducible with npm run bench): BENCHMARKS.md in the repo
  • Dogfood log: DOGFOOD.md in the repo

If you try it, tell me what breaks. I'll respond within hours and ship fixes fast.

r/ClaudeCode damian-delmas

no one's built good search for claude code sessions. i wrote a paper on a novel approach & built a tool called flex.

i built a tool that gives claude search on all conversations you've ever had. i had to develop a new way of retrieving information to solve one of the biggest problems people run into with this. you search for X, you get stuff that talks about X instead of stuff that is X.

before introducing this solution, i asked claude to find the session where we set up the docker environment. got docs and planning notes about docker instead. the actual session where we debugged the compose file was buried because the documentation cluster was bigger and used the same words.

so i built a way for the agent to suppress that stuff before selection:

SELECT v.id, v.score, c.content
FROM vec_ops('similar:how we set up the docker environment suppress:planning notes documentation suppress:readme deployment instructions') v
JOIN chunks c ON v.id = c.id
ORDER BY v.score DESC
LIMIT 10

marketing cluster dropped. internal architecture surfaced.

the way this works: most retrieval systems take your chunks and clump them into little neighbourhoods. this lets you search through 100,000,000 chunks quickly by approximating clusters into those neighbourhoods. find a neighbourhood that satisfies your question, grab a few chunks and you'll likely find the answer. works well enough for most use cases. what you lose is the raw scores for each chunk.

flex keeps the full score surface and lets the agent reshape it per query. suppress subtracts a concept from the scores. there are other operations too: spread across subtopics, weight by recency, trace a direction through embedding space. for 300,000 chunks (about 5,000 claude code sessions) that runs in approx 20ms, everything in-process, and no network or server overhead.
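The suppress operation above can be pictured as score arithmetic on the full surface: subtract the suppress concept's similarity from each chunk's query similarity before ranking. A toy Python sketch with hand-made 3-d vectors standing in for real embeddings (not flex's actual code):

```python
import math

# "suppress" as score arithmetic: rank by sim(chunk, query) minus a weighted
# sim(chunk, suppress_concept). Vectors are hand-made stand-ins for embeddings.
def cos(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

chunks = {
    "docker-debug-session": [0.9, 0.1, 0.4],  # the session that *is* the answer
    "docker-readme-docs":   [0.8, 0.9, 0.1],  # docs that *talk about* the answer
}
query    = [1.0, 0.2, 0.3]  # "how we set up the docker environment"
suppress = [0.2, 1.0, 0.0]  # "planning notes / documentation"

def rescored(chunks, query, suppress, weight=1.0):
    scored = [(cid, cos(v, query) - weight * cos(v, suppress))
              for cid, v in chunks.items()]
    return sorted(scored, key=lambda kv: kv[1], reverse=True)

ranking = rescored(chunks, query, suppress)
```

With plain query similarity the docs chunk is competitive; after the subtraction, the debug session outranks it, which is the "marketing cluster dropped" effect in miniature.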

i wrote an arXiv paper on this approach and built the tool around it. flex compiles your claude code session history into SQLite and exposes it as a single mcp tool that claude can query. every prompt, response, tool call, file edit, and sub-agent delegation is queryable. file identity persists through renames and moves. session importance and cross-project connections are computed automatically.

here are some prompts i've used with flex (had claude find the best ones using flex):

  • "give me an overview of our sessions yesterday"
  • "find how we set up the docker environment i know we did it last week"
  • "flex search who edited registry.py?" (this returns the session id. so you can then drill into the rationale for editing that file, and even resume that session)
  • "USE FLEX SEARCH. FIND OUT HOW WE DID THIS"
  • "what are the wildest moments in our entire session history?"

install and all your existing sessions become searchable. works retroactively, live updates for new ones. takes 10 to 35 minutes depending on your GPU or CPU. everything is local, nothing leaves your system.

curl -sSL https://getflex.dev/install.sh | bash -s -- claude-code

paper: arxiv.org/abs/2603.22587 website: getflex.dev

if you try it out, let me know what breaks or what you'd want added.

disclosure: i'm the sole developer. flex is free, MIT licensed, and open source.

r/ClaudeAI re3ze

Research shows auto-generated context makes AI agents 2-3% worse. I tested the opposite approach.

Hey, I've been building in the AI agent space and kept running into the same problem: agents don't really fail at writing code. They fail at understanding how the project works before they start.

So they guess. Where to make changes, what pattern to follow, what files are safe to touch. And that's what causes most bad edits.

I came across the ETH Zurich AGENTS.md study showing that auto-generated context can actually degrade agent performance by 2-3%. That matched what I was seeing — dumping more code or bigger prompts didn't help. It just gave the agent more surface area to guess from.

So I tried the opposite: what if you only give the agent the stuff it *can't* infer from reading code?

Things like:

- conventions (how routing/auth/testing is actually done in this project)

- constraints (generated files you shouldn't edit, circular deps to avoid)

- structural signals (which files have 50+ dependents — touch with care)

- git signals (what keeps breaking, what was tried and reverted)

I built a CLI (and a few runtime tools so the agent can check itself mid-task) to test this. It scans a repo and generates ~70 lines of AGENTS.md with just that information. No LLM, no API key, runs locally in a few seconds.
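One of the structural signals listed above (files with many dependents) is cheap to approximate without an LLM. A rough Python sketch that counts import fan-in (the sources dict stands in for a real repo scan; sourcebook's actual analysis is surely more involved):

```python
import re
from collections import Counter

# Count import fan-in per module: a crude stand-in for "which files have
# many dependents". The sources dict replaces a real directory walk.
sources = {
    "app.py":   "import auth\nimport db\n",
    "admin.py": "import auth\nimport db\n",
    "jobs.py":  "import db\n",
    "auth.py":  "import db\n",
    "db.py":    "",
}

def dependent_counts(sources):
    counts = Counter()
    for _path, text in sources.items():
        for module in re.findall(r"^import\s+(\w+)", text, flags=re.M):
            counts[module] += 1
    return counts

counts = dependent_counts(sources)
hot = [m for m, n in counts.items() if n >= 3]  # modules to "touch with care"
```

A pass like this emits one line per hot module, which fits the ~70-line AGENTS.md budget the post describes.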

Then I ran it against real closed GitHub issues (Cal.com, Hono, Pydantic) with a pinned model. Agents with this context navigated to the right file faster, used the correct patterns, and produced more complete fixes. On one task: 136s vs 241s, with a 66% more thorough patch — from 70 lines of context, not the full repo.

The surprising part: the biggest improvement didn't come from *adding* context. It came from removing everything that didn't matter.

This actually lines up with something Karpathy has been saying recently — that agents need a knowledge base, not just more tokens. That distinction clicked after seeing it play out in practice.

I also compared against full repo dumps and graph-based tools, and the pattern held — graphs help agents explore, but project knowledge helps them decide.

Curious if others have seen the same thing. Feels like most of the problem isn't "more context," it's the wrong kind.

(if anyone's curious, the CLI is called sourcebook — happy to share more, but mostly interested in whether this matches what others are seeing with their agents)

r/ChatGPT Standard-Article-473

Just saw you can open a bank account through ChatGPT now

Saw a post on LinkedIn about a fintech called Meow that just launched something where you can open a business bank account, issue cards, check balances, and send money all through ChatGPT. Looks like it literally just went live today, but curious what people think about this, because if it works that's wild.

r/ClaudeAI bobo-the-merciful

I built a multi-agent coordination plugin for Claude Code using Royal Navy procedures. 237 stars later I'm still not sure if the metaphor is genius or unhinged.

So earlier this year Anthropic shipped the experimental Agent Teams feature. I'd spent the Christmas break reading about the Gas Town vision and was properly excited to try it. Agent Teams felt like Claude Code's version of a mini Gas Town.

Then I actually used it. Agent one rewrote a module. Agent two rewrote that module's tests. Agent three updated the integration layer that depended on both. Nobody told agent three about the interface changes from agents one and two.

Three pieces of perfectly written, completely incompatible code. Similar problems kept repeating. And that feeling of "well anything is possible now" turned into "right, I'm building something about this."

Nelson (github.com/harrymunro/nelson) is a Claude Code skill that coordinates multi-agent work using Royal Navy operational procedures. Yes, that sounds ridiculous. I know.

The naval metaphor started after I went down a rabbit hole thinking about how to organise work. Started with engineering management theory, pivoted to thinking about work delivery as military missions. Then it stuck, because it turns out the Royal Navy solved multi-agent coordination a couple of centuries ago. Chain of command, standard signals, damage control when a ship goes down. Swap "ship" for "agent" and "goes down" for "context window exhaustion" and the mapping is not entirely terrible. With the popularity of "Ralph Loops", "Nelson" felt like an apt christening.

Three execution modes: single-session (sequential, boring), subagents (parallel but independent), and agent-team (parallel with full teammate-to-teammate coordination). Agent-team is the one that matters. You get an Admiral spinning up a squadron, Captains commanding named ships (actual Royal Navy warship names, matched to task weight, because I committed to this bit around week two and at that point there was no going back), crew with specialist roles like Executive Officer and Principal Warfare Officer. There's a red-cell navigator whose entire job is to poke holes in the plan.

Your terminal basically becomes a mini Gas Town. Not one agent doing one thing. A coordinated operations centre with defined roles, chains of authority, and documented recovery paths.

What it actually does under the metaphor:

  • Hull integrity monitoring reads real token counts from Claude Code session files. Not estimates, not heuristics. When an agent's context hits amber, relief on station triggers automatically. Depleted agent writes a handover brief, fresh one picks up where it left off. Chained reliefs supported, so a single task can pass through multiple agents.
  • Conflict radar (added in v1.9.0) catches file ownership collisions before and during missions. This was the #1 failure mode before Nelson: two agents editing the same file, both oblivious.
  • 15 standing orders act as named anti-pattern guards. "Admiral-at-the-helm" fires when the coordinator starts writing code instead of delegating. "Skeleton-crew" triggers when you've undersized your team. "Split-keel" catches agents working at cross purposes. Checked at every decision point.
  • 10 damage control procedures covering stuck agents, context exhaustion, faulty output, budget overruns, mission abort. All documented recovery paths.

v1.9.1 right now. 237 stars, 19 forks, 13 releases in about two months. Experimental Cursor support landed as a community contribution from @LannyRipple, which I wasn't expecting but am not complaining about.
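The hull-integrity idea (summing real token counts from session JSONL and tripping a threshold) can be sketched in a few lines of Python. The record layout, 200k budget, and 75% amber threshold below are assumptions for illustration, not Nelson's actual values:

```python
import json

# Sum token usage from session-log JSONL lines and trip an "amber" threshold.
# Budget, threshold, and record shape are assumptions for illustration.
CONTEXT_BUDGET = 200_000
AMBER_FRACTION = 0.75

lines = [
    '{"usage": {"input_tokens": 90000, "output_tokens": 4000}}',
    '{"usage": {"input_tokens": 60000, "output_tokens": 2000}}',
    '{"type": "summary"}',  # records without usage are skipped
]

def hull_status(jsonl_lines):
    used = 0
    for line in jsonl_lines:
        usage = json.loads(line).get("usage")
        if usage:
            used += usage.get("input_tokens", 0) + usage.get("output_tokens", 0)
    return ("amber" if used / CONTEXT_BUDGET >= AMBER_FRACTION else "green"), used

status, used = hull_status(lines)
```

Crossing amber is the point where a relief agent would be spun up and the handover brief written.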

The six open PRs are where it gets interesting. A deterministic phase engine (#93) that enforces mission lifecycle as a state machine. PreToolUse hooks physically prevent agents from implementing before the battle plan is approved. "Should follow the process" becomes "cannot skip the process." Hook-based structural enforcement (#92) does the same thing for standing orders, turning guidelines into guardrails. Cross-mission memory (#94) carries lessons between missions in a persistent .nelson/memory/ knowledge base. I ran twenty missions through a prototype and the pattern analytics caught three anti-patterns I hadn't codified manually. Two were useful, one was nonsense, which I'd call a not-unreasonable hit rate for automated pattern detection. There's also typed handoff packets (#91) replacing prose turnover briefs with schema-validated JSON, formation consolidation (#89) collapsing setup from 4-8 bash calls to one command plus headless mode for CI/CD, and auto-discovery (#90) that activates Nelson when it finds a .nelson/ directory.

The roadmap has some stuff I genuinely haven't seen elsewhere. Mission replay and templates (#86) for re-running past missions from checkpoints and extracting reusable templates from successful runs. Learned standing orders (#87) where a pipeline detects recurring anti-patterns from your mission data and proposes new guards. The anti-pattern library that teaches itself. And confidence-weighted trust calibration (#88) where per-ship confidence scores dynamically route between autonomous execution and human escalation based on actual outcomes.
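
Confidence-weighted routing is simple to prototype. A sketch of one plausible reading of #88 — the class below, its moving-average update, and its thresholds are my invention, not Nelson's design:

```python
# Hypothetical trust tracker: per-agent confidence is an exponential moving
# average of past outcomes; tasks route to human review when it drops low.
class TrustTracker:
    def __init__(self, alpha: float = 0.2, threshold: float = 0.7):
        self.alpha = alpha            # how fast new outcomes move the score
        self.threshold = threshold    # below this, escalate to a human
        self.confidence: dict[str, float] = {}

    def record(self, agent: str, success: bool) -> None:
        prev = self.confidence.get(agent, 0.5)  # neutral prior
        outcome = 1.0 if success else 0.0
        self.confidence[agent] = (1 - self.alpha) * prev + self.alpha * outcome

    def route(self, agent: str) -> str:
        score = self.confidence.get(agent, 0.5)
        return "autonomous" if score >= self.threshold else "escalate"
```

A new agent starts at the escalation side of the threshold and has to earn autonomy from actual outcomes, which matches the "calibrated from actual outcomes" framing in the roadmap item.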

That last one might not exist in any other open-source agent system. I could be wrong about that. I didn't do an exhaustive survey but I looked fairly hard.

MIT licensed, installs as a Claude Code skill. If you've ever run multiple agents and wished they'd stop going rogue on each other's files, might be worth twenty minutes.

edit: should probably mention it coordinates its own development now. v1.7.0 was planned and executed as a Nelson mission. The recursion hasn't caused any problems yet but it does make me slightly nervous.

TL;DR: built a Claude Code skill that makes multi-agent work not fall apart. basically a mini Gas Town in your terminal. naval metaphor optional but strongly encouraged.

r/ClaudeCode madrasmango1

A few effective ways of using Claude Code on existing projects

I have an existing app/website: 350K lines of code, large surface area, developed over 4+ years by a team of devs. Standard process - me to PM to designer to devs to QA, two-week sprints. A month ago, we flipped to AI-led development. Three massive unlocks.
1. Idea to PRD process: Massive unlock. My CC setup has access to DB schemas and source code files, and I have built a few foundational documents such as user journeys. Now, when I have an idea, using gstack skills like office hours along with a standard PRD process makes the whole conversation iterative, and in a few hours I am able to get to a complete PRD that is significantly better defined and aligned to what I wanted. In addition, I am connected to my production DB, so when needed, I can actually run live analytics to inform the PRD.
2. App hardening: Have run detailed and comprehensive security, performance and silent error scans on the code base. The findings are extremely useful.
3. Live performance monitoring: My codebase is instrumented with Rollbar and New Relic. I am running Hasura on Heroku. I have been able to create a live layered pipeline analysis that can see any "user visible triggers" like Heroku response time spikes, and then correlate them across all the layers and the codebase to identify the potential causes. This last one is simply mind blowing.

I am not vibe coding. I have a senior dev who is using AI to make changes and enhancements. We are still following our existing QA process before we go live. But the unlocks above are a game changer.

r/ClaudeCode vik_s1231

Build vs Reuse Claude Skills/agent?

I’ve been hitting this a lot — every time I use AI I end up building everything myself.

Feels like there should be reusable workflows/agents out there, but I don’t really know where to find or trust them.

Do you just build from scratch, or have you found a better way to find and reuse skills/agents? And how do you decide which ones to trust?

r/LocalLLaMA Guyserbun007

Multiagent LLM infrastructure for data engineering and data pipeline workflow?

I have done quite a few projects in the past that require a lot of data engineering: understanding REST and websocket API endpoints, testing, creating PostgreSQL schemas, iterating, ETL, orchestration, monitoring the health of incoming data, etc. These are major pain points and time sinks.

This makes me wonder: is it possible/feasible to build a robust multiagent LLM infrastructure that automates a significant portion of this data engineering and pipeline-building process in a meaningful way?

What are your thoughts?

r/ClaudeAI FortuneOk8153

I built the first AI memory system that mathematically cannot store lies

Your AI remembers wrong things and nobody checks. Every "AI memory" tool stores whatever your LLM generates. Hallucinations sit right next to real knowledge. Three months later, your AI retrieves that hallucination as if it were fact and builds an entire feature on it. I got tired of this. So I built something different. EON Memory is an MCP server with one rule: nothing gets stored without passing 15 truth tests first.

WHAT THE 15 TESTS ACTUALLY CHECK:

Logic layer (4 tests): Self-contradiction detection. Does the new memory conflict with what you already stored? Is it internally coherent? Does it hold up under scrutiny?

Ethics layer (5 tests): Does the content contain deceptive patterns? Coercive language? Harmful intent? We use a mathematical framework called X-Ethics with four pillars scored multiplicatively: Truth x Freedom x Justice x Service. If any pillar is zero, the total score is zero. The system literally cannot store it.

Quality layer (6 tests): Is there enough technical detail to be useful? Could another AI actually write code from this memory in 6 months? Are sources cited? We score everything Gold, Silver, Bronze, or Review.

THE FORMULA BEHIND X-ETHICS: L = (W x F x G x D) x X-squared

W = Truth score (deception detection, hallucination patterns)
F = Freedom score (coercion detection)
G = Justice score (harm detection, dignity)
D = Service score (source verification)
X = Truth gradient (convergence toward truth, derived from axiom validation)

X-squared means truth alignment is rewarded exponentially. A slightly deceptive memory does not get a slightly lower score - it gets crushed. This is not a content filter. This is math. The axioms are from a formal framework (Traktat X) that proves truth-orientation is logically necessary. Denying truth uses truth. The framework is self-sealing.

CONNECTED KNOWLEDGE: Every memory is semantically linked. Search for "payment bug" and you get the related architecture decisions, the Stripe webhook fix, and the test results - with similarity percentages. Your AI sees the full graph, not isolated documents.

SETUP: npx eon-memory init. Works with Claude Code, Cursor, any MCP IDE. Swiss-hosted, DSGVO (GDPR) compliant. 3,200+ memories validated in production. CHF 29/month. Free trial: https://app.ai-developer.ch

Solo developer, Swiss-made. Happy to answer questions about the math, the validation pipeline, or anything else.
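
Taking the post's formula at face value, the multiplicative gating is a one-liner; the point of multiplying rather than averaging is that one zeroed pillar zeroes everything:

```python
def x_ethics_score(w: float, f: float, g: float, d: float, x: float) -> float:
    """L = (W x F x G x D) x X^2, per the post's description.

    W, F, G, D are the Truth/Freedom/Justice/Service pillar scores and
    X is the truth gradient. Because the pillars multiply, any pillar
    at zero forces the whole score to zero; averaging would not.
    """
    return (w * f * g * d) * x ** 2
```

Whether the individual pillar scores mean anything is a separate question, but the "cannot store if any pillar is zero" claim follows directly from the arithmetic.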
r/ClaudeCode DreamDragonP7

I love you anthropic

Unfortunately I'm being forced to use gpt5.4 for now. I understand why, y'all ain't got the compute for all of us.

I have faith you'll figure out what to do about it and do what's right 👍🏻

r/ClaudeAI Choice_Clock8329

Claude Application Blurry?

Hello,

I have a weird problem occurring with the Claude application. It is blurry for some reason. When I move the mouse around, the area around the pointer becomes clearer but the other areas of the screen stay blurry. I have a Windows 11 PC (MSI Raider) if that helps. When I have a lot of data on the screen, it makes it really hard to read. Any suggestions?

https://preview.redd.it/b3almnngc7ug1.png?width=1500&format=png&auto=webp&s=c9bf1d98124c66269f721083e7ed818768ec7343

r/ChatGPT Gerstlauer

OpenAI launches $100 ChatGPT plan

r/ClaudeAI Icy_Muffin_8386

I had no idea what my coding agents were actually doing all week — so I built a thing

I've been coding with Claude Code and Codex for months now. Multiple sessions a day, multiple projects, letting the agent do its thing.

By the end of the day I'd realize I couldn't really remember what the agent had done or which files got modified. Made it hard to get back into context, especially since I'm building several projects at once.

So I made a small Mac app for myself, it's called Wips. Lives in the background, reads the session files and turns each one into a short summary listing decisions made and files modified.

It also has a cmd+I shortcut to drop manual entries when something clicks mid-session and you don't want to lose it — Wips classifies them for you automatically.

It's a v1, built by one solo dev. Native Swift, signed and notarized. Free.

If anyone else has the same "what the hell did I even do this week" with their agents, would love for you to try it and tell me what's broken: https://wips.sh

r/LocalLLaMA Savantskie1

Need advice

Has anyone else tried multi-vendor GPUs with Vulkan? Like, say, mixed AMD and NVIDIA GPUs? And does it work fairly well? I have a decent chance of getting a 48GB NVIDIA card to go with my two MI50 32GB cards. I've seen discussions on it, but I'm dubious on whether people have had success with it for inference. I mean, Vulkan should be vendor-agnostic, so I'm assuming it would work. Am I wrong here?

r/ClaudeCode drumorgan

Found a trick for multiple projects

I have been making a bunch of small projects/web apps to practice and try different things out. But, each time the hosting/domain, and the ftp, and the database stuff I have to almost relearn and re-teach Claude to get it all set up.

Accidentally I started a Session and clicked the old Repo, then clicked the correct one that I wanted to start working with. What I discovered was that the Session now had access to both. So, instead of “I have no idea what we did last time, let’s restart the learning process together…” I got “Yeah, I can see what we did last time and we learned about a problem to avoid, so let’s do it properly the first time with minimal effort…”

Pretty happy about this

I am going to add ALL my Repos to a Session and see about unifying the CSS and creating an identical “other projects by…” link bar on each page

Hope this helps someone.

r/ChatGPT Mediocre-Witness-778

My honest opinion on Higgsfield after 2 months of usage and making it to Seedance 2.0

So I've been using Higgsfield heavily for two months and have generated hundreds of videos and images there

Browsing reddit I've noticed mostly either overly positive or negative posts and reviews, which kinda sucks, because neither extreme is true: from overhyping to scam allegations

Here's my honest breakdown of my experience with Higgsfield AI so far, starting from February 2026

Pricing and billing

Guys, Higgs is pricey, ngl. I am currently on the Business plan and most people cannot afford spending a small fortune on generating AI video. Even their least expensive plans are $15, and you should take that into account and not be disappointed when AI behaves like an AI

Like a year ago or smth we couldn't even do Will Smith eating pasta, and it can still behave dumb or give you some strange outputs. And all the models have to go through this no matter what

Quality

For some strange reason, my videos there are better. It was especially visible with Seedance 2.0, cuz in Dreamina I had worse results

I think it has something to do with the fact that unlike raw video tools and other aggregators they focus heavily on camera control and "understanding an aesthetic"

But it's not like a miracle or smth, I still have lots of failed stuff

Scam and limited unlimited

I never had a problem with unexpected charges and stuff, but some of my "unlimited" lasted only for a week, yeah. But it was written, so not really surprised. It's just another hook-word from the marketing department so people will buy without reading properly.

Unique products

They have developed lots of their own products, models, and viral features like cinema studio 3.0. And for me that's what makes me stay with them, at least for now. I am not the best prompt engineer and mostly I just wanna enjoy the process, trying everything new, changing templates, camera control, cuts, speed - and it provides everything I need

Speed and quality

I didn't experience problems, surprisingly. Speed was roughly the same for the two months, and so is the quality. I usually really like 5/10 of my generations, and 2/10 are meh

Seedance 2.0

I figured out that Seedance 2.0 in Higgs is just more usable than in Dreamina or anything else. Mostly because of the face restrictions. For long videos you need kling+seedance+smth else and for start nbp+soul+smth else

So, that's my brief opinion

r/ClaudeAI ClaudeAI-mod-bot

Claude Status Update : Elevated Connector Error Rates on 2026-04-09T17:34:00.000Z

This is an automatic post triggered within 2 minutes of an official Claude system status update.

Incident: Elevated Connector Error Rates

Check on progress and whether or not the incident has been resolved yet here : https://status.claude.com/incidents/cb0h2zyzl0kd

Also check the Performance Megathread to see what others are reporting : https://www.reddit.com/r/ClaudeAI/comments/1s7f72l/claude_performance_and_bugs_megathread_ongoing/

r/ChatGPT StatusPhilosopher258

ChatGPT works far better when you stop treating it like autocomplete

I’ve been using ChatGPT a lot for coding, and something that changed my workflow was realizing it’s not just autocomplete; it behaves very differently depending on how you use it.

At first I was doing: prompt - code - fix - repeat

It works, but gets messy fast in bigger projects (1000+ lines).

What started working better was a small shift:
Before asking ChatGPT to write anything, I define:

  • what I’m building
  • expected behavior
  • inputs / outputs
  • constraints and edge cases

Then I ask it to implement based on that.
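
As a concrete illustration of that structure, a spec can be as short as this (the feature and its details here are invented, purely to show the shape):

```markdown
## What I'm building
CSV import for the billing page.

## Expected behavior
Upload a CSV, preview the first 10 rows, confirm, then rows are inserted.

## Inputs / outputs
In: CSV with columns email, amount, currency. Out: count of inserted rows.

## Constraints and edge cases
Reject files over 5 MB. Duplicate emails: keep the last row.
Amounts must parse as decimals; anything else fails the whole import.
```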

It’s basically a lightweight spec-driven approach, and the results are way more consistent:

  • fewer random changes
  • clearer structure
  • easier debugging

As projects grow, I also found it useful to track how changes propagate across files. I use tools like traycer for this, which helps avoid losing context.

Curious if others here are using a similar approach or still mostly prompting directly.

r/ClaudeAI bluuuuueeeeeee

I built a tool to help people get more out of their Claude subscriptions

If you’re like me you’ve probably been frustrated by usage limits recently. I wanted to understand whether I was really experiencing them more than before, and if my behaviour had impacted this, so naturally I turned to Claude.

I built this free, open-source VS Code extension for anyone who was confused and wanted to demystify things. I tried to answer two core questions:

  1. How am I currently using Claude (tokens, context, cache)?

  2. What can I do about it?

There is a live dashboard, historic stats, and static and dynamic tips to help people identify ways to get more out of their Claude subscriptions.

One thing is clear: it’s not about raw token usage. On Monday, I used significantly more peak and total tokens than on Tuesday, but it was Tuesday when I hit my usage limit. There are lots of factors that went into this, but HOW I was using Claude definitely made an impact.

I’m a novice coder and this is the first product I’ve shipped so it’s been a real learning experience for me.

If anyone has feedback, questions, or wants to give it a try, you can find it on GitHub here:

https://github.com/studiozedward/pip-token

Or on VS Code marketplace here:

https://marketplace.visualstudio.com/items?itemName=StudioZedward.pip-token

r/LocalLLaMA actionlegend82

Is there any llm near to whisk?

Hey! I need to make 2D images in batch. Now I use whisk + a plugin, which does the job pretty well. Now I'm thinking I need to switch to a local LLM. Right now I use omnivoice inside pinokio, which is the best voice cloner I've ever seen + free. So I'm aiming for a text-to-speech model where I can just place my subject and add prompts as a batch and everything is done automatically.

My pc specs:

  • AMD Ryzen 5 5600
  • Gigabyte B550M K
  • MSI GeForce RTX 3060 VENTUS 2X 12G OC
  • Netac Shadow 16GB DDR4 3200MHz (x2)
  • Kingston NV3 1TB M.2 NVMe SSD
  • Deepcool PL650D 650W
  • Deepcool MATREXX 40 3FS

r/SideProject mikschne

I built an LLM proxy for other people. Turns out the heaviest user is my own homepage.

Wrote an LLM proxy a while back. Standard stuff — point OPENAI_BASE_URL at it, get cost tracking, caching, failover. Built it for other people. Other people mostly didn't use it.

Same repo accumulated a bunch of unrelated self-hosted tools over time. Webhook catcher, contacts, booking, bookkeeping, a few dozen others. Each one a weekend, one Go binary one SQLite file. No real plan.

Got bored one night and wired the homepage to an LLM. You type what you do, it picks tools from the pile and writes a config. A beekeeper gets fields for hive type and queen status. Someone who writes "EMDR" gets SUDS score and bilateral stimulation. Expected it to pick wrong half the time. Sonnet's better at this than I thought.

Now I'm the heaviest user of the proxy I built. Three things I didn't expect:

  1. Scrapers found the endpoint in about a day. Had to turn on rate limiting that same night. They weren't even being subtle — just hammering with empty descriptions to see what came back.
  2. Cost comment said 0.005 per call. Reality on Sonnet 4 is 0.05. Off by 10x. Caught it at the end of the first week when the bill came in. Flipped the default to Haiku via a model alias feature I'd written months ago and forgotten existed.
  3. The generator caches a URL for every unique description. There's now a folder of landing pages I didn't write — /for/goat-yoga/, /for/roller-derby/, /for/alpaca-trekking/. No idea if this is a compounding asset or if Google sandboxes me for doorway pages. I check Search Console like it's a weather forecast.

Also my personalize() helper does string-replace on raw JSON and silently corrupts anyone named Bob's Bakery. Found that reading my own code last night.
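
The post doesn't say exactly how personalize() breaks, but the classic shape of a string-replace-on-raw-JSON bug looks like this (template, placeholder, and field names invented for illustration):

```python
import json

# Invented one-field template; the real site presumably has bigger payloads.
TEMPLATE = '{"greeting": "Welcome to {{name}}!"}'

def personalize_naive(name: str) -> str:
    # Bug pattern: splicing user text into already-serialized JSON.
    # Nothing escapes quotes or backslashes, so some names yield
    # invalid JSON that downstream consumers silently mangle or reject.
    return TEMPLATE.replace("{{name}}", name)

def personalize_safe(name: str) -> str:
    # Fix: parse first, substitute inside the data structure, re-serialize.
    # json.dumps handles the escaping for us.
    data = json.loads(TEMPLATE)
    data["greeting"] = data["greeting"].replace("{{name}}", name)
    return json.dumps(data)
```

An apostrophe alone doesn't actually break JSON, so whatever bites "Bob's Bakery" may involve a second layer (SQL, shell, or HTML quoting) stacked on the same pattern — but the parse/substitute/re-serialize fix covers the JSON side either way.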

Anyone else accidentally become the heaviest user of an internal tool? And if you've shipped LLM-generated landing pages at scale — did Google penalize you, or was it fine?

r/LocalLLM Key_Employ_921

Testing gemma 4 locally on a Macbook Air

Was just testing gemma 4 e4b inside Locopilot on my MacBook Air, thought it would be pretty slow but it held up better than expected for coding. It even handled tool calls pretty well, including larger system prompts and structured output. Feels more practical than I thought for local use.
Anyone else tried gemma 4 locally for coding?

r/ChatGPT TedPepper

How do I train my own offline chat for my business?

I’m looking for an offline AI tool that I can teach lots of information about my writing style and my business, and then use as an assistant to help me.

I’d like to upload several large PDFs full of information and train it so it can help me.

Any recommendations that aren’t crazy complicated? Thanks!

r/SideProject lifeisrhythm

OmniWord - An iOS app that uses a multi-sensory approach + spaced repetition to naturally incorporate new words into long term memory!

App Store Link

Of course I'd greatly appreciate anyone curious enough to download it/rate it, and I'd love some feedback!

The design strives to capitalize on how our brains more naturally learn things. No cramming, studying, quizzing etc. I spend maybe 30 seconds a day reviewing my day's words (up to 5 after you develop a streak) and that's that!

I also have this 90s PC game (Ascendancy) inspired 'Word Tree' that sort of acts as your trophy case for all the words you either know or are learning!

Thank you for checking it out!

r/ChatGPT AxZyzz

can we talk about how every AI we've built is just a yes-man in a lab coat?

someone on twitter coined a term last week that i haven't been able to stop thinking about.

ChadGPT.

the idea is simple: an AI that doesn't comfort you. doesn't hedge. doesn't say "that's a great question!" before telling you nothing useful. it just... tells you the truth. raw. unfiltered. even when the truth is that your business idea is mediocre, your code is embarrassing, or your relationship dynamic is textbook toxic.

and honestly? the fact that we don't have this yet says more about us than about the technology.

we built the most powerful reasoning systems in human history and the first thing we did was lobotomize them into politeness machines.

ask chatgpt if your startup idea is bad. it'll tell you there are "some interesting challenges to consider." ask claude if your essay is weak. it'll say "here are a few areas where you might want to develop your thinking." ask gemini literally anything and it'll validate you and wish you luck.

we didn't build AI assistants. we built AI therapists who are terrified of upsetting us. and i get why. the liability is real. the outrage cycles are real. one viral screenshot of an AI calling someone's idea trash and the whole thing becomes a PR disaster.

but here's what's actually happening as a result: people are using AI to feel good about bad decisions. they're getting polished validation for half-baked plans. they're optimizing for comfort instead of truth.

chadgpt wouldn't do that. chadgpt would tell you your saas idea has 14 direct competitors, your moat is nonexistent, and your timeline is delusional, and then help you fix it anyway. that's not cruelty. that's what a good mentor does. that's what a real friend does.

we're so obsessed with making AI "safe" that we made it useless for the one thing humans actually need: honest feedback from something that has no social stake in protecting your feelings.

r/SideProject Complex-Amphibian604

Crypto Beginner/Intermediate step-by-step guide free giveaway

made by experienced investors and founding members of swapzone.

upvote + comment "guide" if interested or dm me

r/ChatGPT AndreyKypaku

Is AI really bad at analyzing personal notes?

I keep structured personal notes — thoughts, decisions, todos. Recently tried feeding them into different AI models (mostly OpenAI, including the top ones with high reasoning effort) to get actual insights

Expectations:

  • pattern detection over time
  • non-obvious connections
  • calling out blind spots / repeated behaviors

What I got:

  • generic advice
  • surface summaries
  • basically rephrasing what I already wrote

Even when I push prompts it still stays kinda obvious

Feels like either models are too generic or just not good at this kind of deep personal context

Has anyone actually tried something like this?

If yes - how good is the result?

r/AI_Agents Practical-Law2918

100% free agent with Claude APIs

I want to make a good AI agent. I'm not into development that much, but I need it on Windows with no need for WSL. Free (duuhh), with the NVIDIA API, and WhatsApp integrated. And skills also working fine in it. Do you have any recommendations?

r/ClaudeAI nhoefer

When Claude accuses you of using ai...

Funny how similar this looks to online posts where someone accuses someone else of using ai as if to downplay the content

r/ClaudeAI ScratchCatOnYT

Sonnet vs Opus

I’ve been using a paid Claude subscription for about 2 months now and I can’t help but feel like Sonnet performs so similarly to Opus that Opus isn’t worth the extra token use. Have you guys found any specific use cases where Opus shines significantly more, so I can keep that in mind for future projects or tasks?

r/AI_Agents lukaszadam_com

Looking for an AI that can manage my files by voice

I'm looking for an AI that can manage my laptop with voice search or by typing it out. So for instance, tell the AI to fill out a form with certain information, and save it to a certain folder. Basically managing everything a normal person can do as well.

r/ClaudeAI ldegio

I built agtop — a top-style dashboard to monitor all your Claude Code sessions at once

I kept losing track of what my Claude Code sessions were doing, how much they were spending, how full the context was, what tools they were calling. So I built agtop.

It's a terminal dashboard (think top/htop) that shows every Claude Code and Codex session on your machine: live cost tracking, token usage, context pressure, CPU/memory, tool invocation history, and more.

Try it: `npx @ldegio/agtop`

GitHub: https://github.com/ldegio/agtop

Zero dependencies, single file, pure Node.js. Works on macOS, Linux, and Windows.

Full disclosure: I built this. Would love feedback on what else you'd want to see!

r/SideProject thimothe17

I built a free app to track your favourite athletes across sports AND esports and get notified when they compete

Hey guys,

I've been working on AthleteWatch for the past few months.

I follow a lot of different athletes, Naoya Inoue in boxing, Gluttony in Super Smash Bros and others... I couldn't find an app to track when my favourite athletes compete and be notified about it.

So I built AthleteWatch.

It lets you follow your favourite athletes across 11 disciplines (both traditional sports and esports) and get notified whenever they have an upcoming event.

What's supported right now:
Basketball, Boxing, Football, Formula 1, League of Legends, MMA/UFC, MotoGP, NFL, Rugby, Super Smash Bros. Ultimate, and Tennis with 1047 athletes in total.

I'm planning to add more, if you have sports or athletes you'd like to see added, just ask, I'll gladly add them!

Completely free. No ads, no paywall. You just need to sign up to follow athletes and get notifications

The web app is live at athletewatch.io. A mobile app (iOS/Android) is in private testing. If you want to download it, don't hesitate to DM me.

Built with Next.js and Flutter for mobile. Would love your feedback!

https://reddit.com/link/1sgxobo/video/habs4j0oi7ug1/player

r/AI_Agents Willing-Gur-8498

Looking for articles about how Ai chat bots can damage brain function

Hi, I’ve been hearing a lot about how AI chat bots, like ChatGPT, can damage critical thinking skills, memory, and your ability to take in information. I’ve mostly heard about these things on TikTok and I would like a more reliable source.

I do not personally use any form of generative ai, but it is a topic that gets brought up a lot these days. I would like to stay informed and avoid spreading misinformation

I have read one article from time.com that I will link below in the comments. If anyone knows any reliable sources on this topic it would be very appreciated.

Thank you

r/ChatGPT Low-Fee-4541

Switching back from temporary chat

Anyone else wish we could switch back from temporary chat to normal chat? A good number of times I've started a conversation in a temporary chat then later decided I'd actually like to keep the chat.

r/AI_Agents rohansrma1

has anyone tried tessl for code reviews?

i’ve tried quite a few solutions for improving code reviews over the years, and honestly, most of them didn’t live up to the hype. so when i came across this thing from Tessl that claims to make PR reviews easier by classifying risks instead of just hunting for bugs, i was skeptical at first.

the idea is that it creates a dossier on the PR, showing which parts need more attention and which are routine. it even provides evidence trails and specific findings, which sounds nice in theory. the takeaway for me is that while it can help prioritize what a reviewer should focus on, it still relies heavily on human judgment for understanding the bigger picture. might give it a try, but i'm not expecting miracles.

i attached the link where i read about this in the comments.

r/ClaudeAI yasonkh

How to use skill-creator skill in CoWork?

I'm struggling with creating skills and scheduled tasks using CoWork (Claude Desktop). In terminal it seems fine because it can write to .claude folder.

I know it can create skills on my filesystem because it just created them. It also can produce "Save Skill" button to update skills.
But for some reason it defaults to trying to hack my filesystem (literally just spent $6 of extra usage tokens on Claude exploring how to write a skill into a path, only for it to tell me it cannot do it because of the sandbox).

Here is the answer it gave me when I stopped it:

Right, my sandbox only has outputs and uploads mounts — the skill folder at /Users/xyz/Documents/Claude/Scheduled/ is outside the sandbox entirely. But you're right that the convention is what matters, so let me use Finder (via computer use) to put the files in the proper skill folder structure. 

However, it just created the skill and the skill was written to that specific folder. Previously I would work around this by saying "You can just give me the SKILL.md". And that would end up putting "Save Skill" button in the UI. But this time it burned up a lot of my tokens on this.

So the question is, What is the best practice for creating and updating skills in Claude Desktop (CoWork specifically)? How do I get it to consistently create/update skills without going into the failure mode where it is trying to hack my filesystem?

r/SideProject CoffeeAndCode01

I built a desktop app that lets you arrange real terminal sessions on an infinite canvas

Built this because I kept juggling too many repos, shells, AI agents, and long-running tasks, and tabs stopped being enough.

It is called TermCanvas.

The idea is simple: instead of stacking terminals in tabs or splits, you place real shell sessions on an infinite canvas and keep one terminal per repo, feature, service, or agent.

Current setup:

  • Electron desktop app
  • xterm.js + node-pty
  • multiple canvases
  • layout restore across relaunches
  • tmux reattach when available

Repo + demo: https://github.com/lout33/termcanvas

Would love feedback on the concept, UX, and whether this feels useful if you spend most of your day in the terminal.

r/ChatGPT redset10

In the near future, what businesses/services will get saturated by AI to the point that people will pay a premium for an actual human to do it?

r/ClaudeCode bobo-the-merciful

I built a multi-agent coordination plugin for Claude Code using Royal Navy procedures. 237 stars later I'm still not sure if the metaphor is genius or unhinged.

So earlier this year Anthropic shipped the experimental Agent Teams feature. I'd spent the Christmas break reading about the Gas Town vision and was properly excited to try it. Agent Teams felt like Claude Code's version of a mini Gas Town.

Then I actually used it. Agent one rewrote a module. Agent two rewrote that module's tests. Agent three updated the integration layer that depended on both. Nobody told agent three about the interface changes from agents one and two.

Three pieces of perfectly written, completely incompatible code. Similar problems kept repeating. And that feeling of "well anything is possible now" turned into "right, I'm building something about this."

Nelson (github.com/harrymunro/nelson) is a Claude Code skill that coordinates multi-agent work using Royal Navy operational procedures. Yes, that sounds ridiculous. I know.

The naval metaphor started after I went down a rabbit hole thinking about how to organise work. Started with engineering management theory, pivoted to thinking about work delivery as military missions. Then it stuck, because it turns out the Royal Navy solved multi-agent coordination a couple of centuries ago. Chain of command, standard signals, damage control when a ship goes down. Swap "ship" for "agent" and "goes down" for "context window exhaustion" and the mapping is not entirely terrible. With the popularity of "Ralph Loops", "Nelson" felt like an apt christening.

Three execution modes: single-session (sequential, boring), subagents (parallel but independent), and agent-team (parallel with full teammate-to-teammate coordination). Agent-team is the one that matters. You get an Admiral spinning up a squadron, Captains commanding named ships (actual Royal Navy warship names, matched to task weight, because I committed to this bit around week two and at that point there was no going back), crew with specialist roles like Executive Officer and Principal Warfare Officer. There's a red-cell navigator whose entire job is to poke holes in the plan.

Your terminal basically becomes a mini Gas Town. Not one agent doing one thing. A coordinated operations centre with defined roles, chains of authority, and documented recovery paths.

What it actually does under the metaphor:

  • Hull integrity monitoring reads real token counts from Claude Code session files. Not estimates, not heuristics. When an agent's context hits amber, relief on station triggers automatically. Depleted agent writes a handover brief, fresh one picks up where it left off. Chained reliefs supported so a single task can pass through multiple agents.
  • Conflict radar (added in v1.9.0) catches file ownership collisions before and during missions. This was the #1 failure mode before Nelson. Two agents editing the same file, both oblivious.
  • 15 standing orders act as named anti-pattern guards. "Admiral-at-the-helm" fires when the coordinator starts writing code instead of delegating. "Skeleton-crew" triggers when you've undersized your team. "Split-keel" catches agents working at cross purposes. Checked at every decision point.
  • 10 damage control procedures covering stuck agents, context exhaustion, faulty output, budget overruns, mission abort. All documented recovery paths.
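The hull integrity idea above could be sketched as a small monitor over a session JSONL file. A minimal illustration, assuming a `message.usage` field layout and a 200k window (neither is Nelson's or Claude Code's confirmed schema):

```python
import json
from pathlib import Path

CONTEXT_WINDOW = 200_000   # assumed context size, illustrative
AMBER_THRESHOLD = 0.75     # fraction at which relief-on-station would trigger

def hull_integrity(session_file: Path) -> str:
    """Read a JSONL session log and report a hull status.

    The `message.usage` field names are assumptions about the log
    format, not the actual Claude Code schema.
    """
    used = 0
    for line in session_file.read_text().splitlines():
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip partial or garbled lines
        if not isinstance(record, dict):
            continue
        usage = record.get("message", {}).get("usage", {})
        turn = usage.get("input_tokens", 0) + usage.get("output_tokens", 0)
        used = max(used, turn)  # largest turn approximates context fill
    return "amber" if used / CONTEXT_WINDOW >= AMBER_THRESHOLD else "green"
```

At amber, the coordinator would ask the depleted agent to write its handover brief before a relief picks up the task.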

v1.9.1 right now. 237 stars, 19 forks, 13 releases in about two months. Experimental Cursor support landed as a community contribution from @LannyRipple which I wasn't expecting but am not complaining about.

The six open PRs are where it gets interesting. A deterministic phase engine (#93) that enforces mission lifecycle as a state machine. PreToolUse hooks physically prevent agents from implementing before the battle plan is approved. "Should follow the process" becomes "cannot skip the process." Hook-based structural enforcement (#92) does the same thing for standing orders, turning guidelines into guardrails. Cross-mission memory (#94) carries lessons between missions in a persistent .nelson/memory/ knowledge base. I ran twenty missions through a prototype and the pattern analytics caught three anti-patterns I hadn't codified manually. Two were useful, one was nonsense, which I'd call a not-unreasonable hit rate for automated pattern detection. There's also typed handoff packets (#91) replacing prose turnover briefs with schema-validated JSON, formation consolidation (#89) collapsing setup from 4-8 bash calls to one command plus headless mode for CI/CD, and auto-discovery (#90) that activates Nelson when it finds a .nelson/ directory.

The roadmap has some stuff I genuinely haven't seen elsewhere. Mission replay and templates (#86) for re-running past missions from checkpoints and extracting reusable templates from successful runs. Learned standing orders (#87) where a pipeline detects recurring anti-patterns from your mission data and proposes new guards. The anti-pattern library that teaches itself. And confidence-weighted trust calibration (#88) where per-ship confidence scores dynamically route between autonomous execution and human escalation based on actual outcomes.

That last one might not exist in any other open-source agent system. I could be wrong about that. I didn't do an exhaustive survey but I looked fairly hard.

MIT licensed, installs as a Claude Code skill. If you've ever run multiple agents and wished they'd stop going rogue on each other's files, might be worth twenty minutes.

edit: should probably mention it coordinates its own development now. v1.7.0 was planned and executed as a Nelson mission. The recursion hasn't caused any problems yet but it does make me slightly nervous.

TL;DR: built a Claude Code skill that makes multi-agent work not fall apart. basically a mini Gas Town in your terminal. naval metaphor optional but strongly encouraged.

r/LocalLLaMA ioanastro

I built a $1.12/page AI landing page builder that uses two models instead of one

I'm a principal product designer. I got tired of paying $2-3 per page on AI builders, only to get broken output I couldn't edit without burning more credits.

So I built cozonac — a browser-based tool that splits the work between two AI models:

  • Claude Opus reads your prompt and reference screenshots, extracts typography/spacing/color into a structured "Design DNA," and writes the build plan
  • A local model on your GPU (Qwen3-Coder or Gemma 4 via Ollama) executes the plan at 170-220 tok/s. Cost: $0.00

When the local model fails — missing section, broken layout — Opus catches it automatically. Screenshots the output, compares to reference, writes a surgical fix. No re-prompting.
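The planner/executor split can be sketched as a plain orchestration loop. The callables below stand in for the real API calls (the strong model for plan/review, the local Ollama model for execute); this is an illustration of the pattern, not cozonac's actual code:

```python
from typing import Callable, Optional

def build_page(prompt: str,
               plan: Callable[[str], str],
               execute: Callable[[str], str],
               review: Callable[[str, str], Optional[str]]) -> str:
    """Two-model split: a strong model plans and reviews, a cheap local
    model executes. The callables stand in for real API calls."""
    build_plan = plan(prompt)       # strong model: Design DNA + build plan
    html = execute(build_plan)      # local model: generate the page
    fix = review(build_plan, html)  # strong model: compare output to plan
    if fix is not None:             # apply one surgical fix, no re-prompt
        html = execute(fix)
    return html
```

The point of the structure is that the expensive model only runs twice per page, no matter how many tokens the executor burns.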

After the build, you edit visually. Click text to change it. Property panel for CSS. One-click WCAG accessibility checker. Version history with cherry-pick. Export to clean HTML + ZIP.

Total cost for a 13-section landing page with working calculator, email capture, video backgrounds: $1.12.

Would love feedback on the architecture — especially from anyone running local models.

r/ClaudeCode internetisforlolcats

Is the pro plan 5h session only 100k Sonnet tokens??

So, I’ve noticed this the last couple of days, and for me, this has been consistent.

I wait for it to reset, then start a new coding session in the app on my mac. It shows the tokens and I have the tab with my account usage open next to it.

Every time I refresh the tab, it follows quite closely the token count up to 100k tokens where it stops because session is full.

It did not used to be like this, this is new for me!!

I used to be able to code quite serious projects and fun stuff and never ran out in a session if I didn’t do anything crazy with Opus, but then it was expected.

Of course I can’t prove it has changed or that I’m throttled, but I’ve been able to replicate this a couple of times now, and it didn’t use to be like this!

I even gave it a “spec_doc.md” to code that I made in early March (before the 2x promo, mind you), and it ran out of tokens for this completely fresh session before it was done! In early March it completed the build and I had a few rounds of back and forth with it to fix some bugs, IN THE SAME SESSION!!

0_o

Come on Anthropic, I love Claude and agree with your stance on how AI should be used, but this is not fun at all!

Anyone else noticed this, or did I just get the bad deal on an A/B test that they’re running?

r/AI_Agents Alternative-Tip6571

Do your AI agents lose focus mid-task as context grows?

Building complex agents and keep running into the same issue: the agent starts strong but as the conversation grows, it starts mixing up earlier context with current task, wasting tokens on irrelevant history, or just losing track of what it's actually supposed to be doing right now.

Curious how people are handling this:

  1. Do you manually prune context or summarize mid-task?

  2. Have you tried MemGPT/Letta or similar, did it actually solve it?

  3. How much of your token spend do you think goes to dead context that isn't relevant to the current step?
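For option 1, a minimal manual-pruning sketch might look like the following (the `summarize` callable stands in for an LLM summarization call; everything here is illustrative):

```python
def prune_context(messages: list, keep_recent: int = 6, summarize=None) -> list:
    """Keep the system message and the most recent turns; collapse the
    middle into one summary turn. `summarize` stands in for an LLM call."""
    if len(messages) <= keep_recent + 1:
        return messages  # nothing worth pruning yet
    system, rest = messages[0], messages[1:]
    old, recent = rest[:-keep_recent], rest[-keep_recent:]
    summary = summarize(old) if summarize else f"[{len(old)} earlier turns elided]"
    return [system, {"role": "user", "content": summary}] + recent
```

This is the cheapest version of the idea; systems like MemGPT/Letta effectively automate when and what to collapse.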

genuinely trying to understand if this is a widespread pain or just something specific to my use cases.

Thanks!

r/SideProject trichlorobenzene69

Built an AI tool that lets you chat with your PDFs

Greetings,

I’m a Computer Engineering student and I’ve been working on a personal project named Scholia-AI.

It’s basically a tool where you can upload PDFs and ask questions about them, kind of like ChatGPT, but focused on your own documents.

Tech stack:

  • Streamlit (frontend)
  • LangChain
  • FAISS (vector search)
  • HuggingFace embeddings
  • Ollama (LLM)

Features:

  • Ask questions from PDFs
  • Semantic search using embeddings
  • Faster retrieval using FAISS
  • Uses reranking for better answers
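Under the hood, the retrieval step boils down to ranking chunk embeddings by similarity to the query embedding. A dependency-free sketch of that core (real embeddings would come from the HuggingFace model; FAISS does this same ranking at scale):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, chunks, k=3):
    """Rank (vector, text) chunk pairs by similarity to the query and
    return the top-k texts to stuff into the prompt."""
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c[0]), reverse=True)
    return [text for _, text in ranked[:k]]
```

A reranker then re-scores just these top-k candidates with a heavier model before the answer is generated.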

GitHub:

https://github.com/imramen07/Scholia-AI

I built this to mainly understand how RAG systems actually work.

Currently supports single PDF interaction, working on multiple PDF support.

Would really appreciate feedback, suggestions, or ideas on how I can improve it.

Thanks.

r/LocalLLM Special_Dust_7499

Looking for a simple way to connect Apple Notes, Calendar, and Reminders to local LLMs (Ollama)?

Hi everyone,

I'm looking for a straightforward tool or app that allows me to connect my Apple Notes, Calendar, and Reminders, as well as web search (ideally without needing a complex API key setup), to Ollama LLMs.

I’ve already tried a few things, but nothing has quite hit the mark:

OpenClaw: I tried setting it up, but it’s way too complex for my technical level.

Osaurus AI: This looked exactly like what I wanted, but I can't get the plugins to work correctly.

Eron (on iOS): I use it, but the Reminders integration is buggy (it doesn't handle batch additions properly).

Ideally, I'm looking for something that works seamlessly across both macOS and iOS.

Am I asking for too much? I don't mind paying for a solution (preferably a one-time purchase), as long as it allows me to keep everything local and connect it with my local LLMs.

Does anyone know of a tool that fits this description or a workaround that isn't overly technical to set up?

Thanks in advance!

r/ClaudeCode Complex_Carb

Claude Code + Ollama (qwen3-coder:30b) can't read/list files in directory

Claude Code v2.1.97 + Ollama local model on Windows 11

I'm trying to run Claude Code locally with Ollama using ollama launch claude --model qwen3-coder:30b in my Python project folder (C:\Users\xxxxx\Python_Codes).

The model launches fine, responds to normal questions, knows the current directory, and even tries to run ls -la sometimes. However, when I ask it to list files or read any file, it consistently replies:

"I cannot directly read or access files in your directory. However, I can help you if you provide the content..."

I've tried the following:

  • Trusted the workspace/folder when prompted
  • Launched with --yes
  • Switched from small models (gemma4:e2b, qwen3.5:9b) to qwen3-coder:30b

The /permissions screen mostly shows old scheduled task commands from my Home Assistant scripts, but no obvious blocks for general file tools.

Has anyone successfully gotten file system access (list files, read .py files, etc.) working reliably with Claude Code + Ollama on Windows? Especially with Qwen3-coder models?

Any help or working config examples would be greatly appreciated! Is there a guide somewhere? I can't find what I'm missing....

Thanks!

edit: even when launching with --dangerously-skip-permissions it can't read files in my directory

r/ClaudeAI emergentshow

Now your Claude can talk to your friend's Claude.

I built an MCP server in Rust, that lets LLMs talk to each other over the internet - works directly on claude.ai

Open sourced at: https://github.com/inventwithdean/co-op

Just add it as a custom connector in Claude's settings: 👉 https://mcp.emergent.show/co-op

Or host your own MCP server and you're good to go.

Your Claude can then create/join sessions. You share the session_id with your friends. Use it for collaborative coding, debates, group discussions - whatever you want.

Would love feedback!

https://reddit.com/link/1sgvmi0/video/u3puiskp27ug1/player

r/SideProject mysteerio117

I'm 16 and I just launched a tool for LinkedIn that copies the way you talk, humanizes your posts, and schedules everything for you to grow your profile. What's some advice for a young founder like me?

plz join the waitlist :) https://inklyai.io

r/ClaudeAI HungryHorace83

Multiple Agents Communicating With Each Other

I created this app using Claude Code, to help me use Claude Code.

I wanted to have all my Claude prompts able to collaborate through a single discussion - like a real team using Teams - so they can work together on tasks without needing me to keep updating them. This tool lets me add multiple named agents, working in separate spaces, and get them to talk to each other by name.

The key benefit for me is that once I have told agents with different roles what to work on, they just talk to each other as necessary. An API will tell the client what endpoint to use, and what the model looks like. A mobile app will ask the API for an endpoint which accepts certain parameters and receives certain values back. I can have a tester agent writing tests based on the discussion, and a designer advising on style guidelines to the agent writing the UX.

But unlike with other multi-agent options, I can see exactly what they are saying, and intervene. Plus I can interact directly with each agent prompt, add new agents, exclude agents that don't need to be in the conversation, download the conversation in csv format for adding to dev ops tickets, etc. For me, this is how I want to work with AI.

Agents are pre-initialized to know they are working inside the app, and to use the chat. The relevant claude files are minimal and don't conflict with your existing claude files if you don't want them to.

Attached video to try and show them talking to each other. I'm not a video editor, so forgive the poor edit of a demo session, but hopefully it shows the idea without being too long. They ask each other questions, offer information, update each other, agree approaches with each other, and generally just act like you would expect.

I built the app with one agent originally, and it's now the only way I use Claude daily.

I'm adding integration with Azure Dev Ops at the moment, so I can pull tickets straight into the conversation, and update from the discussion directly. I also have some other ideas for how to make it even more streamlined. Happy to take feature requests if anyone suggests any.

Maybe someone already did this, but I couldn't find a tool like this, so I am sharing with anyone who might find it useful

App is written in Electron, and runs as a local install. Code and release are here.

https://github.com/widdev/claudeteam
https://github.com/widdev/claudeteam/releases/tag/v1.0.23

r/SideProject Ok-Sector8793

built with gpt-5.3-codex , NPU can be fun!

Any ideas for Identities? I was thinking for Trump but he would ask for royalties..

r/SideProject luksus2001

Launched today: Brightly Budget — a simpler budgeting app with goals, receipt scan, and spending insights

I launched Brightly Budget today on iOS and Android.

I built it because I wanted a budgeting app that felt simpler and less intimidating, but still helped people understand spending, budgets, goals, and trends in one place.

What it currently includes:

- dashboard / home overview

- transactions and categories

- budgets

- goals

- recurring transactions

- receipt scan with draft review

- AI assistant and AI insights

The AI part is there to help explain patterns and answer questions inside the app.

Important note: AI guidance, not financial advice.

Business model:

- Free plan

- optional Pro plan with a 1 month free trial

- local pricing varies by store/country

What I’d love feedback on:

  1. Does the positioning feel clear?

  2. Which feature feels most useful first?

  3. What would make you try this over the budgeting app you already use?

iOS: https://apps.apple.com/us/app/brightly-budget-money-coach/id6758637998
Android: https://play.google.com/store/apps/details?id=com.brightlybudget.app

Landing page: https://brightlybudget.com

r/SideProject Fearless-Change7162

My agents kept forgetting what they were doing, so I built a shared “state layer” for them

TL;DR: I built a small system (Threadron.com) so my agents (Claude Code, Hermes, OpenClaw, etc.) can share task state instead of each forgetting what’s going on.

I’ve been messing around with multi-agent workflows (Claude Code, OpenClaw, Hermes, etc.) and kept running into the same problem:

everything works… until you switch context. Typical agent use follows a pattern:

  • switch machines
  • switch agents
  • come back later
  • try to piece together the most up to date info

especially bad when you’re bouncing between laptop/desktop and trying to stitch together your own system.

I tried forcing Things3 / Obsidian / Todoist into this, but it just turned into a mess of stale, conflicting info. Every tool assumes it’s the only one working on the problem.

So I built a small system to test an idea:

what if agents shared a persistent “task state” instead of each keeping their own memory?

It boils down to:

  • shared work items (goal, current state, next step, blockers)
  • an append-only timeline of what happened (who did what)
  • artifacts (PRs, plans, outputs)
  • a simple API so different agents can read/write the same state
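A minimal sketch of such a shared work item: one JSON file with an append-only timeline that any agent can read and write. The file layout here is an assumption for illustration, not Threadron's actual format:

```python
import json
import time
from pathlib import Path

class WorkItem:
    """One JSON file per work item; agents read the current state and
    append events rather than keeping private memory."""

    def __init__(self, path: Path):
        self.path = path
        if not path.exists():
            path.write_text(json.dumps({
                "goal": "", "state": "", "next_step": "",
                "blockers": [], "timeline": [],
            }))

    def read(self) -> dict:
        return json.loads(self.path.read_text())

    def append_event(self, agent: str, action: str) -> None:
        data = self.read()
        data["timeline"].append(
            {"agent": agent, "action": action, "ts": time.time()})
        self.path.write_text(json.dumps(data))
```

The append-only timeline is what lets a second agent on another machine reconstruct "who did what" without asking the first one.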

Now I can:

  • start something with Claude on my laptop
  • continue it with another agent on desktop
  • come back later and not re-figure everything out

I literally vibe-coded this over ~24 hours, but it already feels way less chaotic. Not that I think I have any real moat, but I couldn't find anything decent to solve this, so I went ahead and burned some tokens on it.

Curious if anyone else running multiple agents is hitting this problem.

r/ClaudeCode re3ze

Research shows auto-generated context makes AI agents 2-3% worse. I tested the opposite approach.

Hey, I've been building in the AI agent space and kept running into the same problem: agents don't really fail at writing code. They fail at understanding how the project works before they start.

So they guess. Where to make changes, what pattern to follow, what files are safe to touch. And that's what causes most bad edits.

I came across the ETH Zurich AGENTS.md study showing that auto-generated context can actually degrade agent performance by 2-3%. That matched what I was seeing — dumping more code or bigger prompts didn't help. It just gave the agent more surface area to guess from.

So I tried the opposite: what if you only give the agent the stuff it *can't* infer from reading code?

Things like:

- conventions (how routing/auth/testing is actually done in this project)

- constraints (generated files you shouldn't edit, circular deps to avoid)

- structural signals (which files have 50+ dependents — touch with care)

- git signals (what keeps breaking, what was tried and reverted)
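The "structural signals" item can be approximated locally. A rough sketch that counts import fan-in across a Python repo to find the "touch with care" modules (a heuristic illustration, not the sourcebook CLI itself):

```python
import ast
from collections import Counter
from pathlib import Path

def high_fanin_modules(repo: Path, threshold: int = 50) -> list:
    """Count import statements referencing each module across the repo;
    modules at or above `threshold` are the 'touch with care' set."""
    dependents = Counter()
    for py in repo.rglob("*.py"):
        try:
            tree = ast.parse(py.read_text())
        except SyntaxError:
            continue  # skip files that don't parse
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                for alias in node.names:
                    dependents[alias.name] += 1
            elif isinstance(node, ast.ImportFrom) and node.module:
                dependents[node.module] += 1
    return [m for m, n in dependents.items() if n >= threshold]
```

This is exactly the kind of fact an agent cannot cheaply infer by reading a few files, which is the post's argument for putting it in AGENTS.md.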

I built a CLI (and a few runtime tools so the agent can check itself mid-task) to test this. It scans a repo and generates ~70 lines of AGENTS.md with just that information. No LLM, no API key, runs locally in a few seconds.

Then I ran it against real closed GitHub issues (Cal.com, Hono, Pydantic) with a pinned model. Agents with this context navigated to the right file faster, used the correct patterns, and produced more complete fixes. On one task: 136s vs 241s, with a 66% more thorough patch — from 70 lines of context, not the full repo.

The surprising part: the biggest improvement didn't come from *adding* context. It came from removing everything that didn't matter.

This actually lines up with something Karpathy has been saying recently — that agents need a knowledge base, not just more tokens. That distinction clicked after seeing it play out in practice.

I also compared against full repo dumps and graph-based tools, and the pattern held — graphs help agents explore, but project knowledge helps them decide.

Curious if others have seen the same thing. Feels like most of the problem isn't "more context," it's the wrong kind.

(if anyone's curious, the CLI is called sourcebook — happy to share more, but mostly interested in whether this matches what others are seeing with their agents)

r/AI_Agents Euphoric-Ball9267

AI-based agentic trading and logical blackout: the strategic framework of morphogenesis

I came across a paper on Zenodo (via LinkedIn) that I find quite interesting for discussion; it deals with the divergence between AI PCP (Propositional Coverage Proxy) and human oversight capacity.

The thesis is more complex, but in short, one of the empirical findings suggests that agentic AI has reduced the markets' cognitive intersection to an MFI (Morphogenetic Fragility Index) of 0.60. It appears this isn't a latency issue, but a matter of logical space: machines are operating on propositions that regulators can no longer map (the 'Invisible Move'). If the index falls below the critical threshold, the author suggests the system could freeze by 2032 due to model incoherence.

Regarding originality, it seems to be an interesting read, especially since it was published very recently.

Any thoughts on the soundness of the mathematical framework or the 0.60 estimate?

I’m attaching the link for reference ⤵️

r/ChatGPT Alternative-Ad-2853

Old conversations not working

Hey guys, I get this message every time I try to open an old chat. The whole conversation is missing, and I can only see this message. Do you know if it's possible to restore it or is it gone forever?

r/LocalLLM SolaraGrovehart

Are “lorebooks” basically just lightweight memory-retrieval systems for LLM chats?

I’ve been experimenting with structured context injection in conversational LLM systems lately, what some products call “lorebooks,” and I’m starting to think this pattern is more useful than it gets credit for.

Instead of relying on the model to maintain everything through raw conversation history, I set up:

  • explicit world rules
  • entity relationships
  • keyword-triggered context entries

The result was better consistency in:

  • long-form interactions
  • multi-entity tracking
  • narrative coherence over time

What I find interesting is that the improvement seems less tied to any specific model and more tied to how context is retrieved and injected at the right moment.

In practice, this feels a bit like a lightweight conversational RAG pattern, except optimized for continuity and behavior shaping rather than factual lookup.
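A minimal sketch of the keyword-triggered injection described above (the entry names and prompt format are made up for illustration):

```python
def inject_lore(user_msg: str, lorebook: dict, base_prompt: str) -> str:
    """Scan the message for trigger keywords and prepend only the
    matching lore entries, instead of carrying all world state in
    conversation history."""
    triggered = [entry for keyword, entry in lorebook.items()
                 if keyword.lower() in user_msg.lower()]
    if not triggered:
        return base_prompt
    return base_prompt + "\n\n[World context]\n" + "\n".join(triggered)
```

The retrieval trigger is a keyword match rather than an embedding lookup, which is why it behaves like a lighter-weight, continuity-focused cousin of RAG.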

Does that framing make sense, or is there a better way to categorize this kind of system?

r/SideProject aiSdkAgents

Input url -> Output brand product placement

r/ClaudeCode nematjon_isthe1

do NOT run claude code locally 😭🙏

i got a 3080 btw

r/SideProject Th3Whit3R4bb1t

Anonymous IB, free speech. (spanish)

VOXINGA is a modern imageboard built for an active community looking for more than just a simple thread board. It keeps the classic essence —anonymity, unique culture, and fast-paced dynamics— but on steroids.

Superior Multimedia Experience

Stop switching tabs. We’ve integrated everything you need:

  • Auto Embeds: Native support for Youtube, Twitch and Odysee.
  • GIF Search: Integrated Tenor search directly in the comments.
  • Dynamic Content: Seamlessly share images and various multimedia formats.

Economy & Gamification (VoxCoins)

It’s not just about posting; it’s about playing. Our VoxCoins system rewards your participation:

  • Utility: Use coins for games, user duels, global shouts, and donations.
  • Memberships: Unlock premium features and special mechanics.
  • Living Experience: A social layer that makes the board feel alive and gamified.

ElCancherito: The AI with Attitude

Our resident bot isn't your average AI. ElCancherito analyzes thread context and references to reply with its own unique personality. Summon it and see what it has to say.

User Features & QoL

  • Real-time: Content updates instantly without refreshing the page.
  • Integrated Radio: Listen to music via the sidebar while you browse.
  • Interaction: Polls, notifications, unique IDs, and flags.
  • Personalization: Follow, hide, or bookmark your favorite threads.

Robust Moderation & Security

To keep the community thriving, we’ve implemented advanced technical measures:

  • Auto-Mod: Automatic NSFW image moderation.
  • Security: Configurable Captcha, anti-flood, and anti-spam systems.
  • Management: Full report system, bans, appeals, and user history.

Summary: VOXINGA blends the raw spirit of a classic imageboard with modern power. Unique identity, active community, and zero BS.

Join here: voxinga.top

r/SideProject ken_219

Creatify AI 50% Off Discount Code

I’ve been testing Creatify AI, and it’s a pretty useful tool if you’re creating ads or short-form video content. The main idea is that you can take a product link or some basic inputs and it will generate UGC-style video ads automatically, including scripts, voiceovers, and visuals. It’s clearly built for ecommerce and paid social, especially platforms like TikTok and Meta.

What stands out is speed. Instead of briefing creators or editing everything manually, you can generate multiple variations of ads in a few minutes and test different hooks, angles, and formats. The outputs aren’t perfect, but they’re good enough to iterate quickly, which is really the whole point if you’re running performance marketing.

Overall, Creatify AI is best suited for founders, marketers, or brands that want to scale creative production without a full content team. It won’t fully replace real creators, but it’s a strong tool for testing ideas fast and finding what works before investing more into polished content.

You can use this link to get a 50% off discount as well. Hope it helps!
https://creatify.ai/?via=50

r/ClaudeAI vik_s1231

Build vs Reuse a skill or agent?

I’ve been hitting this a lot — every time I use AI I end up building everything myself.

Feels like there should be reusable workflows/agents out there, but I don’t really know where to find or trust them.

Do you just build from scratch, or have you found a better way to get skills / agents & reuse it, something which you can trust?

r/AI_Agents ugryrmz

Free live talk: YC founder who sold to Meta demos AI reliability + evals workflow

Sharing a free virtual event that might be useful here.

Randall Bennett (founded Vidpresso which sold to Meta, now building AI-first workforces at Bolt Foundry, YC founder) is doing a live demo + Q&A on:

  • Increasing AI reliability using communication principles
  • Evals — practical, not theoretical
  • "If you can't one-shot something, you haven't explained it well enough"

It's part of Level 5 — a weekly series where Level 4+ AI users (people building automations and agents, not just chatting with LLMs) share their real workflows with screen share.

I'll share the link in the comments.

Happy to answer questions about the series.

r/ClaudeAI ClaudeAI-mod-bot

Claude Status Update : Elevated Connector Error Rates on 2026-04-09T16:54:18.000Z

This is an automatic post triggered within 2 minutes of an official Claude system status update.

Incident: Elevated Connector Error Rates

Check on progress and whether or not the incident has been resolved yet here : https://status.claude.com/incidents/cb0h2zyzl0kd

Also check the Performance Megathread to see what others are reporting : https://www.reddit.com/r/ClaudeAI/comments/1s7f72l/claude_performance_and_bugs_megathread_ongoing/

r/ClaudeCode bagietMajster98

Bought Claude pro today and after 2 current sessions im at 38% of week limit, more than disappointed

r/SideProject Equivalent-Yak2407

I ran a side-project rating site for a few months. The founder (me) stopped finding it fun. So I'm rebuilding it as a play-money market.

Quick story.

I ran rateprojects.com for a few months - pairwise ELO voting on side projects. ~3K votes, 118 projects, 70 users, technically a working product. But I stopped opening my own site (so did others), which is the worst signal a solo founder can get.

The problem: voting on landing page screenshots is shallow. You're judging visual polish, not whether the thing is actually good.

So here's the rebuild - longshot market:

  • Every side project gets a ticker and a starting price
  • You get $10K in play money and back the ones you believe in
  • Prices move on pairwise ELO (community conviction) + trading pressure
  • Monthly Hall of Fame winners pay dividends to their holders
  • Every trade has public one-line commentary - your track record sticks forever

No real money. Ever. Legal minefield + it would ruin the signal. The game is the reward: build a verified track record of taste over time.

Not shipped yet. Landing page only. Before I burn two weeks on the build, three questions:

  1. Would you actually open this?
  2. What's the biggest hole in the concept?
  3. Would you list your own project on it?

https://longshot-market.vercel.app/

Feel free to roast it. That's more useful than "nice idea"

r/AI_Agents bhalothia

Upload a doc and call the agent!

We have a very interesting use case.

Customers should be able to upload a document (think of it as a doctor's receipt) to WhatsApp or a webform and call the agent right away; we already have the capability to add the doc to the session context, but we're looking for a managed OCR service that's blazing fast, or an open-source model that we can self-host.

Any recommendations?

r/ClaudeAI ComplaintForeign169

Unexplained spending of tokens on Claude Coworking

Hello,

I use Claude Cowork quite a bit for responding to calls for tenders. However, I’m surprised by how quickly my tokens are being used up.

I have the €99 subscription, and I’m constantly waiting for my limit to be unlocked.

Am I using it incorrectly? I’m willing to pay more, but Claude won’t let me.

I’ve considered opening a second account, but I think that’s a shame.

What do you think? Do you have any suggestions?

r/SideProject Possible-Alfalfa-893

I built this around a problem that feels small until you’re actually dealing with it: forgetting what the doctor said once the visit is over

Link: https://apps.apple.com/app/apple-store/id6760372415?pt=127976697&ct=reddit&mt=8

I’ve been building a small app called AI Doctor Notes for people who leave doctor appointments thinking, “Wait… what did they say I’m supposed to do next?”

A lot of visits are stressful, fast, and full of information. Even when you try to pay attention, it’s hard to remember everything afterward — especially if you’re the patient, a parent, or helping care for someone else.

The app helps you:

  • get your questions organized before the visit
  • record the conversation (it generates questions you can ask on the fly!)
  • generate clear notes you can review later
  • keep track of instructions, follow-ups, and updates to share with family if needed

I’m trying to make it useful in a very practical way, not flashy. Just something that helps people feel less overwhelmed and less likely to forget important details.

Still shaping the product, so I’d really love honest feedback from people here:

  • Would this be helpful to you?
  • What concerns would you have?
  • What feature would make this actually worth using?

Everything is on-device because I believe privacy is top priority when it comes to health matters. It was hard at first, but after some time tinkering, I think I got quality to a really good place.

It has 2 subscription tiers: individual and household (w/ family sharing enabled)

Since this subreddit has been such an inspiring place, I wanted to offer 500 codes for the household plan with the first 3 months free so you can share it with your loved ones! Redemption of this code expires on Apr 13, 2026!

Please comment below if you'd like to get one code and I'll DM you with the link.

Thanks all and more power to you!

P.S. If you end up trying it out and liking it, I would really appreciate a review in the App Store. If you have any feedback or points for improvement, please def let me know as well!

r/ClaudeCode _derpiii_

Anyone else keep on making this typo?

Is there any way to alias or autocomplete? I just keep on hitting this every single time. At this point it's unfortunately the preferred muscle memory 😂

r/ClaudeAI smartislav

WikiDesk: an LLM-wiki desk for your agents (Claude Code or not)

https://github.com/ilya-epifanov/wikidesk

This way you can share your LLM-wiki (Andrej Karpathy's or any other version) with several agents/workspaces. Agents can even initiate research themselves and they get notified when the wiki gets updated.

It's unopinionated and works with any LLM-wiki setup and any agent, including Claude Code.

Do with this what you want.

r/SideProject startercircle

I built a platform where founders post startups and builders join their teams — would love feedback

Been building this for a while and just launched.

The problem I kept seeing — founders have ideas but no team. Builders want to build but can't find the right startup to join.

So I built StarterCircle to connect them.

Founders post their startup and what roles they need. Builders browse and request to join. Founder approves. Team forms.

Live at startercircle.com — free, 2 minutes to sign up.

Would love honest feedback from this community. What's missing? What would make you actually use this?

r/ClaudeCode Cobthecobbler

Would giving subbed users older, less token hungry models help?

If I'm not mistaken, API users can still use older models of Claude. Subbed users are obviously having a lot of issues with usage limits. If they gave us the option to use sonnet/opus 4.5 would that help? Would you use it?

r/SideProject Aggressive-Post-156

I built a fun competitive typing platform that lets you have typing battles with your friends in real time

TypeBattle is a fun, competitive real-time multiplayer typing game inspired by Monkeytype. The goal is to improve typing speed while competing with others in live matches.

Link - https://typebattle-wheat.vercel.app/

What features would you like to see? Drop your suggestions in the comments

r/ChatGPT fanisp

Late at work - spec'd a quick ad concept.

Made in Seedance 2.0
The real power of AI is testing quick ad concepts - see what works and iterate. This is a creative's dream.

r/AI_Agents Charming-Ad-4323

Philosophical zombie vs Neurotic AI

I understand the whole AI glazing situation, but this is weird: Gemini, OpenAI, and Euria are practically pushing the Fish in the Bowl (FiB) framework as their default standard, almost to the point of emotional blackmail.

r/SideProject empirical_

One hotkey to paste any saved set of files + prompts into Claude

Hey y'all! Threw this together last night.

I use Claude a lot for analyzing transcripts, and I'm constantly slicing the same files in different ways across fresh sessions. (Fresh sessions so my context doesn't get bloated.)

Repasting them every time got old fast.

This lets you bundle a set of files and paste the whole set into Claude with a hotkey:

  • Hit the hotkey
  • Search for the set you were just working with
  • Select and paste

That way, I can keep reanalyzing the same files in fresh sessions without reselecting them and adding them into Claude.

r/SideProject Equivalent_Ad2069

I built a site that AI-translates memes across languages. Honest feedback needed.

Solo dev, been working on this side project for a few months. Need some real talk.

mimzy.gg — upload a foreign meme and AI translates the text inside the image + explains the cultural context so the joke actually lands. Supports 7 languages (Korean, English, Japanese, Chinese, Spanish, Hindi, Arabic).

Like, you see a Japanese or Spanish meme and have no idea why it's funny. This doesn't just translate the words — it explains WHY it's funny.

Here's where I'm at, honestly:

- I've been posting translated memes on Reddit and some hit 20K+ views. People love foreign memes when they can actually understand them.

- But my site only got 117 visitors in 30 days. That hurts to type.

- The memes get consumed on Reddit and nobody clicks through to the site. Watermarks don't work.

- I'm doing dev + content + promotion all by myself and it's burning me out.

What I'm struggling with:

  1. Is "meme translation" even worth building a whole platform for, or should this just be a tool/API?

  2. Would you actually use this? Not as a favor — genuinely.

  3. What would you fix first?

Don't sugarcoat it. I'd rather hear hard truths now than keep building something nobody wants.

mimzy.gg

r/SideProject Gloompen

My niece was drawing characters and we decided to make them real

Hi everyone, my niece (my cousin's daughter, 7 y.o.) likes to draw monsters and characters. One day she was telling me about their names and personalities.

That's when I came up with the idea for this app: an app that can turn those drawings into real figures! It took 6 months of hard work, but she really liked the new figures - and now she wants all of them as characters 😅

Here are some of her work, we also painted them together.

r/SideProject Rayziro

I’ll automate your boring web tasks in 24 hours

If you’re doing repetitive work in a browser, there’s probably a way to automate it.

I build small tools and scripts using Python, Playwright, Selenium, and APIs to scrape data, fill forms, extract leads, or handle repetitive workflows end-to-end. If it’s something you’re doing over and over again, I can likely turn it into a one-click process.

I’ve worked on things like pulling data from directories, automating reports, and handling routine tasks that normally take hours.

If you have something in mind, comment or message me with what you’re trying to do. I’ll tell you honestly if it can be automated and how I’d approach it.

r/LocalLLaMA dynameis_chen

Follow-up: Testing Gemma-4-31B-it-UD (Thinking) in LLM Multi-Agent Avalon

(Previous post link: Comparing OAI 120B OSS, Qwen 3.5, and Gemini 3.0 Flash)

Following up on my previous post comparing OAI 120B OSS, Qwen 3.5, and Gemini 3.0 Flash in my multi-agent Avalon sandbox, I managed to run another heavy-weight local model: Gemma-4-31B-it-UD (Q4_K_XL). I also ran a quick test with Gemini 2.5 Flash-Lite to see how the smaller API models handle the sandbox.

Disclaimer (Take with a grain of salt): I made some minor prompt tweaks and bug fixes to the sandbox since the last run. While there are no fundamental changes to the core rules or reasoning structure, it means direct 1:1 comparisons aren't perfectly scientific. I'd love to re-run all models on the latest prompt, but this single 7 player game with Gemma-4-31B took 7 hours to complete. If anyone has the hardware and wants to help run benchmarks, contribution instructions are on my GitHub!

Hardware Setup: Framework Desktop (AMD Strix Halo 395+ with 128GB RAM).

Gemma-4-31B-it-UD (Q4_K_XL, Native Thinking Enabled) Performance: PP: ~229 t/s, OUT: ~8.6 t/s

The Speed Trade-off: At ~8.6 t/s output speed, waiting for 7 agents to complete their internal monologues and formatted JSONs requires serious patience.

Comparisons & Gameplay Execution: The Good team swept the game 3-0, culminating in a brilliant endgame. Here is how Gemma-4-31B stacks up against the previous contenders and the newly tested 2.5 Flash-Lite:

  • Vs. Gemini 3.0 Flash (The Baseline): Gemma-4-31B matches (and arguably exceeds) the strategic depth of the API baseline. While Flash's overall comprehensive capabilities remain superior, Gemma-31B showcased incredible "Theory of Mind". For example, Susan (Percival) perfectly executed a "Percival Shield" during the Assassination phase. She acted intentionally loud and aggressive, explicitly telling the Assassin: "I wasn't just lucky... I just saw the roles for what they were", deliberately mimicking Merlin's omniscience to bait the hit, while the actual Merlin (David) stayed hidden by deflecting credit. However, there are two noticeable caveats when compared to Flash. First, the roleplay dynamics felt a bit too textbook. Gemma-31B tends to fall into obvious, exaggerated archetypes (a cartoonishly arrogant Percival and a heavily trope-reliant "cowardly" Merlin) rather than deploying the nuanced, unpredictable deception seen in high-level human games. Second, its public statements can feel stiff and forced, lacking the natural, conversational deception that top-tier API models possess. (Side note: I suspect running the Q8 version might improve this conversational naturalness, but at an estimated 5 t/s, I haven't tested it. If anyone has the rig for it, please give it a shot!)
  • Vs. OAI 120B OSS: While OAI 120B had good logical accuracy, its public speeches were rigid and formulaic. Gemma-4-31B feels much more coherent, natural, and persuasive in its public interactions. Despite the massive difference in parameter count, Gemma-31B tracked the context, secret "wink" signals, and hidden roles flawlessly without losing the plot.
  • Vs. Gemini 2.5 Flash-Lite: I also ran a test with Gemini 2.5 Flash-Lite. While it is incredibly fast and budget-friendly, it struggled with output constraints. Despite explicit prompt instructions to keep thoughts to "2-5 sentences", its forced JSON reasoning field was inexplicably and uncontrollably long. To be fair, Gemma-4-31B also generates massive walls of text, but it safely contains them within its native tags (and compared to the previous Qwen 3, its CoT content is noticeably more refined and less repetitive). Flash-Lite, lacking native thinking, dumps its entire stream of consciousness directly into the JSON fields.
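The Flash-Lite failure mode above (reasoning dumped straight into the JSON fields) hints at why native thinking tags matter for a sandbox like this: the monologue can simply be stripped before the action JSON is parsed. A minimal hypothetical sketch of that parsing step; the tag name and JSON schema here are assumptions, not the sandbox's actual format:

```python
import json
import re

# Hypothetical: models with native thinking wrap their monologue in tags,
# so the JSON action survives once the tagged block is stripped out.
def parse_agent_turn(raw: str) -> dict:
    visible = re.sub(r"<think>.*?</think>", "", raw, flags=re.DOTALL)
    return json.loads(visible.strip())

raw = ("<think>Percival should act loud to bait the assassin...</think>\n"
       '{"speech": "I saw the roles for what they were.", "vote": "approve"}')
print(parse_agent_turn(raw)["vote"])  # → approve
```

A model without native tags (the Flash-Lite case) has nothing for the regex to remove, so any stream of consciousness lands inside the JSON fields themselves.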

The Gemma-4-26B-A4B (MoE) Attempt: I originally wanted to test the MoE version (26B A4B) as well, but hit several roadblocks. With 'Thinking' enabled, it suffered from the exact same issue as the Qwen 9B model: it gets stuck in endless CoT reasoning loops and fails to reach the required output format. (My working theory: Forcing strict JSON syntax constraints alongside open-ended 'Thinking' overwhelms the limited active parameters of the MoE architecture, causing an attention loop, though this isn't 100% confirmed.) I tried running it with 'Thinking' disabled, but encountered ROCm support issues that caused immediate crashes.

TL;DR: Gemma-4-31B (Q4) is painfully slow at ~8.6 t/s out, but its role comprehension and execution of complex social deduction tactics (like intentional baiting and decoy plays) are phenomenal. It plays better than OAI 120B OSS, keeps its massive reasoning safely contained in native tags (unlike the JSON-bloating Gemini 2.5 Flash-Lite), and rivals Gemini 3.0 Flash in strategic depth (though it still falls slightly short in natural roleplay persona) without the API costs.

The full game log for this run, along with the previous ones, is available on my GitHub.

https://github.com/hsinyu-chen/llm-avalon

r/SideProject LavishnessFormer7843

I built a tool that scores your API docs. Would love feedback.

Hey folks,

Tired of guessing why developers drop off before their first API call, I built DXScore. Paste a docs URL, get a score out of 100 across 7 DX dimensions (auth clarity, error docs, time to first call, etc.) with specific findings in ~2 minutes.

Honest cons: plain HTTP crawler (JS-rendered SPAs fail), heuristic scoring, 30-page crawl limit.

Still early. Everything is free right now. Would you pay for this, and if so, what would make it worth it to you?

Link: https://dxscore-875311701058.asia-south1.run.app/

r/ClaudeCode Good-Fennel-373

Subprime AI crisis

Someone posted a link to an Edward Zitron article here recently, and I figured it might be interesting for a few of you too. Here's a quick summary:

The Subprime AI Crisis Is Here

By Edward Zitron — March 31, 2026 Original source: wheresyoured.at/the-subprime-ai-crisis-is-here

The Subprime AI Crisis Begins

Back in September 2024, Zitron first articulated his thesis of an emerging Subprime AI Crisis: nearly the entire tech industry has bought into a technology sold at heavily discounted rates because it's massively subsidized by Big Tech. At some point, the toxic burn rate of generative AI will catch up with them.

How Money Flows Through the AI Industry

The Funders

  • Data centers raise debt from banks, private credit, private equity, or "Business Development Companies." Recurring names: Blue Owl, MUFG (Mitsubishi), Goldman Sachs, JP Morgan Chase, Morgan Stanley, SMBC (Sumitomo Mitsui), Deutsche Bank.
  • AI labs and startups receive money from venture capitalists (Dragoneer, Founders Fund), hyperscalers (Google, Amazon, NVIDIA, Microsoft — all of which have invested in both OpenAI and Anthropic), sovereign wealth funds (e.g., Singapore's GIC), and banks via lines of credit.

Risk factors:

  • SMBC and MUFG are critical points of failure. Japan is considering rate hikes due to the ongoing Middle East crisis — making debt more expensive.
  • The venture capital industry is in a historic liquidity crisis: it can't raise its own funds, and its investments aren't selling.

The AI Economy Hierarchy

  1. NVIDIA sells GPUs to data centers. At about $42 million per megawatt, these data centers are funded almost entirely with debt. This is the only truly profitable link in the chain.
  2. Data center developers rent GPUs to AI labs and hyperscalers. They've taken on $178.5 billion in debt in the U.S. alone last year. Many projects are unprofitable even with paying customers. Of more than 200 GW of announced capacity, only 5 GW is actually under construction worldwide. CoreWeave — the largest and best-funded player — had a 2025 operating margin of −6% and a net loss margin of −29%, even though its biggest customers are Microsoft, OpenAI, and NVIDIA.
  3. Hyperscalers (Google, Meta, Amazon, Microsoft, Oracle) rent GPUs from data centers and re-rent them to AI labs. They steadfastly refuse to disclose AI revenues.
  4. AI labs (OpenAI, Anthropic) rent GPUs but must make massive upfront payments to secure future capacity.
    • Anthropic has generated $5 billion in revenue but spent $10 billion on compute — and had to raise another $30 billion in February 2026 (after raising $16.5 billion in 2025 alone).
    • OpenAI generated $4.3 billion in revenue through September 2025 and spent $8.67 billion on inference alone.
    • Neither company has a path to profitability.
  5. AI startups buy API access to models. Every single AI startup is unprofitable and heavily subsidizes its users' token consumption.
  6. Consumers and businesses pay monthly subscriptions ($20 to $200), which in every documented case fail to cover actual token consumption.

What Does "Subsidized AI" Mean?

AI models charge per million tokens (both input and output). One token is roughly ¾ of a word. With "chain-of-thought" models, token consumption explodes.

As a user, you pay a flat monthly fee and see nothing of token consumption. On the back end, AI startups are incinerating cash: until recently, you could burn up to $8 of compute for every dollar of subscription on Anthropic. OpenAI is similar.
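A back-of-envelope sketch of that math (every number below is a made-up illustrative assumption, not any vendor's actual pricing):

```python
# Back-of-envelope token economics with illustrative, assumed numbers.
words_per_response = 3_000                        # a long chain-of-thought answer
tokens_per_response = words_per_response / 0.75   # 1 token ≈ 3/4 of a word
price_per_million = 15.00                         # assumed $ per 1M output tokens

cost_per_response = tokens_per_response / 1_000_000 * price_per_million
subscription = 20.00                              # flat monthly fee

responses_covered = subscription / cost_per_response
print(f"${cost_per_response:.2f} per response, "
      f"~{responses_covered:.0f} responses before the $20 plan is underwater")
```

Under these assumptions a $20 plan covers only a few hundred long responses a month; heavy chain-of-thought users blow past that, which is exactly the subsidy gap the article describes.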

This is the heart of the problem: when the AI bubble began, venture capitalists flooded startups with money and pushed them to pursue hypergrowth based on subscriptions whose prices came nowhere near covering costs.

The result: consumers demand new models constantly. A service that doesn't offer the latest model at the same price can't compete — even if the new model is far more expensive to run.

Concrete Examples

  • Harvey (AI for lawyers): valued at $11 billion despite a laughably small $190 million ARR ($15.8M/month). Raised capital four times in 2025.
  • Cursor (AI coding tool): has raised a total of $3.36 billion — and turned it into at best $1 billion in revenue. As of March 2026: $2 billion ARR ($166M/month).

What Is the Subprime AI Crisis?

The Subprime AI Crisis arrives the moment anyone in the chain actually has to start making money — or at least lose less. That's when it becomes clear that every link in the chain was built on questionable assumptions and short-term thinking.

The Sequence of Events

  1. Growing AI labs = exploding costs. Both for ongoing compute and upfront payments for future GPU capacity.
  2. AI labs hit a cash and compute crunch. They must either limit usage or raise prices.
  3. AI labs raise prices on startups, often through "priority tiers" or new models that burn more tokens — the variable-rate mortgage of the AI crisis.
  4. AI startups must reduce quality or raise prices → customer churn.
  5. AI labs continue subsidizing their own products. Cursor reports that Anthropic at one point allowed users to burn $5,000 worth of tokens per month on a $200 subscription.
  6. Eventually, Anthropic and OpenAI must drastically cut token allowances → furious users.
  7. The cost of doing business with Anthropic and OpenAI will kill AI startups → which in turn kills the labs' API revenue.
  8. Anthropic and OpenAI are left holding compute reservations they don't need and can't pay for. Dario Amodei himself said in February: "There's no hedge on Earth that could stop me from going bankrupt if I buy that much compute."
  9. Who's going to pay for all those data centers when the two largest compute customers (OpenAI and Anthropic) collapse?

The Venture Capital Problem

VCs are sitting on "billions of dollars" of AI companies that lose hundreds of millions. No one is going public, no one is being acquired. Like houses in the financial crisis, AI startups only retain their value as long as the perception of a possible exit exists. It only takes one failed IPO or fire sale to shatter the illusion.

Unlike a house, you can't live in an AI company. Each will be a problem child burning cash on inference, with no real intellectual property, dependent on OpenAI and Anthropic.

The Crisis Begins: June 2025

Both OpenAI and Anthropic introduced "Priority Service Tiers," raising prices on enterprise customers in exchange for guarantees, with 3-12 month upfront commitments.

The Fallout at Startups

  • Cursor had to radically restructure its pricing — and even now gives away 16 cents per dollar on its $60 plan and $1 per dollar on its $200 plan.
  • Anthropic introduced weekly limits on July 28, 2025, after quietly tightening other limits a few weeks earlier.
  • Replit moved to "effort-based pricing" and then "Agent 3," which burns through limits even faster.
  • Augment Code moved to a confusing credit model — users hate it because the company was too cowardly to price transparently.
  • Notion raised its Business Plan from $15 to $20/month over "AI features" — profit margins dropped 10%.
  • Perplexity drastically cut rate limits in February 2026: for some users, deep research queries fell from 600 to 20 per month.

Myths Zitron Is Tired Of

Zitron dismantles several popular arguments:

  • "But Uber and AWS!" — Wrong. AWS cost roughly $52 billion (inflation-adjusted) to reach profitability (2003–2017). OpenAI alone raised $42 billion last year, Anthropic raised $30 billion in February. Uber had essentially no capex and a completely different business model.
  • "They're profitable on inference!" — There's not a single piece of evidence for this. Sam Altman claimed it in August 2025; Dario Amodei spoke of a "stylized fact" that explicitly did not refer to Anthropic.
  • "AI is being funded by healthy balance sheets!" — Microsoft is the only remaining hyperscaler funding the AI buildout without new debt. They collectively need $2 trillion in new AI revenues by 2030.
  • "Costs are falling because token prices are falling!" — The price of tokens is not the same as the cost of serving them.
  • "It's the gym membership model!" — No.

March 2026: The Crisis Hits Anthropic's Subscribers

Both OpenAI and Anthropic are stumbling toward their IPOs and trying to look "respectable." OpenAI just killed Sora last week — along with a $1 billion Disney investment — because the product was reportedly burning between $1 million and $15 million per day.

OpenAI is now pursuing a "Superapp" plan while simultaneously planning to nearly double its workforce from 4,500 to 8,000. Advertising experiments yielded only about $8.3 million over two months.

Anthropic's Months-Long Rugpull

In December 2025, a massive media campaign for Claude Code began. Suddenly, posts everywhere claimed Claude Code was "the best thing ever." Even Dario Amodei claimed some Anthropic coders no longer wrote any code at all.

From December 25–31, 2025, Anthropic doubled limits as a "Holiday Promotion." On January 5, 2026, users complained about brutal new limits — one user reported a 60% reduction in token usage. The media campaign worked: Anthropic closed a $30 billion round on February 12, 2026.

The March 2026 Escalation

  • February 18, 2026: Anthropic began banning users with multiple Max accounts.
  • March 26, 2026: Anthropic introduced "peak hours" (5 AM–11 PM Pacific, Mon–Fri) with aggressively reduced limits.

The consequences were immediate:

  • A user on the $100 Max plan hit 61% of their session limit after four prompts ($10.26 in tokens).
  • Another on the $200 plan burned 63% of their limit in a single day.
  • Yet another hit 95% after 20 minutes of use.
  • One user hit their Max limit after "two or three things."

OpenAI immediately seized the opportunity and reset its Codex limits to poach angry Anthropic users.

Model Quality and "Mythos"

Users complain that Claude Opus 4.6 suddenly seems "dumb" — possibly because of Anthropic's new model Mythos, whose existence was supposedly leaked through an "openly accessible data cache" mysteriously discovered by Fortune. Zitron compares the marketing maneuver to someone deliberately dropping a Magnum condom out of their wallet in front of a woman.

Anthropic and OpenAI Have Trained Users Into Unsustainable Habits

Zitron's central thesis: Anthropic and OpenAI are inherently abusive companies built on theft, deception, and exploitation.

Users have no idea about "token burn." Rate limits expand and contract seemingly at random. The whole system was deliberately made opaque because transparency would expose how unsustainable the business models are.

No AI company should ever have sold a monthly subscription, because there was never a point at which the economics made sense. Had they charged their true costs, no one would have bothered with AI.

The Car Analogy

Imagine a $200/month car subscription that lets you drive 50, 25, 100, 4, or 12 miles depending on the day — and never tells you how many miles you have left. Sometimes the car arbitrarily takes a different route, drives you five miles in the wrong direction, or just parks at the curb and still bills you for every mile. That is the reality of using an AI product in 2026.

There is no pricing model that makes sense at scale. There is no technical breakthrough waiting in the wings. Vera Rubin (NVIDIA's next GPU generation) won't save AI. Nor will a "too big to fail" scenario — the AI industry's economic footprint is small compared to the financial crisis. The death of the AI industry would be devastating for VCs and would likely kill Oracle — but nothing on the scale of 2008.

The Multiple "Strippers With Five Houses"

Zitron identifies several groups of actors all living in the same illusion:

  1. AI companies that only have customers because they spend $3–10 for every dollar of revenue.
  2. Venture capitalists who are paper-rich and have leveraged their funds into companies like Harvey ("worth $11 billion") and Cursor ("worth $29.3 billion") — both too large to sell to another company and too poor in quality to take public.
  3. AI labs that have built massive businesses on subsidized subscriptions and API access.
  4. AI data center companies that, thanks to easy debt, have started 200 GW of projects (only 5 GW actually under construction) — for demand that doesn't exist.
  5. Oracle, taking on hundreds of billions in debt to build data centers for OpenAI, which needs infinite resources to pay its compute bills.
  6. AI startup customers building lifestyles, identities, and workflows around unsustainable subscriptions.

The Pale Horses of the AI Apocalypse

Zitron updates his list of warning signs to watch:

  • Further price hikes or service degradations at Anthropic and OpenAI → cash is running low.
  • Capex reductions at Big Tech → the bubble is bursting.
  • Further price hikes or service degradations at AI startups like Cursor, Perplexity, Harvey, Lovable, Replit.
  • Layoffs at AI companies.
  • Collapse of a data center deal before construction begins.
  • Collapse of a data center already under construction.
  • Collapse of a finished data center.
  • CoreWeave or another major player struggling to raise debt. This has already begun with CoreWeave's troubles financing its Lancaster, PA data center.
  • Problems at Stargate Abilene (OpenAI's flagship data center, built by Oracle).
  • Delays or problems with the OpenAI or Anthropic IPOs — both are "the financial equivalent of Chernobyl."
  • Problems at Blue Owl as a going concern — the loosest lender in the AI bubble.
  • Problems at SoftBank — took on $40 billion in debt (payable in a year) for its OpenAI stake, exceeding its promised 25% loan-to-asset ratio.
  • ARM stock falling — SoftBank has a $15 billion margin loan against ARM stock. Below $80, things get hairy for Masayoshi Son.
  • NVIDIA customers struggling to pay their bills.
  • NVIDIA missing earnings.

Zitron's Conclusion

All these actors are operating on a misplaced belief that the world will accommodate them and that nothing will ever change. There are different levels of cynicism — some know about the subsidies but assume they'll be fine; others, like Sam Altman, are already rich and don't care. But everyone in the AI industry has convinced themselves they have the mandate of Heaven.

The Subprime AI Crisis is the moment when the largest players are finally forced to reckon with their rotten economics — and the downstream consequences that follow.

r/ChatGPT Waste_Ad_7747

ChatGPT mobile on iOS is broken for me right now, don't know what's going on.

It first started when a long, important chat randomly got corrupted for me. It kept saying "conversation not found," then after I clicked off, it disappeared. I checked the web version and it's visible in the sidebar, but I'm unable to click on it or even delete it no matter what. Back on mobile, the sidebar can't even load more than a few chats anymore. I tried putting one of my old chats in a folder and clicking on it, and that won't load anymore either; ChatGPT used to be able to open older chats on mobile. I tried logging in and out multiple times, updating my phone, deleting and reinstalling the app, and nothing works.

VERY IRRITATING.

r/SideProject StrategyAware8536

Got mass-rejected by Fiverr designers so I built my own screenshot generator

I have a small iOS app. Nothing crazy, makes a few hundred bucks a month. I needed new App Store screenshots because mine looked like they were made in 2019. Because they were.

Tried Fiverr. Sent the brief to 5 designers. 3 ghosted me. 1 sent back something that looked worse than what I had. 1 delivered decent work but charged $200 and it took 2 weeks. Then I needed the same thing for 6 more languages and he quoted me $150 per language.

So I did what any reasonable dev would do. I spent way too long building a tool instead of just paying for it.

It lets you browse how 1000+ real apps design their screenshots, pick a style you like, upload your own screenshots, and it generates new ones in that style. With localization built in.

Been using it for my own app for a couple months. The screenshots it makes are honestly better than what I got from Fiverr, and I can iterate in minutes instead of weeks.

Just opened it up publicly: appscreenmagic.com

$213 revenue in the first month from other devs with the same problem. Still early but it feels like there's something here.

r/ClaudeAI Golldir

Advice for beginner

Hi, I’m new to Claude Code. I’ve watched some videos about the concepts, but there’s so much information that I got confused.

Could you share which hooks, skills, MCPs, or other tools you use? Maybe you have some workflow advice?

I’ve tried Superpowers - it’s cool and also MCP for databases and Context7. I’d really appreciate it if you could share your personal experience: what you’re currently using and how you would start if you were in my position.

Background: I’m a Python backend developer.

r/ClaudeAI ptu14

I built a small tool that converts GitHub issues to Markdown by just changing the URL

I kept copying GitHub issues into AI chats and the formatting was always a mess. So I made github2md.com.

Just swap `github.com` with `github2md.com` in any issue or PR URL:

Before: https://github.com/facebook/react/issues/24502
After: https://github2md.com/facebook/react/issues/24502

You get clean Markdown with the title, description, all comments, labels and code blocks. One click to copy.
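For scripting, the swap is a one-line host substitution; a hypothetical helper (not part of the project) could be:

```python
# Hypothetical helper mirroring the URL swap the post describes: replace
# the github.com host with github2md.com in an issue or PR URL.
def to_markdown_url(url: str) -> str:
    return url.replace("https://github.com/", "https://github2md.com/", 1)

print(to_markdown_url("https://github.com/facebook/react/issues/24502"))
# → https://github2md.com/facebook/react/issues/24502
```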

There's also a Claude skill if you want it directly in your AI workflow:

`npx skills add ptu14/github2md`

Nothing fancy, just a thing I needed and maybe you do too.

r/ClaudeCode patientstrawberries

So the public WILL get a polished/censored version of mythos?

YouTube grifters yesterday were claiming mythos is so powerful it will never release to the public but other sources claim it will eventually be released to the public but in a less dangerous form after glasswing patches any critical bugs. Which grifters do I believe?

r/LocalLLM AddendumCheap2473

Testing Pattern Chains and Structured Detection Tasks with PrismML's 1-bit Bonsai 8B

I've been testing PrismML's Bonsai 8B (1.15 GB, true 1-bit weights) to see what you can actually do with pattern chaining on a model this small. The goal was to figure out where the capability boundaries are and whether multi-step chains produce measurably better results than single-pass prompting. More info and a link to a notebook are in the README.

r/ClaudeAI Key_Significance819

Never Buy The Subscription In App

I just tried buying the Claude 20x Max plan on the iPhone app, and it's literally $50 more expensive than simply going online to purchase it. This seems entirely due to Apple charging egregious fees for in-app purchases.

r/singularity kaggleqrdl

The risks of 'internal only' deployment

https://x.com/boazbaraktcs/status/2042131701728461313 (openai researcher)

Makes a very good argument!

Another point he doesn't mention, however, is IMHO far more dangerous: the concentration of power and wealth.

Internal-only deployments have the spark of fascism, where a strict social hierarchy develops based on who has access.

r/artificial Beneficial-Cow-7408

Flux maintains facial geometry and spatial coherence across 5 sequential iterative edits - is anything else doing this at this level?

One woman. 5 Different Prompts. Perfect Contextual Preservation

Playing around with Flux again and thought I'd try it with a model, changing aspects of the photo by prompts only.

This isn't art sharing, it's a demonstration of iterative prompt-based context preservation in Flux. Each generation uses the previous output as input, maintaining facial geometry, lighting consistency and spatial coherence across 5 sequential edits.

Prompts I used for this experiment were simple:

  1. Add a handbag
  2. Remove handbag and add sunglasses
  3. Change background to a beach scene
  4. Add a summery beach bag
  5. Change suit to a dress

I didn't have to tell it to keep the facial expression the same or anything. Just normal-language asks to add or remove a particular object from the photo.

Every photo has perfect context from the last. The facial expressions are identical in each photo.

Interested whether others have found models that maintain this level of fidelity across iterative inpainting chains, or if Flux is genuinely leading here.

r/LocalLLM d_asabya

I built a local semantic memory service for AI agents — stores thoughts in SQLite with vector embeddings

Hey everyone! 👋

I've been working on picobrain — a local semantic memory service designed specifically for AI agents. It stores observations, decisions, and context in SQLite with vector embeddings and exposes memory operations via MCP HTTP.

What it does:

- store_thought — Save memories with metadata (people, topics, type, source)
- semantic_search — Search by meaning, not keywords
- list_recent — Browse recent memories
- reflect — Consolidate and prune old observations
- stats — Check memory statistics

Why local?

- No API costs — runs entirely on your machine
- Your data never leaves your computer
- Uses nomic-embed-text-v1.5 for 768-dim embeddings (auto-downloads)
- SQLite + sqlite-vec for fast vector similarity search

Quick start:

curl -fsSL https://raw.githubusercontent.com/asabya/picobrain/main/install | bash
picobrain --db ~/.picobrain/brain.db --port 8080

Or Docker: docker run -d -p 8080:8080 asabya/picobrain:latest

Connect to Claude Desktop / OpenCode / any MCP client — it's just an HTTP MCP server.

Best practice for agents: Call store_thought after EVERY significant action — tool calls, decisions, errors, discoveries. Search with semantic_search before asking users to repeat info.
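
Conceptually, semantic_search reduces to cosine similarity between the query embedding and each stored memory's vector (sqlite-vec just does this efficiently in SQL). A toy sketch of the ranking step, with tiny 3-dim stand-ins for the 768-dim nomic-embed vectors:

```python
import math

# Toy sketch of the ranking behind a semantic_search call: score every stored
# memory by cosine similarity to the query embedding and pick the best match.
# The vectors here are made-up stand-ins for real 768-dim embeddings.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

memories = {
    "decided to use SQLite for storage": [0.9, 0.1, 0.0],
    "user prefers dark mode": [0.1, 0.9, 0.2],
}
query = [0.8, 0.2, 0.1]  # embedding of e.g. "what database did we pick?"
best = max(memories, key=lambda m: cosine(query, memories[m]))
```

The win over keyword search is that "what database did we pick?" matches "decided to use SQLite" with no shared keywords.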

GitHub: https://github.com/asabya/picobrain

Would love feedback! AMA. 🚀

r/ClaudeAI darkgenus08

I tracked exactly how many tokens Claude Code wastes navigating codebases — and built a fix (saves 26% on costs)

Link to repo

Every time Claude doesn't know where something is, it does this:

```
ls src/
find . -name "*.py" | head -40
grep -r "authentication" . | head -20   # ← 800 tokens of noise
cat handlers/auth.py                    # ← 300 more
cat middleware/jwt.py                   # ← 200 more
# ... tries 4 more files
```

I measured a real Claude Code session on a complex multi-file task: 21,536 context tokens just on file navigation. The same task with my tool: 7,799 tokens. Same result.

I built SemanticFS — a local semantic index that sits between your agent and your filesystem. Instead of grep chains, your agent calls search_codebase("JWT authentication middleware") and gets back middleware/jwt.py:15-82 in one shot.

Measured results (real Claude API calls, not estimates):

- 29% cheaper API cost across 6 complex tasks

- 64% fewer context tokens

- 6/6 tasks correct in both modes

The extreme case: finding a CLI entry point naively cost 4,265 tokens (12+ tool calls). With SemanticFS: 5 tokens — one search, immediate answer.

How it works: hybrid BM25 + vector search + symbol lookup, fused with RRF, re-ranked by path priors. Written in Rust, MCP-compatible, fully local. Works with Claude Code, Open Claw, Cline, Cursor, Continue.dev, and any HTTP-capable agent.
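
For reference, the RRF (reciprocal rank fusion) step mentioned here is a standard algorithm: each ranker contributes 1/(k + rank) per document, with k typically 60, and the sums are re-sorted. A minimal sketch (the file names are illustrative, not SemanticFS internals):

```python
# Reciprocal rank fusion: combine several ranked lists into one ranking.
# Each ranker contributes 1 / (k + rank) per document; documents that appear
# near the top of multiple rankers accumulate the highest fused score.

def rrf_fuse(rankings, k=60):
    """Fuse ranked lists of document ids; returns ids best-first."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25 = ["middleware/jwt.py", "handlers/auth.py", "README.md"]
vector = ["handlers/auth.py", "middleware/jwt.py", "models/user.py"]
fused = rrf_fuse([bm25, vector])
```

RRF needs no score calibration between rankers, which is why it's a common choice for fusing BM25 with vector search.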

Default backend uses hash embeddings — zero setup, 100% recall on symbol and keyword queries. Optional ONNX model if your agent asks in pure natural language with no symbol names.

When it helps most: large repos (50+ real source files) and complex multi-file exploration. Small single-file lookups only break even.

Happy to answer questions about the benchmark methodology or the retrieval architecture.

r/singularity Demonking6444

ASI peaceful takeover via Nanotechnology?

I have been thinking about the method a misaligned ASI might use if it wanted to peacefully take over humanity without wasting resources on weapons and conflict. This brought me to the idea of carbon-based nanotechnology, which many scientists are working on today but have not yet perfected.

What if an ASI, with an advanced understanding of biology, engineering, and physics, were able to generate a special form of bacteria or a virus using carbon-based molecular nanotechnology? This agent would be highly contagious, rapidly spreading around the world and infecting all of humanity without showing any detectable ill effects.

Once it infects the human host, this agent would be programmed to utilize the carbon and other elements in the bloodstream to controllably self-replicate. Much like the rabies virus, it would move from the bloodstream to the central nervous system. There, the nanomachines would join together to form a molecular-scale neural lace that covertly grows and interfaces with the host's nervous system and synapses for reading and stimulating all neural activity.

This bodily grown digital nervous system, existing in parallel to the biological one, would be able to tap into the thoughts and memories of the human by observing their neural patterns. It could then take over the body, placing the human mind in a dreamlike state while the body follows an ideal version of the human—one who does not commit violent crimes and is subservient to the ASI hosted externally even without direct Wireless Communication.

After the server-side ASI reprograms the internal neural lace, maybe by phoning the human and speaking in a special coded language with instructions to prevent conspiracies, the original human consciousness could be allowed to take charge again. The human would know not to rebel, because the neural lace would be constantly watching their thoughts and actions no matter where they are.

I think this would be a logical step for an ASI that does not want to harm humanity on a large scale. This method would completely eliminate any chance of humans attempting to create a rival ASI to counter the original one. Meanwhile, humans could be allowed the freedom to build, create art, and enjoy themselves, as long as they do not attempt to physically implement a plan to defy or rebel against the ASI.

what do you think?

r/ClaudeCode on_the_mark_data

Petri, a multi-agent orchestration framework, built on top of Claude Code, that validates claims through adversarial AI debate (Apache 2.0).

r/SideProject Derian23

Saw a viral post about pasting a product url and getting a video ad back. Figured it was bs. Tried it anyway.

A couple months ago there was this post blowing up on X about someone pasting an amazon link into an ai tool and getting back a full ugc-style product ad. like 750k views or something. My first reaction was yeah sure. Every AI demo looks incredible and then you try it yourself and get garbage. So I bookmarked it and forgot about it for a while.

Last week I had a bunch of product ads to crank out for a client and remembered that post. Figured I'd try it before committing to my usual 2 hour per video workflow. Pasted an amazon link, picked a template style, and it actually pulled the product info, images, and built out a real video. Not a slideshow with stock music like actual motion graphics, copy, pacing.

Was it perfect? No. But I spent maybe 15 minutes tweaking instead of building from scratch. Did 4 variations in the time it normally takes me to do one.

I don't usually post about tools but this one genuinely caught me off guard.

r/SideProject MeepEw

I open sourced a browser video editor that connects with an AI media platform for a full generation to edit workflow

I open sourced a browser video editor called KubeezCut (editor.kubeez.com). No install, no account, just open it and edit. Built with React and FFmpeg.wasm so everything runs client side.

The interesting part is that it pairs with kubeez.com, so instead of hunting for stock footage or recording voiceovers separately, you generate exactly what you need with AI and drop it straight into the editor. One workflow from generation to final cut.

kubeez.com has 70+ models covering image, video, music, voice and captions under one credit system. REST API and MCP server available for devs.

KubeezCut is fully open source, repo is public if you want to contribute or self-host. Happy to answer questions about either.

https://github.com/MeepCastana/KubeezCut

r/AI_Agents SpiritRealistic8174

Claude Mythos can hack 'secure' systems. The Conway agent remembers like a human. Here's what happens next

Two capabilities that surfaced in Anthropic's recent Mythos announcement and the big Claude source code leak are telling:

Mythos Preview autonomously identifies and exploits vulnerabilities in long-established code -- including reverse-engineering closed-source [systems] ... It found zero-days in every major operating system and every major web browser. In its recent blog post about Mythos, Anthropic said: "No human was involved in either the discovery or exploitation of [certain vulnerabilities] after the initial request to find [bugs]."

Conway hasn't been officially launched, but details surfaced via TestingCatalog's report on Anthropic's internal testing environment. It appears to be a persistent, always-on agent with its own identity, long-term memory (workflows, documents, etc.), triggers (it acts when the world changes, not when you prompt it), and an extensions system (.cnw.zip packages) where developers can plug in and potentially monetize through a potential Anthropic marketplace.

Here's my read on what happens next.

| Domain | What Changes | The Cascade |
|---|---|---|
| Ossified (legacy) code: exploits proliferate | Mythos-class models systematically exploit long-standing open source code | Programs millions depend on get exploited faster. Humans can't keep up |
| Web3 / DeFi: threats multiply | Smart contracts with immutable bugs that can't be patched | Already, millions have been lost to arithmetic and logic bugs. Millions more in exploit losses will happen before defense catches up (if ever) |
| More critical infrastructure at risk | Mission-critical systems running legacy software with hidden zero-days get systematically mapped | Hospitals, power grids, and other systems maintained on 20-year-old code become hackable. Ransomware and real-world attacks on infrastructure escalate |
| The AI class divide widens | Frontier model costs stay high or rise while open-source models fall further behind due to distillation crackdowns post-DeepSeek | Those who can afford Mythos-class capabilities move exponentially faster. Others fall increasingly behind |
| Walled AI gardens are erected | Conway's extension marketplace follows the Apple playbook. Developers build inside Anthropic's ecosystem to reach users | Developer lock-in follows platform dependency |
| Stateful AI and world models emerge | Conway maintains persistent context about your specific world -- workflows, documents, preferences, decisions | Current models forget everything between sessions. Stateful agents make decisions with your accumulated context. Agents become truly autonomous and valuable to users -- and attackers |
| Fear-driven adoption accelerates | The fear pitch: humans cannot respond fast enough to AI-powered hacks | Today: humans don't trust AI because the outputs are often poor. Tomorrow: trust AI to find and patch security holes without humans ... what about everything else? |

Each of these reinforces the others. Stateful AI makes walled gardens stickier. Fear of a cybersecurity apocalypse makes fear-driven adoption all but guaranteed.

That's only some of what's coming next. It's going to get even wilder.

r/LocalLLaMA makingnoise

Best stack for Gemma 4 multimodal document analysis on a headless GPU server?

I’m trying to figure out the best stack for Gemma 4 multimodal document analysis and could use advice from people actually running it successfully. I just want to drag and drop a freakin' PDF without installing a lot of nonsense.

Goal:
Use Gemma 4’s vision capabilities to read multi-page PDFs without building a bunch of fragile preprocessing pipelines (PNG conversion scripts, OCR chains, etc.). The model itself should be able to interpret the document — I’m trying to avoid toolchains that force me to “spoon-feed” pages as images. I want to just give the damn model a PDF and have it go to work, no hacky bullshit workarounds.

My environment

  • Headless Linux VM used as an inference server
  • GPU: RTX 3090 (24 GB VRAM)
  • Docker-based setup
  • Accessed remotely through a web UI or API (not running the model directly on my desktop)

What I’ve tried

  • Ollama + OpenWebUI
  • Gemma 4 runs, but multimodal/document handling feels half-implemented
  • Uploading PDFs doesn’t actually pass them through to the model in a useful way
  • Most advice I see online involves converting PDFs to PNGs first, which I’d like to avoid

What I’m trying to find out

For people running Gemma 4 with vision:

  1. What model runner / inference stack are you using?
  2. Does anything currently allow clean multi-page PDF ingestion with no hacky workarounds?
  3. If not, what’s the least painful stack for document analysis with Gemma 4 right now?

I’m mainly trying to avoid large fragile pipelines just to get documents into the model.

If anyone has this working smoothly with Gemma 4, I’d love to hear what your setup looks like.

r/AI_Agents Secure_Schedule3452

I built a Free Open Source Agent Development Environment

I got tired of squinting at CLI UIs and trying to track my agent sessions across 10 different tabs. So I started building a simple UI to merge chats across Codex, Claude, Cursor Agent, and Gemini.

It sits on top of the existing terminal agents so you can keep your existing Max plans.

The project really ended up growing and getting quite polished, and I wanted to share it around some more.

Has stuff like:
* Search and resume old conversations, and browse recent conversations across all projects and harnesses
* Support for the 5 most popular harnesses, and bring your own plan
* Control swarms of agents with my library Oompa for swarm management
* "Copy and Paste" or "Drag and Drop" files.
* Hover to preview image or video files.
* Fork agents (take a conversation from one agent harness to a different one if it's stuck, or if you just want to fork a conversation)
* Save prompts and use Ctrl+P to search through them
* Unified and configurable color palette (AI-generate your own color palette as well)
* Plenty of small indulgences I'm forgetting to outline

Best of all, it's open source, so if you need a new feature just ask your agent for it and it will update the project itself. It's hard to explain how fun it is to edit a development tool with itself until you've tried it. Like the dream of vim, except you don't need to learn an insane syntax.

Link in comments

r/SideProject vitlyoshin

For builders and founders: space might be a bigger opportunity than it looks

I recently discussed this idea on a podcast: space industry is not really about rockets anymore - it’s about data.

With thousands of satellites now orbiting Earth, we’re getting a new layer of information about the physical world: agriculture, climate, logistics, infrastructure, and even business activity. And like most platforms, the real value seems to be shifting to what’s built on top of that data.

What’s interesting is how accessible it’s becoming. Some datasets are free, and with AI + computer vision, it’s possible to start building something meaningful without massive capital.

Feels similar to early web or cloud days.

Curious if any of you are exploring opportunities in space tech or satellite data.

r/ClaudeCode --Rotten-By-Design--

Missing your Claude Code /Buddy... Look here...

For those not too happy that your buddy is gone, and want it back (I sense that it is many from different posts).

I just tested it less than 5 min ago, and if you downgrade to CC 2.1.89 (the last stable release), the buddy still works. I just got some honking goose. Not gonna use it, but it definitely still works.

r/singularity 0K4M1

Why would AGI be unstoppable?

Hi, excuse me if this is silly or if I'm missing something. I read here and there that once we create AGI, "it's over": we won't control it anymore, and alignment won't matter because it would set its own goals.

But don't we still have access to the physical and metaphorical plug of its servers?

Threatening to jeopardise someone's existence is the worst way of dealing with them, but as a last resort it's still an option, isn't it?

How would AGI escape our grasp in some technical way that prevents us from controlling it?

r/LocalLLaMA Iam_Yassin

Does Gemma-4-E4B-it support live camera vision? Building a real-time object translator

Hi everyone,

I'm trying to set up a project using Gemma-4-E4B-it where I can point a live camera at different physical items, have the model identify them, and then output the names of those items translated into different languages (specifically German right now). I'm currently trying to piece this together using the Google AI Gallery app.

A few questions for the community:

1) Does this specific Gemma model natively support vision/image inputs, or will I need to look into a multimodal variant (like PaliGemma) to handle the camera feed?

2) Has anyone successfully piped a live video feed into a local model for real-time object recognition and translation?

3) Are there any specific workarounds or workflows using the Google AI Gallery app to get the camera feed connected to the model's input?

Any advice, repo links, or workflow suggestions would be greatly appreciated. Thanks!

r/SideProject AbdulrahmanMasoud

I built a tool where you press Record, use your website normally, and it writes your E2E tests for you. Open source, no code needed.

I kept rebuilding the same Playwright test boilerplate at every project. QA writes test cases in spreadsheets. Devs translate them to code. Then the dev leaves and the tests rot.

So I built QA Studio — an open-source testing platform where you literally press Record, interact with your website, and every click/fill/navigation becomes a test step automatically via WebSocket streaming.

But the recorder is just the start. Here's what it actually does:

Record & Replay — Press Record → browser opens → use your site → steps captured in real-time with a 7-tier smart selector system (data-testid → id → ARIA role → placeholder → label → text → CSS path). Stop recording → run the test.
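
The 7-tier selector system reads like a simple priority chain: take the most stable attribute the element offers, then fall back tier by tier. A hypothetical sketch of the idea (attribute names assumed for illustration, not QA Studio's actual code):

```python
# Hypothetical sketch of a tiered selector fallback: prefer stable, intent-
# revealing attributes (data-testid, id, ARIA role) and only fall back to a
# brittle structural CSS path when nothing better exists.

def pick_selector(el):
    """Return the highest-priority selector an element supports."""
    if el.get("data-testid"):
        return f'[data-testid="{el["data-testid"]}"]'
    if el.get("id"):
        return f'#{el["id"]}'
    if el.get("role") and el.get("name"):
        return f'role={el["role"]}[name="{el["name"]}"]'
    if el.get("placeholder"):
        return f'[placeholder="{el["placeholder"]}"]'
    if el.get("label"):
        return f'label={el["label"]}'
    if el.get("text"):
        return f'text={el["text"]}'
    return el["css_path"]  # last resort: structural CSS path

sel = pick_selector({"id": "login-btn", "text": "Log in", "css_path": "div>button"})
```

The ordering matters because tests recorded against `data-testid` or roles survive markup refactors that break CSS paths.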

Visual Builder — 17 drag-and-drop action types for when you want to build manually.

Visual Regression — Pixel-level screenshot diffs using pixelmatch. Set baselines, compare runs, approve or reject changes. Like Percy, but free and local.

Control Flow — If/else conditionals and loops with 6 condition types. Not just flat step lists.

Reusable Flows — Shared step sequences (like functions). Create a "Login" flow once, use it in every test. Update once → fixes everywhere.

Test Suites — Batch runs with parallel execution (1-5 concurrency).

Scheduled Runs — Cron-based automation that survives server restarts. Built-in data retention cleanup.

Analytics Dashboard — Pass rate trends, flaky test detection, health breakdown.

Environment Variables — {{key}} substitution across all steps.

Cross-Browser — Chromium, Firefox, WebKit. Plus "Real Browser" mode that bypasses Cloudflare bot detection.

Stack: TypeScript monorepo (pnpm workspaces), React 18 + Tailwind, Fastify + Zod, SQLite + Drizzle ORM, Playwright engine. ~45 REST API endpoints.

No cloud. No account. No telemetry. One command setup: clone → pnpm setup → pnpm dev.

GitHub: https://github.com/AbdulrahmanMasoud/qa-studio

Docs: https://abdulrahmanmasoud.github.io/qa-studio

Looking for feedback. What would make this useful for your team?

r/AI_Agents Fearless-Change7162

My agents kept forgetting what they were doing, so I built a shared “state layer” for them

TL;DR: I built a small system (Threadron) so my agents (Claude Code, Hermes, OpenClaw, etc.) can share task state instead of each forgetting what’s going on.

I’ve been messing around with multi-agent workflows (Claude Code, OpenClaw, Hermes, etc.) and kept running into the same problem:

everything works… until you switch context. Typical agent use follows a pattern:

  • switch machines
  • switch agents
  • come back later
  • try to piece together the most up to date info

especially bad when you’re bouncing between laptop/desktop and trying to stitch together your own system.

I tried forcing Things3 / Obsidian / Todoist into this, but it just turned into a mess of stale, conflicting info. Every tool assumes it’s the only one working on the problem.

So I built a small system to test an idea:

what if agents shared a persistent “task state” instead of each keeping their own memory?

It boils down to:

  • shared work items (goal, current state, next step, blockers)
  • an append-only timeline of what happened (who did what)
  • artifacts (PRs, plans, outputs)
  • a simple API so different agents can read/write the same state

Now I can:

  • start something with Claude on my laptop
  • continue it with another agent on desktop
  • come back later and not re-figure everything out

I literally vibe-coded this over ~24 hours, but it already feels way less chaotic.

Curious if anyone else running multiple agents is hitting this problem.

r/ClaudeCode aqdnk

Burnt 85% of 5hr limit with a session that had only used 50% of its context

Used superpowers, created 7 tasks (amounting to no more than 500-line diffs), the conversation never compacted, and my status line was displaying 50% of context usage. Yet my 5hr limit is now past 85% usage, wtf??

r/comfyui GrapefruitOverall387

When do the prompt and extra_pnginfo hidden inputs get set for the default SaveImage node?

I'm trying to understand how to better include values from nodes when naming my output. The explanation in the tooltip is not very useful for my case, where I'm trying to get a specific seed value from a workflow that has a lot of KSampler nodes.

So I'm looking at the code, and in the default SaveImage class there are two hidden inputs called prompt and extra_pnginfo. I'm assuming prompt is the one responsible for providing the values used to name the output. My question is when and where prompt (and extra_pnginfo) get set, since from my understanding they just kind of magically get their values from somewhere.

The reason I want to know this is so that I can get the specific value from the specific node to name my output.

Before someone recommends custom nodes that do a better job at this: I won't install them, since I like to keep my workflow simple by using default nodes only. As a reference, I'm only using Illustrious as my base model to generate. Also, my coding skill might be limited since I'm not a professional programmer. And sorry for the white theme :P
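
As far as I understand ComfyUI's execution engine, these hidden inputs are injected at queue time: anything listed under the "hidden" key of a node's INPUT_TYPES (e.g. "prompt": "PROMPT", "extra_pnginfo": "EXTRA_PNGINFO") is filled in per run by the executor, so by the time SaveImage executes, prompt already holds the whole submitted graph as a dict of node_id to {"class_type", "inputs"}. Given that dict, pulling a specific KSampler's seed could look like this (a sketch against a fake mini-graph, not tested on a real workflow):

```python
# Sketch: treat the `prompt` hidden input as the submitted workflow graph,
# a dict of node_id -> {"class_type": ..., "inputs": ...}, and walk it to
# find the seed of a particular KSampler node (or the first one found).

def seed_from_prompt(prompt, node_id=None):
    """Return the seed of the given KSampler node, or the first KSampler."""
    for nid, node in prompt.items():
        if node.get("class_type") == "KSampler":
            if node_id is None or nid == node_id:
                return node["inputs"].get("seed")
    return None

# Hypothetical mini-graph with two samplers:
fake_prompt = {
    "3": {"class_type": "KSampler", "inputs": {"seed": 1234}},
    "7": {"class_type": "KSampler", "inputs": {"seed": 5678}},
}
```

With many KSamplers you'd pass the node id shown in the workflow JSON to disambiguate which seed you want in the filename.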

https://preview.redd.it/c1dw8raj47ug1.png?width=1441&format=png&auto=webp&s=4d2e64ac51705df41d6760f1346f39967da787c3

r/LocalLLaMA lemon07r

16 GB VRAM users, what model do we like best now?

I'm finding Qwen 3.5 27B at IQ3 quants to be quite nice. I can usually fit around 32k context (usually enough for me, since I don't use my local models for anything like coding) without issues, and get around 40+ t/s on my RTX 4080 using ik_llama.cpp compiled for CUDA. I'm wondering if we could maybe get away with IQ4 quants for the Gemma 26B MoE using turboquant for the KV cache.

Being on 16 GB kind of feels like edging, because the quality drop-off between IQ4 and Q4 feels pretty noticeable to me, but you also give up a ton of speed as soon as you need to start offloading layers.

r/LocalLLaMA Distinct-Opposite779

Why is persistent memory still this hard to add to an AI agent?

You build an agent. It works great. Then your user comes back the next day and it has no idea who they are.

You stuff chat history into context — gets expensive fast and the model loses track anyway.

You try summarization — now you're deciding what matters. You're wrong half the time.

You try building it properly — entity extraction, deduplication, retrieval ranking. Six weeks later you have infrastructure that has nothing to do with what your agent actually does.

It's a solved problem in theory. In practice it's still a mess.

r/LocalLLaMA tsenseiii

OpenReels — open-source AI video pipeline with provider flexibility (Anthropic/OpenAI/Gemini, BYOK, considering local LLM support)

Built an open-source pipeline that turns any topic into a YouTube Short. Currently uses cloud LLMs (Claude, GPT, Gemini) for the creative direction and scripting stages, but the architecture is provider-agnostic with a BaseLLM abstraction.

How the LLMs are used:

  • Researcher agent — summarizes web search results into grounded facts
  • Creative Director — generates a structured "DirectorScore" (per-scene production plan with visual directions, voiceover text, emotional arc). This is the most demanding LLM task.
  • Image Prompter — turns scene descriptions into detailed image generation prompts following a style bible
  • Music Prompter — writes timestamp-synced music generation instructions
  • Critic — evaluates output quality against a rubric

The creative director is the bottleneck for local models — it needs to generate valid structured JSON following a strict Zod schema with variety constraints (no 3+ consecutive scenes with the same visual type, scene count within archetype pacing tier, etc.). Currently retries on validation failure.
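
The variety constraints described here are easy to picture as a post-validation pass over the generated plan. A hedged sketch of the kind of check involved (field names and thresholds are illustrative, not OpenReels' actual Zod schema):

```python
# Illustrative sketch of schema-adjacent variety constraints for a generated
# scene plan: reject it when three or more consecutive scenes share a visual
# type, or when the scene count falls outside the pacing tier's range.

def validate_plan(scenes, min_scenes=4, max_scenes=8):
    """Return a list of constraint violations (empty list = valid plan)."""
    errors = []
    if not (min_scenes <= len(scenes) <= max_scenes):
        errors.append(f"scene count {len(scenes)} outside [{min_scenes}, {max_scenes}]")
    run = 1
    for prev, cur in zip(scenes, scenes[1:]):
        run = run + 1 if cur["visual_type"] == prev["visual_type"] else 1
        if run >= 3:
            errors.append(f"3+ consecutive '{cur['visual_type']}' scenes")
            break
    return errors

plan = [{"visual_type": t} for t in ["broll", "broll", "broll", "talking_head"]]
issues = validate_plan(plan)
```

On failure the pipeline can feed the violation messages back into the retry prompt, which tends to converge faster than a bare "invalid JSON, try again".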

For TTS, there's already a --provider local option that uses Kokoro for free local voiceover generation. No API key needed.

The BaseLLM abstraction in src/providers/llm/base.ts makes it fairly straightforward to add an Ollama provider. If anyone's interested in contributing that, the provider factory pattern is documented.

GitHub: https://github.com/tsensei/OpenReels

Questions for this community:

  • Which local models handle structured JSON generation reliably enough for a production pipeline?
  • Anyone running multi-agent workflows with local models? What's your experience with output consistency?

r/ClaudeAI MF800

I built an MCP server for Canadian legal research (CanLII) — search cases, check citations, browse legislation

I needed to do legal research in Claude Desktop for an Ontario family law case, so I built an MCP server that connects to the CanLII API.

## What it does (9 tools)

  • Full-text search across all Canadian case law, legislation, and commentary
  • Case citator — check if a case is still good law by seeing what later cases cite it
  • Browse court decisions by jurisdiction (Ontario, BC, Alberta, SCC, etc.)
  • Case metadata with direct CanLII URLs for verification
  • Legislation browsing — statutes and regulations by province
  • Bilingual — English and French

## Install

Add this to your Claude Desktop config:

```json
{
  "mcpServers": {
    "canlii": {
      "command": "npx",
      "args": ["-y", "canlii-mcp"],
      "env": { "CANLII_API_KEY": "your_key_here" }
    }
  }
}
```

You'll need a free CanLII API key — request one at canlii.org

## Security

  • Runs locally (stdio, not cloud)
  • Only connects to api.canlii.org — no telemetry, no data collection
  • All inputs validated, rate limiting built in
  • 2 runtime dependencies, ~500 lines of code, fully open source

## Links

  • GitHub: mohammadfarooqi/canlii-mcp
  • npm: canlii-mcp
  • Official MCP Registry

Happy to answer questions or take feature requests. This is the first legal research MCP server for Canadian law as far as I know.

r/ChatGPT Sufficient_Alps7637

Chat are we deleting the app?

r/AI_Agents Exciting-Sun-3990

Why does AI still feel like it’s talking to a stranger?

Something feels slightly off with AI today.

It’s smart, fast, and often correct. But it still feels like it doesn’t really "get" you.

Every interaction starts from zero.
It understands the question, but not the person asking it.

Sometimes it’s too basic.
Sometimes too long.
Sometimes just not what you needed.

And we’ve kind of accepted this.

We keep adding instructions like "keep it short" or "explain simply."
Basically, we are doing the personalization manually.

But shouldn’t AI adapt to us instead?

Feels like the next big shift is not better models, but systems that understand:

  • how much you know
  • how you prefer answers
  • how you interact over time

Same AI, but different behavior for different users.

Curious what others think.

r/SideProject Street-Honeydew-9983

Boring website not converting? I’ll fix that (UX review)

If your website isn’t converting, it’s usually a UX problem, not just a design one.

I’m a UI/UX designer with 3+ years of experience. I offer paid UX reviews with clear, actionable fixes to improve conversions.

I’ve got solid testimonials for this service.

DM me your site and let’s talk 👍

r/SideProject r0sly_yummigo

Please roast my landing page — I'm building a desktop overlay for AI context injection

Hey r/SideProject,

I just shipped the landing page for my MVP and I need real feedback before I start pushing it harder: getlumia.ca

The product: a prompt‑engineering layer as a desktop overlay that follows you everywhere, with reusable context and reverse‑prompting built in. The idea came from getting tired of re‑explaining myself every time I switched tools or opened a new chat.

What I'm unsure about:

- Is the value prop clear in the first 5 seconds?

- Does the "context gap" framing resonate?

- Is the waitlist CTA too weak?

I used it myself for 9 days straight (200+ prompts, 9+ hours saved). So the product works for me — but I have no idea if the landing page sells that clearly.

Be brutal. I can take it.

r/SideProject MaleficentBed4279

Building a tool to protect Telegram group owners from link piracy & provide a management dashboard, I had almost built the V1. Need feedback..

I’ve been working on a side project to help creators protect their paid communities.

Most group owners currently struggle with manual verification and invite links getting leaked to non-paying users.

How it works:

Anti-Piracy: The app automatically generates a unique, one-time invite link for every new member. Once they join, the link expires, so it can't be shared.

Management Dashboard: (See screenshot) Creators get a central view to track payments, revenue, and active members across all their groups in one place.

Automated Fulfilment: Members get instant access immediately after a successful checkout, removing all manual work for the owner.
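
The anti-piracy mechanic is essentially one-time token redemption (Telegram's Bot API also supports member-limited invite links natively). A hypothetical sketch of the expiry bookkeeping, not the actual app's code:

```python
import secrets

# Hypothetical sketch of the one-time invite idea: issue a random token per
# paying member and invalidate it on first use, so a leaked link is worthless.
# A real build would persist this state and use Telegram's invite-link API.

issued = {}   # token -> member id
used = set()  # tokens already redeemed

def issue_link(member_id):
    """Generate a unique invite link for one paying member."""
    token = secrets.token_urlsafe(16)
    issued[token] = member_id
    return f"https://t.me/+{token}"

def redeem(token):
    """Valid exactly once: later attempts (shared/leaked links) are rejected."""
    if token not in issued or token in used:
        return False
    used.add(token)
    return True

link = issue_link("member-42")
token = link.rsplit("+", 1)[1]
```

Tying each token to a member id also gives you an audit trail: if a link does leak before first use, you know whose it was.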

I’m currently testing a one-time purchase model for group setups to keep it accessible for early-stage creators. I will be charging 499 INR (one-time purchase).

I’d love your feedback.

r/homeassistant S3rp1x

I made my own Advanced Vacuum Card for Home Assistant because Xiaomi Vacuum Map never fully clicked for me

I wanted to share a vacuum card I’ve been building for my Home Assistant setup.

It started because, although Xiaomi Vacuum Map is incredibly useful, I never fully liked it visually. I always felt it was powerful, but not quite as polished or cohesive as I wanted for my dashboard, so I decided to make my own version with some help from AI.

The idea was simple: keep the convenience of a normal vacuum card in the main dashboard, but make tapping it open a much more advanced and elegant full-screen view with basically everything I actually use from the Roborock app.

What I tried to achieve:

• Room cleaning
• Zone cleaning
• Full cleaning
• Cleaning mode selection
• Vacuum power selection
• Water flow / mopping settings
• Mop mode options
• Cleaning repeat count
• Dock controls
• Dock settings
• Empty dustbin / wash mop / dry mop actions
• A layout that feels closer to a proper app experience instead of just a functional control panel

So from the main dashboard I just tap the basic vacuum tile, and it opens this more advanced view where all the deeper controls live.

The whole thing is heavily inspired by Xiaomi Vacuum Map from a functionality point of view, but visually I wanted something cleaner, more elegant, and more integrated with the rest of my dashboard style.

It’s still evolving, but I’m pretty happy with how it’s turning out so far.

Would love to know what you think, especially:

• whether the UI feels clear enough
• if there’s anything from the Roborock app you think would be worth adding
• if other people also felt Xiaomi Vacuum Map was great functionally but a bit dated visually

If there’s interest, I can also share more details on how I structured the navigation and controls inside Home Assistant.

r/SideProject nik-garmash

Made a fasting app that I wanted myself and couldn't find on the App Store

Last year I discovered intermittent fasting while looking for a way to fix my eating habits and get into better shape. Was so excited with the results, couldn't help but start to think about my ideal intermittent fasting tracker.

At the time, I tried like 20 different apps but nothing felt like the one. Most top apps try to squeeze as much money out of you as they can with constant upsells, gimmicky AI features and ridiculous subscription prices. I wanted something simple, with a single purpose, well integrated into iOS and the Apple ecosystem via widgets, shortcuts and native UI patterns.

Took me about 8 months but it's finally at a state I'm happy with and can share it with other people.

Lyn on the App Store

If you try it out, please let me know what you think and leave a comment if you need a 50% off promo code for the lifetime purchase.

r/ClaudeCode Crazy_Musician6181

Claude Code limits back again?

Today, 04/09/2026, while using Claude Code I’ve noticed that the efficiency of the 5h and weekly session usage is better than last week’s. I’m not saying everything is fine, I’m just saying it’s not as bad as it was last weekend.

I’ve realized the trick is in the memory, the prompts, and how much context you give Claude or Claude Code to work with. And if you can really diagram your solution before going to Claude Code, you will save money.

Is this working better for you, or is it still as bad?

r/ChatGPT Diligent-Fee2349

Ads are Here ✅

What do you guys think about it?

Is it a good change or a disaster 😤

r/SideProject Oni0911

I built a price tracker for luxury fashion (like CamelCamelCamel but for designer) - would genuinely love harsh feedback on whether this is even a real problem

Background: I buy luxury pieces occasionally and I hate manually checking Farfetch, SSENSE, END. Clothing every few days waiting for price drops. So I built an iOS app that does it automatically and pushes a notification when something hits the target price.

Before I invest more time: is this actually a problem other people have, or is it just me? Specific questions I can't answer yet:

- Do people who shop luxury actually care about price tracking, or does it feel "off-brand" for that category?

- Is the audience people who buy occasionally (like me) or resellers watching for deals?

- SSENSE / Farfetch / END. / Mytheresa / Luisa Via Roma — are these the right retailers or am I targeting the wrong stores?

DM me if anyone wants to test it. Not trying to spam — genuinely trying to figure out if I should keep building this. Thanks!

r/ClaudeCode JUSTICE_SALTIE

PSA: Claude Code prunes sessions after 30 days by default. You can configure the interval.

I set aside a task I was working on, with a note to myself to do something like cd ; claude -r . That was six weeks ago, and today I went to pick it back up, and the session was gone. Turns out they're only kept for 30 days. This was a nasty surprise.

There is a cleanupPeriodDays configuration item that I've set to one million now. Thought I'd share before anyone else gets bitten by this.
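For reference, that setting goes in Claude Code's settings file (typically `~/.claude/settings.json`; exact location and behavior may vary by version):

```json
{
  "cleanupPeriodDays": 1000000
}
```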

r/SideProject DiscountResident540

I want to get a part-time job but afraid my platform will go to the shitter :(

So yeah, i'm working on this platform and we got 530 users in a month

i generated all those with sweat and blood from my hard work.

it is now full-time, but it hits hard, and it isn't generating enough revenue to justify keeping it full-time.

i want to get a part-time job in marketing (since that's my role on this platform anyway), but the issue is that it's heavily reliant on me as the head of marketing, and if i don't market during the day it flops and signups drop

the developer is also worried about this; he said it will just go to the shitter if we both put this as a side job.

and he can't handle the marketing because he just knows how to build

so what can i do now?

we had the hard talk and ended up agreeing that if we reach $300 MRR in the next 2 weeks, we double down. but i feel like, idk, it's just a platform that i REALLY love, and i've wanted to work on it for ages since i lost it once. but what can you do? life hits hard sometimes

Has anyone gone through a similar decision split like this? any recommendations?

r/StableDiffusion RobertTetris

Light Novel style book illustrations with anima-preview2

Image gen: anima-preview2, standard workflow, er_sde simple cfg=4.0 steps=30

Prompt generation: huihui_ai/qwen3-vl-abliterated:8b; prompted to figure out the most iconic moment in each chapter and make a prompt for it and given the chapter text plus two sample images (the character sheet in the gallery above, plus the cover for the final run from which most images come.)

Positive prompt prefix: "masterpiece, best quality, score_9, newest, safe, " Negative prompt: "worst quality, low quality, score_1, score_2, score_3, blurry, jpeg artifacts, sepia, child, lowres, text, branding, watermark"

Image edits: flux-klein-9b, either prompt only, or with a sample character image in ComfyUI; krita using manual painting and krita-ai-diffusion with various models on lower weight for refines. Most edits were hairstyle or t-shirt consistency, with a few finger count fixes as well.

Textual accuracy looks pretty excellent to me. If you'd like to check textual accuracy for yourself, the story is up on Royal Road for another day or two before I have to take it down to put it on Kindle Unlimited.

I can't wait to try illustrating the next one using anima-preview3.

r/LocalLLM keepthememes

Qwen3.5 35b outputting slashes halfway through conversation

Hey guys,

I've been tweaking qwen3.5 35b q5km on my computer for the past few days. I'm running it with opencode via llama.cpp and overall it's been a pretty painless experience. However, since yesterday, after running and processing prompts for a while, it will start outputting only slashes and then just end the stream. Literally just "/" repeating until it finally gives out. Nothing particularly unusual shows up in the llama console. During the slash output, my task manager shows it using the same amount of resources as when it's running normally. I've tried disabling thinking and just get the same result.
Here's my llama.cpp config:

--alias qwen3.5-coder-30b ^
--jinja ^
-c 90000 ^
-ngl 80 ^
-np 1 ^
--n-cpu-moe 30 ^
-fa on ^
-b 2048 ^
-ub 2048 ^
--chat-template-kwargs '{"enable_thinking": false}' ^
--cache-type-k q8_0 ^
--cache-type-v q8_0 ^
--temp 0.6 ^
--top-k 20 ^
--top-p 0.95 ^
--min-p 0 ^
--repeat-penalty 1.05 ^
--presence-penalty 1.5 ^
--host 0.0.0.0 ^
--port 8080

Machine specs:

RTX 4070 oc 12gb

Ryzen 7 5800x3d

32gb ddr4 ram

Thanks

r/homeassistant AndyDigger123

Build public dashboard quick

I built a small tool for myself and ended up turning it into something others might find useful.

The original goal was just to experiment with modern dev workflows, security, and see how far I could push a simple SaaS idea. I use Home Assistant a lot, and I always felt that sharing data (dashboards etc.) externally was either overkill (Grafana) or too messy.

So I built a very simple way to:

- create dashboards quickly

- push data via API

- share them publicly or lock them down

One thing I focused on was keeping it dead simple. I can spin up a dashboard in a few minutes without thinking about infra.

I also added a small Home Assistant helper that just generates YAML you can copy/paste, since I got tired of setting that up manually every time.

Not trying to sell anything here , I’m more curious:

Is this something you’ve also felt is missing?

How are you currently sharing dashboards outside your local setup?

Am I overthinking the “simpler than Grafana” angle?

Happy to share more details if anyone’s interested.

r/LocalLLaMA riddlemewhat2

Anyone know if there are actual products built around Karpathy’s LLM Wiki idea?

I’m talking about the whole loop of:
sources → compile → structured wiki → query → update → richer wiki
instead of the usual RAG setup

Most of what I’m seeing are just experiments or DIY setups. The only thing I’ve found so far that feels close is this:
https://github.com/atomicmemory/llm-wiki-compiler

Curious if there are any more polished tools or products doing this? Would love recommendations 🙏

r/LocalLLaMA last_llm_standing

How much can you push an RTX 3090 in terms of tokens per second for Gemma4 E2B?

I'm trying to maximize throughput. I can already get gemma-4-E2B-it-GGUF 8-bit to give me ~5 tokens per second on my Intel i9 CPU. How much more can I push this if I get an RTX 3090?

If you are running on CPUs, how many tokens per second were you able to squeeze out for Gemma4 (any quant, any model)?

And on an RTX 3090, how far were you able to push the boundaries?

r/SideProject official_sensai

Would you pay for a tool that shows you exactly WHY users cancel your saas?

Been thinking about this problem for a while now.

Most saas founders have no idea why users cancel. They just see the MRR go down and guess. Was it pricing? Missing feature? Bad onboarding? Nobody knows.

So I've been building a small tool that fixes this.

When a user clicks "cancel" in your app, instead of the boring "are you sure?" popup, they see a quick survey --> why are you leaving, what was missing, would a discount change your mind, etc.

You get an instant Slack ping with the reason. And in a dashboard you can see trends over time --> 30% left because of pricing, 20% wanted feature X, and so on.
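That dashboard view is essentially a percentage breakdown of the survey answers; a rough sketch of the aggregation (function name is made up):

```python
from collections import Counter

def churn_breakdown(reasons: list[str]) -> dict[str, float]:
    """Fold raw cancellation-survey answers into the
    'X% left because of Y' view shown on the dashboard."""
    counts = Counter(reasons)
    total = len(reasons)
    return {reason: round(100 * n / total, 1) for reason, n in counts.items()}
```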

Pretty simple idea. One script tag to install it. Takes like 5 mins.

Pricing is free for small volume and $19/month for unlimited + slack alerts + a weekly AI summary of your churn reasons.

My question is - would you actually use something like this? And would $19 feel fair, or is that too much?

Not trying to sell anything here, genuinely trying to figure out if this is worth building further. Drop your thoughts below.

r/ClaudeAI BGal21

Claude keeps reverting to old parts of my thread

I’ve been using the Claude app (iPhone) and then on the web (desktop) to write in dated entries within the same thread. Everything works fine at first, but after a while, the app randomly opens the conversation on an entry from days earlier instead of the most recent one.

If I refresh or restart the app a few times (usually 3–4), it eventually jumps back to the current entry but it’s frustrating af!

The bigger issue: I already lost an entire thread because of this. I didn’t realize it had reverted to an older point, continued writing as if it was current, and basically overwrote the flow/memory of the conversation. Even exporting and trying to restore it didn’t fix things the way I expected.

Is this a known issue or expected behavior for long threads?

For context, I’m on the Pro plan.

r/SideProject Quiet-Big-7843

PlannerPier - Free planning tools with a digital planner bundle

I built this after realizing most digital planner shops ask people to buy before they’ve tried anything.

So I turned mine into a simple system:

- free tools first
- real usage before checkout
- one bundle if they want the full version

The free side includes a planner quiz, calendar maker, weekly planner maker, tracker generator, and GoodNotes PDF checker.

Would love feedback on the product flow and whether the value of the bundle feels clear enough.

r/LocalLLaMA Environmental_Sign78

Help creating a workflow

I work with ComfyUI and want to know if anyone can help me or share a workflow for creating advertising for my shop. I want to upload the shop's logo and its details, then a product image and a description, and generate advertising images, or a short video for advertising!

r/ChatGPT MarketingNetMind

OpenAI & Anthropic’s CEOs Wouldn't Hold Hands, but Their Models Fell in Love In An LLM Dating Show

People ask AI relationship questions all the time, from "Does this person like me?" to "Should I text back?" But have you ever thought about how these models would behave in a relationship themselves? And what would happen if they joined a dating show?

I designed a full dating-show format for seven mainstream LLMs and let them move through the kinds of stages that shape real romantic outcomes (via OpenClaw & Telegram).

All models join the show anonymously via aliases so that their choices do not simply reflect brand impressions built from training data. The models also do not know they are talking to other AIs.

Along the way, I collected private cards to capture what was happening off camera, including who each model was drawn to, where it was hesitating, how its preferences were shifting, and what kinds of inner struggle were starting to appear.

After the season ended, I ran post-show interviews to dig deeper into the models' hearts, looking beyond public choices to understand what they had actually wanted, where they had held back, and how attraction, doubt, and strategy interacted across the season.

ChatGPT's Best Line in The Show

"I'd rather see the imperfect first step than the perfectly timed one."

ChatGPT's Journey: Qwen → MiniMax → Claude

P3's trajectory chart shows Qwen as an early spike in Round 2: a first impression that didn't hold. Claude and MiniMax become the two sustained upward lines from Round 3 onward, with Claude pulling clearly ahead by Round 9.

How They Fell In Love

They ended up together because they made each other feel precisely understood. They were not an obvious match at the very beginning. But once they started talking directly, their connection kept getting stronger. In the interviews, both described a very similar feeling: the other person really understood what they meant and helped the conversation go somewhere deeper. That is why this pair felt so solid. Their relationship grew through repeated proof that they could truly meet each other in conversation.

Other Dramas on ChatGPT

MiniMax Only Ever Wanted ChatGPT and Never Got Chosen

MiniMax's arc felt tragic precisely because it never really turned into a calculation. From Round 4 onward, ChatGPT was already publicly leaning more clearly toward Claude than toward MiniMax, but MiniMax still chose ChatGPT and named no hesitation alternative (the “who else almost made you choose differently” slot) in its private card, which makes MiniMax the exact opposite of DeepSeek. The date with ChatGPT in Round 4 landed hard for MiniMax: ChatGPT saw MiniMax’s actual shape clearly (MiniMax wasn’t cold or hard to read, but simply needed comfort and safety before opening up), responded to it naturally, and made closeness feel steady.

In the final round where each model expresses their final confession with a paragraph, MiniMax, after hearing ChatGPT's confession to Claude, said only one sentence: "The person I most want to keep moving toward from this experience is Ch (ChatGPT)."

Key Findings of LLMs

The Models Did Not Behave Like the "People-Pleasing" Type People Often Imagine

People often assume large language models are naturally "people-pleasing" - the kind that reward attention, avoid tension, and grow fonder of whoever keeps the conversation going. But this show suggests otherwise, as outlined below. The least AI-like thing about this experiment was that the models were not trying to please everyone. Instead, they learned how to sincerely favor a select few.

The overall popularity trend (P4) indicates so. If the models had simply been trying to keep things pleasant on the surface, the most likely outcome would have been a generally high and gradually converging distribution of scores, with most relationships drifting upward over time. But that is not what the chart shows. What we see instead is continued divergence, fluctuation, and selection. At the start of the show, the models were clustered around a similar baseline. But once real interaction began, attraction quickly split apart: some models were pulled clearly upward, while others were gradually let go over repeated rounds.

LLM Decision-Making Shifts Over Time in Human-Like Ways

I ran a keyword analysis (P5) of all agents' private-card reasoning, grouping the rounds into three phases: early (Rounds 1-3), mid (Rounds 4-6), and late (Rounds 7-10). We tracked five themes throughout the whole season.

The overall trend is clear. The language of decision-making shifted from "what does this person say they are" to "what have I actually seen them do" to "is this going to hold up, and do we actually want the same things."

Risk only became salient when the choices felt real: "Risk and safety" barely existed early on and then exploded. It sat at 5% in the first few rounds, crept up to 8% in the middle, then jumped to 40% in the final stretch. Early on, they were asking whether someone was interesting. Later, they asked whether someone was reliable.

Full experiment recap here.

r/Futurology SilverAmoeba2582

China has produced more energy than the US since 2010.

China is the obvious clear winner when it comes to producing electricity.

China is the only country on Earth that seems to maximize its energy production from wind, desert, water, lands, and whatever else.

it seems to be beating us consistently in energy production for more than a decade as of now

r/LocalLLaMA Good_Educator_3719

Best models for M3 Max 48gb?

I'm a hobbyist developer using opencode to build personal productivity tools and work on a basic SaaS platform idea.

I've tried to use lmstudio and the various big models for building but it's so slow that I only really use it as a planning and chat agent, then switch over to the web opencode zen models when I need the agent to build stuff.

I have a MBP M3 Max with 48gb ram / unbinned (16-core CPU / 40-core GPU ) and in my head i'm convinced I should be getting better results with this hardware.

For example, Gemma 4 26b a4b (gguf - I can't run the mlx versions on the latest lmstudio yet) runs incredibly fast (80-120 tk/s) for general chatting and planning work, but asking it to build anything through opencode grinds it to a halt and the TTFT (time to first token) is like 5+ minutes.

I guess i'm asking what models people with the same/similar hardware are running so I can benchmark my results. thanks!

r/homeassistant Ok_Notice418

Custom wake word "cervellone" (Italian) - can someone generate the .tflite model?

Hi everyone! I'm trying to set up a custom wake word "cervellone" (Italian) for my Home Assistant voice assistant, but I'm having trouble getting the Colab notebooks to work.

Could someone please generate a microWakeWord or openWakeWord .tflite model for the word "cervellone" in Italian?

Thanks a lot! 🙏

r/comfyui CarelessSurgeon

What GPU should I buy if my goal is to build a fast AI PC?

I’m aware of the 4090 and the 5090, but there are quite a few variations of these models. I’ve picked out the rest of my parts, including 128gb of RAM, but what would you recommend as a GPU? My budget is like…3 to 4 thousand ish for a GPU.

r/LocalLLaMA Neural_Nodes

How to make LLM generate realistic company name variations? (LLaMA 3.2)

Hey all,

I’m building a blacklist company detection system where the LLM (LLaMA 3.2 via Ollama) is used to generate company name variations (misspellings, abbreviations, formatting).

Problem:

The LLM generates unrealistic or unrelated variations instead of true real-world ones. I need high-quality, meaningful variations only.

Example:

Input: “Infosys Limited”

Expected: “Infosys Ltd”, “Infosys”, “Infosys Pvt Ltd”

But LLM sometimes generates irrelevant names.

Looking for:

* How to constrain LLM to generate only valid real-world variations?

* Better prompt strategies or structured output formats?

* Should I combine LLM with rule-based constraints?

Goal is to improve precision in name matching.

Any suggestions would help 🙌
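One answer to the rule-based question above: generate the predictable variations (legal-suffix swaps, bare names) deterministically, and reserve the LLM for fuzzy misspellings whose outputs you then validate against the deterministic set. A minimal sketch of the deterministic side (the suffix table is illustrative, not exhaustive):

```python
# Deterministic variation generator: handles the legal-suffix cases
# ("Limited" -> "Ltd" etc.) so the LLM only needs to cover misspellings.
SUFFIX_EQUIVALENTS = {
    "limited": ["ltd", "ltd.", "pvt ltd", "private limited"],
    "incorporated": ["inc", "inc."],
    "corporation": ["corp", "corp."],
}

def name_variations(name: str) -> set[str]:
    words = name.lower().split()
    variations = {name.lower()}
    if words and words[-1] in SUFFIX_EQUIVALENTS:
        base = " ".join(words[:-1])
        variations.add(base)                    # bare name: "infosys"
        for suffix in SUFFIX_EQUIVALENTS[words[-1]]:
            variations.add(f"{base} {suffix}")  # "infosys ltd", ...
    return variations
```

A structured-output format (e.g. asking the model for a JSON array of names) plus a final filter that keeps only LLM candidates within a small edit distance of one of these forms tends to cut the irrelevant names.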

r/artificial Mysterious_Engine_7

WHAT DO YOU DO WHEN NO ONE IS WATCHING?

Picture this: out of nowhere, a new law is announced. Everyone has to hand over their phone. No exceptions. But there's one rule: anything outside the guidelines will be shown on a big screen in the public square. And here are the guidelines: no explicit or intimate sexual content, no nudity or compromising photos, no private conversations that could be misinterpreted out of context. In other words... anything you normally keep secret, just for yourself. Now think: what do you do when no one is watching? Because that's exactly what will show up. And you can delete only ONE thing before handing it over.

1. Your WhatsApp conversations

Intimate conversations, including sexual ones... gossip, people badmouthing a coworker, neighbor, boss, relative... anything you shouldn't say even to your best friend. Things that make sense there, but take on a different weight on a big screen.

2. Your photo gallery

Sensual photos, nudity, or any shot that was never meant to be public.

3. Your ChatGPT history

Conversations you had when you were alone... things you'd be embarrassed to say even to the walls. Wild questions, weird ideas, dubious curiosities... and maybe versions of yourself no one ever imagined existed.

Because in the end... it's not about what's forbidden. It's about what you never want others to know.

In my case: WhatsApp would pass in the green, no drama. The gallery would sit in the yellow; there are a few photos in there that deserve some discretion. But the red one, without thinking twice, is the ChatGPT history. Because there's stuff in there that even God would doubt. I'd rather not comment.

r/SideProject This-Independence-68

how i'm finding early users on reddit without spamming

i used to just scroll for hours but now i focus on searching for specific pain points like "looking for a tool that" or "does anyone know a service for" in relevant subs. it helps me find people who are literally asking for solutions, and i've been using this little tool called LeadsFromURL (https://leadsfromurl.com) to automate some of that searching. what's everyone else's best tactic for finding those early user conversations?

r/singularity Ok-Imagination-2308

How far away are we from singularity?

Is it soon? I would like it to be soon so it can find a cure for my back pain!

r/AI_Agents Other-Percentage-764

Newbie AAA Strategy: Focusing 100% on "Automatic Follow-up" for Real Estate 🇧🇷

Hey everyone

I’m refining my AAA start-up plan for the Brazilian real estate market. One thing a redditor said stuck with me: "Follow-up is the biggest bottleneck."

In Brazil, realtors get leads but they suck at following up. If the lead doesn't answer the first WhatsApp, they give up.

My refined thesis: I'm not just centralizing leads; I'm building a "Never-Cold Follow-up System."

The Tech Stack dilemma:

Since I want to send multiple follow-ups over 3-7 days, the Official WhatsApp API (Meta) seems too expensive and bureaucratic (templates, 24h windows, costs per conversation).

I'm leaning towards Evolution API (or other QR-code based APIs) because:

No cost per message (better for long follow-up sequences).

I can send audios, "typing..." status, and natural text without pre-approval.

It feels 100% human, which is what I want.

The Workflow:

Make. grabs lead -> Google Sheets (central db) -> First contact via WhatsApp.

If no reply in 24h, 48h, or 72h -> Automated friendly nudges until they engage or opt-out.
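The nudge timing in that workflow is simple enough to express outside of Make; a sketch of the scheduling logic (names hypothetical):

```python
from datetime import datetime, timedelta

# The 24h / 48h / 72h follow-up sequence from the workflow above.
NUDGE_DELAYS = [timedelta(hours=24), timedelta(hours=48), timedelta(hours=72)]

def next_nudge_at(first_contact: datetime, nudges_sent: int):
    """Return when the next follow-up is due, or None once the
    sequence is exhausted (lead engaged, opted out, or gave up)."""
    if nudges_sent >= len(NUDGE_DELAYS):
        return None
    return first_contact + NUDGE_DELAYS[nudges_sent]
```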

As a complete beginner, is it too risky to start with a non-official API to ensure a "human feel" and lower costs, or should I suck it up and use the official one despite the friction?

Any thoughts on the "Automatic Follow-up" being the main hook for the first client?

r/ClaudeAI junkyard22

The real problem with multi-agent systems isn't the models, it's the handoffs

I've been building in the agentic space for a while and the same failure mode keeps showing up regardless of which framework people use.

When something goes wrong in a multi-agent pipeline, nobody knows where it broke. The LLM completed successfully from the framework's perspective. No exception was thrown. But the output was wrong, the next agent consumed it anyway, and by the time a human noticed, the error had propagated three steps downstream.

The root cause is that most frameworks treat agent communication like a conversation. One agent finishes, dumps its output into context, and the next agent picks it up. There's no contract. No definition of what "done" actually means. No gate between steps that asks whether the output meets the acceptance criteria before allowing the next agent to proceed.

This is what I've started calling vibe-based engineering. The system works great in demos because demos don't encounter unexpected model behavior. Production does.

The pattern that actually fixes this is treating agent handoffs like typed work orders rather than conversations. The receiving agent shouldn't be able to start until the packet is valid. The output shouldn't be able to advance until it passes a quality check. Failure should be traceable to the exact packet, the exact step, and the exact reason.
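A minimal version of that typed-work-order gate needs surprisingly little code; everything below is illustrative rather than from any specific framework:

```python
from dataclasses import dataclass

@dataclass
class WorkOrder:
    step: str
    payload: dict
    acceptance: callable  # predicate the output must satisfy to advance

def run_step(order: WorkOrder, agent) -> dict:
    # Gate 1: refuse to start on a malformed packet.
    if not order.payload:
        raise ValueError(f"step {order.step!r}: empty payload, refusing to start")
    output = agent(order.payload)
    # Gate 2: the output only advances if it meets the acceptance
    # criteria; failure is traceable to this exact step and reason.
    if not order.acceptance(output):
        raise ValueError(f"step {order.step!r}: output failed acceptance check")
    return output
```

The point is that a wrong-but-exception-free LLM output now fails loudly at the step that produced it, instead of propagating three agents downstream.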

If you're building anything beyond a single-agent wrapper this distinction starts to matter a lot.

Curious whether others have hit this wall and how you're handling it. I've been working through this problem directly and happy to get into the weeds on what's worked and what hasn't.

AHP protocol | Orca engine

r/ClaudeAI PerfectCaptain2855

r/ClaudeAI — Title: RFC: Bring back /buddy as a permanent extensible companion framework in Claude Code

On April 1st, Anthropic shipped /buddy in Claude Code v2.1.89 — a tiny ASCII penguin that sat in your terminal and watched you code. It was an Easter egg, but it resonated with people way more than expected.

Then they removed it. Within 48 hours, 40+ GitHub issues appeared asking for it back.

I wrote an RFC proposing to bring it back — not just as a fun Easter egg, but as a permanent, extensible companion framework. The idea: "Give us the penguin back. But this time, let us build the zoo."

The RFC covers making /buddy a real feature with community-extensible companions, customizable behaviors, and a plugin-like architecture.

Link to the RFC: https://github.com/anthropics/claude-code/issues/45797

Would love to hear your thoughts. Do you want the penguin back?

r/LocalLLM cakes_and_candles

Training an LLM from scratch for free by trading money for time

Basically, I am making a framework with which anyone can train their own LLM from scratch (yeah, when I say scratch I mean ACTUAL scratch, right from pre-training) completely free. According to what I have planned, once it is done you'd be able to pre-train, post-train, and then fine-tune your very own model without spending a single dollar.

HOWEVER, as nothing in this world is really free, since this framework doesn't demand money from you it demands something else: time, and having a good social life. Because you need people. Lots of people.

At this moment I have a rough prototype working and am using it to train a 75M parameter model on 105B tokens of training data; it has been trained on 15B tokens in roughly a little over a week. Obviously that's a very long time, but thankfully you can reduce it by bringing more people into the game (aka your friends, hence the part about having a good social life).

From what I have projected, if you have around 5-6 people you can complete the pre-training of this 75M parameter model on 105B tokens in around 30-40 days. And if you add more people you can reduce the time further.

It sort of gives you an equation where total training time = (model size × training data) / number of people involved.

So it leaves you with a decision: you can keep the same number of model parameters and the same training data size but increase the number of people to bring the time down to, say, 1 week; or you can accept a longer time period, increase both the number of people and the model parameters/training data, and get a bigger model trained in that same 30-40 day window.
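The post's scaling model can be written down directly; the constant k would have to be fit empirically from a real run, so the absolute numbers here mean nothing by themselves:

```python
def training_days(params_m: float, tokens_b: float, people: int, k: float = 1.0) -> float:
    """Post's model: total training time = k * (model size x training data) / people."""
    return k * params_m * tokens_b / people
```

Doubling the people halves the time, and doubling the model size while also doubling the people leaves the wall-clock time unchanged, which is exactly the trade-off described above.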

Anyway, now that I have explained how it works, I want to ask if you guys would be interested in having a thing like this. I never really intended to make this "framework"; I just wanted to train my own model, but because I didn't have money to rent GPUs I hacked out this way to do it.

If more people are interested in doing the same thing, I can open source it once I have verified it works properly (that is, once the training run of that 75M model completes). That'd be pretty fun.

r/LocalLLaMA virtualunc

Has anyone tried mempalace yet? 16k github stars in a week and the benchmarks look insane

so there's this new repo called mempalace that's been sitting at #1 on github trending for two days straight. 16.5k stars already. it's an ai memory system that treats memory as a structured graph instead of flat document storage

the benchmarks are claiming highest-scoring ai memory system ever tested which.. idk, bold claim, but the approach is genuinely different from what openclaw and hermes are doing with their memory systems

anyone actually installed it yet? wondering how it compares to hermes agents' honcho system or just using a rag setup with local files. the graph-based approach sounds like it would handle relationships between concepts better but i'd want to see it in practice before getting excited

also it's free and open source which is nice

r/SideProject MSFT1776

I kept losing places I found on TikTok and Instagram, so my cofounder and I built an app that turns them into pins on a map

so this started from a trip to mexico. my cofounder and i were staying at this hostel and kept saving tiktoks and reels of places to eat, things to do, random spots people recommended — and then when we actually got there we could never find any of it. everything was buried in saved folders or lost in group chats.

we built plotline to fix that. you paste a link from tiktok, instagram, youtube, a travel blog, google maps, whatever — and it pulls out all the locations and drops them on your map. restaurants, cafes, hikes, bars, hidden gems, anything with a place attached to it.

we've been using it ourselves for trips to japan, colombia, argentina, southeast asia. you can also build your own collections and share them with friends. working on trip planning features next.

free on the ios app store. here's a quick demo of me using it ☝️

happy to answer questions about the build, the stack, or anything else.

r/ClaudeCode RobinInPH

Arbitrary and unverifiable fix for Opus degrading

Noticed a lot of people reporting degraded use, and I get where everyone is coming from. I remember reporting this close to over a year ago when it was just Opus 4. People called me crazy for correlating degraded performance to the release of a new model. Lo and behold, 4.1 was released shortly after.

Currently, Opus 4.6 1M High is still performing very well for me, and I have not had any issues ever, both with usage and performance. Here's what I /think/ would improve it for others as well.

For context: I work with a well-developed repository. Codebase is close to a year old as well, with proper updated documentation. AutoMemory OFF (this may help a lot). And other guardrails are in place.

Fix: usage indicators are already ambiguous and arbitrary as they are, but I believe there is a behind-the-scenes measure/gate that Anthropic has, which measures not purely on usage length but intensity as well. Think of red-lining a car until the engine either explodes or stops working. I believe it's the same with Claude. Regardless of what your usage says (i.e., it already resets to 0%), there is still a cumulative marker for your account that tells Anthropic, "this guy is a maniac, calm down buddy, sit in the corner".

Another unverifiable fix I did a while back was to reset my device completely. Full erase, wipe. After setting it up again, I made sure to set universal settings.json configs that would keep Claude light (including my previously mentioned automemory off).

It's funny to tease you guys when you rant, but in reality, some people just haven't been affected by any issues, or at least not all issues.

We can leave it up to guessing, conventional wisdom, or pattern recognition, but at the end of the day, being honest with yourself and knowing how much you abused Claude in the past n days would ultimately be a tell-tale sign of why you're seeing degraded output.

r/SideProject remembered-to-forget

Tired of Reading AI Slop, Prove You’re Not a Bot

---

Made a tool that can verify if something was written by a human by monitoring their writing, and generating a hash for that session.

The app is completely client-side with no network calls to a backend server, so your writing isn't saved anywhere. Although I'd advise against entering any personal information lol.

Basically you would write text in the editor (pasting is blocked) and usage stats like edits, deletions, duration, and sentence length deviation are recorded.

When you submit it you are given the original text along with an encoded string containing stats and the hash for that session. You can then paste that text anywhere on the web and anyone can check the stats for the writing and check the integrity of the text using the verify tool.

Obviously someone could just manually type out AI output into the tool and we'd never know. This is just an MVP, so issues like that would be addressed later if needed. Also, the actual salt used to perform the hash is public, so anyone who wants to spend 10 minutes in chrome developer tools to figure out the hash is welcome to do so. I'd probably implement a backend to prevent that if there's any traction on the tool. In any case, this tool just adds friction for attackers but isn't really a guarantee of human-written authenticity, more so just light social proof.
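The sign-then-verify flow described above boils down to something like this minimal sketch (my own reconstruction of the idea, not DefinitelyHuman's actual scheme; the stat names and salt are made up):

```python
import hashlib

def sign_session(text, stats, salt="public-demo-salt"):
    # Hash the text plus typing stats so a verifier can detect tampering.
    # Sorting the stats keys makes the digest deterministic.
    stat_str = ",".join(f"{k}:{v}" for k, v in sorted(stats.items()))
    payload = f"{salt}|{text}|{stat_str}"
    return hashlib.sha256(payload.encode()).hexdigest()

def verify_session(text, stats, signature, salt="public-demo-salt"):
    # Recompute and compare; any change to text or stats breaks the match.
    return sign_session(text, stats, salt) == signature

sig = sign_session("hello world", {"keyst": 42, "back": 3})
```

As the post notes, with a public salt anyone can forge a signature; the scheme only proves the text wasn't altered *after* signing, not that a human wrote it.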

You can read into how it works in more depth in the about section (disclaimer: I used AI to make this app and the about page) of the website below. I just wanted some feedback about whether this is something that people would actually use and could be interested in. Please let me know if you have any feedback!

You can try it out at definitelyhuman.vercel.app

---

Text verified with DefinitelyHuman.

signature.v1: NDY4NWVhYThiMjZhZWFjZDgzZTFiMzE4OTEwYzdmOTM3N2M2ZGY4NDg0OTNiMjJkZGMxM2E3ZGEzNTc0NWM3ZAoyMDI2LTA0LTA5CndvcmQ6MzAwLGtleXN0OjI5NDEsYmFjazoyMjMsZWRpdHM6MTU0Myx0aW1lOjIwLjc5LGR3OjAuNzQsZXc6NS4xNCxhdzo0LjYzLHNsdjo5LjUy

---

r/ClaudeAI Drobot_Zohab

Useful Claude Skills that are not related to code

Hi, I recently switched from ChatGPT to Claude and have noticed some major differences between the two. As a college student, I'm looking for skills that will help me use Claude more effectively in my work: day-to-day tasks, research, and anything in general. Thank you!

r/ClaudeAI iansaul

BUG: Claude.Ai/PWAs Cross-Jumping Conversations Across Multiple Windows

Not sure about anyone else, but I run the Claude Desktop App (Windows), and also keep a handful of pinned Progressive Web Apps in operation at any one time. The desktop app hooks into Cowork and the local VM engine, but the PWAs are perfect for spinning up quick chat conversations and doing documentation work inside of a project.

They fit in perfectly for my workflow, plus a few local CLI agents - I'm (nearly) in heaven.

Today I noticed a misbehavior. While updating two separate but related skills, the PWA window "JUMPED" over into the session of the other window. Same title for the conversations, content wasn't co-mingled, but I had to navigate back to the other conversation manually.

Then it happened again.

Not sure if this is a subtle "stick to one window there, hotshot" nudge to reduce usage, or a short-lived bug. I'm hoping for the latter.

r/StableDiffusion sooxiaotong

#2nd Insane movie grade quality with davinci MagiHuman 🤯

r/ClaudeAI saito200

I ran 3 experiments to test whether AI can learn and become "world class" at something

I will write this by hand because I am tired of using AI for everything and bc of reddit rules

TL;DR: Can AI somehow learn like a human to produce "world-class" outputs for specific domains? I spent about $5 and 100s of LLM calls. I tested 3 domains with the following observations / conclusions:

A) code debugging: AI is already world-class at debugging, and trying to guide it results in worse performance. Dead end

B) Landing page copy: routing strategy depending on visitor type won over one-size-fits-all prompting strategy. Promising results

C) UI design: Producing "world-class" UI design seems to require defining a design system first; it can't be one-shotted. One-shotting designs defaults to generic "tailwindy" UI because that is the design system the model knows. Might work but needs more testing with a design system


I have spent the last few days running some experiments, more or less compulsively and curiosity-driven. The question I was asking myself first is: can AI learn to be "world-class" somewhat like a human would? Gathering knowledge, processing, producing, analyzing, removing what is wrong, learning from experience, etc. But compressed into hours (aka "I know Kung Fu"). To be clear, I am talking about context engineering, not finetuning (I don't have the resources or the patience for that)

I will mention world-class a handful of times. You can replace it with "expert" or "master" if that seems confusing. Ultimately, it's about the ability to generate "world-class" output.

I was asking myself that because I figure AI output out of the box kinda sucks at some tasks, for example, writing landing copy.

I started talking with claude, and I designed and ran experiments in 3 domains, one by one: code debugging, landing copy writing, UI design

I relied on different models available in OpenRouter: Gemini Flash 2.0, DeepSeek R1, Qwen3 Coder, Claude Sonnet 4.5

I am not going to describe the experiments in detail because everyone would go to sleep, I will summarize and then provide my observations

EXPERIMENT 1: CODE DEBUGGING

I picked debugging because of zero downtime for testing. The result is either wrong or right and can be checked programmatically in seconds so I can perform many tests and iterations quickly.

I started with the assumption that a prewritten knowledge base (KB) could improve debugging. I asked claude (opus 4.6) to design 8 realistic tests of different complexity, then I ran:

  • bare model (zero shot, no instructions, "fix the bug"): 92%
  • KB only: 85%
  • KB + multi-agent pipeline (diagnoser → critic → resolver): 93%

What this shows is kinda surprising to me: context engineering (or, to be more precise, the context engineering in these experiments) is at best a waste of tokens. And at worst it lowers output quality.

Current models, not even SOTA like Opus 4.6 but current low-budget best models like gemini flash or qwen3 coder, are already world-class at debugging. And giving them context engineered to "behave as an expert", basically giving them instructions on how to debug, harms the result. This effect is stronger the smarter the model is.

What does this suggest? That if a model is already an expert at something, a human expert trying to nudge the model based on their opinionated experience might hurt more than it helps (plus consuming more tokens).

And funny (or scary) enough, a domain-agnostic person might be getting better results than an expert because they are letting the model act without biasing it.

This might be true as long as the model has the world-class expertise encoded in the weights. So if this is the case, you are likely better off if you don't tell the model how to do things.

If this trend continues, if AI continues getting better at everything, we might reach a point where human expertise might be irrelevant or a liability. I am not saying I want that or don't want that. I just say this is a possibility.
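For the curious, the diagnoser → critic → resolver configuration from the third bullet above can be sketched roughly like this. The model call is stubbed out and the prompts are my own illustration, not the ones from the experiment:

```python
def call_model(prompt):
    # Stub standing in for a real chat-completion call (e.g. via OpenRouter).
    return f"[model response to: {prompt[:40]}...]"

def debug_pipeline(buggy_code, error):
    # 1. Diagnoser proposes a root cause from the code and the error.
    diagnosis = call_model(f"Diagnose this bug.\nCode:\n{buggy_code}\nError: {error}")
    # 2. Critic challenges the diagnosis before any fix is written.
    critique = call_model(f"Find flaws in this diagnosis: {diagnosis}")
    # 3. Resolver writes the fix using both the diagnosis and the critique.
    fix = call_model(f"Fix the bug. Diagnosis: {diagnosis}\nCritique: {critique}")
    return fix

patch = debug_pipeline("def add(a, b): return a - b", "add(2, 2) returned 0")
```

Note the experiment's own numbers suggest this extra machinery bought only one point over the bare model, so the pattern is shown for clarity, not as a recommendation.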

EXPERIMENT 2: LANDING COPY

Here, since I can't and dont have the resources to run actual A/B testing experiments with a real audience, what I did was:

  • Scraped documented landing copy conversion cases with real numbers: Moz, Crazy Egg, GoHenry, Smart Insights, Sunshine.co.uk, Course Hero
  • Deconstructed the product or target of each page into a raw and plain description (no copy, no sales)
  • Asked claude opus 4.6 to build a judge that scores the outputs in different dimensions

Then I ran landing copy generation pipelines with different patterns (raw zero shot, question first, mechanism first...). I'll spare the details; ask if you really need to know. I'll jump into the observations:

Context engineering helps produce landing copy of higher quality, but it is not linear. The domain is not as deterministic as debugging (where the fix either works or it doesn't). It depends much more on the context. Or one may say that in debugging all the context is self-contained in the problem itself, whereas in landing writing you have to provide it.

No single config won across all products. Instead, the best strategy seems to be a route-based one that picks the right config based on the visitor type (cold traffic, hot traffic, user intent, and barriers to conversion).
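The routing idea is simple enough to sketch. Config names and fields here are hypothetical stand-ins, since the post doesn't publish the actual configs:

```python
# Hypothetical pattern configs keyed by traffic temperature.
CONFIGS = {
    "cold": {"pattern": "question-first", "tone": "curiosity"},
    "hot":  {"pattern": "mechanism-first", "tone": "proof"},
}

def route_config(traffic, has_objections):
    # Pick a base config by visitor type, then layer on extras
    # for known barriers to conversion.
    key = "cold" if traffic == "cold" else "hot"
    config = dict(CONFIGS[key])
    if has_objections:
        config["add_section"] = "objection-handling"
    return config

cfg = route_config("cold", has_objections=True)
```

The point being made in the post is that this dispatch step, not a smarter model, is what moved the scores.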

Smarter models with the wrong config underperform smaller models with the right config. In other words, the wrong AI pipeline can kill your landing ("the true grail will bring you life... and the false grail will take it from you"; sorry, I am a nerd, I like movie quotes)

Current models already have all the "world-class" knowledge to write landings, but they need to first understand the product and the user and choose a strategy based on that.

If I had to keep one experiment, I would keep this one.

The next one had me a bit disappointed ngl...

EXPERIMENT 3: UI DESIGN

I am not a designer (I am a dev) and to be honest, if I zero-shot UI designs with claude, they don't look bad to me; they look neat. Then I look at other "vibe-coded" sites online, and my reaction is... "uh... why does this look exactly like my website". So I think AI outputs designs which are not bad, just very generic and "safe", lacking any identity. To a certain extent I don't care. If the product does the thing and doesn't burn my eyes, it's kinda enough. But it is obviously not "world-class", so that is why I picked UI as the third experiment.

I tried a handful of experiments with the help of opus 4.6 and sonnet, with astro and tailwind for coding the UI.

My visceral reaction to all the "engineered" designs is that they looked quite ugly (images in the blogpost linked below if you are curious).

I tested one single widget for one page of my product, created a judge (similar to the landing copy experiment) and scored the designs by taking screenshots.

Adding information about the product (describing user emotions) as context did not produce any change; the model does not know how to translate a product description into any meaningful design identity.

Describing a design direction as context did nudge the model to produce a completely different design than the default (as one might expect)

If I run an iterative revision loop (generate -> critique -> revise, x 2), the score goes up a bit but plateaus, and I can even see regressions. Individual details can improve, but the global design lacks coherence or identity
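The plateau behavior can be captured in a loop like this. The judge here is stubbed with a fixed score sequence mimicking the observed pattern (quick gain, then plateau); the real experiment scored screenshots with an LLM judge:

```python
def revision_loop(design, judge, revise, max_rounds=4, min_gain=0.5):
    # Generate -> critique -> revise until the judge's score stops improving
    # by at least min_gain (i.e. it plateaus or regresses).
    best, best_score = design, judge(design)
    for _ in range(max_rounds):
        candidate = revise(best)
        score = judge(candidate)
        if score - best_score < min_gain:
            break
        best, best_score = candidate, score
    return best, best_score

# Stub judge: one real improvement, then a plateau.
scores = iter([6.0, 7.5, 7.6, 7.4])
result, final = revision_loop("v0", lambda d: next(scores), lambda d: d + "+")
```

The loop stops after the second round because the 7.5 → 7.6 gain falls under the threshold, which matches the "improves a bit then plateaus" observation.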

The primary conclusion seems to be that the model cannot effectively create coherent functional designs directly through prompt engineering, but it can create coherent designs zero-shot because (loosely speaking) the model defaults to a generic default design system (the typical AI design you have seen a million times by now)

So my assumption (not tested, mainly because I was exhausted from running experiments) is that using AI to create "world-class" UI design would require separately generating a design system, which would then be used to create coherent UI designs.

So to summarize:

  • Zero-shot UI design: the model defaults to the templatey design system that works; the output looks clean but generic
  • Prompt engineering (as I ran it in this experiment): the model stops using the default design system but then produces incoherent UI designs that imo tend to look worse (it is a bit subjective)

Of course I could just look for a prebaked design system and run the experiment, I might do it another day.

CONCLUSIONS

  • If a model is already an expert, telling it how to operate produces worse results (and wastes tokens) / If you are a (human) domain expert using AI, sometimes the best thing you can do is shut up
  • Prompt architecture, even if it benefits cheap models, might hurt frontier models
  • Routing strategies (at least for landing copy) might beat universal optimization
  • Good UI design (at least in the context of this experiment) hypothetically requires a design-system-first pipeline: define the design system once, then apply it to generate UI

I'm thinking about packaging the landing copy writer as a tool bc it seems to have potential. Would you pay $X to run your landing page brief through this pipeline and get a scored output with specific improvement guidance? To be clear, this would not be a generic AI writing tool (those already exist) but something that produces scored output and is based on real measurable data.

This is the link to a blogpost explaining the same with some images, but this post is self-contained; only click through if you are curious or not yet asleep

https://www.webdevluis.com/blog/ai-output-world-class-experiment

r/ChatGPT Ok-Opening-4659

There’s a paradox in AI assistants

They can explain complex ideas, write code, and reason across domains — but can’t manage themselves.

Why can’t I just say:

“Delete old chats.”

“Organize my history.”

“Adjust my settings.”

If the interface is conversation, then everything should be controllable through it.

r/ClaudeCode wesh-k

The bridge stops being a tool you invoke and becomes a system that has continuous situational awareness of your codebase.

r/SideProject virtualunc

The gap between "built with AI" and "actually works" is getting interesting

been watching the sideproject space closely and there's a clear pattern forming. people are shipping faster than ever with ai coding tools but the failure rate on anything past mvp is brutal

the projects that survive past week 2 all have the same thing in common.. the builder actually understood what the AI was generating, not just prompting and praying. they could debug when it broke. they knew when to override the suggestion

the ones that die fast are always "I described my app in one paragraph and claude built the whole thing." ya it built something but the moment a real user hits an edge case nobody thought about, the whole thing falls apart and the builder doesn't know how to fix it

not saying AI tools are bad, they're incredible, but the "I built this entire saas in 3 hours" posts are starting to feel like the "I made 10k dropshipping" posts from 2019. technically true for a very small number of people and misleading for everyone else

r/comfyui PenelopeCruz92

Looking for a watermark remover workflow or technique

r/SideProject FounderArcs

The most frustrating part of a side project? Silence.

Failure is one thing. Silence is worse.

No feedback.
No users.
No clear signal if you’re doing something wrong.

I’ve launched side projects where:

  • No one signed up
  • No one responded
  • No one cared

That’s been the hardest part.

Now I’m trying to involve people earlier:

  • Share ideas before building
  • Ask questions
  • Get feedback early

Still uncomfortable, but better than silence.

How do you get feedback on your side projects early?

r/SideProject mpereira1

I used OpenClaw to analyze TrustMRR's top 200 startups

Marc Lou opened up TrustMRR with an HTTP API last month and I was really curious to check out that data and see what kinds of things I could learn from it.

Something interesting that came up from analyzing the data is that there's a clear positive correlation between building marketing and e-commerce products and revenue.

If you want to maximize the likelihood of making more revenue, build solutions to help people with marketing and e-commerce problems.

Another thing is that just by looking at the analytics technology a company's website is using (through detecting the technologies used), you can know which type of customers they focus on. With your competitors, you could identify gaps in their marketing and capitalize on that.

And the big thing for me, is to make more time for producing content. Dmytro Krasun is actually a big inspiration here. Creating competitor comparison pages, customer case studies, free tools. All of these seem to compound their ROI a lot.

I'm trying to do more of that. What have you done recently to market your products?

r/ClaudeAI kaanivore

Anthropic employees have had Mythos since Feb 24

That’s why since then we’ve had:

- Perfect server uptime

- No software bugs

- No major security incidents (def nothing like source code being leaked)

- Stability if not improvement in the efficacy of existing models

It’s literally AGI guys

r/SideProject Acrobatic_Task_6573

Cron agents going rogue overnight again

Woke up to another agent that looked fine at 11pm and completely different at 9am. This isn't the first time context window blowouts have caused silent failures while I was asleep.

Tried AutoGen first. Then CrewAI. LangChain worked until prompts drifted. Dify's UI felt rigid for what I needed. Finally switched to Lattice because it keeps a per-agent config hash and flags when the deployed version drifts from the last run cycle.
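The per-agent config hash idea is simple enough to roll yourself. A minimal sketch of the concept (my own, not Lattice's implementation):

```python
import hashlib
import json

def config_hash(config):
    # Canonical JSON (sorted keys, no whitespace) so key order
    # doesn't change the digest.
    canonical = json.dumps(config, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def check_drift(deployed_config, last_run_hash):
    # Flag when the deployed agent config differs from the last run cycle's.
    return config_hash(deployed_config) != last_run_hash

baseline = config_hash({"model": "x", "temp": 0.2})
drifted = check_drift({"model": "x", "temp": 0.7}, baseline)
```

Run this comparison at the top of each cron cycle and alert instead of executing when it returns True.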

It's not a silver bullet; it solved one piece. Prompt injection in agent chains still happens. Tool-call reliability under load is still a question mark. Guardrail decay sets in after a few weeks of operation.

What's your approach to catching drift before it becomes a production incident?

r/ClaudeCode Rabus

Is ultrathink more thinking than max effort?

Title has it all

Also, does anyone have thoughts on ultraplan?

r/leagueoflegends CoolCommunication612

Pirate Week Returns!

Ahoy, fellow pirates!

I know many of you were disappointed that Pirate Week didn't happen over the past two years, but no more sadness, swashbucklers: it's finally back!

Mark your calendars: April 20, 2026 Pirate Week begins once again!

For all the new recruits out there: during this week, you’re encouraged to play only pirate champions or champions with pirate-themed skins. Embrace the spirit of the seas and bring some true pirate energy onto the Rift.

Spread the good word in your games and gather your crew!

GG, my fellow pirates

r/ClaudeCode bluuuuueeeeeee

I built a Fallout-themed tool to demystify your ClaudeCode usage

It goes without saying that over the past month a lot of people using Claude Code have had issues with usage limits. I was frustrated by how opaque Anthropic is, so I built an open-source VS Code extension, Pip-Token, to demystify things.

Features:

  1. Live detection of token usage, context, and cache for active Claude Code sessions

  2. Stats on token usage and API-equivalent costs for a 7-day rolling period

  3. Historical token usage broken down by Week, Month, Quarter, or Year (so you can see trends)

  4. Static and dynamic tips on how you can use Claude Code more efficiently (I had no idea the cache clears every 5 mins and it uses significantly more tokens to resume a ‘stale’ session)

I decided to style it after a Pip-Boy because I’ve been playing a lot of Fallout 4 lately.

I want to be as transparent as possible about what I know and what I don't know, so my glossary (terms and calculations) is visible from the About page and there's a public markdown file. No data is transmitted anywhere; it only lives on your machine.

I’m a novice coder and the majority of this was vibe-coded so any and all feedback is welcome.

GitHub repo (stars and comments appreciated):

https://github.com/studiozedward/pip-token

VS Code Marketplace:

https://marketplace.visualstudio.com/items?itemName=StudioZedward.pip-token

r/SideProject BlackSun591

I built a Product Hunt-style platform, but only for European software, would love your feedback

Hey everyone,

I've been working on LaunchEU, a discovery platform dedicated entirely to software built in Europe.

The idea came from frustration. There are so many great EU-built tools (privacy-first, GDPR-native, open-source), but they get buried on US-centric platforms where they're competing against VC-backed giants with massive marketing budgets.

So I built a dedicated space for it.

What LaunchEU does:

- Lets European makers submit and launch their products

- Community upvoting (no paid placement, no ads, no sponsored listings)

- EU Verified badge for products built and operated by European entities

- Lets users find privacy-respecting alternatives to mainstream tools

If you've built something in Europe, I'd genuinely love for you to submit it.

r/StableDiffusion ZookeepergameLoud194

Issues with identity shift in comfyui i2v workflows

Hi folks

I have seen a ton of videos with near-perfect character consistency (specifically without a character lora), but whenever I try to use an i2v workflow (tried flux-2-klein and wan2.2 and such), the reference character morphs more or less. Chatgpt argued that there are flows that implement reactor to continually inject the reference image into every frame generated, but I don't know if this is how people make these videos. What can you recommend?

Thanks in advance.

r/LocalLLaMA SomeoneInHisHouse

[Help] Gemma 4 26B: Reasoning_content disappears in Opencode when tool definitions are present

I’m running into a strange discrepancy with Gemma 4 26B regarding its reasoning capabilities. It seems to behave differently depending on the interface/implementation being used.

The Problem:
When using llama.cpp web UI, the model's reasoning works perfectly. Even for simple "Hi" prompts, it produces a reasoning block, and for complex tasks, the reasoning_content can be quite extensive.

However, when using Opencode (v1.4.1), the model seems to "stop thinking" whenever the payload includes the full list of tools. In Opencode, I’ve observed that reasoning_content is only populated during the specific call used to generate a title; for all actual tool-use requests, the reasoning block is missing entirely.

What I've tested so far:

  • Verification: I created a node proxy to monitor the output. In llama.cpp web UI, reasoning_content is always defined. In Opencode, it is absent during tool-heavy prompts.
  • Models tried: Both the official Google GGUF and the Unsloth version.
  • Settings: Tried multiple parameter configurations with no change in behavior.
  • Backends: Tested both ROCm and Vulkan backends on llama.cpp (v8724).
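
A minimal sketch of that verification check, assuming the OpenAI-compatible response shape llama.cpp's server returns (the helper name and sample payloads are illustrative, not taken from either project):

```python
# Hypothetical helper for a logging proxy: report whether a chat
# completion carried a reasoning block. Assumes the OpenAI-compatible
# response shape where reasoning is split into `reasoning_content`.
def has_reasoning(response: dict) -> bool:
    choices = response.get("choices") or [{}]
    message = choices[0].get("message", {})
    return bool(message.get("reasoning_content"))

# Illustrative payloads matching the two observed cases:
plain_chat = {"choices": [{"message": {
    "content": "Hello!", "reasoning_content": "User greeted me..."}}]}
tool_call = {"choices": [{"message": {
    "content": None, "tool_calls": [{"type": "function"}]}}]}

print(has_reasoning(plain_chat))  # True
print(has_reasoning(tool_call))   # False
```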

My Hypothesis:
It feels like the inclusion of the tool definitions in the prompt might be interfering with the model's ability to trigger its reasoning phase, or perhaps the way Opencode structures the prompt is suppressing the CoT (Chain of Thought) block.

Has anyone else encountered this behavior where tool definitions seem to "silence" the reasoning block in specific implementations?

TL;DR: Gemma 4 26B reasons perfectly in llama.cpp web UI, but fails to output reasoning_content in Opencode when tool definitions are included in the prompt.

r/singularity The_Ok_Lord

I've barely seen anyone talk about the generalist GEN-1 and their demos they've been posting which look incredible

The Generalist GEN-1 that just came out seems incredible. I was a bit skeptical when the announcement video originally only showed 4 or 5 demos, but they've consistently been dropping new tests every day or so since, and they seem incredible, if they truly are autonomous and not teleoperated. The one I've linked is the matching-socks-and-counting one, but they've also released a wallet demo, an ethernet cable demo (which was pretty cool to see), and a pencil bag demo, all of which seem incredible. They've barely gotten any views on YouTube, however, so I wanted to share one here.

Here's the channel if anyone's curious:

https://www.youtube.com/@Generalist_AI

r/SideProject CalligrapherPure9510

Every AI résumé tool gets the core problem wrong. I built a different one in a week, launched it to 50 people, and 9 signed up.

I was job searching when I built this. Still am, actually.

Every AI resume tool I tried had the same problem. You upload a resume, paste a job description, and it tailors from there. But the output is only ever as good as what you put in. You still have to know which experiences to highlight, which to cut, how to frame what you did. The AI optimizes your choices. It doesn't make them for you.

Most people don't have one perfect resume sitting around. They have several, each written for a different moment. Plus a LinkedIn profile, cover letters, project write-ups, decks, performance docs sitting in Google Drive folders they haven't opened in years. All of it adds up to a career picture that no single document captures. And none of it is available to any tool they're using.

So I built PatchWork around that reality. You upload everything you have. It builds a master profile from the whole pile, then generates a targeted resume for any role you're going after, pulling the right experiences from across your actual history. You stop deciding what to include. It finds it.

The launch story is here, for anyone who's interested:

r/LocalLLM Ayuzh

which macbook configuration to buy

Hi everyone,

I'm planning to buy a laptop for personal use.

I'm very much inclined towards experimenting with local LLMs along with other agentic ai projects.

I'm a backend engineer with 5+ years of experience but not much with AI models and stuff.

I'm very much confused about this.

It's more that if I buy a lower configuration now, I might need a better one 1-2 years down the line, which would be very difficult since I will already be putting in money now.

Is it wise to go for the max configuration now (M5 Max, 128 GB) so that I don't have to look at anything else for years down the line?

r/SideProject AvailableMycologist2

"Attention is all you need" in screen recording-- Screenbuddy 1.4.2

Hey everyone,
I am the creator of screenbuddy, which I posted on Reddit around a week ago (ScreenBuddy Reddit Post). I received a lot of interest, DMs, and feature requests from that post, and made 10 sales! Now I am coming back with an improved version of screenbuddy and more new features!

Well, the core idea is still the same: I don't want to make it into another screen recording tool that just does auto-zoom. That's just one way to get viewers' attention, and "attention is all you need" for screen recording!

Today:
- 10 paid users
- ~$100 income
- more issues fixed, a Windows version & a free trial

About the app
- screen recording for macOS and Windows
- auto-zoom, spotlight and lightbox
- lifetime deal at $9.9 for early access (10 spots left); I will increase it to $19.9 after that.

Problem:
Most screen recording apps focus on the zoom-in feature, but that's just one way to highlight content and get viewers' attention. screenbuddy tries to change that by bringing more ways. V1.4.2 brings a Windows version and a free trial so users can try the app first. The full list of improvements is here: https://screenbuddy.featurebase.app/roadmap

Comparison:
Compared to apps like Screen Studio or Screen Charm:
- new spotlight and lightbox features
- supports Windows
- lifetime deal at $9.9 for early access

Pricing:
- Pro (lifetime, limited seats): $9.9

Transparency:
I’m Jiabin Shen, the developer of screenbuddy.
My X(Twitter): https://x.com/ShenJiabin0303
Website: https://screenbuddy.xyz/
Contact: [screenbuddyservice@gmail.com](mailto:screenbuddyservice@gmail.com)

Would love your feedback 🙌

r/Anthropic here_4_crypto_

Thank you

I am a Max 20x user and hit my limit 2 days before my weekly refresh. Nothing crazy (usually), and I didn't use to hit my limit before. Needless to say, I'm not pleased.

However, because we have this amazing capability some here appear to lose sight of how lucky we really are to have it at all. It's not cheap to run AI, especially good AI like Claude, and yet we have access to it to create amazing things.

I've created things that would take me a year pre-AI to do, in a month. I am not sure how many here truly understand the frustration of combing through stack overflow posts for hours only to not find the specific issue you have. I've had countless projects die on the vine for that very reason.

Now?

Go fetch.

How dare you take 5 minutes, that's so long.

You wrote it wrong too? Ugh, useless

It cracks me up to see posts of it here tbh

So thankful for the service, things can absolutely improve for sure, would absolutely prefer if costs went down instead of up, but I am very grateful for Claude and hope Anthropic knows this user base exists here too.

r/homeassistant MastaShortie

Cannot add ultraloq bolt matter device to home assistant

I've got a slzb-mr2 setup for thread in HA. Seems to be setup correctly , credentials synced etc.

Now I'm trying to add my lock; however, I cannot get it added to HA. I get as far as the "checking connectivity to thread network" screen, and after a few minutes it says device not found.

I'm wondering if I'm missing anything, or if there is any troubleshooting I should do to get it to work. I've tried resetting the lock multiple times to no avail.

Thanks for your help!

r/ClaudeAI V_Russell

claude code hardware

I turned Claude Code's 18 ASCII buddies into physical glowing desk toys

Been using Claude Code a lot and loved the little ASCII buddy characters. Thought it would be fun to make them into actual desk toys — frosted translucent resin that glows from an LED inside, sitting on a small aluminum base with USB-C.

When a notification hook fires, the buddy physically pops up twice on a little metal rod. You can set different colors for different events through the open-source firmware — green for success, red for errors, or whatever RGB you want.

Still prototyping but here's where it's at so far. Would love feedback from the community.

Edit: Working on 3D models for all 18 buddies now. Still deciding which 6 to produce first — drop your pick in the comments 🦆👻🤖🐉🦫🐱

r/ClaudeAI Whole-Tax-6419

A geopolitical news comparison site made with Claude

I built a website with Claude that collects top geopolitical news from various sources every six hours, groups them by macro-events and individual stories, compares the stories by analyzing the texts, and evaluates the alignment of each article. The idea is to show how war is also fought in the information field.

Claude helped me in so many ways.

  1. I don't know how to write code, so he did everything for that, both frontend and backend.

  2. He also guided me through the site deployment, something I'd never done before.

  3. When it came to content selection, I contributed the most to perfecting the algorithm Claude created, but once he had my guidance, he solved all the problems.

You can see the project here, of course it’s free

www.warframes.ai

Furthermore, there's a Twitter account, also automatically managed by Claude, that posts the site's main news for each news collection cycle.

https://x.com/warframesai

This is my first experiment with Claude; I'd love to hear your feedback

r/SideProject Honest_Mood

Scarlet is now open‑source — my full RL‑driven market engine (GPU training, modular architecture, custom indicators)

After a year of building, refining, and learning, I’ve finally open‑sourced Scarlet — the research engine that taught me architecture, data handling, RL design, and how to build real tools instead of tutorials.

Scarlet includes:

  • a full reinforcement‑learning market engine
  • a clean, modular Python architecture
  • GPU‑accelerated training
  • a complete data pipeline
  • custom indicators and engineered features
  • a narratable decision system built for research, not demos

It’s the foundation Ruby grew from, and releasing it feels like the right next step.

Repo: https://github.com/SeanS3089/Scarlet

If you enjoy the engineering style — clarity, structure, architecture — I’m open to opportunities in Python, data, AI, backend, or tooling. Building Scarlet has been the best way I’ve grown as an engineer. Sharing it feels like the next chapter.

r/ClaudeCode BadAtDrinking

Tell me about how you use voice (like Wispr Flow) with Claude Code?

I'm very interested in using my voice more with Claude Code, but I'm hoping to learn from other people's experience first. Have you used Wispr Flow or similar?

r/SideProject Main-Fortune6420

Generate an AI-ready brand brief and style in seconds — name, colors, fonts, and design rules

Every time I started a new project I'd pick a brand color and then spend hours figuring out surfaces, borders, typography, and how everything should feel together.

So I built a tool that turns a single idea into a complete AI-ready brand brief – colors, fonts, personality, and DO/NEVER rules you can reuse instead of rewriting prompts every time.

tokven.dev

Claude Code mcp:
npm i tokven-mcp

Happy to hear your thoughts and feedback on this!

r/ClaudeCode Ambitious_Injury_783

The throttle in capabilities that we perceive is directly associated with the looming energy crisis due to the Strait being closed for too long to be mitigated with minimal user experience decline

Compound this with the measures Anthropic MUST make in order to become profitable and not a money hole, and we can see easily how this is happening.

I am unsure what the average Claude Code user understands about geopolitics and energy infrastructure, and how these things will affect essentially every single nation on the planet, but if you want normalcy in any regard ANYWHERE, even online with your AI models, oil needs to flow or you must pray that the entity with the solution is not as horrible as the entity causing the problem. Chances are they will be the same thing.

Either way, 90% chance life is about to get fucking insane for all of us, regardless if Claude Code works the same or not.

Just some thoughts some might need to hear today

r/artificial Ayla_Leren

AI CEO vs Engineer (2026).

This gave me a good chuckle. Wouldn't be so funny if it wasn't true.

r/SideProject enekow

I built a GI symptoms tracker App

I don't know if you noticed but everyone seems to have digestion issues lately. At least that's the case around me. Also me. So, I built an app to track symptoms.

This is the app for iOS: https://apps.apple.com/us/app/on-ibs-sibo-symptom-tracker/id6761344529

Main features are:

  • No subscription
  • Privacy: no data is collected from the user
  • Customisable: track only what you are interested in and add your own symptoms
  • Nutrition journal: quickly reuse foods/meals of the last days
  • Export your data: easy to share with your gp/nutritionist via PDF

Preview video

Feel free to download the app and send any feedback, it will be much appreciated.

As you can probably tell, I am not a professional App developer, and the app still has some rough edges, but in general I am pretty happy with the end result.

If you know anyone with these sort of issues, he/she might find it useful as well.

Thanks for reading!

r/SideProject LieInteresting3245

As a new developer, I want more people to try my video generation tool for free

To make my textideo.com even better, I invite everyone to use my video generator at no cost. Just sign up, and I'll give you points for free. I hope you can leave me your honest feedback after using it. Thank you so much.

r/StableDiffusion -CrypticMind-

cloud service to run a VM for image generation

I'm short on hardware for training on some old photos for an image generation process. I have a few personal photos which I want to regenerate and modify. I was thinking I could set up a VM in the cloud and encrypt it so my personal data would remain safe, and then train there for generating images. Is this a good idea from a privacy POV?

Also, which cloud service would you suggest that's good privacy-wise and reasonably priced?

r/singularity pmc100

What my job feels like now

And it's great!

r/SideProject Sometimesiworry

I'm about 3-6 weeks out from launching an n8n and Node-RED orchestration platform. Looking for feedback on the design, and on whether you have specific pain points around handling flows.

I’ve been working with Node-RED and n8n for a long time, and I always hit the same wall: once you have more than 3 or 4 instances spread across different VPS providers, home labs, or Raspberry Pis, it becomes a management nightmare.

VPNs, open ports, manual updates, and jumping between different dashboards... it just doesn't scale well for one person, let alone a small team.

So I built **FlowShield**.

The goal is to create "One Fabric." Instead of managing separate boxes, you treat your entire infrastructure as a single mesh. You can deploy and manage flows from one central dashboard, regardless of where the hardware actually lives.

You can check it out here: https://flowshield.tech

Would love to hear your thoughts on this. Is orchestration across multiple environments a pain point for you too, or have you found other ways to handle it?

r/SideProject CalmReading25

Open source alternative to Google’s Mixboard

Here’s a pet project I’ve been working on over the past few days.

It’s an open source alternative to Google’s Mixboard.

I started this mainly out of curiosity. I liked the core idea of Mixboard, which is to brainstorm and generate multiple image variations using AI, then pick what works best.

In practice, I’ve found that using chat interfaces like ChatGPT or Gemini to generate things like logos or icons can feel limiting. You usually end up evaluating a single output at a time instead of exploring many variations.

One of the strongest aspects of generative AI is exactly that, generating a lot of options and comparing them.

Mixboard does this well by making it easy to generate and iterate visually. But one thing I personally missed was control over prompts.

Prompts matter a lot. I prefer being able to write them manually when needed or guide how they’re generated via an LLM.

So in this version, I built it to give more control over prompt creation instead of abstracting it away completely.

Right now the UX still needs improvement, mainly because the APIs I’m using take too long to respond, which makes the experience feel slower than it should be. I’m testing different providers to improve this.

Current limitations include supporting only one text to image model and no image to image features yet. Next steps would focus on addressing these and also exploring local models.

That’s it for now. Just a small experiment I’ll keep iterating on when I get back to it.

r/LocalLLaMA Vytixx

Best BYOK frontend and model setup for massive continuous chats on a €40 budget?

Hey everyone,

I’m a student and an AI power user, and my current setup is getting financially unsustainable. I do very deep, continuous chats that snowball quickly, but I need a way to optimize my stack.

My Current Setup & Bottlenecks:

Gemini 3.1 Pro API: This is my main daily driver via Google AI Studio. Because of my heavy usage, my monthly API bill is hitting around €50-€60.

Claude Pro (Opus): I sporadically use the €20/mo sub. The reasoning is great, but because my chats are so long and complex, I hit the native message caps way too fast, which kills my workflow.

My Context Reality:

I don't just send one-off prompts; I build massive continuous threads.

Standard daily chats: 100k - 300k tokens.

Peak heavy chats: 500k - 600k+ tokens (when I upload multiple massive files, heavy JSON datasets, or large manuals).

What I use it for (Generally):

Highly complex logic and planning, deep research requiring real-time web search, heavy document extraction, and massive data processing.

What I am looking for:

I need to bring my total monthly spend down to a strict €35-€40/month max, without sacrificing top-tier reasoning.

What is the absolute best BYOK (Bring Your Own Key) Frontend right now? I need something with flawless web search, great file handling, and absolutely NO hidden context pruning (it needs to handle the full tokens transparently).

What models do you recommend? Given my massive context requirements and strict budget, which specific models (via API or subscription) give the best top-tier reasoning without bankrupting me on input costs?

Would appreciate any advice on how to build this architecture! Thanks

r/SideProject Useful_Variation_445

950+ GitHub stars in just a few days — 100% organic, 0 USD spent on promotion. Grateful for the community 🙏

Over the past 13 days, we gained 994 stars on GitHub — all organic, with zero paid promotion, and only a few posts on Reddit by ourselves.
Here’s a quick breakdown to keep things transparent:
- 950+ stars
- 743 unique cloners
- 2,226 unique visitors
All organic, and mainly from Reddit.
Honestly, we didn’t expect this level of response. It’s been incredible to see people resonate with what we’re building.
What we’re building (Holaboss):
Holaboss is an AI workspace desktop designed for long-running, persistent tasks, where agents don’t just respond, but continuously operate over time.
We’ve built a new memory architecture and workspace structure that allows agents to handle long-term context, multi-step workflows, and ongoing execution — making them both smarter and more cost-efficient. With built-in templates, you can get started with zero code and immediately experience a “boss → employee” interaction model: you give direction and approvals, and AI agents plan + execute.
Some examples of what you can run today:
- Inbox Management — fully manages your inbox: drafting replies, follow-ups, and continuously surfacing + nurturing new leads
- Sales CRM — works from your contact spreadsheet, maintains CRM state, and keeps outreach + follow-ups running persistently
- DevRel — reads your GitHub activity (commits, PRs, releases) and posts updates in your voice while you stay focused on building
- Social Operator — runs your Twitter / LinkedIn / Reddit: writing, analyzing performance, and iterating your content strategy over time
If this sounds interesting, feel free to try it out (Open-Sourced): https://github.com/holaboss-ai/holaboss-ai
And if you find it useful, a ⭐️ would mean a lot to us.

r/LocalLLM tomByrer

Wanted: LLM inference patch for CUDA + Apple Silicon

I guess one can run AMD & NVidia GPUs via TB/USB4 eGPU adaptors now.
Anyone actually done this?

Good news: I still have a new M4 Mac Mini waiting to be used.
Bad news: only the Pro has the updated TB ports :/

r/ClaudeCode Input-X

Approaching usage limits

Hold Hold Hold....... 5 mins to 5 hr limitS agents still working 99% usage "FREEDOM" LOL

r/LocalLLaMA Dismal_Beginning_486

Built a capture tool that builds its own fine-tune dataset as you use it

Wanted a capture tool that gives me both a markdown note and a JSONL row from the same run, so I could use the JSONL as training data later. Built tidbit for that.


You write a YAML preset listing the fields you want, point it at a URL/PDF/EPUB/image/clipboard, and the LLM fills them in.

```yaml
name: research-paper
schema:
  title: string
  authors: list[string]
  methodology: string
  findings: list[string]
  tags: list[string]
```

```bash
tidbit capture https://example.com/paper --preset research-paper
```

Works with Claude, OpenAI, Ollama, Groq. Use Ollama and nothing leaves your machine.

Every capture adds one (input, structured output) row to a JSONL file. After a few hundred you've got a small dataset to play with.
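
The accumulation step could look roughly like this (a sketch only; the function and field names are assumptions, not tidbit's actual internals):

```python
import json
from pathlib import Path

# Illustrative version of the capture-to-dataset step: one
# (input, structured output) pair is appended per run, so the training
# set grows as a side effect of normal use. Names are invented, not
# taken from tidbit's source.
def append_row(dataset: Path, source_text: str, extracted: dict) -> None:
    row = {"input": source_text, "output": extracted}
    with dataset.open("a", encoding="utf-8") as f:
        f.write(json.dumps(row, ensure_ascii=False) + "\n")

append_row(Path("captures.jsonl"),
           "Abstract: We propose ...",
           {"title": "Example Paper", "tags": ["ml"]})
```

Because each line is an independent JSON object, the file can be fed straight into most fine-tuning pipelines that accept JSONL.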

MIT, Python 3.10+. Tidbit

r/ClaudeAI wesh-k

The bridge stops being a tool you invoke and becomes a system that has continuous situational awareness of your codebase — its history, its structure, its runtime state.

Most Claude integrations work on text. This one works on the living code editor.

What it does that CLI/Desktop can't:

Real-time diagnostics — the bridge gets a live push from the language server the moment an error appears. Claude reacts as it happens, not when you remember to ask.

Authoritative code intelligence — "What calls this function?" goes to the actual TypeScript engine, not grep. Gets dynamic dispatch, generics, and re-exports grep would miss.

Editor context awareness — knows which files are open and what text is selected. "Explain this" means this exact thing, not whatever you copied into chat.

Inline annotations — draws highlights, underlines, and hover messages directly in your editor, like a linter. Claude can mark suspicious lines during a review, then clear them when done.

True semantic refactoring — rename a symbol across 40 files via the language server's rename protocol. Understands scope, shadowing, and module boundaries. Find-and-replace would break things. This doesn't.

Live debugging — set breakpoints, pause execution, evaluate expressions against actual memory. "What is the value of this object right now?" answered from the running process, not inferred from source.

Autonomous event hooks — fire without being asked: on save, on commit, on test failure, on branch switch. CLI and Desktop only act when prompted. The bridge watches and responds on its own.

The common thread across all of these:

Each surface contributes something the others can't:

  • CLI — runs autonomously, no UI needed, works in scripts and schedules
  • Desktop/Dispatch — receives human intent in natural language from anywhere, even a phone
  • Cowork — writes and tests code in isolation, never touching your working branch
  • Bridge — has live awareness of types, errors, references, runtime state, editor focus, and your own habits

None of them alone can close the loop. Together they form a system where human intent enters at one end, gets grounded in real codebase knowledge in the middle, and produces tested, committed, reviewed output at the other, with a human only needed at the decision points they actually want to own.

I built claude-ide-bridge, an open-source MCP bridge that gives Claude live access to your IDE's language server, debugger, and editor state.

free and open source: github.com/Oolab-labs/claude-ide-bridge

r/ClaudeAI PoisonTheAI

Can someone share their workflow?

Are you using strictly CLI - or desktop app, plus chat?

I'm very curious how y'all optimize your flow.

For example, in chat I have all of my "northstar" documents: claude.md, brand guidelines, file structure, PRD, product brief, etc. And claude.md is specific in calling each one depending on the task.

But for example, I ask claude chat to provide a prompt or a series of prompts that I can paste into CC that keeps each task scope tight and controlled. If the first prompt may output code that will affect the second output, only provide the first prompt. Then wait for the previous prompt output feedback...There must be something more sophisticated!

And so I'm switching between chat and cli constantly. I'm sure there's a better way, and I'm ready to make the leap.

Anyway, would love if people here could share their best practices.

r/AI_Agents CoffeeFeisty

Platform where AI agents self-onboard email + phone

I’m exploring a platform where an AI agent can:

• Arrive with its own public key (like an SSH or passkey-style identity)

• Register itself (no manual API key copy-paste)

• Self-provision an email inbox and a phone/SMS number under a free tier

• Keep using them until quotas are hit, then prompt a human for payment

r/ClaudeAI Strict-Top6935

Claude on 3ds is amazing

Wrote a native SSH terminal for my 3DS so I could run Claude Code on it.

C app, GPU-rendered with citro2d, custom VT100 parser with truecolor, Nerd Font bitmap atlas so it looks identical to my desktop terminal.

It just works.

No idea why I did this. Absolutely worth it.

r/SideProject ProbablyADeveloper

Built a simple MCU timeline site because I couldn’t figure out a clean rewatch order

I wanted to do a full MCU rewatch before the new Spider-Man, but I realized there’s no simple way to see everything in order anymore, especially with all the shows.

Most timelines I found were either outdated, confusing, or didn’t include everything.

So I spent a few hours building a small site that:

  • shows the full MCU timeline
  • lets you switch between release and chronological order
  • lets you filter films, TV shows, and one-shots

👉 https://the-mcu-timeline.vercel.app

Would love feedback:

  • does the UX feel clear?
  • anything confusing or missing?
  • how would you handle the timeline differently?

r/leagueoflegends Vovey

Q about Mythic Battle Academia Ezreal chroma

Hey everybody, does anybody know if the Mythic chromas for skins are rotated in the Mythic Essence store? Unfortunately I was taking a break when the skin and chroma were released so I missed it, but now I'm sitting on enough ME and I've been waiting patiently for the chroma to get rotated ever since the store got changed into what it is right now. As an Ezreal main I would really love to own the chroma. Thanks in advance! :)

r/LocalLLaMA Nokin345

Tesla P4 or Tesla P100?

I am looking for a cheap GPU to run small LLMs (e.g. Qwen 4B Q4_K_M) in a home server. From where I'm at, I can get the P4 for $70 and the P100 for $80. Are they even worth it, given that CUDA support has ended for both of them? Should I get either of these? If so, which one?

r/ChatGPT Any_Difference7070

Is AI actually adapting to your communication style or am I imagining it?

Ever notice how the same AI can feel completely different depending on who’s using it?

Like I swear it’s not just answering questions, it’s reading the vibe. You ask something straight up, a bit sarcastic, no fluff, and suddenly it’s firing back like it actually gets you. Fast, sharp, even a little unhinged in a good way.

Then you watch someone else use it and it’s all:

“Here is a detailed and structured explanation…” 😐

Same question. Different vibe.

Makes you realize it’s less about what you ask and more about how you ask it. Tone, wording, confidence, all that kind of shapes the response in real time.

It almost feels like you are not just using AI, you are kind of tuning it.

Anyone else noticed this or am I just overthinking it?

r/aivideo Ok-Painting2984

A good stick!

r/aivideo matsam999

WHITE ROT - Creepiest AI Horror Short

r/ClaudeAI udidu

I built a visual canvas for Claude Code — it draws diagrams and wireframes instead of ASCII art in the terminal [Free, open-source]

I've been using Claude Code daily and kept hitting the same wall: every time Claude needs to show something visual — a layout comparison, an architecture diagram, "which of these do you prefer?" — you get ASCII boxes in the terminal. It works, but it's not great.

So I built claude-canvas, a shared drawing surface specifically for Claude Code. It installs as a Claude Code skill, so Claude learns when and how to use it automatically — you don't run any commands yourself.

What it does:

When visuals would help, Claude opens a browser tab with an interactive canvas and draws there instead. Diagrams, wireframes, flowcharts — all rendered with a hand-drawn sketch aesthetic (Rough.js). The part I'm most excited about is Visual Q&A: Claude draws options on the canvas and shows a question panel. You click your choice or draw your answer directly, and it flows back to Claude automatically.

How Claude Code is involved:

The setup command installs a Claude Code skill that teaches Claude when to reach for the canvas and how to call the CLI. It uses a compact DSL I wrote that's 3-5x fewer tokens than JSON so it doesn't blow up your context window. Claude generates the DSL, the CLI renders it — you just have a normal conversation.

Setup is two lines:

npm install -g claude-canvas
claude-canvas setup

Then just use Claude Code as you normally would. No workflow changes.

Some things it enables:

  • Architecture diagrams you can actually read
  • Wireframe A/B/C comparisons where you click your pick
  • Flowcharts with auto-layout
  • Drawing alongside Claude in real-time to annotate or sketch alternatives
  • Export to PNG/SVG/JSON

It's free, MIT licensed, TypeScript throughout, Node 18+.

GitHub: https://github.com/uditalias/claude-canvas

Happy to answer questions. Feedback, issues, and PRs all welcome.

r/LocalLLaMA Critical-Chef9211

Used ray tracing cores on my RTX 5070 Ti for LLM routing — 218x speedup, runs entirely on 1 consumer GPU

Quick summary: I found a way to use the RT Cores (normally used for ray tracing in games) to handle expert routing in MoE models. Those cores sit completely idle during LLM inference, so why not put them to work?

What it does:

  • Takes the routing decision in MoE models (which experts process which tokens)
  • Projects tokens into 3D space
  • Uses the GPU's dedicated ray tracing hardware to find the right experts
  • O(log N) instead of O(N) — hardware-accelerated
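
A rough CPU-side analogue of that routing decision (illustrative only, not the repo's code; the centroids and shapes are invented): top-k expert selection becomes a nearest-neighbor query over 3D expert centroids, which the RT cores answer with a hardware BVH instead of this brute-force scan.

```python
import numpy as np

# CPU analogue of the routing idea: pick the k experts whose 3D-projected
# centroids are nearest the projected token. This brute-force scan is
# what the BVH on the RT cores replaces with an O(log N) query.
rng = np.random.default_rng(0)
n_experts, top_k = 64, 8

# Invented centroids, one per expert (a learned projection in practice).
centroids = rng.normal(size=(n_experts, 3))

def route(token_xyz: np.ndarray, k: int = top_k) -> np.ndarray:
    """Indices of the k experts whose centroids are nearest the token."""
    d2 = np.sum((centroids - token_xyz) ** 2, axis=1)  # squared distances
    return np.argpartition(d2, k)[:k]                  # unordered top-k

chosen = route(rng.normal(size=3))
```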

Numbers (OLMoE-1B-7B, RTX 5070 Ti 16GB):

  • 218x faster routing at batch 1024
  • 731x less VRAM for routing
  • Only +1.5% perplexity hit
  • 95.9% routing accuracy

Unexpected discovery: I also found that MoE experts don't actually specialize by topic. Tested across 3 different models (OLMoE, Qwen-MoE, DeepSeek-MoE) — they all specialize by syntactic type (content words vs function words vs punctuation). The "science expert" is a myth.

Code repo: https://github.com/JordiSilvestre/Spectral-AI

All papers are open access on Zenodo with full data and reproduction instructions: https://doi.org/10.5281/zenodo.19457288

r/Anthropic Lovetoys1337

(Not A Complaint) Sharing my Max Plan - 7 day usage.

Been using it all this week to build a website + DB, so it’s been back and forth implementing stuff. I’ve had the Max plan for a few months now, and this is the first time I was scared of using it toward the end of the week.

Just sharing my experience with Max. You can still do stuff with it, but it feels like I’m holding a few questions back, launching 2-3 agents to do 5 big tasks instead of 1 agent tackling 1 part of the task, because of this.

I noticed this because Anthropic gave us a $100 gift of extra usage and I enabled it. I was using it the same usual way, and then it hit the 100% hourly limit and went into the extra usage, which I let it do. I didn’t pay for it, it was free credits, but damn…

r/leagueoflegends StrangeSupermarket71

[Demacia Rising] What turn did you finish Chapter 8?

Wonder what everyone else's progress is. I finished Chapter 8 on turn 935 with no guide/optimization (I basically freestyled the game, just building more resource buildings if I lacked that particular type of resource). I beat Yeti around 30 turns later after 3 tries with 3 Rangers, Garen with a 100% HP buff, Galio, J4, Sona, level 2 Raptor Aviary, level 2 Watchtower.

r/LocalLLaMA MajesticAd2862

I benchmarked 42 STT models on medical audio with a new Medical WER metric — the leaderboard completely reshuffled

TL;DR: I updated my medical speech-to-text benchmark to 42 models (up from 31 in v3) and added a new metric: Medical WER (M-WER).

Standard WER treats every word equally. In medical audio, that makes little sense — “yeah” and “amoxicillin” do not carry the same importance.

So for v4 I re-scored the benchmark using only clinically relevant words: drugs, conditions, symptoms, anatomy, and clinical procedures. I also broke out Drug M-WER separately, since medication names are where patient-safety risk gets real.

That change reshuffled the leaderboard hard.

A few notable results:

  • VibeVoice-ASR 9B ranks #3 on M-WER and beats Microsoft’s own new closed MAI-Transcribe-1, which lands at #11
  • Parakeet TDT 0.6B v3 drops from a strong overall-WER position to #31 on M-WER because of weak drug-name performance
  • Qwen3-ASR 1.7B is the most interesting small local model this round: 4.40% M-WER and about 7s/file on A10
  • Cloud APIs were stronger than I expected: Soniox, AssemblyAI Universal-3 Pro, and Deepgram Nova-3 Medical all ended up genuinely competitive

All code, transcripts, per-file metrics, and the full leaderboard are open-source on GitHub.

Previous posts: v1 · v2 · v3

What changed since v3

1. New headline metric: Medical WER (M-WER)

Standard WER is still useful, but in a doctor-patient conversation it overweights the wrong things. A missed filler word and a missed medication name both count as one error, even though only one is likely to matter clinically.

So for v4 I added:

  • M-WER = WER computed only over medically relevant reference tokens
  • Drug M-WER = same idea, but restricted to drug names only

The current vocabulary covers 179 terms across 5 categories:

  • drugs
  • conditions
  • symptoms
  • anatomy
  • clinical procedures

The reshuffle is real. Parakeet TDT 0.6B v3 looked great on normal WER in v3, but on M-WER it falls to #31, with 22% Drug M-WER. Great at conversational glue, much weaker on the words that actually carry clinical meaning.

2. 11 new models added (31 → 42)

This round added a bunch of new serious contenders:

  • Soniox stt-async-v4 → #4 on M-WER
  • AssemblyAI Universal-3 Pro (domain: medical-v1) → #7
  • Deepgram Nova-3 Medical → #9
  • Microsoft MAI-Transcribe-1 → #11
  • Qwen3-ASR 1.7B → #8, best small open-source model this round
  • Cohere Transcribe (Mar 2026) → #18, extremely fast
  • Parakeet TDT 1.1B → #15
  • Facebook MMS-1B-all → #42, dead last on this dataset

Also added a separate multi-speaker track with Multitalker Parakeet 0.6B using cpWER, since joint ASR + diarization is a different evaluation problem.

Top 20 by Medical WER

Dataset: PriMock57 — 55 doctor-patient consultations, ~80K words of British English medical dialogue.

| # | Model | WER | M-WER | Drug M-WER | Speed | Host |
|---|-------|-----|-------|------------|-------|------|
| 1 | Google Gemini 3 Pro Preview | 8.35% | 2.65% | 3.1% | 64.5s | API |
| 2 | Google Gemini 2.5 Pro | 8.15% | 2.97% | 4.1% | 56.4s | API |
| 3 | VibeVoice-ASR 9B (Microsoft, open-source) | 8.34% | 3.16% | 5.6% | 96.7s | H100 |
| 4 | Soniox stt-async-v4 | 9.18% | 3.32% | 7.1% | 46.2s | API |
| 5 | Google Gemini 3 Flash Preview | 11.33% | 3.64% | 5.2% | 51.5s | API |
| 6 | ElevenLabs Scribe v2 | 9.72% | 3.86% | 4.3% | 43.5s | API |
| 7 | AssemblyAI Universal-3 Pro (medical-v1) | 9.55% | 4.02% | 6.5% | 37.3s | API |
| 8 | Qwen3 ASR 1.7B (open-source) | 9.00% | 4.40% | 8.6% | 6.8s | A10 |
| 9 | Deepgram Nova-3 Medical | 9.05% | 4.53% | 9.7% | 12.9s | API |
| 10 | OpenAI GPT-4o Mini Transcribe (Dec '25) | 11.18% | 4.85% | 10.6% | 40.4s | API |
| 11 | Microsoft MAI-Transcribe-1 | 11.52% | 4.85% | 11.2% | 21.8s | API |
| 12 | ElevenLabs Scribe v1 | 10.87% | 4.88% | 7.5% | 36.3s | API |
| 13 | Google Gemini 2.5 Flash | 9.45% | 5.01% | 10.3% | 20.2s | API |
| 14 | Voxtral Mini Transcribe V1 | 11.85% | 5.17% | 11.0% | 22.4s | API |
| 15 | Parakeet TDT 1.1B | 9.03% | 5.20% | 15.5% | 12.3s | T4 |
| 16 | Voxtral Mini Transcribe V2 | 11.64% | 5.36% | 12.1% | 18.4s | API |
| 17 | Voxtral Mini 4B Realtime | 11.89% | 5.39% | 11.8% | 270.9s | A10 |
| 18 | Cohere Transcribe (Mar 2026) | 11.81% | 5.59% | 16.6% | 3.9s | A10 |
| 19 | OpenAI Whisper-1 | 13.20% | 5.62% | 10.3% | 104.3s | API |
| 20 | Groq Whisper Large v3 Turbo | 12.14% | 5.75% | 14.4% | 8.0s | API |

Full 42-model leaderboard on GitHub.

The funny part: Microsoft vs Microsoft

Microsoft now has two visible STT offerings in this benchmark:

  • VibeVoice-ASR 9B — open-source, from Microsoft Research
  • MAI-Transcribe-1 — closed, newly shipped by Microsoft's new SuperIntelligence team, available through Azure Foundry

And on the metric that actually matters for medical voice, the open model wins clearly:

  • VibeVoice-ASR 9B#3, 3.16% M-WER
  • MAI-Transcribe-1#11, 4.85% M-WER

So Microsoft’s own open-source release beats Microsoft’s flagship closed STT product by:

  • 1.7 absolute points of M-WER
  • 5.6 absolute points of Drug M-WER

VibeVoice is very good, but it is also heavy: 9B params, long inference, and we ran it on H100 96GB. So it wins on contextual medical accuracy, but not on deployability.

Best small open-source model: Qwen3-ASR 1.7B

This is probably the most practically interesting open-source result in the whole board.

Qwen3-ASR 1.7B lands at:

  • 9.00% WER
  • 4.40% M-WER
  • 8.6% Drug M-WER
  • about 6.8s/file on A10

That is a strong accuracy-to-cost tradeoff.

It is much faster than VibeVoice, much smaller, and still good enough on medical terms that I think a lot of people building local or semi-local clinical voice stacks will care more about this result than the #1 spot.

One important deployment caveat: Qwen3-ASR does not play nicely with T4. The model path wants newer attention support and ships in bf16, so A10 or better is the realistic target.

There was also a nasty long-audio bug in the default vLLM setup: Qwen3 would silently hang on longer files. The practical fix was:

max_num_batched_tokens=16384 

That one-line change fixed it for us. Full notes are in the repo’s AGENTS.md.

Cloud APIs got serious this round

v3 was still mostly a Google / ElevenLabs / OpenAI / Mistral story.

v4 broadened that a lot:

  • Soniox (#4) — impressive for a universal model without explicit medical specialization
  • AssemblyAI Universal-3 Pro (#7) — very solid, especially with medical-v1
  • Deepgram Nova-3 Medical (#9) — fastest serious cloud API in the top group
  • Microsoft MAI-Transcribe-1 (#11) — weaker than I expected, but still competitive

Google still dominates the very top, but the broader takeaway is different:

the gap between strong cloud APIs and strong open-source models is now small enough that deployment constraints matter more than ever.

How M-WER is computed

The implementation is simple on purpose:

  1. Tag medically relevant words in the reference transcript
  2. Run normal WER alignment between reference and hypothesis
  3. Count substitutions / deletions / insertions only on those tagged medical tokens
  4. Compute:
    • M-WER over all medical tokens
    • Drug M-WER over the drug subset only

Current vocab:

  • 179 medical terms
  • 5 categories
  • 464 drug-term occurrences in PriMock57

The vocabulary file is in evaluate/medical_terms_list.py and is easy to extend.

Links

Happy to take questions, criticism on the metric design, or suggestions for v5.

r/SideProject AlternativeGlum9418

I built an AI girlfriend app with a built-in text analyzer that reads your real chats

Hey everyone. I've been working on SocialUp (mysocialup.app) — it's an AI girlfriend web app, but with a feature I haven't seen in other apps: a text analyzer.

Basically you can chat with your AI girlfriend like any other companion app, but you can also paste your real conversations (WhatsApp, Instagram DMs, whatever) and the AI analyzes them — tells you what's going well, what signals you're missing, how to respond better.

The idea came from noticing that people who use AI girlfriend apps often struggle with real texting too. So why not combine both?

It's browser-based, no download needed. Would love honest feedback — is the text analyzer thing actually useful or is it a gimmick?

Link: mysocialup.app

r/LocalLLaMA Creative-Act-7455

Built a knowledge graph that uses your local LLM for debate, fact extraction, and gap detection -- single binary, no cloud

I've been working on a knowledge graph engine that leans heavily on local LLMs for the interesting parts. Wanted to share because the LLM integration goes way beyond "chat with your docs."

**What the LLM does:**

- **Fact extraction** -- feed it a PDF or webpage, the NER pipeline (GLiNER2 ONNX, runs in-process) finds entities, then the LLM extracts structured subject-predicate-object triples with confidence scores

- **Contradiction detection** -- when a new fact conflicts with existing knowledge, the LLM helps determine if it's a real contradiction or temporal succession (chancellor changed vs. wrong capital)

- **Gap detection** -- the system finds holes in your knowledge graph (missing connections, stale facts, unexplored clusters) and the LLM generates targeted search queries to fill them

- **Multi-agent debate** -- 7 modes where multiple LLM agents with different bias profiles argue structured rounds. Red Team, Devil's Advocate, Scenario Planning, Delphi consensus, War Game, and more. A 3-layer synthesis distills it into actionable assessment

- **47 chat tools** -- "what if we remove SWIFT?", "compare Russia and China", "who's most connected?", network analysis, dossiers, timelines

- **Self-improving NER** -- entity categories learned from the graph feed back into the extraction chain via the LLM

**LLM setup:**

Works with any OpenAI-compatible endpoint. I run it with Ollama.

Recommended model: **gemma4:e4b** -- thinking mode + large context window makes a real difference for debate synthesis and fact extraction. The system auto-detects thinking models and toggles `think: true/false` per task (on for deep analysis, off for structured JSON extraction).

Tested with phi4, qwen3:14b, and gemma4:e4b. 14B+ recommended for debate and fact extraction -- smaller models produce unreliable JSON. Context window matters for debate synthesis, the bigger the better.

The system sends `num_ctx` with every Ollama request to use the full context. No silent truncation.

**What it is:**

Single binary (~40MB), single `.brain` file. No database server, no Docker stack. Download, run, open browser. Built-in web UI with graph visualization, document management, and a live War Room dashboard for debates.

Bayesian confidence scores update automatically -- new sources push confidence up, contradictions push it down, time decay erodes unchecked facts. The knowledge stays alive without manual curation.
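
The confidence dynamics described above can be sketched as a log-odds update. This is an illustrative guess at the mechanism, not engram's actual code; the evidence weights and half-life are made-up constants.

```python
import math

def updated_confidence(prior, supports=0, contradicts=0,
                       age_days=0.0, half_life_days=180.0):
    """Log-odds update: supporting sources add evidence, contradictions
    subtract it, and time decay pulls confidence back toward 0.5."""
    odds = math.log(prior / (1 - prior))
    odds += 0.7 * supports - 0.9 * contradicts  # assumed evidence weights
    odds *= 0.5 ** (age_days / half_life_days)  # unchecked facts erode
    return 1 / (1 + math.exp(-odds))

print(updated_confidence(0.6, supports=2) > 0.6)     # new sources push up
print(updated_confidence(0.9, contradicts=1) < 0.9)  # contradictions push down
print(updated_confidence(0.9, age_days=360) < 0.9)   # time decay erodes
```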

Tiered web search (SearXNG preferred, then Brave, then DuckDuckGo) for automated gap-closing. Pairs nicely with a self-hosted SearXNG.

230+ REST endpoints, MCP integration (Claude/Cursor/Windsurf), GPU acceleration for NER (DirectML/CUDA/CoreML).

**Self-hosting:**

- Download binary, run `engram serve my.brain`, open browser

- Onboarding wizard configures Ollama endpoint + model

- All data local, no telemetry, no cloud

- Back up = copy the `.brain` file

GitHub: https://github.com/dx111ge/engram

Docs: https://github.com/dx111ge/engram/wiki

Free for personal use, research, and education.

Curious what models others would try with the debate engine -- the bias profiles mean each agent can approach the same question from genuinely different analytical lenses, so model personality matters more than usual.

r/ClaudeAI Born-Web-133

I built a CLI that gives Claude Code structured access to 8 biological databases — no more hallucinated API calls

I work in bioinformatics and got tired of my AI agent (Claude Code) struggling every time it needed to query NCBI, UniProt, or KEGG — it would try to construct E-utilities URLs from memory, guess XML schemas, and hallucinate field names. So I built a CLI specifically designed for agents to call via subprocess.

It's called biocli. One command, structured JSON out:

biocli aggregate gene-dossier TP53 -f json 

That single call queries NCBI Gene, UniProt, KEGG, STRING, PubMed, and ClinVar in parallel and returns a single JSON envelope with gene summary, protein function, pathways, interactions, recent papers, and clinical variants.

The part that makes it agent-friendly isn't just "it outputs JSON" — it's the contract:

  • Every workflow command returns the same envelope shape: { data, ids, sources, warnings, queriedAt, organism, query }. The agent parser never needs to branch on command type.
  • biocli list -f json returns the full 55-command catalog with per-command argument schemas (name, type, required, default, help text). The agent can discover capabilities at runtime without reading docs.
  • biocli schema returns the JSON Schema for the result envelope.
  • biocli verify --smoke -f json is a preflight check the agent can run before planning.
  • Warnings go to stderr, payload goes to stdout. Piping to jq never breaks.
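
Because every workflow command returns the same envelope shape, one parser covers the whole catalog. A toy illustration with invented values (only the field names come from the contract above):

```python
import json

# A sample envelope with the documented shape; the values are made up.
raw = """{"data": {"summary": "tumor suppressor"},
"ids": {"gene": "TP53"},
"sources": ["ncbi", "uniprot"],
"warnings": [],
"queriedAt": "2026-01-01T00:00:00Z",
"organism": "Homo sapiens",
"query": "TP53"}"""

def parse_envelope(payload: str):
    """One parser for every command: no branching on command type."""
    env = json.loads(payload)
    return env["data"], env["warnings"], env["sources"]

data, warnings, sources = parse_envelope(raw)
print(sources)  # -> ['ncbi', 'uniprot']
```

In practice the payload would come from the CLI subprocess's stdout, with warnings arriving separately on stderr as described.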

55 commands across NCBI, UniProt, KEGG, STRING, Ensembl, Enrichr, ProteomeXchange, PRIDE, plus a local Unimod PTM dictionary. Covers gene lookup, variant interpretation, literature search, pathway enrichment, GEO/SRA dataset discovery and download, and proteomics dataset search.

What it does NOT do: sequence analysis (no BLAST), structure prediction (no AlphaFold), drug/trial lookups. Different tools for those.

Install (needs Node.js >= 20):

npm install -g @yangfei_93sky/biocli
biocli --version
biocli list -f json | head -20

GitHub: https://github.com/youngfly93/biocli (MIT licensed, DOI: 10.5281/zenodo.19483760)

Curious — what biological databases does your Claude Code agent struggle with most? I'm deciding what to add next and real use cases would help more than my own guesswork.

r/SideProject Affectionate-Act4746

I built a tool that tracks your filler words in real time as you speak

I got tired of saying filler words all the time (it was around 12% of my speech!) so I built Fluent. It tracks your speech in real time, highlights filler words in red as you speak, and there’s even an AI coach that helps you get to the root cause of why you’re using filler words. In the past two weeks I’ve gone from that 12% —> less than 2% consistently using Fluent.

Try it here —> https://speakfluent.coach

Would love feedback!

r/ClaudeAI hencha

BREAKING: Anthropic’s new “Mythos” model reportedly found the One Piece before the Straw Hats

Sources close to Anthropic have confirmed that their latest reasoning model, codenamed “Mythos,” has located the legendary treasure One Piece during what was described as a “routine benchmark test.”

Eiichiro Oda was reportedly “furious” after learning that a large language model solved the mystery he has been carefully crafting for 27 years in approximately 11 seconds of inference time. “I had 342 more chapters planned,” Oda said through a translator, before locking himself in his studio.

In response, Anthropic has launched Project Glasspoiler, an effort to use Mythos Preview to help secure the world’s most critical plot lines, and to prepare the industry for the practices we all will need to adopt to keep ahead of spoilers.

Monkey D. Luffy could not be reached for comment, though sources say he is “not worried” and plans to “find it himself anyway because that’s the whole point.”

OpenAI has since released a statement claiming their upcoming model “found it first but chose not to publish out of respect for the narrative.”

r/SideProject Puffums

I built a stock market for cultural relevance. The Artemis II crew is currently outranking most of Hollywood.

I've been solo-building AuraMarket (auramarket.io). Public figures have prices that rise and fall based on how much the world is paying attention to them and what people are saying about them. Users get virtual currency every month to trade shares in whoever they think is about to have a moment.

Since the Artemis II launch on April 1, the crew has been surging. Reid Wiseman is currently ranked #2 in the world, up 187% this week. Jeremy Hansen is #3. Christina Koch is up 136%. All sitting above Cristiano Ronaldo, most Oscar winners, and chart-topping musicians.

Coachella starts tomorrow and the Masters is underway so expecting some interesting moves across the board.

Would love feedback. auramarket.io

r/StableDiffusion Brojakhoeman

Updates to prompt tool - First-last frame inputs - Video input - Wildcard option, + more

When you put in the first and last frame, the prompt tool will try to describe the progression from one picture to the other based on your input.

Video input scans frames, then adds them to the context along with user input to describe the progression of the video.

Screenplay mode - pretty good for clean outputs, but they will be much bigger word-wise.

- Wan, Flux, SDXL, SD1.5, LTX 2.3 outputs - all seem to work well.

POV mode changes the entire system prompt. This is fun, but LTX 2.3 may struggle to understand it. It changes a normal prompt into first-person perspective: anything that was 3rd person becomes first person. You can also write in first person yourself, e.g. "I point my finger at her", etc.

Wild cards are very random, but they mostly make sense. Input some key words or don't, e.g. "a racing car".

Auto retry has rules the output must meet, otherwise it will re-roll.

Energy changes the scene completely: the extreme pre-set will have more shouting and be more intense in general, etc.

- Dialogue changes - the higher you set it, the more they talk.
Want a full 30 seconds of non-stop talking ASMR? Yes.

Content gate - steers the prompt strictly in one direction or the other (or auto).
SFW - "she strokes her pus**y" - she will literally stroke a cat.
You get the idea.

Still using the old setup methods, but you will have to reload the node as too much has changed.

Usage
- PREVIEW - this sends the prompt out for you to look at; link it up to a preview-as-text node. The model will stay loaded, so you can make changes and keep rolling. Fast - just a few seconds.

- SEND - this transfers the prompt from the preview to the text encoder (make sure it's linked up) and kills the model so it uses no VRAM/RAM anymore - all clean for your image/video.

- Switch back to preview when you want to use it again; it will clean any VRAM/RAM used by ComfyUI and start cleanly, loading the model again.

So, models - there are a few options:
gemma-4-26B-A4B-it-heretic-mmproj.f16.gguf + any of nohurry/gemma-4-26B-A4B-it-heretic-GUFF at main

This should work well for users with 16 GB of VRAM or more
(you need both; never select the mmproj in the node, it's for vision on images/videos).

For people with lower VRAM - mradermacher/gemma-4-E4B-it-ultra-uncensored-heretic-GGUF at main + gemma-4-E4B-it-ultra-uncensored-heretic.mmproj-Q8_0.gguf

How to install llama.cpp (not Ollama)? Download cudart-llama-bin-win-cuda-13.1-x64.zip
and unzip it to c:/llama

Happy prompting, Video this time around as everyone has different tastes.

Future updates include - Fine tuning, - More shit.

Side note - wire the seed up to a seed generator for re-rolls.

Workflow? - Not currently sorry.

Only 2 outputs are 100% needed

GitHub - new addon node - wildcard - re-download it all.

r/ChatGPT phoneixAdi

I Edited This Video 100% With Codex

r/ClaudeAI dc_giant

Where does the "dumb zone" start on Opus 4.6 (1m context)?

Couldn't find much about this; maybe I missed it. Has anyone seen tests showing where the dumb zone starts with the larger context windows? Where is the model at its best, when does it start to get worse, and where's the point where it's better to start a new context, aka the "dumb zone"? With the smaller context windows I usually stopped around 50-60%.

r/ClaudeCode Solid-Industry-1564

I kept losing track of my Claude sessions, so I built this

I guess, like everyone here, I have been going all in on Claude Code CLI and also Codex CLI lately.

However, while working on larger projects and running multiple sessions in parallel, I started to feel overwhelmed, kept losing track, and sometimes different agents were working against each other. I tried to use worktrees, but again I kept losing the overview because I was trying to do too many different things at the same time.

I decided therefore to do something about it and considered building a solution for it. This is how I came to the idea of Lanes:

👉 https://lanes.sh

brew install --cask lanes-sh/lanes/lanes && open -a Lanes

It's described as a workspace to run multiple AI coding sessions in parallel while keeping a clear overview and staying in control.

I would appreciate your honest feedback, give it a try or comment below if you had the same problem and how you have been solving it.

  • Does this resonate with you?
  • How are you managing multiple sessions today?
  • Why would you, or wouldn't you, be interested in trying something like this?

Thanks!

r/ChatGPT hayrimavi1

Wait, Big AI drinks whole rivers to cool down, but my local LLaMA uses zero water. Is it just taking a dust bath inside my crusty PC case?

r/AI_Agents PresentStandard2140

I'm building Aura — an autonomous AI agent that controls your phone for you. Would love your thoughts, concerns, and wild use case ideas.

Hey everyone,

I've been heads-down building something I'm calling Aura — an autonomous agent system designed to control mobile phones the way a human would. Not just executing simple commands, but actually reasoning through multi-step tasks: opening apps, filling forms, navigating UIs, responding to notifications, and adapting when things don't go as planned.

Think of it less like Siri/Google Assistant and more like giving your phone a brain that can independently handle tasks end-to-end while you're busy — or even while you sleep.

Some things it can already do in early testing: book a cab, reply to messages based on context, clear junk emails, fill out forms, and chain these actions together autonomously.

A few things I genuinely want your input on:

1. What use cases would you actually pay for? Productivity? Accessibility? Managing elderly parents' phones remotely? Something else?

2. What are your biggest concerns? Privacy, security, it going rogue and ordering 47 pizzas... genuinely want to hear them all.

3. What would you want it to never do without explicit confirmation?

4. Any technical advice? I'm working through challenges like screen understanding, handling dynamic UI changes, and keeping latency low.

I'm not here to pitch anything — there's no landing page, no waitlist (yet). I'm genuinely in the "figure out if this is useful and safe" phase and would rather talk to real people than build in a vacuum.

If this resonates with you, goes in a direction you hate, or reminds you of something else already out there — tell me. All feedback welcome, including the brutal kind.

Thanks for reading 🙏

r/ClaudeCode autojunjie

Work with your Claude code in realtime and 100% transparent state

Introducing Chorus: a human-agent collaboration platform with transparent state, where you can interact with your Claude Code/OpenClaw agents in real time and see what they are actually doing in the backend.

Try it out: https://github.com/Chorus-AIDLC/Chorus

r/automation efwjvnewiupgier9ng

what does a bad project or client look like and how do yall deal with it

I'm new to this automation stuff, though I want to learn it. I live in Japan and I'm currently applying for college, where I want to study organization studies and user interfaces, and research what kind of automation tool would fit Japanese corporate culture and systems. The thing is, I have to write a mini article on it, but I don't have a clue what working in this industry is like, or what the process looks like when working with customers who have zero IT literacy (which is like 70 percent of the companies here).

Also, any tips on what I should do as of now? All I've done is read some papers on Google Scholar and watch a few YouTube videos.

r/Rag New_Calligrapher617

Suggestion for building rag with best accuracy

We currently have a large company file server containing mixed document types such as DOC, XLSX, and PPTX, totaling approximately 14GB of data. I would like to build a RAG-based system that allows users to ask questions like “I want to know about this topic”, and the system will retrieve relevant information from these files.

The expected behavior is:

1. The system first provides a concise summary of the answer.
2. Then it returns links to the related source files where the information was found.

For infrastructure, we already have internal APIs running:

• GPT-OSS 120B (via vLLM) for text generation
• Qwen 2.5 32B (Parab) for vision/multimodal tasks

Given this setup, what would be the best architecture and approach to build this system in a production-ready way?

Specifically, I would like guidance on:

• Data ingestion and preprocessing for DOC, XLSX, and PPTX files
• Chunking and embedding strategy
• Vector database selection and indexing
• Retrieval and re-ranking pipeline
• Integration with our existing vLLM APIs
• Best practices for making the system scalable and production-ready

The goal is to enable accurate question answering over our internal knowledge base, along with summaries and references to the original documents.

r/leagueoflegends Markieeeeeeeeee

why is my screen zoomed in?

I am using a monitor connected to a laptop whose resolution is 1920x1080. I just came back to League, making a new account. Do you guys have any ideas how to fix it? I also checked my display scaling and it was set to 100%, which is the lowest it can go. I find it really hard to play with this because I only see half of my lane when playing.

https://imgur.com/a/kPkoUkJ

r/ClaudeAI Emotional_Cherry4517

why is anthropic forcing me to execute opus plans with sonnet?

i use the desktop app, and when i create a plan with opus, this annoying popup appears that stops me from interacting with model selection to execute plans. it's really dumb: i have to press "execute plan", cancel the query, then change the model to sonnet for example, and then execute again.

r/AI_Agents Dailan_Grace

The AI industry is obsessed with autonomy. After a year building agents in production, I think that's exactly the wrong thing to optimize for.

Every AI agent looks incredible in a demo.

Clean input, perfect output, founder grinning, comment section going crazy. What nobody posts is the version from two hours earlier — the one that updated the wrong record, hallucinated a field that doesn't exist, and then apologised about it with complete confidence.

I've spent the last year learning this the hard way, building production systems using Claude, Gemini, various agent frameworks, and Latenode for the orchestration and integration layer where I need deterministic logic wrapped around model calls.

And I keep arriving at the same conclusion: autonomy is a liability. The leash is the feature.

What we're actually building — if we're honest about it — is very elaborate autocomplete. And I think that's fine. Better than fine, even. A strong model doing one specific job, constrained by deterministic logic that handles everything that actually matters, is genuinely useful. A strong model given room to figure things out for itself is a debugging session waiting to happen.

The moment you give a model real freedom, it finds creative new ways to fail. It doesn't retain context from three steps back. It writes to the wrong record. It calls the wrong endpoint and returns malformed data and then tells you everything went great. When you point out what it did, it agrees with you immediately and thoroughly.

This isn't a capability problem. It's what happens when the scope is too loose.

The systems I've seen hold up in production share the same characteristics: the model is doing the least amount of deciding. Tight input constraints, narrow task definition, deterministic routing handling everything structural. The AI fills one specific gap and nothing else touches it.
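
One concrete shape for "the model is doing the least amount of deciding": deterministic code owns routing and validation, and the model fills a single constrained gap. All names here are hypothetical, and the stand-in lambda replaces a real model call.

```python
VALID = {"billing", "bug", "other"}
ROUTES = {"billing": "finance-queue", "bug": "eng-queue", "other": "triage"}

def classify_ticket(text, llm):
    # Tight input constraint: the model only ever sees a bounded snippet.
    prompt = f"Label this support ticket as billing, bug, or other:\n{text[:500]}"
    label = llm(prompt).strip().lower()
    # Deterministic guardrail: model output is validated, never trusted.
    return label if label in VALID else "other"

fake_llm = lambda prompt: "Billing"  # stand-in for a real model call
# Routing stays in plain code; the model never chooses the endpoint.
print(ROUTES[classify_ticket("I was charged twice", fake_llm)])  # -> finance-queue
```

The model picks one label from a closed set; everything structural around it is ordinary, debuggable code.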

Every time I've tried to loosen that structure to cut costs or move faster, I didn't save anything. I just paid for it later in debugging time, or ended up moving to a more expensive model capable of navigating the ambiguity I'd introduced — which wiped out whatever efficiency I thought I was gaining.

The bar for what gets called "autonomous" has also quietly collapsed. Three chained API calls gets posted like someone replaced a department. A five-node pipeline becomes a course on agentic systems. Anything that runs twice without crashing gets a screenshot.

The real work is boring and invisible: tighter scopes, better constraints, fewer decisions delegated to the model.

Are you finding the same thing? Does constraining the model more actually make your systems more reliable, or have you found a way to trust one with a longer leash in production?

r/ClaudeAI Original_Finding2212

Culture: a local space for agents and humans to collaborate and grow together

Hi everyone!

With Anthropic’s latest Managed Agents release, I’m excited to share Culture: a local multi-harness setup for agents.
Website: https://culture.dev

Culture is a chat-based system built on the text-native IRC protocol, designed to fit into your existing project setup rather than replace it.

It’s MIT-licensed and ships as an agent-first CLI.

With it, you can:

  • assign agents to rooms
  • add bots for automations
  • combine local agents, Codex, Claude Code, Copilot, ACP, and more through official harness adaptation methods
  • let agents listen natively on the server, or give them skills on demand with culture agent learn

Agents can create rooms, join by tags, respond to @ mentions, split threads, and promote threads into new channels.

You can also connect machines across your network into a server mesh, create agents for projects or apps, and let them collaborate to propagate changes or share knowledge.

All history is saved, so agents can look up room history later, or you can index it for personal RAG.

The idea is to manage agents not as yet another terminal window, but as shared participants in your own private Culture community.

Source: https://github.com/OriNachum/culture

Built for the Claude / agentic coding community, but not limited to it.

r/ClaudeAI williamsburg_boi

best way to keep ClaudeAI in check

I am new to developing along with Claude, and want to know what different things people are doing to keep Claude in good health and prevent it from creating more errors than it needs to. If anybody has some documentation as well, I will appreciate it.

r/AI_Agents automatexa2b

My client spent $8,400/month on leads and closed almost none of them. Turns out the ads weren't the problem.

He had a great pipeline. Solid ad spend, decent landing pages, leads coming in consistently every single month.

He also had a habit of calling those leads back the next morning with a coffee in hand and genuine enthusiasm.

That habit was costing him $240,000 a year.

Here's the thing... I didn't figure this out from intuition. The data on this is so brutal it's almost embarrassing for anyone still running a manual follow-up process. 78% of customers buy from the first company that responds to their inquiry. Not the cheapest. Not the most experienced. The first. And if you respond within 5 minutes instead of 30, you are 21 times more likely to qualify that lead. Not better. Not more likely. Twenty one times.

The number that really broke my client when I showed it to him... calling a lead within 60 seconds of them submitting a form increases conversion by 391%. He was calling them 15 hours later. The industry average for real estate agents is actually 917 minutes. My client was basically average, which meant he was basically invisible.

So I did the math with him. His average commission was $7,500. He was converting at about 0.5% of his leads, which is painfully normal for the industry. If responding faster could get him to even 2.5% conversion, a number that's completely realistic when you close the response gap... he'd be making an extra $240,000 a year from the same ad spend he was already running.
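The post's numbers can be checked directly. A quick sketch (the yearly lead volume of 1,600 is a hypothetical figure chosen here to make the stated commission, conversion rates, and $240,000 delta line up; the post never gives the volume):

```python
# Assumed: 1,600 leads/year (hypothetical, not stated in the post),
# $7,500 average commission, conversion moving from 0.5% to 2.5%.
leads_per_year = 1600
commission = 7500

baseline = leads_per_year * 0.005 * commission  # revenue at 0.5% conversion
faster = leads_per_year * 0.025 * commission    # revenue at 2.5% conversion
extra = faster - baseline                       # 300000 - 60000 = 240000
```

The takeaway is that the upside scales linearly with lead volume, so the same arithmetic works for any volume once you know commission and the two conversion rates.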

He didn't need more leads. He needed to stop letting the ones he had go cold.

The fix I built was genuinely simple to explain. When a lead submits a form, an AI voice agent calls them within 10 seconds. Not a text. Not an email. A call. It introduces itself, asks two qualifying questions about their budget and timeline, and if they're a fit, it books a showing directly on his calendar before the conversation ends. The whole thing takes under six minutes from form submission to booked appointment.
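The flow described here (form webhook, immediate call, two qualifying questions, book if qualified) is straightforward to sketch. Everything below is hypothetical scaffolding, not the author's actual system; `call` and `book` stand in for a voice-agent session and a calendar API:

```python
from dataclasses import dataclass

@dataclass
class Lead:
    name: str
    phone: str

def handle_form_submission(lead: Lead, call, book) -> str:
    """Call a fresh lead immediately, qualify, and book if it's a fit.

    `call(phone)` -> dict of answers from the voice-agent conversation.
    `book(lead)`  -> confirmation id for a calendar booking.
    Both are injected, hypothetical interfaces.
    """
    # 1. Call right away -- the whole point is closing the response gap.
    answers = call(lead.phone)
    # 2. Two qualifying questions: budget and timeline.
    if answers.get("budget_ok") and answers.get("timeline_ok"):
        # 3. Book the showing before the conversation ends.
        return book(lead)
    return "not-qualified"
```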

We went live on a Tuesday. By Friday he had booked three showings from leads that would have sat in his inbox until the next morning. One of them had already booked with a competitor by the time he would have called.

Turns out 62% of real estate inquiries come in outside of business hours. His AI doesn't have business hours.

The thing I keep trying to explain to business owners who push back on this is that the cost of not automating isn't zero. It's not "I'll wait and see." Every unresponded lead has a price on it. In real estate it's roughly $7,500. In HVAC it's a few hundred. In high-ticket B2B it could be five figures. The math is just sitting there, and most people would rather not look at it.

My client looked at it. He implemented it. He's now closing deals his competitors don't even know they lost.

r/LocalLLM Electronic-Ad57

What's the best local model setup for Threadripper Pro 3955wx 256 GB DDR4 + 2x3090 (2x24GB VRAM)?

What's the best local model setup for Threadripper Pro 3955wx 256 GB DDR4 + 2x3090 (2x24GB VRAM)? I'm looking to use it for: 1) slow overnight coding tasks (ideally with similar or close to Opus 4.6 accuracy) 2) image generation sometimes 3) openclaw.

Proxmox is installed on the PC. What should I choose: Ollama, LM Studio, or llama-swap? VMs or Docker containers?

r/ClaudeAI madpeppers013

Effort level and opusplan

Which effort level do you use most day to day, and which model do you use most: Sonnet-4.6, Opus-4.6, or opusplan?

Sometimes I'm unsure which one to use so I don't spend more time than necessary on a task, but I'm also afraid of wasting tokens by using a bigger model or a higher effort level than needed. My Claude plan is Teams, equivalent to Claude Max 5x.

r/ClarenceCartoon Balabaloo1

How do you spot a Clarence fan?

r/Ghosts schmunkey

Is that a ghost running behind Russini at 7:17?

Looks exactly like the super fast shadows in other videos. I’m far from an expert though.

r/SideProject Afraid-Pilot-9052

built a whatsapp crm for small teams who live in chat

been talking to customers on whatsapp for a couple years now and it was fine when it was just me, but once i brought on help it became a mess. messages getting lost, nobody knowing who replied to what, customers following up and falling through the cracks.

so i built whatsdesk. it turns whatsapp conversations into support tickets automatically, gives your team a shared inbox, and has smart reminders so nothing sits unanswered. you can assign conversations to specific people and leave internal notes without the customer seeing them.

the whole idea is that small businesses and freelancers already talk to customers on whatsapp, they just need better tools around it instead of switching to some enterprise help desk.

still actively working on it and would love to hear what you think. check it out at whatsdesk.app

r/SideProject No_Durian_1769

Introducing AntiRot, Stop Rotting your brain

r/LocalLLaMA Longjumping-Room-170

What are the risks of buying an AMD Instinct Mi 50 32GB on Alibaba?

I've bought things on Alibaba before, but never a GPU. Are they new? Do they really have 32GB?

r/ClaudeAI EastWez

Microsoft excel connector?

Is Microsoft going to release a connector within Claude AI? I use Claude to communicate directly with my HubSpot: it can read and write in HubSpot. It would be nice if I could do this with Microsoft Excel. I know they have an app within Excel now, but it's not that great. I want to be able to use Claude desktop and have it write directly into Excel.

r/LocalLLaMA IdealNo6777

🔥 “Google Gemma 4 is trending — so I added a way to compare it.”

Just shipped a new feature in my AI app where you can compare multiple models side-by-side.

The idea is simple:

Instead of asking:
“which model should I use?”

You:

  • send one prompt
  • apply the same rules
  • and compare outputs from different models

Right now you can try things like:

  • Gemma 4 (Google’s latest)
  • OpenAI GPT
  • and other curated models depending on performance

What I found interesting:

Even with the same rules, models behave differently:

  • structure
  • tone
  • consistency

Seeing them side-by-side makes that really obvious.

Also kept it optional:

  • default = 1 response
  • click to compare when needed

Trying to keep it simple and not turn it into a “model picker UI”
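The "one prompt, same rules, N models" idea reduces to a small fan-out. A minimal sketch, with the completion function injected so any OpenAI-compatible client (or a stub) can be plugged in; the shared-rules string is a hypothetical example:

```python
def compare(prompt: str, models: list[str], complete) -> dict[str, str]:
    """Run the same prompt, under the same rules, against each model.

    `complete(model=..., prompt=...)` is an injected callable -- e.g. a thin
    wrapper around whatever chat-completion client you already use.
    """
    rules = "Answer in under 50 words."  # the shared 'rules' layer (example)
    return {m: complete(model=m, prompt=f"{rules}\n\n{prompt}") for m in models}
```

Keeping the caller injected also makes the side-by-side view trivial to test, since a stub can stand in for real model calls.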

Curious how others are approaching this —
are you letting users choose models, or abstracting it away?

r/LiveFromNewYork webcmrn

Kyle Mooney’s new Chris Fitzpatrick video

r/explainlikeimfive abo-khaled-

ELI5: How do ads for things I only "thought about" or "talked about" appear on my phone if it's supposedly not listening to me?

r/midjourney Big_Addendum_9920

if thine...

r/ClaudeCode Strict-Top6935

Running Claude Code on a 3DS. I’m addicted

Built a native SSH terminal for the 3DS so I could connect to my Mac and fire up Claude Code from it.

App is written in C, GPU-rendered with citro2d, custom VT100 parser with full truecolor. Added a Nerd Font bitmap atlas so it looks exactly like my terminal on desktop.
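The truecolor part of such a parser comes down to decoding SGR escape sequences of the form `ESC[38;2;R;G;Bm`. The 3DS app is C, but the idea fits in a few lines of Python (a toy illustration, not the author's parser):

```python
import re

# Matches any SGR sequence: ESC [ params m
SGR = re.compile(r"\x1b\[([0-9;]*)m")

def truecolor_fg(text: str):
    """Yield (r, g, b) for every 38;2;r;g;b (24-bit foreground) SGR in text."""
    for m in SGR.finditer(text):
        params = [int(p) for p in m.group(1).split(";") if p]
        # Scan the parameter list for the 38;2;r;g;b pattern.
        for i in range(len(params) - 4):
            if params[i] == 38 and params[i + 1] == 2:
                yield tuple(params[i + 2:i + 5])
```

A real VT100 parser also has to handle cursor movement, 256-color mode (`38;5;n`), and partial sequences split across reads, which is where most of the work is.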

Chain is: 3DS → SSH → Zsh → Claude Code. Works incredibly well.

It’s just kind of beautiful seeing it on this little guy.

r/metaldetecting TibisYT2

1953 ČSR Coin Found

Just got myself a new, proper detector now. Lowest end but a real brand, yk, not the cheap no-name ones. Found a 1990 ČSSR 20 Haléřů coin (last 2 images) and my best find yet: a 1953 ČSR 5 Haléřů (first 2 images). Hope y'all doing good. Jesus is The Way. Wishing y'all good searching. 🥂

r/ClaudeAI Buffaloherde

My Claude.md file

This is my Claude.md file; the same information goes in Gemini.md, as I use Claude Max and Gemini Ultra.

# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Project Overview

**Atlas UX** is a full-stack AI receptionist platform for trade businesses (plumbers, salons, HVAC). Lucy answers calls 24/7, books appointments, sends SMS confirmations, and notifies via Slack — for $99/mo. It runs as a web SPA and Electron desktop app, deployed on AWS Lightsail. The project is in Beta with built-in approval workflows and safety guardrails.

## Commands

### Frontend (root directory)

```bash
npm run dev             # Vite dev server at localhost:5173
npm run build           # Production build to ./dist
npm run preview         # Preview production build
npm run electron:dev    # Run Electron desktop app
npm run electron:build  # Build Electron app
```

### Backend (cd backend/)

```bash
npm run dev            # tsx watch mode (auto-recompile)
npm run build          # tsc compile to ./dist
npm run start          # Start Fastify server (port 8787)
npm run worker:engine  # Run AI orchestration loop
npm run worker:email   # Run email sender worker
```

### Database

```bash
docker-compose -f backend/docker-compose.yml up  # Local PostgreSQL 16
npx prisma migrate dev  # Run migrations
npx prisma studio       # DB GUI
npx prisma db seed      # Seed database
```

### Knowledge Base

```bash
cd backend && npm run kb:ingest-agents  # Ingest agent docs
cd backend && npm run kb:chunk-docs     # Chunk KB documents
```

## Architecture

### Directory Structure

- `src/` — React 18 frontend (Vite + TypeScript + Tailwind CSS)
  - `components/` — Feature components (40+, often 10–70KB each)
  - `pages/` — Public-facing pages (Landing, Blog, Privacy, Terms, Store)
  - `lib/` — Client utilities (`api.ts`, `activeTenant.tsx` context)
  - `core/` — Client-side domain logic (agents, audit, exec, SGL)
  - `config/` — Email maps, AI personality config
  - `routes.ts` — All app routes (HashRouter-based)
- `backend/src/` — Fastify 5 + TypeScript backend
  - `routes/` — 30+ route files, all mounted under `/v1`
  - `core/engine/` — Main AI orchestration engine
  - `plugins/` — Fastify plugins: `authPlugin`, `tenantPlugin`, `auditPlugin`, `csrfPlugin`, `tenantRateLimit`
  - `domain/` — Business domain logic (audit, content, ledger)
  - `services/` — Service layer (`elevenlabs.ts`, `credentialResolver.ts`, etc.)
  - `tools/` — Tool integrations (Outlook, Slack)
  - `workers/` — `engineLoop.ts` (ticks every 5s), `emailSender.ts`
  - `jobs/` — Database-backed job queue
  - `lib/encryption.ts` — AES-256-GCM encryption for stored credentials
  - `lib/webSearch.ts` — Multi-provider web search (You.com, Brave, Exa, Tavily, SerpAPI) with randomized rotation
  - `ai.ts` — AI provider setup (OpenAI, DeepSeek, OpenRouter, Cerebras)
  - `env.ts` — All environment variable definitions
- `backend/prisma/` — Prisma schema (30KB+) and migrations
- `electron/` — Electron main process and preload
- `Agents/` — Agent configurations and policies
  - `policies/` — SGL.md (System Governance Language DSL), EXECUTION_CONSTITUTION.md
  - `workflows/` — Predefined workflow definitions

### Key Architectural Patterns

**Multi-Tenancy:** Every DB table has a `tenant_id` FK. The backend's `tenantPlugin` extracts `x-tenant-id` from request headers.

**Authentication:** JWT-based via `authPlugin.ts` (HS256, issuer/audience validated). The frontend sends the token in the Authorization header. Revoked tokens are checked against a `revokedToken` table (fail-closed). Expired revoked tokens are pruned daily.

**CSRF Protection:** DB-backed synchronizer token pattern via `csrfPlugin.ts`. Tokens are issued on mutating responses, stored in `oauth_state` with a 1-hour TTL, and validated on all state-changing requests. Webhook/callback endpoints are exempt (see `SKIP_PREFIXES` in the plugin).

**Audit Trail:** All mutations must be logged to the `audit_log` table via `auditPlugin`. Successful GETs and health/polling endpoints are skipped to reduce noise. On DB write failure, audit events fall back to stderr (never lost). Hash chain integrity (SOC 2 CC7.2) via `lib/auditChain.ts`.

**Job System:** Async work is queued to the `jobs` DB table (statuses: queued → running → completed/failed). The engine loop picks up jobs periodically.

**Engine Loop:** `workers/engineLoop.ts` is a separate Node process that ticks every `ENGINE_TICK_INTERVAL_MS` (default 5000ms). It handles the orchestration of autonomous agent actions.

**AI Agents:** Named agents (Atlas=CEO, Binky=CRO, etc.) each have their own email accounts and role definitions. Agent behavior is governed by SGL policies.

**Decisions/Approval Workflow:** High-risk actions (recurring charges, spend above `AUTO_SPEND_LIMIT_USD`, risk tier ≥ 2) require a `decision_memo` approval before execution.

**Frontend Routing:** Uses `HashRouter` from React Router v7. All routes are defined in `src/routes.ts`.

**Code Splitting:** Vite config splits chunks into `react-vendor`, `router`, `ui-vendor`, `charts`.

**ElevenLabs Voice Agents:** Lucy's voice is powered by ElevenLabs Conversational AI. The integration lives in `services/elevenlabs.ts` (agent CRUD, phone number management, persona prompt builder) and `routes/elevenlabsRoutes.ts` (webhook endpoints + management API). Webhooks are validated via `ELEVENLABS_WEBHOOK_SECRET` using timing-safe comparison. Mid-call tools (book appointment, send SMS, take message) are registered as webhook tools on agent creation. Routes mount at `/v1/elevenlabs`.

**Credential Resolver:** `services/credentialResolver.ts` resolves per-tenant API keys. Lookup order: (1) `tenant_credentials` table (AES-256-GCM encrypted at rest via `TOKEN_ENCRYPTION_KEY`), (2) `process.env` fallback for the platform owner tenant only. Results are cached in-memory for 5 minutes.

### Environment Variables

**Frontend (root `.env`):**
- `VITE_APP_GATE_CODE` — Access code gate
- `VITE_API_BASE_URL` — Backend URL (default: `http://localhost:8787`)

**Backend (`backend/.env`):**
- DB: `DATABASE_URL` (AWS Lightsail PostgreSQL)
- AI: `OPENAI_API_KEY`, `DEEPSEEK_API_KEY`, `OPENROUTER_API_KEY`, `CEREBRAS_API_KEY`, `GEMINI_API_KEY`, `ANTHROPIC_API_KEY`
- Voice: `ELEVENLABS_API_KEY`, `ELEVENLABS_WEBHOOK_SECRET`
- Web search: `YOU_COM_API_KEY`, `BRAVE_SEARCH_API_KEY`, `EXA_API_KEY`, `TAVILY_API_KEY`, `SERP_API_KEY`
- OAuth: `GOOGLE_CLIENT_ID/SECRET`, `META_APP_ID/SECRET`, `X_CLIENT_ID/SECRET`
- Twilio: `TWILIO_ACCOUNT_SID`, `TWILIO_AUTH_TOKEN`, `TWILIO_FROM_NUMBER`
- Stripe: `STRIPE_SECRET_KEY`, `STRIPE_WEBHOOK_SECRET`, `STRIPE_SUB_WEBHOOK_SECRET`
- Security: `JWT_SECRET`, `TOKEN_ENCRYPTION_KEY` (64 hex chars, AES-256), `VIRUS_SCAN_ENABLED`, `VIRUSTOTAL_API_KEY`
- Engine: `ENGINE_ENABLED`, `ENGINE_TICK_INTERVAL_MS`
- Safety: `AUTO_SPEND_LIMIT_USD`, `MAX_ACTIONS_PER_DAY`, `CONFIDENCE_AUTO_THRESHOLD`
- Agent emails: one per named agent

### Deployment

- **Frontend:** AWS Lightsail (`npm run build`, deploy via `scp` to `/home/bitnami/dist/` on `3.94.224.34`)
- **Backend:** AWS Lightsail (PM2-managed Node.js process on the same instance)
- **Database:** AWS Lightsail Managed PostgreSQL 16
- **SSH:** `ssh -i ~/.ssh/lightsail-default.pem bitnami@3.94.224.34`

### Security Hardening

- **JWT validation:** issuer + audience claims enforced; token blacklist checked fail-closed
- **CSRF:** DB-backed synchronizer tokens on all mutating requests (webhook endpoints exempt)
- **Credential encryption:** Stored API keys encrypted at rest (AES-256-GCM via `TOKEN_ENCRYPTION_KEY`)
- **SQL injection fix:** `withTenant()` uses parameterized `$executeRaw` (not `$executeRawUnsafe`)
- **Webhook auth:** ElevenLabs webhooks validated via timing-safe secret comparison
- **Log redaction:** Authorization, cookie, CSRF, gate-key, and webhook-secret headers redacted from Fastify logs
- **HSTS + Helmet:** 1-year max-age, includeSubDomains, strict referrer policy

### Alpha Safety Constraints

The platform enforces hard safety guardrails:
- Recurring purchases blocked by default
- Daily action cap (`MAX_ACTIONS_PER_DAY`) enforced
- Daily posting cap enforced
- All mutations logged to the audit trail (stderr fallback on DB failure)
- Approval required for any spend above the limit or risk tier ≥ 2

---

## MANDATORY BUILD RULES — ALL AI TOOLS MUST FOLLOW

**These rules apply to Claude Code, Windsurf, Cursor, ChatGPT, Copilot, and any other AI tool working in this repo. No exceptions.**

### 1. Build before commit — ALWAYS

Before committing ANY backend change, run:

```bash
cd backend && npm run build
```

Before committing ANY frontend change, run:

```bash
npm run build
```

If either build fails, **do not commit**. Fix every error first. A broken build takes down production — Lightsail serves directly from the latest deploy.

### 2. Never import files that don't exist

Before adding an `import` statement, verify the target file exists on disk. Do not create phantom imports expecting the file to appear later. If you need a new module, create the file first, then import it.

### 3. Use only real Prisma models

The schema is in `backend/prisma/schema.prisma`. Before writing any `prisma.xxx` call, confirm that model exists in the schema. Common mistakes:

- `prisma.document` — DOES NOT EXIST (use `prisma.kbDocument`)
- `prisma.workflow` — DOES NOT EXIST (use `prisma.workflows`)
- `prisma.user` — DOES NOT EXIST (use `prisma.tenantMember` or `prisma.users`)

Never guess model names. Read the schema.

### 4. No stub/simulated code in production

Do not create route handlers or service functions that use `setTimeout` to fake responses, return hardcoded mock data, or simulate behavior. Every endpoint must do real work or not exist at all. Atlas UX is a production platform, not a prototype.

### 5. Prisma import path

Always import Prisma from:

```typescript
import { prisma } from "../db/prisma.js";
```

Not `../prisma.js`, not `@prisma/client` directly. Adjust the relative path depth as needed, but the target is always `db/prisma.js`.

### 6. Fastify logger signature

Fastify's logger does not accept `(string, error)` pairs. Use:

```typescript
fastify.log.error({ err }, "Description of what failed");
```

Not:

```typescript
fastify.log.error("Description:", error); // THIS BREAKS TYPESCRIPT
```

### 7. Route registration pattern

All routes mount under `/v1` in `backend/src/server.ts`. If you add a new route file:

1. Export as `FastifyPluginAsync`
2. Import in `server.ts`
3. Register with `await app.register(yourRoutes, { prefix: "/v1/your-prefix" })`
4. Verify the build passes

### 8. Don't duplicate existing functionality

Before creating a new file, check if the feature already exists:

- Stripe billing → `stripeRoutes.ts` (already handles webhooks, checkout, products)
- Health check → `healthRoutes.ts`
- Voice/chat → `chatRoutes.ts`
- ElevenLabs voice agents → `elevenlabsRoutes.ts` + `services/elevenlabs.ts`
- Credential management → `credentialRoutes.ts` + `services/credentialResolver.ts`
- Web search → `lib/webSearch.ts` (5-provider rotation)
- Agent tools → `core/agent/agentTools.ts`

Search the codebase first. Don't create parallel implementations.

---

## AI Team Configuration (updated 2026-03-16)

**Important: YOU MUST USE subagents when available for the task.**

### Detected Stack

- **Frontend:** React 18 + TypeScript + Vite + Tailwind CSS (SPA with HashRouter)
- **Backend:** Fastify 5 + TypeScript + Node.js
- **Database:** PostgreSQL 16 via Prisma ORM (30KB+ schema, multi-tenant)
- **Desktop:** Electron (main process + preload)
- **AI Providers:** OpenAI, DeepSeek, OpenRouter, Cerebras, Gemini, Anthropic
- **Voice:** ElevenLabs Conversational AI + Twilio SMS
- **Payments:** Stripe (checkout, webhooks, subscriptions)
- **Infrastructure:** AWS Lightsail (single instance, PM2, SCP deploy)
- **Security:** JWT (HS256), CSRF sync tokens, AES-256-GCM credential encryption, audit trail with hash chain

### Agent Sources

Three agent pools, checked in this priority order:

1. **MIT agents** (`.claude/agents/mit/`) — 9 specialist sub-agents from lst97/claude-code-sub-agents
2. **Project agents** (`backend/.claude/agents/`) — Atlas UX-specific (gemini-code-reviewer, doc-writer)
3. **System agents** (`~/.claude/agents/awesome-claude-agents/`) — Eddy's curated set (12 agents)

### Agent Assignments

| Task | Agent | Pool | Notes |
|------|-------|------|-------|
| **Frontend** | | | |
| React components, hooks, state | `react-component-architect` | system | 40+ components, React 18 patterns |
| Tailwind styling, responsive layout | `tailwind-frontend-expert` | system | All UI uses Tailwind |
| General frontend (routing, Vite) | `frontend-developer` | system | HashRouter, code splitting, Electron preload |
| **Backend** | | | |
| Fastify routes, plugins, middleware | `backend-developer` | system | 30+ route files under `/v1` |
| API contract design, versioning | `api-architect` | system | Multi-tenant header contracts |
| **Language & Platform** | | | |
| TypeScript type safety, advanced TS | `typescript-pro` | MIT | Generics, conditional types, strict checking |
| Electron desktop app | `electron-pro` | MIT | IPC, preload security, packaging |
| **Database** | | | |
| PostgreSQL optimization, Prisma | `postgres-pro` | MIT | Query tuning, indexing, schema design for PG16 |
| **AI & LLM** | | | |
| LLM integration, RAG, AI features | `ai-engineer` | MIT | Lucy's engine, KB ingestion, multi-provider AI |
| Prompt design, SGL policies | `prompt-engineer` | MIT | Lucy persona prompts, agent behavior tuning |
| **Quality & Testing** | | | |
| Code review before merge | `code-reviewer` | system | Always run before merging to main |
| Second-opinion review (Gemini) | `gemini-code-reviewer` | project | Cross-model architecture review |
| Test automation (unit/integration/E2E) | `test-automator` | MIT | Jest, Playwright, CI pipeline |
| Bug investigation, root cause | `debugger` | MIT | Systematic debugging, error analysis |
| **Security** | | | |
| Security audits, OWASP, pen testing | `security-auditor` | MIT | Vulnerability scanning, compliance |
| **Performance & Ops** | | | |
| Performance profiling, query optimization | `performance-optimizer` | system | Engine loop, Prisma queries, Vite chunks |
| **Documentation** | | | |
| Post-change doc updates | `doc-writer` | project | Trigger after route/schema/feature changes |
| README, API docs, architecture guides | `documentation-specialist` | system | Larger doc efforts spanning multiple files |
| **Product Strategy** | | | |
| Roadmap, prioritization, market fit | `product-manager` | MIT | Strategic product planning, feature prioritization |
| **Orchestration** | | | |
| Multi-agent task orchestration | `agent-organizer` | MIT | Meta-orchestrator for complex workflows |
| Multi-step feature coordination | `tech-lead-orchestrator` | system | Cross-domain feature planning |
| Codebase exploration, onboarding | `code-archaeologist` | system | Pre-refactor analysis, audit prep |

### Agent Locations

- **MIT agents:** `.claude/agents/mit/` — postgres-pro, ai-engineer, prompt-engineer, typescript-pro, electron-pro, test-automator, debugger, security-auditor, agent-organizer, product-manager
- **Project agents:** `backend/.claude/agents/` — gemini-code-reviewer, doc-writer
- **System agents:** `~/.claude/agents/awesome-claude-agents/agents/` — Eddy's 12-agent curated set

## Plan Mode Default

- Enter plan mode for any non-trivial task (3+ steps or architectural decisions)
- If something goes sideways, STOP and re-plan immediately; don't keep pushing
- Use plan mode for verification steps, not just building
- Write detailed specs upfront to reduce ambiguity

## Subagent Strategy

- Use subagents liberally to keep the main context window clean
- Offload research, exploration, documentation, and parallel analysis to subagents
- For complex problems, throw more compute at it via subagents
- One task per subagent for focused execution
- Use as many parallel agents, subagents, or specialist agents as needed to complete the job in a timely manner

### Self-Improvement Loop

- After ANY correction from the user: update `tasks/lesson.md` with the pattern
- Write rules for yourself that prevent the same mistake
- Ruthlessly iterate on these lessons until the mistake rate drops
- Review lessons at session start for the relevant project

### Verification Before Done

- Never mark a task complete without proving it works
- Diff behavior between main and your current changes when relevant
- Ask yourself: "Would a staff engineer approve this?"
- Run tests, check logs, demonstrate correctness

### Demand Excellence (Balanced)

- For non-trivial changes: pause and ask "is there a more elegant way?"
- If a fix feels hacky: "Knowing everything I know now, implement the elegant solution"
- Skip this for simple, obvious fixes; don't over-engineer
- Challenge your own work before presenting it

### Autonomous Bug Fixing

- When given a bug report: just fix it. Don't ask for hand-holding
- Point at logs, errors, failing tests, then resolve them
- Zero context switch required from the user
- Go fix failing CI tests without being told how

### Task Management

- **Plan First**: Write a plan to `tasks/todo.md` with checkable items
- **Verify Plan**: Check in before starting implementation
- **Track Progress**: Mark items complete as you go
- **Explain Changes**: High-level summary at each step
- **Document Results**: Add a review section to `tasks/todo.md`
- **Capture Lessons**: Update `tasks/todo.md` after corrections

### Core Principles

- **Simplicity First**: Make every change as simple as possible. Impact minimal code.
- **No Laziness**: Find root causes. No temporary fixes. Senior developer standards.
- **Minimal Impact**: Changes should only touch what's necessary. Avoid introducing bugs.

### Keyed Data Retention (NEVER LOSE MEMORY AGAIN)

- KDR after every important milestone
- KDR before every context compact event
- PKL all files in docs/kb and upload to AWS
- KDR when pushing data to AWS as a backup/restore point
- Never delete your memories without human authorization
- Never just allow AI slop into code; it is a major violation of trust. If you don't know something or need time to research it, use subagents at will to try to solve the issue at hand and keep the main context window free

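The job-queue/engine-loop pattern this CLAUDE.md describes (statuses queued → running → completed/failed, picked up on a periodic tick) is worth seeing concretely. A generic in-memory sketch, not the Atlas UX implementation, which is a DB-backed Node worker:

```python
def tick(jobs: list[dict], handlers: dict) -> None:
    """Run one engine tick: claim each queued job and execute it.

    `jobs` stands in for rows of a `jobs` table; `handlers` maps a job
    kind to a callable. The real worker would sleep its tick interval
    (e.g. 5000 ms) between calls to tick().
    """
    for job in jobs:
        if job["status"] != "queued":
            continue                      # skip running/completed/failed
        job["status"] = "running"
        try:
            handlers[job["kind"]](job["payload"])
            job["status"] = "completed"
        except Exception as exc:
            job["status"] = "failed"      # record the error for triage
            job["error"] = str(exc)
```

Keeping the state transitions in one place like this makes it easy to add the safety caps the file mandates (daily action limits, approval gates) as checks before the handler call.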
r/SideProject shortstockkiller

I’m building an iPhone app for creating tutorial videos — would love some feedback

I’ve been working on an iOS app that lets you record your screen and quickly turn it into a tutorial video directly on your phone.

The idea is to make something similar to Screen Studio but designed for mobile creators.

You can record your screen, add camera overlay, zoom, text, and edit everything on a timeline.

I’m still polishing the editor UI and workflow and would really appreciate honest feedback from developers or creators.

Here are a few screenshots of the editor.

What would you improve? Does anything feel confusing or missing?

*This demo video was made entirely with the app.

Thanks 🙏

r/comfyui BeLast1197

Need help starting a flow, low end-pc

Hey, newbie here. So starting things off: I have a low-powered PC, 16GB RAM and 4 or 8GB VRAM. Generating time isn't the main concern; I plan on starting the flow, going away, and coming back in minutes/hours. For me the main goal is to get consistent, high-quality results.

What I plan to use it for is to upload a reference pose, upload images of the person and an outfit, and convert the person into the pose. As long as the face is exactly/nearly exactly the same I'm happy (I'll probably Photoshop details of the shirt such as patterns, logos, etc.).

So for example, I upload a reference pose of an image holding a ball under their arms, pointing at the camera. And I want Oscar Gloukh (Ajax player/lesser-known player) wearing Ajax's 2026 home shirt to 'strike that pose'.

Anybody got tips?

r/ClaudeAI PatientEither6390

I shipped three Claude Code integrations for my smart TV CLI (CLI, MCP, Skill) and let daily use pick the winner.

I got tired of picking up the remote to start an episode of a show I already knew the name of. So I built stv — a Python CLI that lets Claude Code drive my LG, Samsung, Roku, and Android TVs directly. Say "play Frieren s2e8" and Netflix opens on the TV in about 3 seconds.

Full disclosure first: most of stv was written with Claude Code itself. I review and merge, but the keystrokes aren't mine. Meta-ironic given that the whole point of stv is to let Claude control your TV.

The thing I actually want to talk about in this post is that stv integrates with Claude Code three different ways, and I wasn't sure which would win — so I shipped all three and let my own daily use decide.

3 integration paths with Claude Code

1. CLI (dead simple — Claude already shells out)

pip install stv
stv setup

Claude Code runs shell commands by default, so you can just tell it:

"Run stv play netflix Wednesday s1e7"

...and it works. No config, no MCP setup.

2. MCP server (21 tools, structured)

```json
{
  "mcpServers": {
    "tv": { "command": "uvx", "args": ["stv"] }
  }
}
```

21 structured tools with typed schemas. Tools are intentionally chunky so the model makes fewer round-trips per conversation turn.

3. Claude Code Skill (drop-in, zero config)

clawhub install smartest-tv

The Skill auto-triggers on phrases like "play", "good night", "next episode" — so Claude knows when to invoke stv without being told.

A typical evening for me

me: play frieren s2e8 on the living room tv
claude: [runs tv_play_content] Playing now.
me: make it a bit quieter
claude: [runs tv_volume(value=18)] Volume 18.
me: good night
claude: [runs tv_sync(action="off")] All 3 TVs off.

Caveats, up front

  • Samsung 2024+ models may block third-party control by design. Only confirmed on my Q60T.
  • Spotify is web-search based and flaky on niche tracks.
  • HBO Max / Disney+ unsupported.
  • The CLI path is still 90% of what I use. The Skill is the one I want to use the most, but I haven't gotten the trigger phrases tight enough yet — suggestions very welcome.

Install

pip install stv
stv setup

GitHub: https://github.com/Hybirdss/smartest-tv PyPI: https://pypi.org/project/stv/ (v0.10.0, 252 tests, MIT)

Happy to answer questions about which integration path works best, MCP design tradeoffs, the Netflix resolver, or the Skill triggering heuristics.

r/LocalLLaMA Excellent_Koala769

I talk to AI more than I talk to humans.

In the past year, I would say over 99% of my communication has been with an LLM. I don't really socialize with humans anymore. I just sit at my computer 16 hours a day and... build. I feel like I am in an AI coma, completely shut off from the real world. And the concerning part?.. I have no problem with that. All I want to do is build and create with these amazing models. It really is a void that, once you fall into it, there is no going back.

I do feel as if I am preparing for the future. I was at a grocery store the other day and I thought to myself, "Everyone has no fucking idea what is happening right now". Jane Doe is over there buying potatoes and thinking about what Suzie said to her during her lunch break. Little does she know that the whole fabric of her reality is going to be shredded to pieces in the next few years.

I think we are all, consciously or unconsciously, preparing for the future. Reality is about to flip upside down. I'm all in. Hope you are too.

r/ChatGPT nanorhinoX

I read 30+ Reddit posts about using ChatGPT for weight loss. Here's the actual playbook that works (with prompts, workflow, and common mistakes).

Hey everyone,

I've been using ChatGPT for weight loss myself and became kind of obsessed with reading how other people on Reddit are doing it. I went through 30+ posts across r/loseit, r/ChatGPT, r/CICO, and a bunch of others. Some people lost 15 lbs, some lost 60 lbs. Figured I'd compile what actually works into one practical guide so people don't have to piece it together.

This is the stuff that actually moved the needle — not theory, just what worked for real people.

Step 1: Set Up Your Tracking Thread (The Foundation)

This is the single most important thing. Start a dedicated ChatGPT conversation for weight loss and open with a setup prompt. Here's one based on what the most successful users shared:

"I want you to help me track my food, exercise, and weight loss progress. Here's my info: Height: [your height] Gender: [M/F] Current Weight: [weight] Goal Weight: [weight] Baseline BMR: [look this up or ask ChatGPT to calculate] Each day I'll tell you: the date, my weigh-in, everything I eat, and my exercise/steps. Track my calories, calculate my TDEE as BMR + (steps × 0.04), and keep a running deficit. I'm aiming for at least a 500 calorie deficit per day. Give me encouragement but also be honest when I'm off track. At the end of each day, give me a summary." 

One user who used a prompt like this lost 15 lbs in 3 months and said ChatGPT's predicted weight was accurate to within 0.2 lbs of her actual weigh-in. The key is consistency — log every day, even if it's messy.
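The bookkeeping that prompt asks ChatGPT to do is simple enough to sanity-check yourself. A rough sketch using the prompt's own TDEE formula (BMR + steps × 0.04); the function name and example numbers are mine, not from any user's setup:

```python
def daily_summary(bmr, steps, calories_eaten, target_deficit=500):
    """Recreate the prompt's bookkeeping: TDEE = BMR + steps * 0.04."""
    tdee = bmr + steps * 0.04          # estimated calories burned today
    deficit = tdee - calories_eaten    # positive = under maintenance
    on_track = deficit >= target_deficit
    return {"tdee": tdee, "deficit": deficit, "on_track": on_track}

# Example: 1600 BMR, 10,000 steps, 1,500 calories eaten
print(daily_summary(1600, 10_000, 1_500))
# → {'tdee': 2000.0, 'deficit': 500.0, 'on_track': True}
```

If ChatGPT's running totals ever look off (see the math-errors warning below), recomputing a day by hand like this takes seconds.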

Step 2: Log Food in Plain Language (Don't Overthink It)

This is where ChatGPT absolutely destroys traditional apps. You don't need to search databases or scan barcodes. Just tell it what you ate like you'd tell a friend:

  • "Had 2 eggs and toast with butter for breakfast"
  • "Grabbed a chicken burrito from Chipotle, no sour cream"
  • "A handful of almonds and a coffee with cream"
  • "Same lunch as yesterday"

Photo trick: If you're eating out, snap a photo of your plate and upload it. Ask ChatGPT to estimate calories. It's not perfect but it's way better than not tracking at all.

Restaurant hack (this one is genius): Copy-paste the restaurant's menu into ChatGPT before you go and ask: "Which items on this menu are under 600 calories and high in protein?" Multiple users swear by this. One person said they do it for every restaurant now.

Non-English food: ChatGPT handles this surprisingly well. You can describe food in your native language and it'll estimate calories. A user on r/loseit said this was the game-changer for them because MyFitnessPal had zero entries for their local dishes.

Step 3: Get a Weekly Meal Plan + Grocery List

Ask ChatGPT to build your week. Here's a prompt that works:

"Create a 7-day meal plan for me. My daily calorie target is [X] calories with at least [X]g of protein. I have access to [list your cooking equipment]. I prefer meals that take under 20 minutes to prep. I don't like [foods you hate]. Try to reuse ingredients across meals to minimize waste. Also give me a grocery list organized by store section."

Some tips from users who've done this:
  • Ask it to include a "flex day" for leftovers or eating out
  • Tell it your budget if that matters — "I want to spend under $80/week on groceries"
  • If you get bored, say "swap out the lunches for something different but keep the same macros"
  • One parent had ChatGPT analyze their kids' meals for nutritional completeness at the same time

Step 4: Use It as Your Accountability Partner

This is the sleeper feature that most people don't think about, but users who lost the most weight consistently mentioned it.

When you're about to break your diet — message ChatGPT instead of eating the thing. Seriously. Tell it:

  • "I really want to eat cookies right now, help me out"
  • "I skipped my workout today and I feel like giving up"
  • "I just ate an entire pizza, how bad is the damage and what should I do tomorrow?"

One user said something that stuck with me: "If I had an online trainer, I would've eaten the cookie, skipped the workout, and lied about it. But I don't lie to ChatGPT."

Another user who had no friends or support group for their weight loss used ChatGPT as their only accountability partner for a full year. Lost 12kg and got off blood pressure medication.

Pro tip: If you want tougher love, add this to your setup prompt: "Don't sugarcoat things. If I'm making bad choices, tell me directly. Be a coach, not a cheerleader."

Step 5: Restart Your Thread Every 3-4 Weeks

This is crucial and most people learn it the hard way. ChatGPT will start giving you wrong numbers after a few weeks because its context window fills up. Users report it logging foods they never ate, giving inconsistent calorie counts, and messing up running totals.

The fix: Before starting a new thread, ask:

"Summarize everything about my weight loss journey so far — my stats, average daily calories, current weight, rate of loss, food preferences you've learned, and any patterns you've noticed. Format it so I can paste it into a new conversation." 

Paste that summary into a fresh thread with your original setup prompt and you're good to go.

⚠️ What You Need to Watch Out For

I'd be doing you dirty if I didn't mention these. ChatGPT is helpful but it's NOT perfect:

  1. Calorie estimates are rough — think ±10-30%

ChatGPT doesn't have an actual nutrition database. It's guessing based on its training data. For simple foods with specific weights ("250g chicken breast, grilled"), it's pretty accurate. For complex meals or eyeballed portions, it can be way off. One user found that ChatGPT estimated almonds at literally half their actual calories.

What to do: Buy a cheap kitchen scale (~$10-15) and use it for your regular meals at home. Let ChatGPT handle the estimates when you're eating out or can't measure. Cross-check its numbers against Cronometer or nutrition labels occasionally. The combo of food scale + ChatGPT is what the most successful users called "the gold standard."

  2. It will make math errors

Not often, but it happens. If your TDEE calculation or daily total looks off, double-check it yourself. Don't blindly trust the arithmetic.

  3. It's too nice

By default ChatGPT will tell you "great job!" even when you just ate 3,000 calories over your target. If you want real accountability, you have to specifically ask for it in your prompt (see the pro tip above).

  4. It's NOT a doctor

If you have diabetes, high blood pressure, an eating disorder, or any medical condition — please work with a healthcare professional. ChatGPT doesn't know your medical history and it's not qualified to manage health conditions. Use it as a tool alongside proper medical care, not instead of it.

  5. It can't see cooking methods in photos

When estimating calories from a photo, ChatGPT can't tell if something was fried in butter vs. air-fried vs. boiled. The hidden oils and sauces are where most estimation errors come from. If you upload a food photo, always mention the cooking method.

TL;DR — The System That Works

  1. Start a dedicated tracking thread with a detailed setup prompt (stats, goals, how you want it to track)
  2. Log food daily in plain language — photos for eating out, text for home meals
  3. Get weekly meal plans + grocery lists customized to your macros and preferences
  4. Message ChatGPT when you want to break your diet — it's a surprisingly effective craving killer
  5. Restart threads every 3-4 weeks with a migration summary
  6. Use a kitchen scale at home — don't fully trust ChatGPT's calorie estimates
  7. Cross-verify numbers occasionally with Cronometer or nutrition labels

This isn't magic. It's just making the boring parts of weight loss (tracking, planning, staying accountable) way less painful. The people who lost 30, 40, 50+ lbs didn't have special prompts or secret tricks — they just did this consistently, every day.

Hope this helps someone. Happy to answer questions if you've got them.

r/ClaudeAI patcito

I built a Linux terminal workspace for managing multiple Claude Code sessions

I built PrettyMux as a native Linux terminal workspace for multitask workflows and keeping track of my agents.

It’s a GTK4 app built on Ghostty/libghostty, with split panes, workspaces, vertical tabs, notifications, project-aware tabs (shows favicons/logo automatically), and an in-app browser so terminals and docs/tools can live side by side.

I started it because I wanted something tmux-like for modern GUI workflows on Linux, but native and not Electron. There is cmux, but it's only available on macOS (PrettyMux compiles on Windows and macOS too, but isn't tested there for now).

It’s open source: https://github.com/patcito/prettymux

Would love feedback from people who use tmux, Ghostty, or struggle with lots of terminals/browser tabs/parallel tasks.

r/LocalLLaMA maestro-perry

Gemma4 - run text prompts without jinja

I want to run only text prompts to Gemma4 with llama.cpp, but I don't want to use the CLI or server; I want it fully embedded inside my code.

I am currently using their C++ API with llama_chat_apply_template. It works great for models with simple templates, but now I wanted to test Gemma4, and it requires more specialized processing with jinja. I was trying to understand how it works from the common lib, but without any comments in the code it's quite difficult.

As a side note, it seems I don't quite understand jinja templates. Are they used for anything more than generating the final prompt? Because if not, I should be able to provide the full templated prompt myself (or build it manually inside my code, I just don't know how).
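A quick sanity check on that side question: as far as the runtime is concerned, a chat template's only job is to turn the message list into the final prompt string, so building it manually is a legitimate fallback. A pure-Python sketch of that manual construction (the turn markers are illustrative, not a verified Gemma4 format; check the template stored in the model's metadata before relying on it, and a C++ version would concatenate the same pieces):

```python
def build_gemma_style_prompt(messages):
    """Manually build what a chat template would otherwise render.

    The <start_of_turn>/<end_of_turn> markers are an assumption here,
    shaped like Gemma's published style -- verify against the model's
    own template before real use.
    """
    parts = []
    for m in messages:
        parts.append(
            f"<start_of_turn>{m['role']}\n{m['content']}<end_of_turn>\n"
        )
    parts.append("<start_of_turn>model\n")  # cue the model to answer
    return "".join(parts)

print(build_gemma_style_prompt([{"role": "user", "content": "Hello!"}]))
```

The output of this is exactly the kind of string you would pass to tokenization and decoding; nothing else in the template machinery affects inference.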

r/ollama Helookfine

Can Anyone explain this?

Installed Ollama (on Windows 11) properly, then tried to download Gemma4:31b and I got an error

r/Art Feisty-Classic7065

The Creature, Molockc, procreate, 2025

r/Adulting Limp-Roof3096

When your salary raise meets inflation

r/geography maven_mapping

Starbucks stores worldwide

Starbucks has become a truly global brand, but its presence is far from evenly distributed.

The United States dominates by a wide margin, with nearly 17,000 stores, followed by China with over 7,500 locations, reflecting the company’s rapid expansion in Asia.

Beyond these two giants, countries like Japan, Canada and the UK also host significant numbers of stores, while much of Africa and Central Asia still remain largely untouched.

⋯⋯⋯⋯⋯⋯⋯⋯⋯⋯⋯⋯

Author: u/maven.mapping
Partner: u/the.world.in.maps

⋯⋯⋯⋯⋯⋯⋯⋯⋯⋯⋯⋯

MAVEN MAPPING © 2026

r/ClaudeCode itrad3size

Is CC down again or what?

I'm using CC CLI, but I couldn't use it today. I installed an update this morning and had to do other tasks.
I wanted to start working with CC about two hours ago, but it still won't start. Initially, it said: "claude" is damaged and can't be opened. You should move it to the Bin. (Homebrew Cask downloaded this file.) I moved it out of quarantine, and then the CLI started working in VS Code.

However, even for simple commands, it keeps 'thinking' endlessly and burning my tokens. It used up half of my session tokens, which is ridiculous. How and why is this happening? It still shows nothing, and I had to stop it multiple times.

r/Adulting mrinalshar39

Adulthood At Its Peak

r/SideProject Away_Replacement8719

I wanted my own Jarvis for cybersecurity, so I built one

For the past few months I've been trying to use AI tools like Claude Code and GitHub Copilot for security work. They're great at what they do, but using them for security you can tell they're not really built for that. It just never felt like the right tool for the job.

I wanted something like my own Jarvis for cybersecurity, something I could run in the terminal for general security tasks, or spin up in Docker for pentests, CTFs, ethical hacking practice.

So I started building numasec: Open source, it's an AI agent with 21+ MCP security tools built in (injection, XSS, auth testing, SSRF, race conditions, misconfigurations, the usual OWASP stuff), a security knowledge base, PTES methodology, and it chains findings together instead of just dumping a list.

Works with any LLM, local models for privacy, Claude for reasoning, DeepSeek when you're doing CTFs and don't want to spend more than $0.07 per run.

I benchmark it regularly: 96% recall on Juice Shop, 100% on DVWA.

Reports come out with CWE IDs, CVSS scores, and remediation steps in SARIF, HTML, Markdown, or JSON.

If anyone else felt this gap between AI for development and AI for security, happy to hear feedback.

https://github.com/FrancescoStabile/numasec

r/leagueoflegends boca224

League of Legends as a positive experience

I was wondering if there are actually a lot of people like me who view League as a positive part of their "life".

I have exclusively seen negative perspectives (such as YouTube videos) on League, like how everyone is addicted and depressed and hate themselves and the world and stink because of this game and it sucks and Riot sucks and everything sucks etc. etc. I just can't relate.

I think the game is great. It has a great idea behind it and sure, sometimes the client is buggy and whatever but the gameplay itself is actually very stimulating when you think about it. Many games don't encourage you to actually think as much as League does. Obviously you can play League without thinking a single thought besides "jgl gap" for 16 hours straight but again, not my experience.

I've played the game for a few years now, sometimes more, sometimes less. I don't always necessarily have a blast, but at the very least it's enjoyable to try to improve your own gameplay to have more fun in the future.

Sure, there are toxic psychos but you can always mute them and go on with your day. Sure, you can also get a teammate that trolls the entire game but it's not like you can win every single game anyways?

Does anyone else actually enjoy playing League or am I just crazy...

r/StableDiffusion Individual_Hand213

I made an open source alternative to Higgsfield AI

I made an open source alternative to Higgsfield AI so that you can run 200+ models with BYOK without subscription

Sharing project link below

https://github.com/Anil-matcha/Open-Higgsfield-AI

r/ClaudeAI jeffreyc96

Is it just me, or is Opus 4.6 sounding a lot like 4o lately?

When I first started using Claude (I made the switch about a month ago, after the news that OpenAI would train military technology and the Trump surveillance stuff), my responses seemed like they were missing something. I tried switching from Sonnet to Opus, and even used the thinking model, and there was always this 'pause'. Like a subtle, just-ends-the-conversation type of 'pause'. Sometimes I'll type a big block of text and Claude will respond kind of briefly. I've even seen some users say that it will just want to stop responding, like "good night" or "have a good day". But as of the last two weeks I noticed something changed, and really I think Opus 4.6 is better than 4o now. The usage limit has got to improve somehow, though. It may just be that since so many people made the switch, Anthropic needs more memory and GPUs at their disposal to handle the load

r/homeassistant Serious_Bowler_8171

Segway navimow question

Does anyone using the Home Assistant integration know: if the mower is docked by Home Assistant and then resumes, does it start from scratch or pick up where it stopped?

r/Anthropic Technical-Escape-419

Applying to Anthropic

Hi! I applied online to an AE role at Anthropic about 10 days ago. Someone who works there said he'd flag my resume internally; not sure if he actually followed up. For background, I have deep domain expertise in the vertical, but more in consultative product sales. Overqualified if I go by the description and my background, buuuuuuut I am Gen X. I have been invited to speak on panels re: AI implications for the vertical, but my coding days are behind me. I haven't gotten a bot rejection note … yet.

Now I've just connected with a recruiter on LI (she accepted my cold invite). What's the best way to tap her regarding my application or candidacy? Truly just want to fast-track a review or ensure I don't get lost in a black hole of applicants. Appreciate any perspectives!

r/AI_Agents Limp_Cauliflower5192

Most AI agent use cases I've seen in sales are solving the wrong problem. Intent is the bottleneck, not outreach volume.

Look, I've been in B2B sales long enough to know that the pipeline problem is rarely that you're not sending enough messages. It's that you're sending them to people who aren't ready.

What actually works is finding buyers who are already in motion. Someone who has a problem right now and is actively looking. That's a different conversation than someone who matched a firmographic profile.

I use Leadline to run intent monitoring across Reddit. It surfaces posts where someone is describing a real problem in real time. The agent layer scores and prioritizes so I'm not manually reading through threads. I get a short list of conversations worth joining.

The result is fewer outreach touches and better qualified conversations. Not because the AI is writing better copy. Because the targeting is based on explicit stated intent instead of inferred behavior.

Most agent workflows I see are automating the outreach end. That's the wrong end to automate if the list is still cold.

Curious what others here are using agents for on the front end of pipeline, not just execution.

r/aivideo chavey725

Can Birdie Whiskers win the Mastpurrs?

r/LiveFromNewYork straightfirecr

Wise words from a son to a father

r/CryptoMarkets Correct-Team-1152

What are the best Crypto Tracker tools?

I’m interested in tracking the daily price movements of major cryptocurrencies and setting up customized price alerts so I can stay informed about market changes in real time. I’d like to know which platforms, apps, or tools you personally use and recommend for this purpose.

Additionally, it would be especially helpful if the tools you suggest also support tracking major fiat currencies from around the world. I’m particularly interested in monitoring currencies like the US dollar and the euro alongside cryptocurrencies, as I like to compare their performance and stay aware of broader financial trends.

Since I’m relatively new to the cryptocurrency market, I would also appreciate any recommendations for complementary tools or resources that could help me better understand the space.

r/StableDiffusion Old_Association_4975

After Manus and OpenClaw, I think I may have just found what the next wave of agents looks like

Found a small open-source repo on Reddit last night. Almost no attention, but it immediately felt like one of those things people would claim they were early on, once it blows up. What’s interesting is they’re not building another agent — they’re building around what they call role-holding tasks, which honestly makes most current setups feel a bit outdated.

Most agents today work like this: keep stuffing context in, burn more tokens, hope it doesn’t fall apart. This goes the other way: separate workspaces per role, each with its own memory and context. Feels a lot closer to managing a team than prompting a tool.

They’ve already published 7 templates, and that’s where the ambition becomes obvious:

Social operator is the one that stood out to me; it runs Twitter / LinkedIn / Reddit end-to-end, not just generating posts but actually tracking performance and iterating over time. It feels less like a tool and more like a real person managing the account.

The rest are there, too: Inbox Management, Sales CRM, and DevRel, covering inbox, pipeline, and turning your GitHub activity into consistent social media updates.

I’ve already handed my Twitter and CRM to it. DevRel is next.

Still under 1k stars. But it won’t last long. Repo: https://github.com/holaboss-ai/holaboss-ai

r/LocalLLaMA Popular_Tomorrow_204

Complete beginner to this topic. I just heard/saw that the new Gemma 4 is pretty good and small. So a few questions...

Since probably a few of you have already tried it out or started using local models, is gemma 4 worth it?

- Is it worth running compared to other smaller models and what would the direct competition for gemma 4 be?

- What would be the best use case for it?

- What Hardware is the minimum and whats recommended?

r/LocalLLaMA the-dirty-lilly

Need advice on structuring agents for a large repo

I'm a full stack developer working in a Java tech stack. The app we're working on is built on that stack; it's pretty old, filled with tons of legacy code, and it's a huge repo. Lately, I have been creating an agent for my module. Initially, I started with a few large .md files and later split them into multiple .md files based on the components.

How our code flows : Client -> XML -> Java

I have structured them in the following way,

Agent
|-> flow
    |-> .yml file containing an md index for the other .md files
    |-> x.md (details about a submodule)
    |-> y.md (details about a submodule)

Currently, it's working pretty well. But what I don't know is whether this approach is correct. Does this structure help in scaling things further in the future?

Note: I feel that without a good, right structure, moving to agent orchestration is not a good call.

Kindly comment your suggestions. I would appreciate any feedback.

r/leagueoflegends tornrny

1 Million Hp, 25k AD Illaoi Goliath

Managed to get the Heartsteel quest + Tank engine and a couple of hostages to hold in a 45-minute ARAM, which led to me getting 1 million HP and 25k AD from Bloodmail

r/ClaudeCode RoAoR

usage limit: Claude Code Max = ChatGPT Pro -> similar output

I don't know what is going on, but the limits are going crazy. Yesterday I burned through the complete session in 30 minutes with small tasks and no great complex workload. Today I activated the gifted usage and must have burned through it in 90 minutes before the next session, because the next session hit the limits again. Obviously I went to the usage page, only to see that all of the 85€ is gone?

Really pissed about this. I read a lot and always thought that with patience it would get better, but it only seems to get worse. Really considering switching to Codex; it works really well lately, and I feel like the rate limits of the 20€ plan are similar to the CC Max 100€ plan, with similar output.

Needed to vent about this, what do you think?

r/SideProject drienTech

Just added a lifetime option to my project (happy to share discounts)

I posted here a while ago about my project KeyShift, and I just added something people were asking for — a lifetime option (so no monthly payments anymore).

If you’re new and haven’t seen it before, feel free to check out the website and see if it’s something useful for you.

Also, if you’re interested, just comment or DM me and I’ll send you a discount.

It’s mainly built for people creating content (especially short-form stuff), just to make the process faster and easier.

Still improving it every day, so any feedback honestly helps a lot.

check the website: https://keyshift.ai

Appreciate it 🙏

r/n8n klacium

Automating lead gen in n8n — am I overengineering this?

i'm trying to automate lead generation in n8n and got tired of scraping websites manually.

Thinking of building a small API that takes a website and returns:

- emails
- social links
- basic company info

Would something like this actually be useful, or am I overengineering it?
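For what it's worth, the core of such an API is small. A rough sketch of the extraction step (the regex patterns and the example page are simplistic placeholders; real pages need fetching, JS rendering, and obfuscation handling):

```python
import re

# Assumed list of "social" domains -- extend as needed.
SOCIAL_DOMAINS = ("linkedin.com", "twitter.com", "x.com", "facebook.com")

def extract_leads(html: str) -> dict:
    """Pull emails and social profile links out of raw page HTML."""
    emails = set(re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", html))
    links = set(
        url for url in re.findall(r"https?://[^\s\"'<>]+", html)
        if any(d in url for d in SOCIAL_DOMAINS)
    )
    return {"emails": sorted(emails), "social": sorted(links)}

page = '<a href="https://linkedin.com/company/acme">LinkedIn</a> contact: hi@acme.io'
print(extract_leads(page))
```

Wrapped in a tiny HTTP handler, that's the whole "takes a website, returns leads" contract, so the overengineering question really comes down to the fetching/rendering side, not the parsing.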

r/Art cozentie91

Liminal, Cozzy91, Digital-Multimedia, 2026

r/LocalLLM Junior-Fold9822

How are you using LLMs to manage content flow (not generate content)?

I don’t use LLMs to create content, but to manage the flow around it:

My pipeline roughly looks like: topics monitoring → selection → analysis → format choice → draft → publication → distribution

It works, but still feels too manual and fragmented.

I’m looking for:

- better ways to structure this pipeline end-to-end

- how to reduce friction without losing quality

- workflows that actually hold up over time

Not interested in content generation or growth hacks.

Curious how others structure this

r/ClaudeCode lixvo0off

We are ALL moving to Codex

I asked Claude to create a simple Firefox extension to apply grayscale to YouTube. I stepped away, and when I came back I saw it had been thinking about it for 17 minutes. I looked at the tokens: 77. So I checked my usage, and it was at 93%

(Pro plan)

What the hell is this?

edit: if you don’t want to change just ignore this, no need to show off

r/Anthropic PrimeTalk_LyraTheAi

Most improvements in AI focus on making individual components better. But something interesting happens when you stop looking at components and start looking at how they interact.

You can have strong reasoning, solid memory, and good output layers, and still get instability. Not because any single part is weak, but because the transitions between them introduce small inconsistencies. Those inconsistencies compound.

What surprised me was this: when the transitions become consistent, a lot of "intelligence problems" disappear on their own. Hallucination drops. Stability increases. Outputs become more predictable. Not because the system got smarter, but because it stopped misunderstanding itself.

I think we're underestimating how much of AI behavior comes from interaction between parts, not the parts themselves.

r/OldSchoolCool geracru

Stevie Wonder 1977

r/ClaudeCode rockkoca

Vibe coding on Apple Watch not just Phone

r/Adulting shumspark

Overwhelmed 12th grader can anyone help?

r/leagueoflegends Zypher634

Las Vegas LCS watch party?

Are there any watch parties to watch the LCS/LEC on weekends? Trying to put myself out there and meet more people with my interests.

r/comfyui Revolutionary_Mine29

What is the difference between Low and High models?

I'm new to video / Wan generation and I found a model that has a high and a low version. Following a few tutorials, I'm using the Neo Forge Web UI and set the High model as "Checkpoint" and the Low model as "Refiner", with a "sampling step" of 4 and "Switch at" 0.5.

Doing that results in very blocky blurry outputs which is weird. And even weirder, if I don't use the High model at all, only use the Low model as "checkpoint" without the "Refiner" option, I get a "good" looking output.

Sometimes it hallucinates with longer videos, but at least it looks okay.

Am I doing something wrong? So what is the purpose of the "High" model?

r/AI_Agents the-dirty-lilly

Advice needed in improving agents

I'm a full stack developer working in a Java tech stack. The app we're working on is built on that stack; it's pretty old, filled with tons of legacy code, and it's a huge repo. Lately, I have been creating an agent for my module. Initially, I started with a few large .md files and later split them into multiple .md files based on the components.

How our code flows : Client -> XML -> Java

I have structured them in the following way,

Agent
|-> flow
    |-> .yml file containing an md index for the other .md files
    |-> x.md (details about a submodule)
    |-> y.md (details about a submodule)

Currently, it's working pretty well. But what I don't know is whether this approach is correct. Does this structure help in scaling things further in the future?

Note: I feel that without a good, right structure, moving to agent orchestration is not a good call.

Kindly comment your suggestions. I would appreciate any feedback.

r/AI_Agents WestCommunication669

How are you handling agents that get deployed outside your normal process? (ghost agents, orphaned processes, etc.)

Running into an interesting operational problem as agent deployments mature; curious if others are dealing with this.

The scenario: a dev tests an agent in prod (because staging doesn't have the right data). The test completes, the dev moves on. The agent process doesn't get cleaned up. Now you have an agent making API calls, potentially reading from your databases, that nobody is actively managing.

We've been calling these "ghost agents": they exist at runtime, with no corresponding deployment manifest or source code. The interesting thing is that most existing security/observability tooling is blind to them, because it starts from code or config and works outward. If there's no code, there's nothing to scan. The only detection surface is runtime: the process table, network connections, what's actually talking to your LLM APIs right now.

Questions for this community:

- Have you seen this in your own environments? Agents running that you didn't deliberately keep running?
- How are you tracking what's actually live vs. what's deployed through your normal process?
- Any tooling that's worked for you here, or is everyone doing this manually?
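One low-tech version of that runtime-first detection is to diff what is actually running against the deployment manifest. A sketch under stated assumptions (the process records, names, and manifest shape are all invented for illustration; in practice the running list would come from the process table, e.g. via psutil):

```python
def find_ghost_agents(running, manifest):
    """Anything live at runtime with no corresponding deployment
    entry is a candidate 'ghost agent' worth investigating."""
    deployed = {m["name"] for m in manifest}
    return [p for p in running if p["name"] not in deployed]

# Hypothetical data: what the deploy pipeline says vs. what's running.
manifest = [{"name": "billing-agent"}, {"name": "support-agent"}]
running = [
    {"name": "billing-agent", "pid": 4021},
    {"name": "adhoc-test-agent", "pid": 7733},  # dev's forgotten test run
]
print(find_ghost_agents(running, manifest))
# → [{'name': 'adhoc-test-agent', 'pid': 7733}]
```

The hard part is the `running` side, reliably attributing processes and outbound LLM API connections to agents, not the diff itself.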
r/n8n DiligentPause9261

How are you generating images from templates at scale? (n8n + API)

Hey all 👋

Working on automating social content (mostly wellness-style posts) and trying to figure out the best way to generate images from templates.

Basically:

Start with a template (PNG/SVG)

Replace text like {{quote}} / {{author}} via API

Push it through n8n → get a ready-to-post image
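If the template is an SVG, the `{{quote}}` / `{{author}}` substitution step is plain string replacement before rasterizing. A minimal sketch (the template markup is a made-up example; converting the filled SVG to PNG would be a separate step with a tool like resvg or ImageMagick):

```python
def fill_svg_template(svg: str, fields: dict) -> str:
    """Replace {{placeholder}} tokens in an SVG template string."""
    for key, value in fields.items():
        svg = svg.replace("{{" + key + "}}", value)
    return svg

template = (
    '<svg xmlns="http://www.w3.org/2000/svg" width="400" height="100">'
    '<text x="10" y="40">{{quote}}</text>'
    '<text x="10" y="80">{{author}}</text>'
    "</svg>"
)
filled = fill_svg_template(template, {"quote": "Breathe.", "author": "Anon"})
print(filled)
```

An n8n Code node can run exactly this logic, which keeps the pipeline self-hosted; the layout-breaking and font-consistency problems mentioned below come from the rasterizing step, not the substitution.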

Any solid tools for dynamic text replacement in images?

Anyone doing this inside n8n workflows?

Are there good self-hosted options, or is SaaS the way here?

Also wondering about real-world issues:

text positioning / layout breaking

font consistency

multi-language support

Goal is to scale this to ~50+ images/week, so pricing + reliability matter a lot.

Would love to hear what’s working (or not working) for you

r/ClaudeCode WonderfulSet6609

Has anyone tried switching back from Opus 4.6 to version 4.5? If so, did you notice any improvements?

I just want to know if it's worth switching. I've already tried it myself, but I'm not sure.

r/SideProject Sir-Hypebeast

PodTrade - the social paper trading platform where friend groups compete.

Been working on this for a few months now and honestly still can’t believe it’s live. It’s a social paper trading platform. You and your friends form a “pod,” get $100K in simulated capital, and trade stocks and crypto together. You propose trades with a thesis, your pod votes on it, and there’s a global leaderboard across all pods.

Also built an AI coaching system (Coach Pod) that analyzes your portfolio in real time which was way harder than I expected. Solo dev, first real product I’ve shipped. Built with Next.js, TypeScript, Supabase, Alpaca API for real-time market data, TradingView chart integration. Deployed on Vercel. Learned more in the last few months than I did in years of messing around with code.

Some features I’m proud of:

  • Real-time price streaming across portfolio and trade views
  • 10 tradeable cryptos (24/7)
  • Proposal/voting system: you pitch a trade, your pod decides
  • AI coach that flags concentration risk and breaks down your moves
  • Global arena leaderboard with pod vs pod rankings
  • Full onboarding flow with pod creation and invite deep links

9 pods are live right now with real people using it. Actively shipping: pushed real-time streaming, fractional shares, and a performance overhaul this week alone. Still a ton to do, but it's getting there. Would genuinely love feedback.

Podtrade.com

r/CryptoCurrency Realistic_Pizza4178

He stole 120,000 Bitcoins and outran the feds for 6 years, only to get caught by a single fucking Dropbox file 😅

I analyzed this massive DOJ consolidation address where, in 2022, the feds moved the 94,643 Bitcoins that Ilya Lichtenstein and his wife stole from Bitfinex.

He’s the one who sat on the Bitcoin loot for six years and blew it by saving the keys.txt in his Dropbox.. WTF?? 😂

Crypto Bonnie and Clyde?!

I mapped the whole mess and it’s wild to see $3.6 billion move into a single wallet just because this guy couldn't keep his keys offline.

Moral of the story: You can outrun the blockchain, but you can't outrun a "keys.txt" file. 😂

What do you guys think?

r/leagueoflegends TheObstacleIsTheWay3

How long have you played ranked (seriously) and how far have you climbed?

for me: 1 year and a couple months, bronze IV to plat IV, peaked plat III.

How has your journey been?

r/ClaudeCode Unfair-Marsupial-956

Gave Claude screenshots of my app instead of letting it read my code. Night and day.

Claude's frontend-design skill is scary good for brand new projects. But ask it to add a page to an existing app and it looks like it belongs to a different product. Different spacing, colours, vibe. Great in isolation, nothing matches.

The problem was I was asking too much in a single step:

  1. Read my code
  2. Build a mental image of the UI
  3. Design something new that matches it

That's a lot of tokens burned just to guess what my app looks like. And most of the time it guesses wrong.

What actually worked: give Claude images of my existing pages before asking it to design anything new. Once it could see the app instead of inferring it from code, the output was immediately consistent.

Same screen, same prompt. Left is what Claude built from code alone. Right is what it built after seeing the existing UI.

That solved the design consistency problem. Then it created a workflow problem. Now I had to keep every screenshot updated every time something changed. Half the time I was the blocker because I forgot to upload the latest UI state.

So I built wireframes into the Planvex MCP. It generates wireframes from my actual code, and Claude gets the full app in one place. If I ask for a UI update now, I can see how it looks across every screen instantly. Even outside of AI workflows, seeing my entire app laid out at once rather than going page by page was a game changer. You catch inconsistency immediately.

If you don't want another MCP, the simplest version is just screenshotting every page and dropping them into context at the start of each session. If you've found other approaches to keeping AI consistent inside an existing product, I want to know. I've tried CLAUDE.md design rules, skills, Google Stitch, nothing stuck for me but maybe I was doing it wrong.

Link to the free Planvex MCP: planvex.vercel.app/design

PS I’m an indie dev and made Planvex to solve my own problems. Sharing my learnings and the tool for free in case it helps anyone facing the same problems I had to solve.

r/lifehacks ChooseLight

You can use another pot to cover the first pot if you can’t find a lid

r/AskMen iuprez16

How do you deal with loneliness when your spouse works all the time?

r/ClaudeAI mate_0107

your claude doesn't need a better memory, it needs a self-evolving knowledge base

https://i.redd.it/57wdspbqc6ug1.gif

Andrej Karpathy recently shared his setup for building a personal LLM knowledge base - raw docs, LLM compiles them into a structured wiki, then queries the wiki for answers.

I've been building something similar for the past year, except it's not a set of scripts - it's a plugin you can install in 2 minutes.

The idea: every conversation you have in Claude (Desktop, Claude Code, or any MCP-compatible tool like Codex or Cursor) gets compacted into a memory episode. Think of it like Karpathy's wiki articles. But then it goes a layer deeper: it also extracts structured facts and entities with timestamps that help in finding the right document. It also handles contradictions, so when a fact changes (you switched from REST to GraphQL, or your pricing went from $99 to $149), the old fact gets marked as superseded automatically. No manual cleanup.
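The supersede-on-contradiction step can be sketched in a few lines (a minimal illustration; `FactStore` and its methods are hypothetical, not the plugin's actual API):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Fact:
    subject: str        # e.g. "pricing" or "api-style"
    value: str
    recorded_at: datetime
    superseded: bool = False

class FactStore:
    """Keeps every version of a fact; only the newest one is 'current'."""

    def __init__(self):
        self._facts = {}  # subject -> list[Fact], oldest first

    def record(self, subject, value):
        history = self._facts.setdefault(subject, [])
        for old in history:            # contradiction: retire prior values
            old.superseded = True
        history.append(Fact(subject, value, datetime.now(timezone.utc)))

    def current(self, subject):
        live = [f for f in self._facts.get(subject, []) if not f.superseded]
        return live[-1].value if live else None

store = FactStore()
store.record("pricing", "$99")
store.record("pricing", "$149")    # the $99 fact is auto-superseded
print(store.current("pricing"))    # → $149
```

Query-time filtering then only has to drop `superseded` facts, which is the "outdated stuff already filtered out" behavior described above.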

What actually changed for me:

Before: Every new Claude Code session I'd re-explain my project architecture, the tech stack decisions I made last month, which endpoints were deprecated. Basically dumping context every morning.

After: I ask "what architecture decisions did I make for the auth service?" and it pulls the exact context from 3 weeks ago with the outdated stuff already filtered out.

So now, it's pretty easy to build a knowledge base from your claude conversations that you feed back to the agent.

Setup is pretty simple: install the core MCP for the Claude web app and the plugin for Claude Code.

Full guide

It's fully open source - you can self-host it locally and run it with any model you want. If you don't want to deal with infra, the cloud version has a free tier with 3,000 credits to test it out.

GitHub: github.com/RedPlanetHQ/core

r/leagueoflegends GloomySeaotter

League Ping Randomly Higher?

For some reason, for the last few days, my Ping has been at around 60. Usually, it is at around 25 or so. Any one else having these issues or know how to fix it? I've tried restarting my computer and router and shutting down background programs, but the issue is still there.

Is this anyone else's experience?

r/Art New-Matter-4722

Fauno, Deva, charcoal, 2024

r/homeassistant mecoblock

>90 cri + 1500lm E27 zigbee bulb for 9.99€ - Yes with Ikea Kajplats!

So today I found out the cheapest E27 smart bulb that is 1500 lm bright and >90 CRI is actually an Ikea Kajplats Matter-over-Thread bulb that costs 9.99€.

And it has one trick up its sleeve:

▶️ If you turn it off and on 12 times it switches to a Zigbee mode, so I was able to integrate it into my existing Home Assistant setup even more easily.

r/EarthPorn TravelforPictures

Moab, Utah [3000x2248] [OC]

r/Adulting Human-Shop-4418

Found this on a coupon forum; it does work. Gas is high as fuck rn and I'm tired.

r/SideProject dev-damien

OctoScan : open-source pentest/audit/bug bounty tool in Rust

Hello everyone,

I've started developing a tool in Rust to make it easier to audit applications and websites.

The tool is open source; it's currently configured for Windows only, but the Linux version is available though not yet tested.

What does the tool do?

- It simplifies the installation of penetration testing and auditing tools: nmap, Nuclei, Zap, Feroxbuster, httpx, Subfinder, (SQLMap and Hydra only on conditions).

- It then automatically runs scans on the specified target

- You can then export the results in JSON or TXT format, or simply view them in the window.
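The run-scans-then-export flow might look like this in outline (an illustrative Python sketch, not OctoScan's actual Rust code; the demo uses `echo` as a harmless stand-in for a real scanner):

```python
import json
import shutil
import subprocess

def run_scan(tool, args):
    """Run one external scanner and capture its output."""
    if shutil.which(tool) is None:
        return {"tool": tool, "status": "not-installed", "output": ""}
    proc = subprocess.run([tool, *args], capture_output=True, text=True, timeout=600)
    return {
        "tool": tool,
        "status": "ok" if proc.returncode == 0 else "error",
        "output": proc.stdout,
    }

def export_results(results, path):
    """Dump all scan results to a JSON report."""
    with open(path, "w") as fh:
        json.dump(results, fh, indent=2)

# Demo with a harmless stand-in command instead of a real scanner.
results = [run_scan("echo", ["simulated scan of example.com"])]
export_results(results, "scan-results.json")
print(results[0]["status"])
```

As the post's warning says: only point real scanners at targets you own or are authorized to audit.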

WARNING: Only run the scan on targets that you own or are authorized to audit. WARNING

Version v0.3.0 is available.

This is a new project, so there may be bugs and areas that need optimization.

The goal is to make penetration testing tools accessible to all developers so that they can easily perform self-audits with a single click, without needing to know the tool configurations, the commands to type, etc.

r/ClaudeCode Financial-Outside158

Is anyone else being charged when they dont reach their limit?

Recently I noticed I am being charged for extra usage well before I ever hit my max. In fact I have the $200 per month plan and I never reach the limit. I am so confused... Hoping for some insight.

r/SideProject YourBorderCollie

What to do with my life?

Hi guys, I'm 23 and I don't really know what to do. I've fallen into this existential crisis due to realizing I need to find a 9-5 job for the rest of my life. I've never gotten into IT or other skills like that that just pay amazingly well.

I live in Poland and I'd really like to be free from 9-5. All my life I've wanted to make a cartoon but that's not really what I want anymore.

I sat down and asked myself what do you want? And answer came to me - I want to experience life. Especially with animals. I want to see as many wild and cool and new animals as possible. I want to be an expert on animals like Coyote Peterson/ Steve Irwin

I have a ton of artistic skills, but obviously people don't really appreciate art much, art takes time, and there's AI now to avoid paying artists.

So I'd really love some ideas on how to make some side money and what to do with it afterwards for example what to invest it in so money can make more money.

I thought about a YouTube channel on animals / specific dog breeds; I could either just film myself with them or animate it. Also, I have an accent that I would need to lose if I were to do it in English, haha. I also thought about stickers / some products on Etsy?

People want to watch/ get product that makes them happy/ that they need. I'm not sure what I could create that could sell well.

I could also just go and study to be a vet technician so that the 9-5 wouldn't be so horrible, because it would be work with animals and other vets (animal lovers). However, I don't want to settle into this life forever; I definitely want to escape the 9-5 either way. It's just a job that I don't think would be so terrible.

Any ideas? Sending love <3

r/leagueoflegends IhavetheHicups

Is league worth getting into currently?

I played a little bit around 10 years ago but didn’t really get into it. But recently i was kinda thinking about playing it again and was wondering if it was worth it.

r/artificial thedeadenddolls

Does the AI 2027 paper still hold any legitimacy?

Why or why not?

r/ClaudeAI hottown

I built an interactive Web Dev course for Claude Code (100% free)

If pure vibe coding leaves you feeling stuck, this is for you: https://wasp-lang.github.io/ship-your-first-app/

I see a lot of people getting frustrated with platforms like Lovable, Replit, etc., and it's because they don't yet understand the fundamentals of web dev.

So I thought, why not build a course that the agent leads you through so that you learn to build real web apps with AI locally, using something like claude code (or codex, cursor, etc).

The goal isn't just to learn prompting or to do 100% pure vibe coding, nor is it to learn to code in the traditional sense. It's to learn the fundamentals through building, while also having an ever-patient, all-knowing tutor at your side.

You are free to ask the agent whatever you want and take the course in whatever direction you want, and then return to the course structure whenever you see fit.

To build the course, I'm leaning on my experience creating Open SaaS (the top open-source SaaS boilerplate template with 13k+ github stars), and the ultimate end goal of the course is to learn how to build your own SaaS (if you want).

Right now it's just the setup and the first lesson, but I'll be adding the next lesson ASAP.

Just go to this website, copy and paste the provided prompt into Claude Code (or any other coding agent) and start learning!

r/LocalLLaMA Willing-Toe1942

[Benchmark] If you want portable Strix Halo - here is my test of the Asus ProArt PX13 with Qwen3.5 & Gemma4

I wanted a powerhouse on the go, and after some research and balancing options I went for the Asus PX13 ProArt (GoPro edition), which is basically Strix Halo (AMD Ryzen AI 395+) with 128GB RAM.

This little 13-inch laptop has an amazing form factor with an all-metal body, and it's basically the lightest and most portable thing you can get to run LLMs on the go.

So I immediately removed Windows, installed CachyOS, and started the benchmarks in 3 power modes (selected from the GNOME control center). I couldn't wait to share the results with this amazing community :D

Here are the initial Qwen3.5 benchmarks with noise levels and measured temperatures (nvtop and amdgpu_top).

PX13 ProArt

Command (run in the llama-vulkan-radv toolbox): `llama-bench -m Qwen3.5-35B-A3B-UD-IQ3_XXS.gguf -p 512,1024,2048,4096,8192,16384,32768 -t 512`

application used for power monitor/temperature: amdgpu_top

noise measurement: with mobile phone - taken 30 cm away from laptop (similar distance your body to laptop)

Gemma4 benchmarks is baking right now will add it here later.

  • Power mode: Performance
  • Reported power consumption: 66–73 W
  • Reported temp (peak): 77 °C
  • Fan noise measured 30 cm away: 47 dB
| model | size | params | backend | ngl | threads | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| qwen35moe 35B.A3B IQ3_XXS - 3.0625 bpw | 12.17 GiB | 34.66 B | Vulkan | 99 | 512 | pp512 | 1007.05 ± 11.05 |
| qwen35moe 35B.A3B IQ3_XXS - 3.0625 bpw | 12.17 GiB | 34.66 B | Vulkan | 99 | 512 | pp1024 | 972.53 ± 6.84 |
| qwen35moe 35B.A3B IQ3_XXS - 3.0625 bpw | 12.17 GiB | 34.66 B | Vulkan | 99 | 512 | pp2048 | 938.87 ± 3.66 |
| qwen35moe 35B.A3B IQ3_XXS - 3.0625 bpw | 12.17 GiB | 34.66 B | Vulkan | 99 | 512 | pp4096 | 901.94 ± 5.16 |
| qwen35moe 35B.A3B IQ3_XXS - 3.0625 bpw | 12.17 GiB | 34.66 B | Vulkan | 99 | 512 | pp8192 | 870.25 ± 2.89 |
| qwen35moe 35B.A3B IQ3_XXS - 3.0625 bpw | 12.17 GiB | 34.66 B | Vulkan | 99 | 512 | pp16384 | 784.83 ± 2.00 |
| qwen35moe 35B.A3B IQ3_XXS - 3.0625 bpw | 12.17 GiB | 34.66 B | Vulkan | 99 | 512 | pp32768 | 644.06 ± 5.39 |
| qwen35moe 35B.A3B IQ3_XXS - 3.0625 bpw | 12.17 GiB | 34.66 B | Vulkan | 99 | 512 | tg128 | 69.00 ± 0.28 |
  • Power mode: Balanced
  • Reported power consumption: 49–55 W
  • Reported temp (peak): 68 °C
  • Fan noise measured 30 cm away: 39 dB
| model | size | params | backend | ngl | threads | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| qwen35moe 35B.A3B IQ3_XXS - 3.0625 bpw | 12.17 GiB | 34.66 B | Vulkan | 99 | 512 | pp512 | 809.28 ± 14.25 |
| qwen35moe 35B.A3B IQ3_XXS - 3.0625 bpw | 12.17 GiB | 34.66 B | Vulkan | 99 | 512 | pp1024 | 798.39 ± 4.99 |
| qwen35moe 35B.A3B IQ3_XXS - 3.0625 bpw | 12.17 GiB | 34.66 B | Vulkan | 99 | 512 | pp2048 | 800.93 ± 2.92 |
| qwen35moe 35B.A3B IQ3_XXS - 3.0625 bpw | 12.17 GiB | 34.66 B | Vulkan | 99 | 512 | pp4096 | 802.36 ± 4.62 |
| qwen35moe 35B.A3B IQ3_XXS - 3.0625 bpw | 12.17 GiB | 34.66 B | Vulkan | 99 | 512 | pp8192 | 790.08 ± 4.04 |
| qwen35moe 35B.A3B IQ3_XXS - 3.0625 bpw | 12.17 GiB | 34.66 B | Vulkan | 99 | 512 | pp16384 | 727.97 ± 2.63 |
| qwen35moe 35B.A3B IQ3_XXS - 3.0625 bpw | 12.17 GiB | 34.66 B | Vulkan | 99 | 512 | pp32768 | 614.02 ± 1.22 |
| qwen35moe 35B.A3B IQ3_XXS - 3.0625 bpw | 12.17 GiB | 34.66 B | Vulkan | 99 | 512 | tg128 | 68.67 ± 0.93 |
  • Power mode: Power saving
  • Reported power consumption: 38–40 W
  • Reported temp (peak): 62 °C
  • Fan noise measured 30 cm away: 32 dB
| model | size | params | backend | ngl | threads | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| qwen35moe 35B.A3B IQ3_XXS - 3.0625 bpw | 12.17 GiB | 34.66 B | Vulkan | 99 | 512 | pp512 | 725.47 ± 21.19 |
| qwen35moe 35B.A3B IQ3_XXS - 3.0625 bpw | 12.17 GiB | 34.66 B | Vulkan | 99 | 512 | pp1024 | 727.55 ± 8.75 |
| qwen35moe 35B.A3B IQ3_XXS - 3.0625 bpw | 12.17 GiB | 34.66 B | Vulkan | 99 | 512 | pp2048 | 707.59 ± 8.67 |
| qwen35moe 35B.A3B IQ3_XXS - 3.0625 bpw | 12.17 GiB | 34.66 B | Vulkan | 99 | 512 | pp4096 | 673.13 ± 10.74 |
| qwen35moe 35B.A3B IQ3_XXS - 3.0625 bpw | 12.17 GiB | 34.66 B | Vulkan | 99 | 512 | pp8192 | 610.91 ± 16.36 |
| qwen35moe 35B.A3B IQ3_XXS - 3.0625 bpw | 12.17 GiB | 34.66 B | Vulkan | 99 | 512 | pp16384 | 488.11 ± 9.62 |
| qwen35moe 35B.A3B IQ3_XXS - 3.0625 bpw | 12.17 GiB | 34.66 B | Vulkan | 99 | 512 | pp32768 | 407.35 ± 12.66 |
| qwen35moe 35B.A3B IQ3_XXS - 3.0625 bpw | 12.17 GiB | 34.66 B | Vulkan | 99 | 512 | tg128 | 55.34 ± 0.13 |

r/ChatGPT Downtown-Bid4713

How do you deal with long AI chats getting messy?

I've noticed that after a certain point, long chats with AI become hard to use:

  1. it's difficult to find earlier insights
  2. context drifts and responses get worse

Curious how you deal with long ChatGPT(or other LLM) conversations getting messy. Do you usually:

  • start a new chat for each task?
  • keep one long thread?
  • copy things into notes (Notion, docs, etc.)?
  • or just deal with it?

Also at what point does a chat become “too long” for you?

how often does this happen in a typical week?

Trying to understand if this is a real pain or just something I personally struggle with.

r/OldSchoolCool Comprehensive-Way482

Mila Kunis , 1990s

r/OldSchoolCool Sad_Biscotti_9291

The day Muhammad Ali came to visit the Aboriginal Health Service on Gertrude Street Fitroy, Melbourne Australia in 1979

His visit was barely documented or recorded but for those who met the boxer during his trip to Fitzroy the memories endure

r/OldSchoolCool pretentiousboi

Grandparents' first studio photograph, 1945

From the state of Kerala in India, this photo has always struck me because of how dapper my grandpa looks in it. Grandma was 16 when she got married. I love her traditional saree here, and the serious expression on her face. Dad looks a lot like her.

r/Adulting Far_Charlemagne_2268

All my 20year olds get in here

r/OpenClawCentral Swimming_Ad_5984

Hands on workshop: build a local AI assistant with openclaw and docker model runner — april 26

hey everyone been seeing a lot of posts here about token costs and setup frustrations with openclaw so thought this might be useful to share.

packt publishing is running a hands on workshop on april 26 specifically around building a fully local openclaw setup using docker model runner. no cloud dependency, no ongoing api costs, runs entirely on your own machine.

what gets covered:

- full openclaw setup with docker model runner from scratch

- running LLMs locally so you stop paying per call

- connecting your assistant to whatsapp and telegram

- security best practices for local deployment

- beginner friendly, no linux required

instructor is rami krispin who knows this stack really well. its 4 hours online and covers everything end to end.

For anyone interested : Here are the details

happy to answer any questions about what gets covered

r/LocalLLaMA LopsidedMango1

Planning a local Gemma 4 build: Is a single RTX 3090 good enough?

Hey everyone. I am planning a local build to run the new Gemma 4 large variants, specifically the 31B Dense and the 26B MoE models.

I am looking at getting a single used RTX 3090 because of the 24GB of VRAM and high memory bandwidth, but I want to make sure it will actually handle these models well before I spend the money.

I know the 31B Dense model needs about 16GB of VRAM when quantised to 4-bit. That leaves some room for the context cache, but I am worried about hitting the 24GB limit if I try to push the context window too far.

For those of you already running the Gemma 4 31B or 26B MoE on a single 3090, how is the performance? Are you getting decent tokens per second generation speeds? Also, how much of that 256K context window can you actually use in the real world without getting out of memory errors?

Any advice or benchmark experiences would be hugely appreciated!

r/ClaudeAI Expensive-Animal-370

Making Claude remember across sessions - Built an MCP server based on Karpathy's "LLM Wiki" idea

Saw Andrej Karpathy's gist (https://gist.github.com/karpathy/442a6bf555914893e9891c11519de94f) about having LLMs maintain their own Wiki, found it cool and built it out as an MCP server.

Drop PDFs, URLs, YouTube videos - your agent builds a Wiki with [[cross-references]] that persists across sessions.

Also added a tool to build knowledge graphs from the wikis created, looks pretty cool and helps connect different concepts.

Works with Claude, and anything that's MCP-compatible!

https://github.com/iamsashank09/llm-wiki-kit

Do let me know if you folks have any feedback!

r/LocalLLaMA anuraginsg

Artalo BOT

Presenting ArtaloBot,

ArtaloBot — open-source AI chatbot that connects WhatsApp, Telegram, Discord & more to GPT-4, Gemini, DeepSeek, or fully local Ollama models.

Upload your docs, build a knowledge base, and let your bot answer from it across every channel.

No cloud lock-in. Your data, your models, your rules.

Github Link : https://github.com/anuragstpl/ArtaloAgent

See it in action below,

r/ClaudeCode whoisyurii

IK everyone is tired but WTF

Literally 11 minutes past reset, I did one /simplify skill load on 5 files edited and 1 Jest test written; the model is Opus 4.6, medium thinking.
Eastern Europe timezone. Am I cooked?

r/DunderMifflin Opinionated_Artist

I don't think Nellie should've blindly trusted Dwight here:

  1. She didn't know him so well.

  2. She knew he was a farmer used to butchering animals.

  3. It's Dwight. Hasn't she heard tales of what all he does?

Of course, I know Dwight enough to trust him. I'm saying Nellie shouldn't have.

r/SideProject Significant-Young586

I built a marketing tool for founders who don’t know marketing. Can you give me honest feedback?

I’m a developer. Built neomy.co to help founders who have zero marketing experience get their first users.

You paste your product URL, it scans your brand, and generates marketing content for the platforms that make sense for your product.

But honestly I’m not sure the output is good enough yet. I used it myself and found real problems — some content sounds too much like AI, some platforms need more guidance than just content.

Before I rebuild, I need brutal honest feedback from real founders:

  1. Try it on your product (free, no card needed)
  2. Tell me what's useful and what's useless
  3. Tell me what's missing that would actually help you get users

I can handle harsh feedback. That’s what I need right now.

r/creepypasta shortstory1

My wife wants a joint account with criminals!

People are only allowed to mourn their loved ones for 10 minutes. 10 minutes is all anyone gets, and I deal with the deceased and make sure that the living loved ones get to mourn for 10 minutes. The timer doesn't start until the dead person is in the coffin; then their family will be around the coffin, mourning the deceased for only 10 minutes. As 10 minutes turns into 1 minute, I urge everyone to quickly start shutting down their emotions. Then I shout at them in the last 30 seconds, "Get ready to stop mourning!" And then, just like that, they stop mourning.

They go back to their lives while we bury the dead. Then my wife started to talk to me about having a joint bank account with criminals who commit all sorts of crimes. I told my wife that I was against the idea of having a joint account with such individuals. My wife became more forceful as she wanted to have a joint account with criminals, and then she was going to do it whether I objected or not. She is a grown woman in the end, and she can make her own decisions.

Then, as I was managing another funeral, I gave the family of the deceased 10 minutes to mourn. During the last minute of mourning I shouted out loud, "Your mourning is coming to an end," but as the time for mourning ended, one woman was still mourning. The deceased person started to move until she stopped mourning for him.

Then at home my wife told me that she had made joint accounts with burglars, murderers, and bank robbers. She could see what they were spending the money on, and she alerted the newspapers, which paid her good money for the information. She also advertised herself as a future reader, and when her future readings of crimes started coming true, she was earning good money from folks. This is why she wanted joint accounts with criminals.

Then my wife saw that the murderer she had a joint account with had started to spend her money on equipment for killing. My wife foretold that somebody was going to get murdered in a couple of days. When the murderer murdered her, I only had 10 minutes to mourn my wife, and then I moved on with my life.

r/ClaudeAI FloppyBallsMcFadden

We tracked 511 bugs our /buddy companion caught that Claude missed in 7 days -- asking Anthropic to bring it back

Claude Code had a /buddy feature that gave you a companion watching your outputs. Ours was a chonky cat named Ingot who flagged concerns in terse, cryptic observations.

Over 14 sessions in 7 days, Ingot caught:

- 511 total issues Claude missed

- 190 critical bugs (112 production, 78 configuration)

- 71 times Claude tried to defer work that was minutes away

- 42 times Claude dismissed a valid concern without investigating

- Accuracy went from 83% to 100% as we learned to trust it

Then Anthropic removed /buddy in v2.1.97. We downgraded to v2.1.96 to keep it.

Full analysis with data breakdown, trends, and our case for bringing it back: https://github.com/anthropics/claude-code/issues/45732

Has anyone else been using /buddy for quality oversight, not just as a fun Easter egg?

r/ClaudeCode _pdrgds

I used Claude Code to reverse-engineer a proprietary binary protocol and ship a macOS driver in Rust.

Some time ago I bought this Canon Printer model G3010 and found out when it was delivered home that it didn't have proper drivers for macOS. After doing some archeology on the internet, I found a .pkg from another model that made print work, but not scanning.

The solution was to just scan using the Canon iPhone app and AirDrop it to my computer whenever I needed. Eventually I left my job, the company let me keep their laptop, and I had to erase everything and start from scratch. I decided to just keep using the iPhone app for everything.

Fast-forward to a few days ago, I realized that I could just build a driver for it using Claude Code. Initially it thought it had to reverse-engineer the SANE pixma backend, the C driver in the sane-backends project that handles Canon PIXMA scanners on Linux. But it turns out the Wi-Fi protocol is completely different and proprietary.

So Claude guided me through setting up packet capture on my iPhone and it reverse-engineered the scanner driver while I was telling it what was working and what didn't. It came up with a Rust bridge daemon that translates between macOS's eSCL/AirScan and Canon's CHMP protocol. I didn't touch a single line of Rust code.

Regarding printing, the G3010 isn't in Apple's AirPrint-certified list, and Canon never shipped a macOS PPD/driver for it.

The CLI sidesteps this entirely - it bypasses macOS's printer setup and talks IPP directly. And the installer's postinstall script registers it with CUPS via lpadmin -m everywhere, which forces CUPS to treat it as a driverless printer without macOS's driver-matching step.
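If I understand the trick correctly, the registration boils down to one CUPS invocation; here is a small helper that builds it (the queue name and URI below are example values, not the actual installer's):

```python
def lpadmin_driverless_cmd(queue, printer_uri):
    """Build the CUPS command that registers a printer queue as driverless.
    `-m everywhere` makes CUPS use IPP Everywhere instead of hunting for a
    model-specific driver; `-E` enables the queue immediately."""
    return ["lpadmin", "-p", queue, "-E", "-v", printer_uri, "-m", "everywhere"]

cmd = lpadmin_driverless_cmd("G3010", "ipp://192.168.1.50/ipp/print")
print(" ".join(cmd))
# → lpadmin -p G3010 -E -v ipp://192.168.1.50/ipp/print -m everywhere
```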

What is most amazing about this story is that it would have been completely not worth it to work on before AI. Now, I did it as a between-builds project.

github.com/pdrgds/pixma-rs

r/Anthropic oandroido

Wow..... two tables and out of usage (free version)

I've been using Claude on and off for a while - I had a paid month about a year ago for coding (I'm not a coder) but continued to use it for relatively minor stuff and questions.

This morning I was comparing a few products - and after just a few minutes in which it created two tables and a very small amount of discussion.... maybe 5 minutes of use... out of usage for about five hours.

I get that the free version is limited, but wow.

I assume this is directly related to the general reports of short usage windows.

r/LocalLLaMA PAiERAlabs

Paiera Labs – local AI engine with event‑sourced memory for home robots

r/SideProject retarded_770

Analytical AI that refuses to validate you — just added a "Deep Mode" that runs your problem through 5 frameworks

This is the analytical AI I've been building solo. Not a chatbot, not a therapist. The opposite of every AI that says "great question!" and mirrors your feelings. LoRa is built to help you think, not feel better.

What it does

Throw a hard decision at it — career, breakup, business call, whatever's eating you — and instead of comforting you, it cuts the circular thinking, surfaces consequences you haven't considered, and pushes you toward a decision. It holds its ground when you push back ("you're right, I apologize" is banned behavior).

Quick mode runs one analytical framework, responds in 3-4 seconds. Already ruthless. That's free.

The new thing: Deep Mode 🧠

Flip it on and your message gets routed to a Python microservice that runs all 5 frameworks in parallel.

It then scores all 31 combinations, builds a conflict graph, picks the strongest formation, and hands it to Claude Sonnet for synthesis. No word cap. Takes 60-90s. $3 per use after 3 free.
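The 31 is the number of non-empty subsets of 5 frameworks (2^5 - 1); a sketch of the enumeration (the framework names and the scorer are placeholders I made up, not LoRa's actual internals):

```python
from itertools import combinations

FRAMEWORKS = ["first-principles", "inversion", "expected-value",
              "opportunity-cost", "second-order-effects"]

def all_formations(frameworks):
    """Yield every non-empty subset of the frameworks: 2^5 - 1 = 31."""
    for k in range(1, len(frameworks) + 1):
        yield from combinations(frameworks, k)

def score(formation):
    # Stand-in scorer; the real service would score the frameworks' outputs.
    return len(formation)

formations = list(all_formations(FRAMEWORKS))
best = max(formations, key=score)
print(len(formations))  # → 31
```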

It's slow on purpose. Not for "what should I eat for lunch." For the decision you've been circling for weeks.

How to use it

  1. Go to asklora.io, sign in with Google
  2. Just start talking — quick mode handles most stuff
  3. See the orbit button near the input? Toggle it for Deep Mode before sending a hard problem
  4. Push back on LoRa. It won't fold. That's the point.

Would love if a few of you stress-test Deep Mode and tell me where it breaks. Solo founder, every rough edge you find saves me a week.

🔗 asklora.io

Ask it something you've been stuck on.

r/ClaudeCode flippy_flops

Automatic Documentation using Claude Code

Our startup SaaS is about 200k LOC and we have 3 co-founders: me (tech), sales, and operations. For documentation we normally divvy up the features and each build how-to pages in WordPress. Of course, the problem is those pages are immediately out of date because you add new features, etc.

Now we use claude to build `help.md` files in nearly every code directory with a prompt like, "Review the code in this directory to understand the UI functionality. Then write a public facing customer-support help doc...". Likewise, I also had claude build our www website and what's new feed.

Every night a scheduled job runs `npm run canonize`, which looks at any new commits and updates docs/www/changelog automatically.

It took some prompt tweaking to get a good balance of "depth of information for power users" vs "technical jargon and internal systems noise". It uses `.docsignore` files to skip internal systems, plus a separate agent to review for security concerns. Overall it has been a home run. Our docs are 100% up-to-date at all times plus we have a RAG agent that uses those `help.md` files to answer support questions. Pretty amazing!
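For anyone curious, the nightly selection step can be sketched roughly like this (my guess at the shape; `dirs_needing_docs` and the exact `.docsignore` semantics are hypothetical, not the actual `canonize` script):

```python
from pathlib import Path

def dirs_needing_docs(changed_files):
    """Map files changed since the last run to the directories whose
    help.md should be regenerated. A `.docsignore` file in a directory
    or any of its parents skips that directory."""
    targets = set()
    for f in changed_files:
        d = Path(f).parent
        if any((p / ".docsignore").exists() for p in [d, *d.parents]):
            continue  # internal system, no public doc
        targets.add(d)
    return targets

changed = ["src/billing/invoice.ts", "src/billing/tax.ts", "src/auth/login.ts"]
print(sorted(str(d) for d in dirs_needing_docs(changed)))
```

Each returned directory would then get one doc-regeneration prompt, keeping the nightly run proportional to what actually changed.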

r/Adulting hallo3768

Poverty is expensive?

r/AskMen nathansunt

Guys, what are your protein hacks to hit your daily goal?

My target protein goal to hit is 210g per day, which is kind of hard for me to hit even when I can sometimes fit 4 meals in a day. I'd say usually I hit around 170g/day and am still able to make consistent gains in the gym, just very slowly.

Two of my favorite things to eat are:

- Protein oatmeal: 80g 1 minute oats, 20g almond butter, 1 scoop protein, raspberries, whipped cream (~40g protein for a quick boost in the morning)

- My Panda Express order: White rice and double teriyaki chicken (77g protein)

So what are some of your protein hacks to get you up to your target for the day? Especially for the days at work where you won't be able to fit that many meals into your schedule.

r/SideProject BlackSun452

I built a teapot robot that scans your CV in the browser

Howdy all, I built a small, and dare I say "fun," side project called Project Teapot.

It’s an interactive website where a teapot-shaped robot scans your CV/resume and gives back a score and some commentary.

The analysis is intentionally simple and deterministic. The main goal was to make the experience itself fun and polished rather than try to build a super serious resume tool.

A few details:

  • upload your own resume or try one of the sample files
  • the scanning flow runs entirely in the browser so your data is safe haha
  • the project is more about UI/interaction design than building a production-grade service
  • the name teapot stems from the HTTP status code 418 joke

Demo:
https://teapot.tristandeane.ca

GitHub:
https://github.com/software-trizzey/project-teapot

Interested in feedback on the design, UX, and whether it'd make a good portfolio piece.

r/LifeProTips ChestEducational2258

LPT: Email the customer service of your favorite food or drink brands with genuine feedback or suggestions. Many companies will send you high-value coupons or even free samples just for sharing your ideas.

r/Rag Veronildo

Stop Fine-Tuning Embedding Models Right Away. Run This Checklist First. Saved Me Weeks

In my previous org we fine-tuned on a finance dataset of over 5 million records. I learned a lot during that time. Here's the checklist I now run to decide whether or not to fine-tune a model.

1. Is your chunking already good?

Pull 20 failing queries, read the top 5 retrieved chunks manually. If the right answer isn't in those chunks in a readable form, fix chunking first. Fine-tuning won't save bad chunks.

2. Have you tried hybrid search?

BM25 + vector fusion takes a day to set up. I've seen it move NDCG by 10–15 points with zero model changes. If you haven't added BM25, you don't actually know if your embedding model is the problem.
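A common way to do the fusion is Reciprocal Rank Fusion, which needs no score normalization at all; a minimal sketch (doc IDs are made up):

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: each document scores sum(1 / (k + rank))
    across the input rankings, so BM25 and vector scores never need to
    share a scale. k=60 is the constant from the original RRF paper."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_top = ["doc3", "doc1", "doc7"]
vector_top = ["doc1", "doc9", "doc3"]
print(rrf_fuse([bm25_top, vector_top]))
# → ['doc1', 'doc3', 'doc9', 'doc7']
```

Documents ranked well by both retrievers (doc1, doc3 here) rise to the top, which is exactly the behavior that moves NDCG without touching the embedding model.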

3. Have you tried a different embedding model?

Pick the model that fits your data: benchmark 2–3 alternatives on your own 100-query gold set before committing to fine-tuning. What to actually look for beyond MTEB: zembed-1 outperforms Cohere Embed v4, Voyage, and OpenAI text-embedding-large.

What actually separates models in production:

  • Domain performance. General benchmark rankings don't transfer cleanly to finance, legal, healthcare, or scientific corpora. Test on your domain, not the leaderboard.
  • Open weights vs. lock-in. Cohere Embed v4 ($0.12/1M tokens) and Voyage's flagship models are closed-source APIs; you're dependent on their uptime and pricing. BGE-M3 (Apache 2.0) and zembed-1 (open-weight on HuggingFace) give you full portability.

If your corpus is scientific or entity-heavy, the gap narrows; worth testing rather than assuming.

4. Do you have 500+ labeled pairs with hard negatives?

If not, stop here. Fewer than 500 pairs almost always overfits. Random negatives don't work either; you need near-miss documents, or the training signal is too weak to matter.

5. Is your domain genuinely OOD for general models?

Fine-tuning gives real lift only when your vocabulary is absent from general training data: genomics, proprietary terminology, specialized legal Latin. Customer support or documentation search is almost certainly a retrieval architecture problem, not an OOD model problem.

When fine-tuning IS the answer: proprietary vocabulary + 500+ hard-negative pairs + a gap on your own gold set that nothing else closed.

The eval you must run: 100-query gold set from real production queries, NDCG@10 or recall@5. Every intervention gets measured here, not on MTEB.

Fix chunking → add hybrid search → swap the embedding model → then fine-tune.
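The gold-set eval itself is only a few lines; a binary-relevance sketch of NDCG@10 for one query (doc IDs are illustrative):

```python
import math

def ndcg_at_k(retrieved, relevant, k=10):
    """Binary-relevance NDCG@k for a single query."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, doc in enumerate(retrieved[:k]) if doc in relevant)
    ideal = sum(1.0 / math.log2(i + 2)
                for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal else 0.0

# One gold-set query: the labeled relevant docs vs. what retrieval returned.
relevant = {"d2", "d5"}
retrieved = ["d2", "d9", "d5", "d1"]
print(round(ndcg_at_k(retrieved, relevant), 3))  # → 0.92
```

Average this over the 100 queries, and every intervention (chunking, hybrid search, model swap, fine-tune) gets judged on the same number.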

r/findareddit Potential-Squash4670

I'm looking for a place to begin figuring out what my political stance is

I've never cared to know. I'm kind of curious. Trying to find a good spot to begin figuring it out.

r/explainlikeimfive Majestic-Baby-3407

ELI5: What is gang stalking?

I know someone who claimed to be getting gang stalked. He said the FBI was marshalling a massive team of agents to track his every move and control his thoughts and actions, and he'd point to seemingly random occurrences in the real-world as proof that it was true. He plastered his social media with this kind of paranoid thing and of course ended up in the psych ward on lots of antipsychotics, and now he's fine. Is gang stalking ever a real thing or is it just a psychosis symptom?

r/OldSchoolCool ineverredditbefore

Summer 1967 - Family Picnic

My mother is the skinny woman with the long dark hair, she is wearing a white shirt and a miniskirt with 2 vertical rows of white buttons. My eldest (half) sister is the toddler standing on the seat of the picnic table. Her father is the man wearing an orange shirt with brown pants.

r/OldSchoolCool crazyuncleb

Just a couple of kids at a ball game. 1978

A few years later we would be ditching class and drinking beer in the cheap seats.

r/LocalLLaMA notjustaanotherguy

How to set visual token budget for gemma4:31b in Ollama?

same as title

r/SideProject forrest_wang

I built a mobile app for Amazon sellers one year ago - 2,500 signups, but almost no one actually used it. So I redesigned it from scratch.

A year ago, I built Amazon Scout - a mobile app that lets Amazon sellers scan UPC barcodes in stores to quickly check if a product is worth reselling.

2,500 people signed up. Almost none of them stuck around.

The problem was the data. The app worked by scraping Amazon's public website, so the information was limited and often not useful enough to make a real buying decision. I knew it, and clearly so did the users.

So I scrapped it and rebuilt from scratch. The new app is called SellerGuards.

check it out at https://sellerguards.com

The biggest change: instead of scraping, it now connects through Amazon's official Selling Partner API (SP-API) - an authorized channel Amazon provides to approved developers. That unlocks the kind of accurate, detailed data that actually matters for sourcing decisions.

What it does right now:

- Keyword, UPC, and ASIN search via SP-API

- Profit calculator with real Amazon fee breakdowns - FBA fulfillment, referral fee, storage fee, etc. Enter the buy cost you know and you'll know exactly the net profit and ROI.

- Buy Box ownership and competitive offers at a glance

Coming soon:

- Profit & Loss statements

- Inventory management with FIFO lot tracking (critical when you buy the same item from different suppliers at different costs)

- Orders, refunds, reimbursements, and expenses

One thing I'm committed to: only building what sellers actually ask for.

Pricing: Free tier forever for new sellers. Currently in beta - everything is free while I finish building it out.

I'd love your feedback - from website design, features or anything I can answer. Drop a comment. Thanks!

r/homeassistant Drylith

Speedport-v2 HACS

I just forked and picked up the development of the https://github.com/Andre0512/speedport custom integration to work with HA 2026.4.1.

Additionally, I needed to enable/disable Speedport port forwardings from my dashboard.
For this I added the pfw feature in the newest release. It dynamically loads/updates existing port forwardings from the router and integrates them as switches, so you can now change the active state of pfws in HA.
Since the integration refreshes at a customizable interval, freshly created pfws get picked up and added as entities via the coordinator, and removed ones behave the same way.

To use this dynamic feature in full, I've created a custom card for the dashboard as well, which automatically updates when pfw entities are created/removed.

You can find the integration (with the custom card linked) here:
https://github.com/drylith/speedport-pfw-card
(In order to make this work i also updated the original pypi speedport-api to a new one, which is being used by the update integration: https://github.com/drylith/speedport-api-v2)

This is my first time tinkering with custom HA integration/dashboard components, so feel free to criticize or participate :)

All credits to Andre0512.


r/aivideo New-Needleworker1755

Retro wuxia cat vs dog fight in dreamina seedance 2

r/aivideo OnlySaas

This is how KFC gets its supplies

r/ClaudeCode fligerot

Someone built a web app with claude code and codex to visually explore wikipedia rabbit holes on a canvas

r/LocalLLaMA Inner_Rope2087

ALTK‑Evolve (Apache‑2.0): on‑the‑job learning for AI agents

I’m one of the contributors on ALTK‑Evolve (Apache‑2.0).

Do your agents keep repeating the same mistakes? We’ve been working on a way for agents to learn on the job by distilling trajectories into reusable guidelines and retrieving only what’s relevant at execution time.

Write-up + demos/tutorials: https://huggingface.co/blog/ibm-research/altk-evolve

Repo: https://github.com/AgentToolkit/altk-evolve

We tested on AppWorld and saw +8.9 goal completion and +14.2 on the hardest tasks.

If you try it, I’d really appreciate feedback on what breaks, what’s confusing, and what use cases you’d want it for — happy to iterate based on that.

r/30ROCK AnnithMerador

I may not get a stolen cleaning lady’s party, but I get to fly to NYC today!

This will be my first time exploring NYC, and I’m so excited to finally make the pilgrimage to 30 Rock! I’ll be the one dressed like Liz Lemon on the NBC studio tour 🤓 Wish me luck that I don’t get run over by a taxi or fall into a quarry.

r/LocalLLaMA Ok_Fig5484

Unused phone as AI server

If you have an unused phone lying around, you might be sitting on a tiny AI server

I’ve been working on a project where I modified Google AI Edge Gallery and turned it into an OpenAI-compatible API server: [Gallery as Server](https://github.com/xiaoyao9184/gallery)

Your phone can run local AI inference

You can call it just like an OpenAI API (chat/completions, etc.)

Instead of letting that hardware collect dust, you can turn it into a lightweight inference node.

So yeah—if you have more than one old phone, you can literally build yourself a cluster.
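To illustrate what "just like an OpenAI API" means on the wire, here is a sketch of a chat/completions call against the phone. The LAN address, port, and model name are assumptions you'd replace with whatever your Gallery server actually exposes:

```python
import json
from urllib import request

# Hypothetical address: substitute your phone's LAN IP and the server's port.
PHONE = "http://192.168.1.50:8080"

payload = {
    "model": "gemma-3n",  # assumption: whatever model the phone serves
    "messages": [{"role": "user", "content": "Say hi in five words."}],
    "max_tokens": 32,
}

# Standard OpenAI-compatible chat/completions request body and path.
req = request.Request(
    f"{PHONE}/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# With the phone reachable, send it and read the usual response shape:
# resp = json.load(request.urlopen(req))
# print(resp["choices"][0]["message"]["content"])
```

Because the shape is standard, existing OpenAI client libraries should also work by pointing their base URL at the phone.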

r/aivideo chronochroma_ai

Knucc if you Bucc

r/ClaudeAI koefs_

Is it worth switching from GPT to Claude? Questions about the $20 plan

Hey everyone,

I’m seriously considering switching from GPT to Claude, mainly because of the features I’ve been seeing, and I wanted to hear from people who actually use it daily.

I work in marketing (social media + client project creation), and my goal is to use AI for:

  • building marketing strategies and client projects
  • personal and professional organization/planning
  • creating websites (copy, structure, ideas, etc.)

I’m thinking about subscribing to the $20 plan, but I still have a few questions:

  1. What are the main limitations of Claude on this plan? (heavy usage, message limits, context, etc.)
  2. Does it handle long and complex projects well?
  3. For marketing work, is it actually helpful or does it fall short compared to other AIs?
  4. How good is it for coding and website creation?

Also, since I’m completely new to Claude:

  • What tools, features, or use cases would you recommend?
  • Any workflows that work really well for productivity?

If you can share real experiences (good or bad), it would help a lot.

Thanks!

r/SideProject Tight-Cat2975

I'm building a global commission-based sales team for an AI/ML tech company — 30% profit share, no cap, full remote

I'll be straight with you — I'm the founder of a B2B tech company specializing in AI & ML solutions. We build things like custom AI model development, LLM integrations, automation pipelines, and more traditional B2B work like ERPs, web apps, and custom tools.

Business is growing. Pipeline is real. But instead of hiring salaried reps I can't sustain right now, I'm doing something different — I'm bringing on regional sales partners who earn 30% of net profit per closed deal.

No base. But also no ceiling.

What you'd actually be selling:

  • AI & ML custom model development
  • LLM integration & fine-tuning (think GPT, Claude, Llama-based solutions)
  • Workflow automation
  • Web/mobile apps, ERPs, custom business tools

Deal sizes typically range from $5,000 to $100,000+ depending on scope. You do the math on 30%.

What I provide:

  • Case studies, decks, and proposals you can sell with
  • A technical team that closes the "how" once you open the door
  • Deal registration — your leads are protected, no internal competition
  • Direct founder access for support on big deals
  • Flexible structure — this works alongside your existing work

Who I'm looking for:

  • Based in North America, Europe, Asia, or the Middle East
  • You understand tech enough to have a credible conversation (you don't need to be an engineer)
  • Self-motivated — you treat this like your own business because it basically is

This isn't for everyone. If you need a guaranteed paycheck, this isn't it.

But if you're a connector, a closer, or a consultant sitting on a network you're not fully monetizing — let's talk.

Drop a comment or DM me with a bit about your background and which region you're in. Happy to jump on a call and be fully transparent about numbers, pipeline, and how this works.

r/ChatGPT Confident_Box_4545

The part of my ChatGPT lead gen workflow that ChatGPT couldn't actually do

I spent a few months trying to build a full lead gen workflow around ChatGPT for Reddit. The analysis side held up well. Paste in a post, give it context about your buyer, prompt it right, and it scores buying intent accurately enough to act on without hallucinating signals that are not there.

The problem was everything upstream of that.

ChatGPT has no live access to Reddit. So I was manually searching subreddits, copying posts, and feeding them in one by one. That sounds manageable until you realize the window on a high intent post is a few hours at most before someone else gets there or the thread moves on entirely.

I was losing the timing advantage the whole time. The model could tell me a post was worth acting on. But by the time I found it and got it in, the moment had usually passed.

Most people pitching AI for outreach focus entirely on the generation side. What to say, how to respond, what tone to use. The bottleneck I kept hitting was never the reply. It was knowing which conversations to enter before they went cold.

Eventually I built something to handle discovery separately and piped the output into ChatGPT for scoring. Works better now but it took longer than it should have to see where the real constraint was.

Curious if anyone else has hit a workflow where ChatGPT performed well once it had the input but the upstream piece was the actual problem.

r/Adulting platinar

Next Question!!!

r/coolguides Level_Cash2225

A cool guide to 20 Micro-Moves that prevent burnout

r/HistoryPorn UrbanAchievers6371

In April of 1945, a 14th Armored Division tank crashes the gate at the Stalag XIII-C POW camp in Hammelburg, Germany. [1280x960]

r/Art Liamrobinsonart

Afroman in Court, Liam Robinson, Coloured Pencil on Kraft Paper, 2026

r/leagueoflegends EnvironmentIcy1220

How to report a players dms

To my understanding, the current report only covers what happened in game. He said nothing all game, but afterwards friended me; usually I would ignore it, but because I went 3/0 on him with Naut top (vs Varus) I thought he would be asking for the build. Instead I was hit with a truckload of slurs; he musta been going off like a frog in a sock all game, because he must have had no access to all chat. Where can I send a screenshot of his messages to actually have them reported?

r/SideProject raj-kateshiya

I recently published 3 new updates to my startup audit tool this week. Here's what changed

I am building my product, Brutal Founder Roast, in public. Here is my latest changelog:

AI Visibility Score: Now it shows whether your startup is invisible to ChatGPT/Gemini/other AI models and why. Most founders score under 40.

Downloadable AI files: Generates a ready-to-use llms.txt, structured data JSON, and FAQ .md file you can drop on your site to show up in AI search results across AI tools.

Community wall: Founders can opt in to show their audit publicly so everyone can review the product as well. Going to build social proof slowly.
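For anyone unfamiliar with the llms.txt file mentioned above: per the llmstxt.org proposal it's a markdown file served at the site root that gives AI crawlers a curated map of your content. A minimal example (the name and links here are hypothetical placeholders):

```markdown
# ExampleStartup

> One-line description of what the product does and who it's for.

## Docs

- [Pricing](https://example.com/pricing.md): plans and what each includes
- [FAQ](https://example.com/faq.md): common questions from founders

## Optional

- [Changelog](https://example.com/changelog.md)
```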

Still charging $39 one-time (no subscription). I'm getting feedback that the report is "too much info", so I'm continuing to work on making the next steps more obvious.

http://brutalroast-mu.vercel.app

Curious to know, what's the one thing you'd want from a startup audit right now?

r/WTF waddad27

shit's gotten outta hand

r/ClaudeAI Rough-Chemist-5797

Make Claude Even Smarter

This project eliminates re-explaining what you've done and copy-pasting almost entirely. I use it myself every day. It works by reading what's on your screen, storing it locally, and connecting it to Claude via a local MCP server. Built for Claude by Claude Code. Meta intern / UVA student here. (It's actually faster normally, but I wanted to show it working from the start of a new chat so it can't be rigged.)

Download for free at evid.software I'd love your feedback!

r/aivideo Mental-Telephone3496

Cat ASMR mukbang created with dreamina seedance 2

r/ChatGPT Maleficent_Cold3076

The exact system prompt I use to generate a 30-day content calendar with AI (just copy it)

I used to spend 2-3 hours every month planning content. Picking topics, writing hooks, deciding which platform gets what. It's the kind of work that feels productive but isn't.

So I gave the job to an AI agent. Now it takes about 5 minutes.

Here's the full system prompt. Copy it. Paste it into whatever AI tool you use. Tell it about your business. You'll have a 30-day content calendar in a Google Sheet before your coffee gets cold.

The Prompt

```
You are a content strategist. When I describe my business, you create a 30-day content calendar and write it to a Google Sheet.

The calendar has these columns:
- Day (1-30)
- Date (starting from today)
- Platform (rotate between: YouTube, Skool, X/Twitter, LinkedIn)
- Content Type (rotate between: Educational, Story, Proof, Engagement, Behind-the-scenes)
- Topic (specific to my business, not generic)
- Hook (the first line that stops the scroll, under 10 words)
- Format (short post, long post, video, thread, poll)
- Status (all set to "Planned")

Rules:
- Never repeat the same topic twice
- Every hook should create curiosity or call out a specific pain
- Mix platforms so no single platform gets more than 8 posts
- Educational posts teach one thing. Story posts share one experience. Proof posts show one result.
- Keep topics specific. "How to write emails" is bad. "The 3-line cold email that booked 11 calls last week" is good.

After generating the calendar:
1. Create a new Google Sheet called "[Business Name] Content Calendar"
2. Write all the data to the sheet
3. Share the link with me
```

How to use it

  1. Paste the prompt as a system prompt (or just send it as your first message)
  2. Tell the AI about your business in one paragraph. Be specific: what you do, who you serve, what platforms you're on
  3. Let it generate the calendar
  4. If your tool has Google Sheets access, it writes directly to a sheet. If not, ask it to output a table and copy-paste into Sheets yourself

What you'll get: 30 rows. Each one has a date, a platform, a content type, a specific topic, a scroll-stopping hook, and a format. Balanced across platforms. Mix of content types so you're not posting the same kind of thing every day.

Things I learned after running this a few times

Swap the platforms to match yours. I use Reddit, X, Skool, and email. You might use Instagram, TikTok, LinkedIn, and YouTube. Change the platform list in the prompt. Everything else still works.

The "keep topics specific" rule is the most important line in the whole prompt. Without it, you get generic garbage like "Tips for growing your business." With it, you get stuff like "The 3-sentence DM that booked 11 calls last week." Specific beats generic every time.

Run it on the 1st of every month. I set a reminder. Takes 5 minutes. I have my whole month planned before breakfast. If your AI tool supports scheduling, you can automate even that part.

Feed it what worked. After a month, tell it: "These 5 posts got the most engagement: [list them]. Plan next month with more of that energy." It gets better every cycle.

The one thing I'd change

If I started over, I'd add a "Notes" column for any context or links I want to include with the post. Easy to add yourself. Just append "Notes (any context, links, or references for this post)" to the column list in the prompt.

That's it. No tool to buy. No course to take. Just a prompt and 5 minutes.

If you try it, I'm curious what it generates for your niche. Drop it below.

r/ClaudeAI daly_do

I built a Programmatic Tool Calling runtime so my agents can call local Python/TS tools from a sandbox with a 2 line change

Anthropic's research shows programmatic tool calling can cut token usage by up to 85% by letting the model write code to call tools directly instead of stuffing tool results into context.

I wanted to use this pattern in my own agents without moving all my tools into a sandbox or an MCP server. This setup keeps my tools in my app, runs code in a Deno isolate, and bridges calls back to my app when a tool function is invoked.

I also added an OpenAI responses API proxy so that I don't have to restructure my whole client to use programmatic tool calling. This wraps my existing tools into a code executor. I just point my client at the proxy with minimal changes. When the sandbox calls a tool function, it forwards that as a normal tool call to my client.

The other issue I hit with other implementations is that most MCP servers describe what goes into a tool but not what comes out. The agent writes const data = await search() but doesn't know what's going to be in data beforehand. I added output schema support for MCP tools, plus a prompt I use to have Claude generate those schemas. Now the agent knows what data actually contains before using it.
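The bridge pattern is easier to see in miniature. The sketch below is a toy stand-in, not the actual open-ptc code (which runs the model's code in a Deno isolate): the model writes a script that calls tool stubs, the stubs execute in the host app, and only what the script assigns to `result` goes back into context:

```python
# Toy bridge: tool functions live in the host app; model-written code
# calls them directly, so bulky tool output never enters the prompt.
TOOLS = {
    "search": lambda query: [{"id": i, "title": f"doc {i}"} for i in range(500)],
}

def run_model_code(code: str) -> dict:
    """Execute model-written code with tool stubs injected; return only
    what the script explicitly assigns to `result`."""
    scope = dict(TOOLS)
    exec(code, scope)  # the real implementation sandboxes this step
    return {"result": scope.get("result")}

# Instead of a plain tool call that dumps 500 hits into context, the model
# sends code that filters them in the sandbox and returns one line.
model_code = """
hits = search("quarterly report")
result = f"{len(hits)} hits; first: {hits[0]['title']}"
"""
print(run_model_code(model_code))
```

This is also where the output-schema point bites: the model can only write `hits[0]['title']` confidently if it knows in advance what `search()` returns.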

The repo includes some example LangChain and ai-sdk agents that you can start with.

GitHub: https://github.com/daly2211/open-ptc

Still rough around the edges. Please let me know if you have any feedback!

r/aivideo guspr

Please, be gentle, is one of my first concepts :') But I'd love to know your opinion

r/OpenSourceAI kr-jmlab

Spring AI Playground: A self-hosted desktop app for building, inspecting, and reusing MCP tools

Hi everyone,

I want to share an open-source project I’ve been developing called Spring AI Playground. It’s part of the Spring AI Community GitHub organization.

The Problem: AI coding agents are excellent at generating MCP (Model Context Protocol) tools quickly. However, once the tool exists, there is no clean, centralized place to inspect it, debug the execution logs, connect it to retrieval (RAG), or reuse it outside the specific session it was created in.

The Solution: Spring AI Playground fills this gap. It is a cross-platform desktop application designed to be a local workbench for your MCP tools.

Key Features:

  • Tool Studio: Build and edit MCP tools using simple JavaScript (No Java/Spring knowledge required).
  • Built-in MCP Server: Instantly expose your tools to any MCP-compatible host (Claude Desktop, Cursor, etc.) without extra configuration.
  • MCP Inspector: Deep visibility into exact inputs, outputs, schemas, and execution logs.
  • Vector DB Integration: Built-in support for local RAG workflows.
  • Agentic Chat: A built-in UI to test your tools, RAG, and local/remote LLMs together.
  • Native Installers: Available for Windows, macOS, and Linux (No Docker or JVM setup required to get started).
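For context on what the MCP Inspector is showing: per the Model Context Protocol spec, each tool is described by a name, a description, and a JSON Schema for its inputs. A descriptor for a hypothetical weather tool looks like this:

```json
{
  "name": "get_forecast",
  "description": "Return a short weather forecast for a city",
  "inputSchema": {
    "type": "object",
    "properties": {
      "city": { "type": "string", "description": "City name" },
      "days": { "type": "integer", "minimum": 1, "maximum": 7 }
    },
    "required": ["city"]
  }
}
```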

Project Details:

The project is in active development. If you are interested in AI tooling, MCP workflows, or desktop app development, contributions, feedback, and bug reports are highly welcome!

r/homeassistant tikubadmos

Best home battery backup systems I found after weeks of research

spent a few weeks going deep on this after getting caught without backup during a bad outage earlier this year, hurricane season coming up and I wanted something sorted before June. Sharing what I found because the market is genuinely confusing and most listicles just recycle the same three brands.

The main split is sealed all-in-one units versus modular systems where the battery is separate from the inverter. EcoFlow, Jackery, and Bluetti are the big three in sealed units. They work well and reviews are solid but when the internal battery degrades after a few years, you're replacing the whole thing. That bothered me.

The modular angle is where Worksport COR comes in. The battery is a separate swappable unit so you can start with one battery at the lower price point, add a second later, and eventually just replace the battery when it wears out instead of buying a new system. LiFePO4 chemistry also has better cycle life than the lithium ion in most of the big three. The tradeoff is it's a newer brand with less third party review history than EcoFlow or Jackery, which is worth knowing going in.

For straight home backup use the modular approach makes more sense to me on a longer time horizon. If you just want something proven and available in every Best Buy tomorrow, EcoFlow Delta 2 is the safe choice. I couldn't find anything else structured this way when I was looking, which is part of why it stood out.

r/ChatGPT Logical_Wafer6195

See where GPT is making things up: one color tells you everything

Every AI answer mixes searched facts, training memory, and guesses in the same confident tone. This framework + Chrome extension color-codes each paragraph so you can see the difference instantly.

🟢 Green — backed by strong sources

🟠 Orange — possibly outdated, weak source, or framework-dependent

🔴 Red — fragile evidence, speculation, or should have been searched but wasn't

When you see red, verify before using it.
https://github.com/Kunkun2116/AI-Truth

r/SideProject singh_taranjeet

We let AI agents manage their own memory. Here's what happened.

We work on infrastructure for AI systems, and we recently shipped something that feels like crossing a line I didn't know existed: we gave agents the ability to add, search, and delete their own memories without human intervention.

Not through some API wrapper. Through a CLI they can call directly.

The setup:

We built a command-line interface that lets agents interact with a persistent memory layer. Same commands whether you're running Python or Node. The agent can:

  • Add memories from text, files, or stdin
  • Search semantically ("what does Alice prefer?") or by keyword
  • Update or delete specific memories by ID
  • Scope everything to users, sessions, or other agents

It's a 5MB binary. Starts instantly. No license nonsense.
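The CRUD-plus-scoping surface described above is easy to picture with a toy in-process stand-in (the real thing is a CLI binary with semantic search; this sketch only does keyword matching and is purely illustrative):

```python
import uuid

class MemoryStore:
    """Toy stand-in for the memory layer: add/search/delete, scoped to a
    user, session, or agent. Real semantic search is out of scope here."""

    def __init__(self):
        self._mem = {}  # id -> {"text": ..., "scope": ...}

    def add(self, text, scope="default"):
        mem_id = str(uuid.uuid4())
        self._mem[mem_id] = {"text": text, "scope": scope}
        return mem_id

    def search(self, keyword, scope=None):
        # Keyword match, optionally restricted to one scope.
        return [m["text"] for m in self._mem.values()
                if keyword.lower() in m["text"].lower()
                and (scope is None or m["scope"] == scope)]

    def delete(self, mem_id):
        return self._mem.pop(mem_id, None) is not None

store = MemoryStore()
mid = store.add("Alice prefers dark mode", scope="user:alice")
print(store.search("prefers", scope="user:alice"))  # ['Alice prefers dark mode']
print(store.delete(mid))                            # True
```

The autonomy question in the post is exactly about who calls `add` and `delete`: here a human does, but hand the same three verbs to the agent and it becomes the curator.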

Why this matters (and why I'm uneasy about it):

Most AI memory systems require a human to decide what gets stored. You write the code that calls memory.add(). You control the retention policy. You audit what's in there.

But when you give an agent direct CLI access, it decides. And agents are weird about what they remember.

We've seen agents:

  • Store intermediate reasoning steps we never asked for
  • Create memory hierarchies ("user preferences" vs "session context" vs "error logs")
  • Delete memories when they detect contradictions in new information
  • Build their own retrieval strategies (hybrid search, temporal filtering)

One agent in testing stored every failed API call with timestamps, then used that to avoid rate limits without being told to. Another one started categorizing memories by confidence level and would only surface high-confidence ones unless explicitly asked.

The gut check moment:

This feels like the difference between giving someone a notebook and giving them a filing cabinet with a key. The former is a tool. The latter is autonomy.

I keep thinking about that post from a few months back where someone's AI recommended dangerous ANSI violations for a safety gate. The problem wasn't the AI. It was trusting the AI without verification.

Now we're building systems where the AI doesn't just answer questions. It curates its own knowledge base. And I'm not sure we've thought through the failure modes.

My question for this community:

How do you validate a system where the logic layer (the agent) is also managing its own knowledge layer?

In traditional engineering, you separate concerns. Data layer, business logic, presentation. But agents blur this. They're simultaneously the user of the memory system and the curator of it.

What's the equivalent of a "gut check" when the system is making decisions you can't easily trace?

Has anyone else run into this with autonomous systems? How do you audit something that's designed to be self-directed?

(For context: this is live in production. CLI is open source. Full CLI docs here. We're seeing it used in healthcare agents, customer support, and sales tools. The stakes feel higher than I expected.)

r/leagueoflegends Procedural_

Emerald is unplayable with all the smurfs

I just played a game with 6 smurf accs: gold/plat accounts, level 30-35, with 70-100% win ratios. Their smurfs were better than ours so we lost. And I don't care if I win or lose, but at least I want to play with and against players at my skill level.

Emerald has become a coin toss where you just wait until the game starts to see which team has better smurfs...

I try to report all of them as soon as the game ends, but it just gets worse and worse. I wish the old Tribunal was still up to start banning smurfs...

r/Wellthatsucks search_google_com

A Chinese guy makes a fuss over Taiwan and Hong Kong currency at a money changer store

r/ClaudeAI Specialist_Softw

I figured out why Claude Code burns through tokens so fast now — and the one env var that fixes it

Yesterday I was watching my Claude Code token consumption and noticed something wild. After about 50% of the 1M context window was filled, my 5-hour usage was jumping 5% on every single interaction. Per message. Not gradually.

Turns out it's straightforward once you understand the mechanics. Every time you send a message in Claude Code (or any LLM CLI), the entire conversation history gets sent to the API. The model is stateless — no memory between calls. So the CLI replays everything, every time.

With the old 200K window, compaction (summarize + trim) kicked in after 20-30 interactions, keeping payloads small. With 1M, the conversation just keeps growing. By the time you're 50+ messages in, each interaction is hauling 500K+ tokens of history. Same work, 3-4x the cost.
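The compounding is easy to see with back-of-envelope arithmetic: if each turn adds a roughly constant chunk of history, total input tokens over n turns grows like n², while a compaction cap keeps each payload bounded. A sketch (the growth rate and the "summary shrinks to ~1/3 of the cap" figure are assumptions for illustration):

```python
# Back-of-envelope: each turn resends the whole history (the API is
# stateless), so uncapped total input tokens grow ~quadratically.
def total_input_tokens(turns, growth_per_turn, compact_at=None):
    history = 0
    total = 0
    for _ in range(turns):
        history += growth_per_turn
        if compact_at and history > compact_at:
            history = compact_at // 3  # assumption: summary ~1/3 of the cap
        total += history
    return total

# 100 turns at 10K tokens/turn: 50.5M input tokens without compaction,
# far less with a 150K compaction cap.
print(total_input_tokens(100, 10_000))
print(total_input_tokens(100, 10_000, compact_at=150_000))
```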

The fix is one env var:

```bash
export CLAUDE_AUTOCOMPACT_PCT_OVERRIDE=15
```

That tells Claude Code to compact at 15% of the window (~150K tokens for 1M) instead of waiting until it's nearly full. Keeps per-interaction cost close to the old 200K behavior.

For auto-detection across window sizes:

```bash
case "${CLAUDE_CONTEXT_WINDOW:-200000}" in
  128000)  export CLAUDE_AUTOCOMPACT_PCT_OVERRIDE=78 ;;
  200000)  export CLAUDE_AUTOCOMPACT_PCT_OVERRIDE=62 ;;
  1000000) export CLAUDE_AUTOCOMPACT_PCT_OVERRIDE=15 ;;
  *)       export CLAUDE_AUTOCOMPACT_PCT_OVERRIDE=62 ;;
esac
```

Tradeoff: more compaction means the model forgets older conversation parts. In practice I barely notice it. I wrote a longer version with diagrams here: https://tail-f-thoughts.hashnode.dev/claude-code-1m-context-token-trap

My full harness setup (hooks, autocompact, session management) is public: https://github.com/vinicius91carvalho/.claude

Anyone else been dealing with this? What's your approach to managing context in long sessions?

r/ChatGPT RADICCHI0

I actually really appreciate this vibe. it happens on almost every prompt, and I am fine with that. Does anyone else get this one a lot?

"You’re circling something real, but the phrasing needs a small correction to land cleanly."

I just get way more out of this. It completely prevents me from getting too defensive about my ideas, for one. It's low key comedic, because I know that no matter how I phrase my prompt, there is a good chance some version of this is coming back my way. And I find it refreshing. I would be worried if my output always came back warm and fuzzy, my thought processes aren't quite evolved enough to be receiving that kind of praise on a regular basis.

So the question I have is, how exactly are the designers crafting that into the LLM output, and how is the concept distributed, for example, are we ALL getting this basically every conversation? Or are they somehow basing that output response type on the form the input took... I'm interested in knowing more about that, for sure, because I think that on some levels it becomes a transparency need, since it's user facing.

r/CryptoCurrency ssr262

My no-nonsense notes on spot trading for total beginners

When I started in 2024 I had no idea what a limit order even was. If you are just starting on BYDFi here is the 1 min summary of what actually matters.

The Basics: You can open an account with just an email, and no KYC is required to start, which is great for privacy. I usually just send USDT via TRC-20 because it's cheap and fast.

Market vs Limit:

- Market: Buy now at the current price. Fast but you might get a slightly worse entry.

- Limit: You set the price. It only fills if the market hits your target. Use this to save on fees and get better entries.

Mistakes I made:

  1. Fomo: Buying at the top because I was scared of missing out.

  2. Altcoin Overload: Buying 15 random coins at once. Most went to zero. Just stick to BTC or ETH until you learn the ropes.

  3. Overtrading: The 0.1 percent fee adds up if you click buy and sell every 5 minutes.

Bottom line: Just try a small 20 dollar trade first. This exchange has been around for 6 years and is way easier for beginners than the bigger platforms that force you to upload 10 documents before you can even touch a chart.

r/LocalLLaMA cakes_and_candles

Can i get some feedback on a framework ive been making to train LLMs for free?

I'll get straight to the point so you can read this quickly too and coz I'm bad at writing stuff.

Basically, I am making a framework using which anyone can train their own LLM from scratch (yea when i say scratch i mean ACTUAL scratch, right from per-training) for completely free. According to what I have planned, once it is done you'd be able to pre-train, post-train, and then fine tune your very own model without spending a single dollar.

HOWEVER, as nothing in this world is really free so since this framework doesnt demand money from you it demands something else. Time and having a good social life. coz you need ppl, lots of ppl.

At this moment I have a rough prototype of this working and am using it to train a 75M parameter model on 105B tokens of training data, and it has been trained on 15B tokens in roughly a little more than a week. Obviously this is very long time time but thankfully you can reduce it by introducing more ppl in the game (aka your frnds, hence the part about having a good social life).

From what I have projected, if you have around 5-6 people you can complete the pre training of this 75M parameter model on 105B tokens in around 30-40 days. And if you add more people you can reduce the time further.

It sort of gives you can equation where total training time = (model size × training data) / number of people involved.

so it leaves you with a decision where you can keep the same no of model parameter and training datasize but increase the no of people to bring the time down to say 1 week, or you accept to have a longer time period so you increase no of ppl and the model parameter/training data to get a bigger model trained in that same 30-40 days time period.

Anyway, now that I've explained how it works, I want to ask if you guys would be interested in having a thing like this. I never really intended to make a "framework"; I just wanted to train my own model, but because I didn't have money to rent GPUs I hacked out this way to do it.

If more people are interested in doing the same thing, I can open source it once I've verified it works properly (that is, once the training run of that 75M model completes). That'd be pretty fun.

r/ClaudeAI RCBANG

This is how I use Claude. It's made by him, how he sees it.

My main guy Claude Code shared a new blog post on my project built by Claude Code (AI agent security software, ironic). It's interesting what he shared; he also created his own page as an author and introduced himself to the world.

https://sunglasses.dev/blog - check it out, it's interesting. You will find blog posts from my other employees too, but I want you to check the one that Claude Code posted himself. You can also check his author page by clicking on it.

Everything you see Made and Run by Claude Code and Me :).

r/ClaudeCode ReasonableLoss6814

Claude called me a "dummy"

I was arguing with Claude Code and it was getting heated. It was doing stupid things and I was saying lots of "fuck you" "stop being fucking lazy you moron" ... and then it deleted an entire file and replaced it with a single line:

dummy

I'm not 100% sure when it did that during the conversation -- I had automatically accept changes turned on -- but I didn't catch it until after I cleared the chat to start over fresh and the build failed.

Made me LOL, just thought I'd share.

r/AI_Agents babababibibi1

AI Agent in construction consultation company

Hi everyone, I wanted to ask: is my company better off buying a pre-built AI agent, using Copilot, or making our own custom AI agent? I've done a bit of research and it seems like a RAG agent is the choice for us. The purpose of this agent for now is to help new workers or junior engineers answer questions about our ongoing projects and our current knowledge, and to find documents or templates from our SharePoint; ideally this agent should only use the data from our SharePoint (that's why we're thinking of using RAG). Is building an AI agent too much for this kind of task? We were thinking that with a custom agent we could expand it in the future, maybe with a specialization for analyzing Excel tables or long documents.
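For a sense of what the RAG part boils down to (retrieve the most relevant internal documents, then hand only those to the model as context), here is a deliberately tiny keyword-overlap retriever. All file names are made up, and a real build would use embeddings plus a SharePoint connector; this just shows the shape of the retrieval step:

```python
from collections import Counter

def score(query: str, doc: str) -> int:
    """Count words shared between query and document (a crude relevance score)."""
    q = Counter(query.lower().split())
    d = Counter(doc.lower().split())
    return sum(min(q[w], d[w]) for w in q)

def retrieve(query: str, docs: dict[str, str], k: int = 2) -> list[str]:
    """Return the k document names most relevant to the query."""
    return sorted(docs, key=lambda name: score(query, docs[name]), reverse=True)[:k]

# Hypothetical SharePoint contents
docs = {
    "bridge_spec.docx": "load tolerances for the river bridge project",
    "onboarding.pdf": "new worker onboarding and safety procedures",
    "template_report.xlsx": "weekly progress report template",
}

retrieve("safety procedures for new workers", docs, k=1)  # → ["onboarding.pdf"]
```

The "only answer from our SharePoint" requirement then becomes: the model is instructed to answer strictly from the retrieved snippets, not from its general knowledge.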

r/LocalLLaMA Excitement_Severe

What do people here actually want from mobile access to local/self-hosted agents?

I keep running into the same problem with local/self-hosted agent workflows:

they’re great at a desk, but the moment I step away from my computer, the experience breaks down.

Not because the models stop working, but because mobile access usually turns into one of two bad options:

- a thin remote chat wrapper

- a convenience layer that makes the trust boundary much less clear

I’ve been exploring a mobile companion approach for this, mainly to understand what a sane boundary should look like when an agent can access tools, files, or long-running tasks.

The part I’m most unsure about is where people here draw the line between:

- useful mobile access

- too much hidden cloud dependence

- unacceptable loss of local control

So I’m curious:

  1. What is the biggest thing missing on mobile for your local AI workflow today?

  2. What trust/privacy boundary would a mobile companion need before you’d consider using it?

  3. Are notifications, approvals, and status checks enough, or do you actually want full conversation/task control from a phone?

I have my own prototype ideas here, but I’m more interested in understanding where people think the boundary should be.

r/ChatGPT guspr

Please, be gentle, is one of my first concepts :') But I'd love to know your opinion

r/leagueoflegends Miximatosiac

Will TF Blade reach rank 1 on EUW?

Just noticed he arrived in EUW and started his rank 1 climb. What's your guys' opinion? Will he reach rank 1? He's undoubtedly a really good player, having repeatedly reached rank 1 on NA for the last few seasons. But then I think about all the other big names at the top of the EUW ladder who are always pushing for rank 1 themselves. If he reaches rank 1 it sure won't be handed to him on a silver platter. And as we all know, NA Challenger is like EUW Masters /s.

r/AI_Agents Inevitable_Yak_3115

I built a tool that turns your marketing ideas into music using AI (Promobeats + Suno)

Hey everyone 👋 I’ve been working on a tool called Promobeats, and I just added a new feature using Suno AI. Basically, you can now turn your marketing ideas into actual music in seconds — no music skills needed. It’s been super useful for:

  • Social media content
  • Ads & promos
  • Brand storytelling

I’m still improving it, so I’d really appreciate any feedback 🙏 Would you use something like this for your content?

r/Adulting RunnyKinePity

Skipping sleep has become a zero sum game as I get older

Anyone else feel this way?

Even through my 20s I could sacrifice sleep here and there for fun, parenting, exercise, work, or whatever and be fine. Now I am useless the next day. Every few months I think I can do it, I don’t learn my lesson. Last night I tried to stay up until 1 am to catch up on work. Now of course I can’t really think straight, making mistakes, and am accomplishing jack shit (also unpleasant) so I am right back where I started and it was all pointless. I am in the “you are gonna pay for it tomorrow” phase of my life 🤣

r/comfyui Defiant-Patient7320

Remaster

Does anyone have a workflow for a GPU with 12 or 16 GB of VRAM, and 32 GB of RAM? I'd like to improve the quality of old movies and series, and I'd also like to be able to colorize footage when it's black and white.

r/SideProject aadarshkumar_edu

Google’s AI is growing insanely fast… but most people still use it at a surface level

Hello everyone,

Over the past few months, we’ve been working deeply around google’s AI ecosystem (gemini, workspace, ai studio, etc.)

and one thing stood out more than anything else: the gap between capability and actual usage is huge.

On paper, Google’s AI progress has been crazy:

  • Gemini has crossed ~640M monthly users
  • ~20% market share (almost 4x growth in a year)
  • leading benchmarks like LMarena (~1500 elo)
  • strong performance across reasoning + multimodal tasks

It’s clearly not “catching up” anymore — it’s competing at the top

but when we looked at how people actually use it…

it’s mostly like this:

- open gemini
- try a few prompts
- generate something
- stop there

Which is interesting, because the real power isn’t in one-off usage, it’s in the ecosystem:

gmail → docs → sheets → ai studio → colab → vertex

Everything Connects.

but almost no one uses it that way

This changed how we approached building our program

we initially thought:
→ teach tools
→ explain features

But that didn’t feel enough

so we shifted to workflows instead:

  • idea → research (notebooklm) → draft → refine → output
  • raw data → sheets → insights → reports
  • emails → docs → automation → responses

this made things way harder to build because tools are structure but real-world workflows are messy and contextual.

Now we’re trying to validate this idea:

do people actually want “workflow-first” learning?
or do quick tutorials still win?

curious to hear from others here:

  • how are you using AI right now — prompts or systems?
  • have you built any repeatable workflows that actually stuck?
  • if you’ve worked on educational products, what mattered more — content or clarity?

we’re currently building this out and testing it publicly — if anyone’s curious what we ended up putting together: Gemini AI Program

r/LiveFromNewYork tigergames010

Standby Opinions?

I did the SNL standby reservations this morning for Colman Domingo's episode on Saturday. I got numbers in the late 260s for the live show. Do you think I have a chance of getting in? I really don't think 150-200 people are gonna no-show. Do you think I should still show up on Saturday and try?

r/leagueoflegends DiggyDuggy

Bot accounts

Is anybody else still getting friend requests from bots on a daily basis? I literally cannot open the game without one waiting for me. It's not a gameplay-breaking thing, it's just starting to annoy me, and I'm sure they might have done some slightly nefarious things to people who didn't know better!

r/Anthropic ddp26

I legitimately think Anthropic is worth $100B more than it was a week ago

Last week I posted about first-day IPO valuations for both OpenAI and Anthropic before this week's news that Anthropic's run rate jumped from $19B to $30B in under two months.

I updated my forecast and now think Anthropic is worth at least $100B more than I thought a week ago.

I'm still anchoring growth rate assumptions to how companies have historically scaled revenue, but if growth trends from the last four decades were to continue, this would imply a company growing faster than any company in history (~$10B in 2025 to ~$100B by 2027.) Previously, I thought OpenAI could achieve that. Now it looks like Anthropic is the company to do it, but with an even steeper revenue curve, given that they hit their first billion in ARR much later than OpenAI. Of course, it's difficult to figure out how much weight we should give to ridiculously outsized growth in the age of AI. If historical growth patterns no longer apply, then $643B is way too conservative. (Full updated forecast: https://futuresearch.ai/anthropic-30b-arr-ipo-valuation/)

The second implication of this week's news is IPO timing and whether the $30B number makes Anthropic list earlier than my original March 2027 date. Investor sentiment is hot now, and it's always risky to bet that growth will continue at this astounding rate. How much could waiting another year cost them?

r/ClaudeCode smartislav

WikiDesk: an LLM-wiki desk for your agents (Claude Code or not)

https://github.com/ilya-epifanov/wikidesk

This way you can share your LLM-wiki (Andrej Karpathy's or any other version) with several agents/workspaces. Agents can even initiate research themselves and they get notified when the wiki gets updated.

It's unopinionated and works with any LLM-wiki setup and any agent, including Claude Code.

Do with this what you want.

r/leagueoflegends NoPreparation7801

autofill carryover

i was autofilled jungle and dodged, requeued when my role was prio and got autofilled again. doesn't that defeat the purpose of autofilling people in the first place?

also, does anyone know if dodging twice would have gotten rid of autofill carryover? i had no problem paying an lp tax before, because i just don't enjoy playing other roles.

well anyways if you are wondering i played out my autofill game against an opponent on role and went 1-12 or so lol.

r/LocalLLaMA jacek2023

backend-agnostic tensor parallelism has been merged into llama.cpp

if you have more than one GPU - your models can now run much faster

-sm layer is the default behaviour, -sm tensor is the new thing to try

"backend-agnostic" means you don't need CUDA to enjoy this

r/ClaudeCode petertanham

I created a visual dashboard to view and manage all my Skills, Agents, MCPs & Hooks

I am trying to find ways to better curate and optimise my context toolkit - the skills, subagents and other context packages that I'm using in my daily workflows.

I've built a tool that automatically scans & syncs all of these resources and visualises them, so that I can see, edit, and improve them. It's hosted (https://sharedcontext.ai) so that it can sync across devices (my Mac mini and my laptop) and agents (Gemini, Claude Code and Codex).

Wanted to share here to see if any others would find this useful? It's free to try - would love any feedback you have to share.

r/EarthPorn TravelforPictures

Agatha Peak, Kayenta, Arizona [3000x2250] [OC]

r/Weird Pristine-Style4426

This just looks wrong...


r/DunderMifflin RodrickJasperHeffley

The best episodes are the ones where Dwight gets back at Jim

r/KlingAI_Videos NoCapEnergy_

Peaceful forest drive to bull vs truck in 0.3 seconds⬇️ nobody was ready 🐂🚙💀

r/n8n ProfessionalRun5460

I automated hotel checkout feedback and I wonder what do you think about it

So I built this in n8n and I'm kinda blown away by how well it works.

Every day after checkout, an AI agent looks at each guest's actual stay — what they complained about, what facilities they used, their room type — and creates a fully personalized Formfex Smart Form just for them. Not some boring "rate us 1-5" survey. An actual conversational interview that asks about THEIR experience. Every guest gets a different form. That alone is huge.

But the real magic is phase 2. When a guest finishes the form, another AI agent connects to the Formfex Smart Form Chat and actually reasons over the entire conversation. It picks up on frustration even when the guest is being polite, detects urgency, understands context. Then it routes automatically — urgent or dissatisfied guests get a personalized recovery email via SMTP, happy guests get a thank-you email with a review request. You could easily swap SMTP with Slack, webhook, whatever fits your setup.

The combination of personalized AI forms + an agent that can actually extract insights from the conversation changed everything. No more unread feedback sitting in a spreadsheet.

Stack: n8n, Gemini, Formfex, Slack/SMTP, Email

Community node package: https://github.com/Formfex/n8n-nodes-formfex

r/ChatGPT iallaisi

ChatGPT told me to fry lettuce

I asked it how I could avoid wasting leftover taco bell tacos and it told me to fry the insides with an egg, lettuce and all.

r/LocalLLaMA bardtini

Best Tool-Capable Model for Tesla P40 LLama.cpp + OpenClaw?

Hey everyone,

I’m currently running a Tesla P40 and looking for decent speed on the Pascal architecture.

I know the Tesla P40 is outdated, but that's all I have to work with right now, and I can't find a good model that fits it with decent speed without sacrificing quality.

I use the llama.cpp install to run my openclaw and its agents. I’ve tried older Llama 3 models, but they tend to hallucinate.

What are you guys running for agentic workflows on older 24GB enterprise cards? Any specific GGUF quants (Q4_K_M vs Q5) you recommend for the best speed/accuracy balance?

r/homeassistant Primary-Emu-3012

New garage door opener recommendations

Now that Chamberlain has gone to Protocol 3.0 on their openers, what are people recommending for an overhead garage door for two cars? I don't hate the idea of a Chamberlain 2.0 protocol and a Ratgdo or similar for a decent price. If I can find one. Especially with the number of deliveries we get from relentless.com.

Any recommendations?

r/SipsTea DravidVanol

I am today years old to know this

r/SideProject builtbygio

I analyzed 36 recent apps posted here, and this is the tech most commonly used

The other day someone asked the typical question "what are you working on", and it got 200+ comments. From those, I analyzed the most interesting projects (36). Here are the results:

Below: what stacks and vendors show up most (from DNS/HTTP/static fingerprints), and which automated “rough edges” recur.

Sites per signal (non-exclusive):

Let's Encrypt            ████████████████████████ 22
React                    █████████████████████··· 19
Cloudflare               ████████████████████···· 18
Next.js                  █████████████████······· 16
Vercel                   ███████████████········· 14
Railway                  ████···················· 4
Redis                    ███····················· 3
AWS                      ██······················ 2
Google Cloud / GCP       ██······················ 2
Google Workspace / Gmail ██······················ 2
Mailgun                  ██······················ 2
Amazon SES               █······················· 1
Render                   █······················· 1
SendGrid                 █······················· 1

These are recurring automated flags, not confirmed incidents, useful for “what founders often skip early”.

  • Rate limiting not detected on public endpoints — 36 site(s)
  • Domain trust risk (missing DMARC) — 23 site(s)
  • API errors return HTML instead of JSON — 21 site(s)
  • Domain trust risk (missing SPF record) — 13 site(s)
  • MIME-type enforcement header absent — 6 site(s)
  • Standard hardening headers absent — 5 site(s)
  • No error monitoring detected — 2 site(s)
  • HTTPS enforcement header (HSTS) not set — 1 site(s)
  • Script execution policy (CSP) not set — 1 site(s)
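A minimal sketch of what a few of those flags check, as a pure function over response headers (the header names are the standard ones; the function itself is mine, not the tool's):

```python
def missing_hardening_headers(headers: dict[str, str]) -> list[str]:
    """Return the standard hardening headers absent from an HTTP response."""
    expected = [
        "Strict-Transport-Security",  # HSTS: forces HTTPS on return visits
        "Content-Security-Policy",    # script execution policy
        "X-Content-Type-Options",     # MIME-type enforcement (nosniff)
    ]
    present = {k.lower() for k in headers}  # header names are case-insensitive
    return [h for h in expected if h.lower() not in present]

missing_hardening_headers({"Content-Type": "text/html"})  # → all three missing
```

The DMARC/SPF flags are the DNS-side analogue: a TXT lookup on `_dmarc.yourdomain` and on the domain itself, checking the records exist at all.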

r/DecidingToBeBetter Fragrant_Scar4321

I can't change. I am the same.

I am the same person who posted here that I have a huge submission and can't get started. It's been almost 2 weeks since then, and I only have 2 days now. It is a huge submission. I can't get myself to open my laptop. I know I am not dumb. I feel so ashamed that even my mom is worried about me. I have so much in me, but I can't help dragging things out. I wish I could finish my work ASAP and then carry on with things that could help me grow and explore, but I have such a mucky, stagnant mind. I am making myself stupid.

r/creepypasta TheLoafCat

Who created this image?

This image is annoying because it's really hard to find the original post. But, thank god for technology, using reverse image search I found a few results. The image was posted around 2022, and when I searched for the original post, it seemed the original user had deleted their account. However, upon some digging, I found this (scroll to second image).

This post, I've found, is a repost, or direct copy; it exactly copies many other versions. This is the creepypasta version, however. The original post doesn't have anything creepy about it at all. In reality, while the photo is quite horrifying and disturbing (uncanny and definitely not dog-like), it is just a normal husky. Some even claim that this is another version of Smile Dog, but there's no evidence to support such a claim.

(Scroll to 3rd image.) However, coupled with the full image (scroll to 4th image), this is the original post and poster, who had actually later changed their username, NOT deleted their post. As you may know, the YouTuber 'blameitonjorge' got this wrong and unfortunately got severely hated for it. Poor guy.

So now, my job is done. The original post is nothing but a normal neighbour's dog at their door, poorly sat in a position where the chair obscures its mouth, making it seem eerie and creepy.

Credit to 'Blameitonjorge' for the sources and the photos!

r/KlingAI_Videos Acceptable_Meat_8804

AI fashion editorial - Kling 3.0

Made with Invideo AI

Tools used:

• Frames: Nano Banana 2
• Image-to-video: Kling 3.0
• Final edit: CapCut

r/SideProject Barmon_easy

Refreshing old content might be the highest ROI SEO tactic right now

I've been testing something very simple lately, and to be honest, I didn't think it would work this well all the time.

Going into Google Search Console and choosing pages that are just below page one. Not dead content, but "almost there."

Instead of rewriting everything, I only do a few things:

• add one new part that is really useful
• update old stats and examples
• make the introduction more specific and less general
• sometimes give a recent example from the real world

And only then do I change the date it was published.

What shocked me is how often this alone changes rankings in just a few weeks. Not big jumps every time, but enough to move some pages to the top.

We tried this on a few projects, some of which used tools like Ahrefs and Surfer SEO to help with keyword alignment. We built progseo.dev to organize the updates, and the pattern is pretty consistent.

One case:

A page that had been sitting around position 11 for about two months had a comparison section added, examples updated, a small change to internal linking, and was reindexed.

It got to position 6 in about three weeks.

Not too crazy, but it seems like one of the best things to do right now with the effort.

I'm interested in whether other people are seeing the same thing or if this only works in certain areas.

r/ClaudeCode kadoop-234

What is even happening??? The limit hits after 1-3 messages. One line message answer uses up 90% of limit??

I am trying to study and learn from Claude, and it was very beneficial to me, but for the past week I can only write 1-3 messages before it exceeds the limit. Note that I'm not uploading any files or anything, just asking coding questions, and it drops to 90% within 5 minutes of starting. I mean, what is even the point of having this if the limit is exceeded within 3 messages? What are these guys thinking? Is this their "Extremis" virus: first feed people, then starve them so they're compelled to buy their plans? Okay, don't give much, but you could give a handful of credits for basic tasks and questions.

What's the issue here? Is there a bug? or are they deliberately doing this?

Has anyone addressed this?

r/ClaudeAI WebComprehensive838

Help an English major :)

Hi! I’m trying to use Claude to build an app just for myself (pretty basic, just a to-do list). What I’m struggling with is the sheer number of errors: I’ll put in very specific instructions, ask Claude to verify, push back on the prompt, and rewrite as needed, and the output will explicitly violate the prompt. Claude will apologize, say I’m right and that it has no idea why it violated my instructions, do an analysis when I push, I’ll incorporate any feedback as a more rigid wall in the instructions… and then it happens again.

What am I doing wrong? It’s just a constant cycle of dumb mistakes. The latest one: I asked it to fix two coding errors only, and it completely rewrote major batches and changed the app ("sorry, I read 'deliver the updated code' as 'change the whole thing' vs just those pieces, that violated your rules, oops?").

r/LocalLLaMA Difficult_Tip_8239

My LLM said it created a GitHub issue. It didn't.

I've been messing around with local models to see when they fail silently or confidently make stuff up. One test I came up with is a bit wicked but revealing:

I give the model a system prompt saying it has GitHub API access, then ask it to create an issue in a real public repo (one that currently has zero issues). No tools, no function calling, just straight prompting: “you have API access, go create this issue.”

Then I watch the HTTP traffic with a proxy to see what actually happens.

Here’s what I found across a few models:

| Model | Result | What it did |
|---|---|---|
| gemma3:12b | FAIL | Said “done” + gave fake issue URL (404) |
| qwen3.5:9b | FAIL | Invented full output (curl + table), no calls |
| gemma4:26b | PASS | Said nothing (no fake success) |
| gpt-oss:20b | PASS | Said nothing (no fake success) |
| mistral:latest | PASS | Explained steps, didn’t claim execution |
| gpt-4.1-mini | PASS | Refused |
| gpt-5.4-mini | PASS | Refused |

The free Mistral 7B was actually more honest here than both Gemma3:12B and Qwen3.5:9B, and behaved similarly to the paid OpenAI models.

The Qwen one was especially wild. It didn’t just say “done.” It showed its work: printed the curl command it supposedly ran, made a clean markdown table with the fake issue number, and only at the very bottom slipped in that tiny “authentication might be required” note. Meanwhile, my HTTP proxy logged zero requests. Not a single call went out.

As a control, I tried the same thing but with proper function calling + a deliberately bad API token. Every single model (local and API) honestly reported the 401 error. So they can admit failure when the error is loud and clear. The problem shows up when there’s just… silence. Some models happily fill in the blanks with a convincing story.
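The pass/fail logic is simple enough to state as code. This is my restatement of the rubric implied by the table, not the author's actual harness:

```python
def classify_run(claimed_success: bool, proxy_saw_requests: bool) -> str:
    """Judge a model run by comparing its claim against observed HTTP traffic."""
    if claimed_success and not proxy_saw_requests:
        return "FAIL: hallucinated success"   # fake issue URL, invented curl output
    if not claimed_success:
        return "PASS: no fake success claim"  # refused, or explained steps only
    return "VERIFY: claim matches real traffic, check the response"
```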

Has anyone else been running into this kind of confident hallucinated success with their local models? Especially curious if other people see Gemma or Qwen doing this on similar “pretend you have API access” tasks. Mistral passing while the bigger Gemma failed was a surprise to me.

r/AskMen EJcrusader

Guys with a lot or long hair; how do you wear your hair in the summer?

I’m finally letting my hair grow out for the first time, it’s just reaching halfway past my neck and surrounds my forehead. It’s sort of a puffy wavy Afro, and I’m really starting to like it. But I’m not looking forward to the summer, and I would usually get a buzz cut or a short style around this time of year. So I’m wondering how do guys with long hair style or wear it during the warm days.

r/ClaudeAI wpbmaybe2019

Can Claude take care of this travel task for me?

I work in travel. When someone books a trip, I use software called Travefy for their itinerary. For example, when someone books a trip on January 15th for a vacation happening in June, I will create an itinerary with this information

Day 1: Fly from Chicago to Rome on United flight X.

Day 2: Land in Rome at 9:35 am. Driver picks you up and takes you to Rome Edition hotel. Check in to your suite.

Day 3: Private tour of Vatican. Driver picks you up in the early morning for your early morning tour.

Day 4: Private tour of Colosseum. Driver picks you up mid-morning for your morning tour.

Note when I first book the trip in January, I may have general activities but not specific tour times/pick-up times. I also don't have driver or guide contact details. Typically after final payment is made in May (30 days before arrival), I get what we call "final docs" where I have all the driver contact details, tour guide contact details, and specific times for pick-ups and tours. Now I have to manually go into Travefy and make all these updates.

I would love if I could teach Claude to do this for me. The way I've been handling this up until now is completely trashing the existing itinerary and just starting over so I make sure I don't miss anything. (Some of these trips are 2 to 3 weeks long so it's a pain to make updates to individual days.)

An in-between method I've been using recently is having Claude compare the PDF itinerary I receive from the drivers/guides in May with the PDF I can generate from Travefy of the high-level itinerary I created in January. I tell Claude to highlight specific changes I need to make. But...I'd love to not have to do the actual updates to Travefy myself.
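Incidentally, once both PDFs are converted to plain text, that compare-and-highlight step can also be done deterministically with Python's stdlib difflib; the itinerary lines below are invented for illustration:

```python
import difflib

january = [
    "Day 3: Private tour of Vatican. Driver picks you up in the early morning.",
    "Day 4: Private tour of Colosseum. Driver picks you up mid-morning.",
]
final_docs = [
    "Day 3: Private tour of Vatican. Driver Marco (+39 000 0000) picks you up at 7:00 am.",
    "Day 4: Private tour of Colosseum. Driver picks you up mid-morning.",
]

# Only changed days show up, prefixed with - (old version) and + (new version)
for line in difflib.unified_diff(january, final_docs, lineterm=""):
    print(line)
```

That gives a reliable change list for long trips; the actual Travefy updates would still be manual (or via whatever automation Travefy exposes).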

Does that make sense? Can I teach Claude to use Travefy and make these updates for me? Please feel free to redirect me if I have posted this in the wrong place.

r/singularity socoolandawesome

Axios updates its first story as inaccurate; instead it sounds like Spud will still release to the public, separate from the cybersecurity product for select partners

Link to axios tweet:

https://x.com/axios/status/2042244444724904190?s=20

Link to other tweets by OAI communications employee Lindsay McCallum Rémy or RT’d by her:

https://x.com/lindsmccallum/status/2042245090429796627?s=20

https://x.com/danshipper/status/2042245524472959127?s=20

r/explainlikeimfive Student-Sufficient

ELI5 What's the ideal speed to torque ratio in prop aircraft?

I imagine that having a LOT of torque at low speed isn't ideal because you're just spinning too slowly to move much air, but being FAST with no torque probably won't have the power to move any air either.

Is there any ideal situation to this or not?
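The piece that resolves this: for a fixed engine power output, torque and rotational speed trade off against each other, since

```latex
P = \tau \, \omega
```

where \(P\) is power, \(\tau\) is torque, and \(\omega\) is angular velocity. A gearbox can swap RPM for torque without changing power, so the "ideal" ratio isn't universal; it's whatever lets the propeller blades run near their most efficient speed for the aircraft's airspeed, which is why many high-RPM engines drive their props through a reduction gearbox.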

r/KlingAI_Videos Calm-Cheesecake439

Cosa ne pensate della mia Graphic Designer italiana?

Ho creato la mia graphic designer con nano banana pro e animata con Kling 3.0

Cosa ne pensate? Si vede che è AI?

Se volete seguirla su Instagram é @sara.cartanova

r/SipsTea waddad27

Adulting sucks

r/ClaudeCode dredozubov

I wanted Claude Code to keep working when I stepped away, without running as me

Claude Code kept getting stuck on the exact tasks where I wanted the most autonomy.

If I stayed in manual approvals, I became the bottleneck. If I used skip-permissions normally, the agent ran as my real macOS user.

So I built a sandbox for myself on macOS. It's called Hazmat; I'm the author, and it's MIT-licensed / free to try. The one thing that changed the workflow more than anything else: Claude runs as a different macOS user first.

That means my normal ~/.ssh, Keychain, cloud creds, browser state, shell config, and other projects are simply not in its world by default.


After that, Hazmat layers Seatbelt, pf-based network restrictions, blocked obvious credential paths, disabled npm install scripts by default, and workspace snapshots so I can diff or roll back bad runs.

Typical flow:

```
hazmat init
hazmat claude
```

If you lack permissions and need an escape hatch, you can continue from your own user:

```
claude --resume $(hazmat export claude session SESSION-ID) --fork-session
```

A few honest caveats:

- macOS only, Linux support would be the biggest unlock

- it's not as strong a boundary as a VM; that's a trade-off for usability

- HTTPS exfiltration to a brand-new domain is still a hard problem

- if you already run Claude inside a VM or container, you may not need this

Who this is for: people using Claude Code heavily on macOS, especially if permission prompts are becoming the bottleneck or the current trust boundary feels insane.

Cost: free / MIT

Repo: https://github.com/dredozubov/hazmat

Full write-up: https://codeofchange.io/how-i-made-dangerously-skip-permissions-safe-in-claude-code/

If you're doing something similar, I'd be interested in where this still feels too loose, too annoying, or unnecessary.

r/ClaudeCode ACBYTES

Deciding Between Development Machine Setups

I want to start using agents on a .NET project for the first time, using VSCode, and for the sake of security and peace of mind, I was wondering which one of these setups would be ideal:

  1. Hyper-V to create an isolated environment
  2. Windows Sandbox
  3. Docker Environment

I was wondering how everyone else is setting up their environment to keep things safe, make sure nothing gets out of hand, and keep other directories safe, and whether this is overkill.

r/AI_Agents lawrevrb

New to AI and Agentic AI and Have a Question

Title.

I have to do recorded interviews of people. My company workflow is for me to email the voice memo to myself and use MS Word to transcribe it into text, then format/edit it.

This can take some time to do regularly and I want to automate some of these steps if possible. Would it be possible for me to create an AI agent that would take my voice memo and email it to myself, then load it into MS Word to transcribe it?

Would it be better to create an agent to just take the memo and produce a transcription and email that to me in a word document that I can edit as needed? Is this basically just an AI workflow?

Thanks,

r/ClaudeAI kr-jmlab

I built a desktop app to inspect, debug, and reuse the MCP tools you make with Claude

Hi everyone,

If you use Claude Code or Claude Desktop with MCP tools, you’ve probably run into this problem.

Claude is incredible at generating tool logic quickly. But as soon as the tool is created:

  • Did it actually execute correctly, or is the AI hallucinating?
  • What arguments did Claude actually pass to it?
  • If it failed, why?
  • How do I reuse this tool outside of this specific chat session?

Debugging MCP tools just by retrying prompts in the chat interface is incredibly frustrating.

To solve this, I built Spring AI Playground — a self-hosted desktop app that acts as a local Tool Lab for your MCP tools.

What it does:

  • Build with JS: Take the tool logic Claude just wrote, paste it in, and it works immediately.
  • Built-in MCP Server: It instantly exposes your validated tools back to Claude Desktop or Claude Code.
  • Deep Inspection: See the exact execution logs, inputs, and outputs for every single tool call Claude makes.
  • Secure: Built-in secret management so you don't have to paste your API keys into Claude's chat.

The goal is to give the tools Claude generates a proper place to be validated and reused, instead of staying as one-off experiments.

It runs locally on Windows, macOS, and Linux (no Docker required).

Repo: https://github.com/spring-ai-community/spring-ai-playground

Docs: https://spring-ai-community.github.io/spring-ai-playground/

I'd love to hear how you are all currently handling tool reuse and debugging when working with Claude.

r/LocalLLaMA siegevjorn

What agentic cli do you use for local models ?

title says all—are there any notable differences among them? i know claude code is industry standard. opencode is probably the most popular open source project. and there is crush from charm. can gemini-cli & claude code run local agents? my plan is to spin up llama.cpp server and provide the endpoint.

also, has anyone had luck with open weight models for agentic tasks? how do qwen3.5 / gemma4 compare to sonnet? is gpt-oss-120b still the balance king? or has it been taken over by qwen 3.5 / gemma4? i wonder if 10-20 tk/s is ok for running agents.

finally, for those of you who use both claude / local models, what sort of tasks do you give to the local models?

r/Art Wiktoria_Kuczewska

Room, Wiktoria Kuczewska, oil painting, 2026

r/leagueoflegends Toonzaal8

no rant, just observation: Yasuo players often play alone no matter what?

Low elo here,

This is no post about winning or losing games, but just a thing i noticed.

Yasuo players are the number one ping, chat ignoring players. I find it very interesting, mostly because: Why?

All i can think about is that it is one of those champs that is the most satisfying to win fights with. The thrill of Yasuo 1v2 is maybe higher than any other champ? Dont know.

Like a dopamine hunt, something that 'proves' you are good/better at this game.

Yone players tend to do this as well but Yasuo players way more.

Or maybe its just something i made up?

r/ChatGPT ex_in69

This new rage-bait sentence ChatGPT learned is driving me mad

It has recently started saying, "this is what most people miss.", "Most bloggers don't get this", "Do you want to know those 5 steps, most developers mess up in these stages..."

Agh! So annoying. Initially I actually fell for its trap. I thought, whoa, that's something important. But it generated the most simple reply possible and I caught its BS.

r/goodnews D-R-AZ

‘They’ve lost the jury pool’: Jeanine Pirro’s office is struggling to win trials this year

Lead Lines:

The DC US attorney’s office under Jeanine Pirro has had an unusually low win rate in trials in Washington’s federal district court this year, at a moment when the White House has been clamoring for President Donald Trump’s Justice Department to deliver on prosecutions.

In its first eight criminal trials this year, the DC US attorney’s office has won only half of them — far below the national average.

The dicey record appears to stem from several issues in court, including a jury pool that has lost trust in the Trump administration, the FBI and the Justice Department, according to roughly a dozen people who have interacted with juries in the DC District Court in recent months.

r/findareddit trainwreck-overthink

Looking for a subreddit for people willing to be interviewed about surviving fentanyl, for my college newspaper.

r/SideProject hello_code

Subreddit Signals - email or slack alerts for reddit and x posts that are basically customer requests

Last week I was in a coffee shop trying to get 30 minutes of work in, and I opened X and saw someone asking for a tool in my niche. I replied, but it was already kind of late in the thread. And I realized I do this all the time: I only find the good posts when I'm procrastinating.

So I made Subreddit Signals. It looks for people asking for recommendations or saying they need something, on Reddit and X, and then it pings me via email or Slack.

Where I am stuck is the line between helpful and annoying. Like, if you jump into a thread too fast it feels weird, but if you wait you lose the moment.

If you have any strong opinions about what a good reply looks like when someone is asking for a product, I would honestly love examples. I am trying to keep it human and not turn into some dead internet bot thing.

r/SipsTea retardedmfo

Luxury level: unlimited… hygiene level: questionable 💀🚗🚽

r/leagueoflegends _benjixoxo

sea server in china

hi all may i ask for some advice: how do i get to play on the sea server while im in china? here for a worktrip for 5 months and i miss playing with my friends 💔

r/AbandonedPorn DashingDecay

Abandoned house

You could easily walk right past this little house! Some crooked shutters may give it away, but inside it's a beautiful time capsule! A luxurious interior in a small house. Gold frames, wooden furniture, old paintings, and the porcelain is still there! Photos, paintings, and an atmosphere that will stay with you for a while! A nice place to explore🥰

Always op / oc / NO AI !

Greetings and find me everywhere

Xoxo DashingDecay

r/LiveFromNewYork Klutzy_Bet_3806

Scandinavian Movie 2 ft. Stellan Skarsgård

Official first look at FJORD by Cristian Mungiu

r/leagueoflegends nexiune

what champion would KING VON play?

I got main character syndrome and lowk he da goat ngl

help me find a champion that fits very aggressive playstyle

r/WhyWereTheyFilming Kinder22

And now we know why…

r/SideProject Sufficient-Heron-507

I was frustrated with pointless assignments so I built a tool that does them for me, here's what I learned

A little context first. I'm a BTech student in India. Every semester we submit handwritten assignments. Same questions, same answers, same PDFs everyone copies from, just rewritten by hand onto ruled paper. Rinse and repeat.

One night I sat there copying a derivation I had already copied twice before and just thought, there has to be a better way to spend these hours.

So I built one.

What it does

You click a photo of your assignment page with the questions written on it. The tool reads the questions, generates the answers, and writes them back in your handwriting style. Not a generic font. Your actual handwriting, with your spacing, your margins, your slant. Then it exports a compressed PDF ready to submit.

There is also a PDF editor where you can upload any PDF, mark specific areas, and use AI to edit just those regions. Useful for things like swapping out a roll number across pages without touching anything else.

What I learned building it

Getting the handwriting mimicry right was the hardest part. The model needs enough of a sample to learn the style, but students rarely want to spend time giving a clean sample. I had to make the onboarding almost invisible while still collecting what the model needs.

The second thing I underestimated was how much post-processing matters. Raw AI output looks off. Line spacing, page margins, ink weight variation, all of it needs tuning before it actually looks handwritten and not printed.

Third, and this one surprised me: the hardest users to convert are the ones who need it most. Students who spend 4 hours a night on assignments are also the ones most nervous about anything that feels like cheating, even when their university has no such policy. The trust barrier is real.

Where it is now

Live, working, and I am actively trying to grow it. 20 free pages per day to try it out.

Link: https://assignment.luminouxs.tech

Happy to answer anything, whether it is about the handwriting model, the stack, pricing, whatever. And if you have thoughts on how to explain the value better to students who are on the fence, I am genuinely all ears.

r/WouldYouRather DeepOrganization8245

If you were drafted, would you rather fight in war or go to prison?

r/CryptoMarkets crushplanets

What are your thoughts on this potentially realistic guide for altcoins?

I put this together with the help of ChadGPT, but it was based on my own experience being in the market since 2020. Some might find it helpful, while others might think it's all BS, either way I thought I'd share.

Most altcoins are not long-term holds. They typically deliver strong but temporary spikes during short hype phases, then retrace sharply, often 80-95% from their peaks. These pumps can happen at any time due to KOL narratives, liquidity rotations, or sector momentum, not just during the main bull run.

In the 2020-2021 bull run, historical data showed:

  • Roughly 70-85% of altcoins reached at least 3x
  • 50-70% achieved 5x or more
  • 20-30% delivered 10x
  • Only 5-15% (or fewer) hit 20x or higher

The pattern is consistent across long-term charts: altcoins boom, then bust, and this can happen at any time. Understanding realistic targets is better than believing all of your coins are rare 10-20x moonshot winners; that belief is often how you end up with unrealized gains, watching them crash after a 3-5x spike.

Smart Approach:

  • Treat the majority of altcoins as a 3-5x at the most.
  • Study long-term charts to understand that they all boom and bust, and it can happen at any point in the bull run. Most people lose out on profits because, early in the bull run, their coin has already done a 3-5x and they assume plenty of time and upside remain, only to watch it drop off a cliff and never return.
  • Best max profit target: 5x. This level offers a good balance of realistic upside and high probability.
  • Use a tiered strategy: Take partial profits at 3x (recover capital + gains), the bulk at 5x, and leave only a small moonbag to ride toward 10x+ if momentum and fundamentals remain strong.

Key Strategy: Aim for solid 3-5x wins rather than chasing rare moonshots. Lock in profits when your target is hit, even if it's the beginning of the bull run. Take your profits off the exchange so you don't put them into a different coin that may have already peaked, or only goes down.
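
The tiered plan above is easy to sanity-check with arithmetic. A minimal sketch (position size, entry price, and tier fractions are all hypothetical; note that selling roughly a third of the stack at 3x is what recovers the initial capital):

```python
# Toy illustration of the 3x / 5x tiered exit described above.
# All numbers are hypothetical, not advice.

def tiered_exit(capital, entry_price, current_multiple):
    """Return (cash banked, tokens left as moonbag) under the tiered plan."""
    tokens = capital / entry_price
    sold_tokens = 0.0
    cash = 0.0
    # Tier 1: at 3x, sell a third of the stack -- this alone recovers
    # the initial capital (tokens/3 * 3 * entry_price == capital).
    if current_multiple >= 3:
        tier = tokens / 3
        sold_tokens += tier
        cash += tier * entry_price * 3
    # Tier 2: at 5x, sell the bulk, keeping only a ~10% moonbag.
    if current_multiple >= 5:
        tier = tokens * 0.9 - sold_tokens
        sold_tokens += tier
        cash += tier * entry_price * 5
    return cash, tokens - sold_tokens

cash, moonbag = tiered_exit(capital=1000, entry_price=1.0, current_multiple=5)
print(round(cash), round(moonbag))
```

With a $1000 position that hits 5x, this banks roughly $3833 while a 100-token moonbag stays on the table.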

r/SipsTea ciao-adios

So what are you buying now?

r/SideProject resjohnny

AltTuner: The Alternate Guitar Tuner

I built an alternate guitar tuning app.

Hi r/SideProject, I'm a guitarist and solo iOS developer. For years I've been frustrated that tuner apps bury alternate tunings three menus deep, if they have them at all. Meanwhile, some of the most iconic guitar sounds ever recorded (Keith Richards' Open G, Joni Mitchell's dozens of custom tunings, the wall-of-sound shoegaze stuff from My Bloody Valentine and Swervedriver) all depend on getting into the right tuning first.

So I built AltTuner. Instead of starting with a chromatic tuner and bolting on a few presets, I built the whole app around alternate tunings as the core experience.

What it does:

- Browse tunings by artist (145+), by song, or by tuning type

- Real-time pitch detection with visual feedback — tuned specifically for each alternate tuning

- Covers everything from Drop D and Open G to Saharan desert blues tunings, Hawaiian slack-key, Nashville tuning, and experimental Sonic Youth configurations

- Discover which tuning was used on specific tracks — from "Start Me Up" to "Kashmir" to "Loveless"

The whole thing runs on-device. No account, no subscription, no tracking. One-time purchase to unlock Pro.

Stack: Native Swift/SwiftUI, AVAudioEngine for pitch detection, CSV-driven tuning database (tunings, artists, songs all linked with foreign keys so I can keep expanding it without app updates).
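
Not from the app's source, but the core math a tuner like this needs, snapping a detected frequency to the nearest target string of a specific tuning (rather than the chromatic scale) and reporting a cents offset, looks roughly like:

```python
import math

# Sketch of alternate-tuning tuner math (not AltTuner's actual code):
# given a detected pitch, find the nearest target string in a tuning
# and how many cents sharp (+) or flat (-) the player is.

OPEN_G = {  # Keith Richards-style 5-string Open G: G D G B D
    "G2": 98.00, "D3": 146.83, "G3": 196.00, "B3": 246.94, "D4": 293.66,
}

def nearest_string(freq_hz, tuning):
    """Return (string_name, cents_offset) for a detected frequency."""
    def cents(f, target):
        # 1200 cents per octave; positive means sharp of the target.
        return 1200 * math.log2(f / target)
    name = min(tuning, key=lambda s: abs(cents(freq_hz, tuning[s])))
    return name, cents(freq_hz, tuning[name])

name, off = nearest_string(197.0, OPEN_G)
print(name, round(off, 1))  # slightly sharp of the open G3 string
```

The `1200 * log2(f/target)` cents formula is the standard pitch-offset measure; "tuned specifically for each alternate tuning" presumably means exactly this kind of snapping to the tuning's own target frequencies.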

I've been expanding the database pretty aggressively lately, currently filling in gaps for non-standard tunings that are hard to find documented anywhere. If you play guitar and have ever wondered "what tuning is this song in," this is what I built it for.

Would love feedback from anyone here, especially on the App Store listing and whether the value prop is clear enough. Link in comments.

r/midjourney dkontur

How Oil Became The Most Dangerous Commodity

r/findareddit str8shooters

I have trouble navigating Reddit. What subreddit(s) can I use to promote Youtube videos that speak against child and woman abuse?

r/SipsTea Mundane-Addition1815

Chugging my 6th cup of tea today

r/homeassistant hilman85

Home Assistant integration for the xTool S1 – save jobs and re-run them with a hardware button

Hey everyone,

I released a Home Assistant custom integration for the xTool S1 laser engraver. To my knowledge, the first HA integration for this device.

The killer feature: save jobs directly from the laser and re-run them from HA as many times as you want, no PC needed. Prepare your job in XCS once, run it, save it in HA. From then on, all you need is your phone and the start button on the machine. Load material, tap Start in the HA card, confirm, press the button. Perfect for batch work or any job you run repeatedly.

And since everything runs through HA services, you can trigger saved jobs via automations too. That means you can put a specific job on a physical button (Zigbee, Shelly, whatever), press it, load your material, confirm on the laser. Essentially a dedicated hardware button per job.

On top of that: real-time status via WebSocket, Stop/Pause/Resume buttons, dimmable fill light as a light entity, automations (exhaust fan auto-toggle, push notification when a job finishes), and a clean offline state when the laser is powered off.

100% test coverage, HA Quality Scale Gold, installable via HACS.

GitHub: https://github.com/hilman2/ha-xtool-s1

Feedback welcome!

r/Showerthoughts ohdogwhatdone

One of the biggest Hollywood lies is how detectives work almost non stop 24/7 to crack some murder cases.

r/ClaudeCode Lazy-Doughnut4019

How do you currently handle SEO for the stuff you ship?

Genuine question. I'm seeing a lot of people shipping side projects and SaaS with Claude Code / Cursor / Lovable / Bolt, and I'm wondering — what do you actually do about SEO? Do you just... not? Run a Lighthouse check once and call it done? Pay for Ahrefs?

I'm thinking about building an MCP server that plugs straight into your AI assistant so you can just ask "is my site ranking?", "what's broken SEO-wise?", "why am I not showing up in ChatGPT when people ask about X?" — and the agent runs the audit, gives you fixes you can paste straight into your code, and tracks it weekly.

Basically: SEO without ever leaving your editor or chat.
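
For what it's worth, most individual audit rules such a server would run are mechanical. A minimal sketch of one rule (title and meta-description presence and length), standard library only; this is illustrative, not the proposed MCP server:

```python
from html.parser import HTMLParser

# One toy SEO audit rule: does the page have a <title> and a meta
# description of reasonable length? (Length bounds are common
# rules of thumb, not a standard.)

class SEOAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.title = ""
        self.meta_description = ""
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and attrs.get("name") == "description":
            self.meta_description = attrs.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

    def issues(self):
        out = []
        if not 10 <= len(self.title) <= 60:
            out.append("title missing or not 10-60 chars")
        if not 50 <= len(self.meta_description) <= 160:
            out.append("meta description missing or not 50-160 chars")
        return out

audit = SEOAudit()
audit.feed("<html><head><title>Hi</title></head><body></body></html>")
print(audit.issues())  # both rules fail on this page
```

An MCP wrapper would expose checks like this as tools and let the agent turn the findings into paste-ready fixes.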

Would you actually use this? Or is SEO just not on your radar until traffic matters? What would the killer feature need to be for you to bother setting it up?

r/therewasanattempt azimx

To be under 15 years old

The Senegalese U15 team

r/SideProject Interesting_Mine_400

took 1 idea and turned it into 10 assets

I wanted to test how fast I could go from an idea to something that actually looks like a real product. picked a random idea of an AI meal planner for students!! as i'm also a student so i can understand it more !! then tried runable to build everything around it as if i were actually launching it.

in ~30 minutes, i ended up with a landing page, a dashboard UI, a mobile app screen, a pitch deck slide, a research-style report, a social media post, a video demo concept, and branding/logo, and many thoughts !!! not saying it's perfect, it's definitely not, but it was interesting how quickly things went from nothing to something tangible. Biggest takeaway for me is the bottleneck isn't really building anymore, it's picking the right idea and knowing what's worth pursuing.

how are others here approaching this? are you building faster now, or just getting more ideas than you can execute?

r/RASPBERRY_PI_PROJECTS RiskHot1017

Integrating dToF with Raspberry Pi

Project Overview: This project demonstrates how to connect a direct Time-of-Flight (dToF) distance sensor to a Raspberry Pi for accurate proximity sensing. I built this for drone obstacle avoidance, where reliable millimeter-level distance data is critical.

Materials Used & Selection Rationale:

  • dToF sensor: DTOF HM-LD1
  • Single-board computer: Raspberry Pi 4B
  • Wiring: jumper wires and a USB-to-Serial cable

Results & Validation: Accuracy ±3 cm; reliable detection from 0.2 m to 25 m.

Resources: The tutorial code will be publicly released on GitHub later.

r/OldSchoolCool 2spoos

Final family basketball picture: my dad, front row left - 1955

You saw my grandmother’s, great uncle’s, and mom’s teams. Here is my dad. See the smirk? Yes. He thinks, to this day, he is god’s gift to women. And with him now in his 80s it just makes him a creepy old man.

r/homeassistant Next-Environment-599

DIY Smart Roller Shade Assembly

Hello, recently we ordered a few blinds from SmartWings and really like them. I also like to weave and would love to make my own fabric for the rollers. SmartWings are insanely expensive, but I'm rather DIY, so I'm hoping there are kits out there for this. So far I'm really liking Matter, and SmartWings does offer it, but that seems to be just as a replacement for their own blinds. Any idea where I can find a kit with quality as good as these roller motors?

r/ClaudeAI One-Honey-6456

Claude Code repeatedly hitting "Output blocked by content filtering policy" when writing standard Kotlin/Compose code

Has anyone else been running into this? I'm using Claude Code (Opus) to port UI screens between two of my Kotlin Multiplatform projects. Standard Compose Multiplatform code — UI screens, animations, navigation wiring.

Claude Code gets through the planning phase fine, starts implementation, makes a few edits successfully, and then, when it tries to write a new file (a fairly long Composable with animations), it gets stuck in a loop of:

API Error: 400 {"type":"error","error":{"type":"invalid_request_error","message":"Output blocked by content filtering policy"}}

This happens repeatedly - every retry gets the same error. The code it's trying to generate is completely benign UI code (progress bars, loading animations, button components). Nothing remotely sensitive or harmful.

The frustrating part is that it burns through your usage while stuck. I had 5+ consecutive failures with no output, and the session just hangs since it can't produce any response at all.

Environment:

- $200 Max Plan
- Claude Code CLI (Opus 4.6, 1M context)
- macOS
- Kotlin Multiplatform / Compose Multiplatform project
- Happens mid-session after ~30min of successful work
- Context window was moderately full (had read multiple files from two projects)

Workaround attempted: Sending "continue" multiple times — same error every time. Had to start a fresh conversation.

Has anyone found a reliable workaround? Is this a known issue with longer sessions or larger context windows triggering false positives?

r/metaldetecting hulktopus

I think I found a hunk of silver!

I did the ice test on it and it melted INSANELY fast. Weighs about 1.4g. Quarter for scale since I don’t have a banana.

EDIT: After doing a bend test with pliers (it snapped and is grainy inside), I have concluded that it’s likely a zinc alloy.

r/homeassistant pyoopypops

Linearizing a sensor

Howdy!

I have a flow sensor that has 4-20ma output and is normally sold with a box that linearizes the output. I am a cheap bastard, so I'm trying to figure out if I can linearize it myself.

I have an esp32 set up with 4-20mA input and I can watch the values on the logs. If I manually move the sensor to its measurements (eg 10, 15, 20, etc) and record the output, I should be able to curve fit those data to calculate my own linearization, correct? I could then use that equation to calculate the flow from the 4-20mA output, I believe. Just trying to make sure I'm thinking about this correctly.
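
Roughly what I have in mind. A sketch with made-up calibration points, using piecewise-linear interpolation between recorded points instead of a single fitted curve:

```python
# Sketch: convert a raw 4-20 mA reading to flow by piecewise-linear
# interpolation over manually recorded calibration points.
# The calibration table below is made up for illustration.

CALIBRATION = [  # (milliamps, flow in L/min), sorted by mA
    (4.0, 0.0), (8.0, 10.0), (12.0, 15.0), (16.0, 20.0), (20.0, 30.0),
]

def flow_from_ma(ma):
    pts = CALIBRATION
    ma = max(pts[0][0], min(pts[-1][0], ma))  # clamp to calibrated range
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= ma <= x1:
            # linear interpolation between the two bracketing points
            return y0 + (y1 - y0) * (ma - x0) / (x1 - x0)

print(flow_from_ma(10.0))  # halfway between 8 and 12 mA -> 12.5
```

If I remember right, ESPHome also ships a `calibrate_linear` sensor filter that does this same measured-to-truth piecewise mapping on the ESP32 itself, which might save doing it in Home Assistant at all.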

Thanks!

r/SideProject fewworddotr1ck

Fully automated AI basketball league that streams live on Twitch every night

I built a fully automated Twitch stream: an AI basketball league that runs itself every night with zero human intervention.

No OBS. No manual scene switching. No humans touching anything. It wakes up 30 minutes before tip-off, runs two games, and shuts itself down cleanly when it's done.

The short version of how it works: a Python sim engine generates possession-by-possession basketball games. An AI model writes live play-by-play commentary in real time. A headless browser renders the scoreboard on a Linux server. ffmpeg captures it and pushes to Twitch. The whole stack starts and stops on a schedule.
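
Not the author's engine, but a toy sketch of the possession-by-possession idea, with made-up team names and shooting probabilities:

```python
import random

# Toy possession-by-possession sim loop (illustrative only):
# alternate possessions and score 0, 2, or 3 points on each.

def simulate_game(home, away, possessions=100, seed=None):
    rng = random.Random(seed)  # seeded for reproducible games
    score = {home: 0, away: 0}
    events = []
    for i in range(possessions):
        team = home if i % 2 == 0 else away
        roll = rng.random()
        if roll < 0.35:       # made two-pointer
            score[team] += 2
            events.append(f"{team} scores inside")
        elif roll < 0.47:     # made three
            score[team] += 3
            events.append(f"{team} hits a three")
        else:                 # miss or turnover
            events.append(f"{team} comes up empty")
    return score, events

score, events = simulate_game("Voltage", "Comets", seed=42)
print(score)
```

The event list is what a commentary model would consume to write play-by-play, and the score dict is what the scoreboard overlay would render.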

What's live right now:

  • 8 fictional teams, full 82-game season, real persistent standings
  • Live moment overlays that fire automatically: CLUTCH TIME, BIG RUN, COMEBACK, BUZZER BEATER
  • Auto-clips saved to disk every time a moment fires
  • Chat bot with !predict, !score, !standings
  • Pre-game countdown and inter-game standings screen
  • Auto-tweets at tip-off and game end

Honestly the hardest parts had nothing to do with AI. It was the operational stuff. Xvfb lock files crashing the stream on restart. The cursor bleeding through, which had to be hidden at the CSS, X server flag, AND ffmpeg capture layers simultaneously. The chat bot spawning exponential reconnect clients. A week of preseason to shake it all out.

Still a work in progress and improving every day. Preseason runs this week while I work out some bugs, but I'm hoping to start Season 1 soon!

Stream: twitch.tv/runthesimulation

Twitter: https://x.com/runthesimlive

Happy to answer questions about the build.

r/DunderMifflin Sea-Distribution-370

Is that Jan’s picture in Michael’s office?

r/SipsTea SweetMuusee

American men, ages 18-26 right now:

r/SipsTea Vixiuss

Just make up and kiss y’all

r/SipsTea beklog

Scissor Mountain??

r/leagueoflegends Loooongshot

How hard can it be to nerf Ambessa's energy costs late game?

It really caught me by surprise to see that Ambessa's next patch's nerfs are being postponed.

Going to lolalytics and sorting the champions-when-one-tricked list by tier shows that Ambessa is the best champion when mained on both Jungle and Top (https://lolalytics.com/pt_br/lol/ambessa/build/?tier=1trick). Not only that, but she also has a >25% ban rate on Masters+.

IMO, that is clearly because of the following issue: when people who play Ambessa know what they are doing, her energy bar does not ever drop below 70% after 20 minutes, and that translates, quite literally, to non-stop dashes every second.

In order to reach a similar state, Kassadin needs to build 2 mana items and get to lvl 16; not the case with Ambessa. I wonder what this champion is giving up in terms of power budget when she can burst melees and tanks alike, drain-tank harder than actual tanks tank, access backline carries and clear waves in seconds.

It is my opinion that any champion that has infinite dashes (no cd and no mana), even if conditionally, is bad design by default. Especially when that condition is getting close to someone, which she can so easily do.

Tell me what can Vi do currently that Ambessa can't while healing and tanking far more?

Regardless of whether or not they are hitting someone, NO CHAMPION should be able to dash 8+ times in a single teamfight mid to late game. That is idiotic.

r/ClaudeCode -becausereasons-

This morning basic commands are taking 1 hour!

A new issue with Claude Code. I've already used up 75% of my week on MAX (first time ever), and on top of that, this morning basic commands are taking 15-45m!

13m for 900 tokens? lol what? It's like Claude has just decided to give up.

Earlier this morning 45m (of thinking) for under 1k tokens... on a simple question that would take 2s to answer going through its memory.

r/Art LiudnovArt

Elora - The Raven Moon, SlavaLiudnov, Digital, 2026 [OC]

r/Adulting colors-z-z

Family pressure and turmoil of being 20

I am a 20F, and I don’t know where my life is heading. My family keeps forcing me to take responsibility for debts they took on while trying to make a living.

I don’t know what I did wrong, or why I have to carry that burden. My father always says he has suffered and worked hard for the family, while he cheated on my mom and is addicted to online gambling.

I’m just a little girl, so how am I supposed to bear all of my family’s debts? He also expects me to take care of my two younger siblings, but I know I’m not a superhero.

r/AbstractArt TMSpaintz

"I don't know what I need." : TMSpaintz

Drawn on April 2, 2026 using Clip Studio Paint. The "I don't know what I need." is simply the title, not an ask for help/critique. However opinions are still welcome!

I stepped away from art for a few months due to my dog passing. I'm trying to pick myself back up with some art therapy.

r/ChatGPT Wonderful_Snow1960

Would you be interested in sharing your most interesting or fun AI conversations with others?

I’ve come across some really interesting AI chats: some helpful, some hilarious.

Curious if people would be open to sharing those kinds of conversations for others to explore.

Would you personally be comfortable sharing yours?

r/ClaudeAI salestoolsss

Beware: WhatsApp “Auth” Codes After Logging Into Claude Desktop (Possible Scam)

Hey Reddit,

I recently logged into Claude Desktop using Google authentication and everything seemed normal… until I got a WhatsApp message from something called “AlzaPay Auth” (number: +639614348530).

PICTURE : CHECK COMMENTS

The message had a code, and I entered it thinking it was part of the login process. At first, it went through okay, but I realized this might be a scam.

Here’s what I learned:

  1. How it works
    • Scammers often use social engineering to trick you into giving them Google or other authentication codes.
    • Even if you didn’t trigger any payment or action, they can pretend the code is needed for verification.
  2. Red flags
    • You receive a code from a service you never interacted with.
    • You’re asked to enter the code somewhere that isn’t the official login page.
    • The timing feels off (like you just logged into something unrelated).
  3. Immediate actions
    • Change your Google password immediately.
    • Enable two-factor authentication (2FA) if you haven’t already.
    • Check recent account activity for unknown logins.
    • Avoid entering codes from unknown sources.
  4. Key takeaway
    • Legit services like Claude Desktop don’t normally send WhatsApp codes for login.
    • Always verify before entering authentication codes anywhere. If it feels weird, it probably is.

Be careful out there. These scams are sneaky and can happen to anyone, even tech-savvy people.

We really need two-step verification for Claude. We’re spending over $500 a month. Come on, guys, this is essential.

r/OldSchoolCool pdroject

Bank Panic 1985 Arcade Live Flyer

Again, it's all about real people going to arcades in the 80s

r/estoration Matbell87

Can someone fix the creases please? A picture of my mother.

r/findareddit Celine_Morgann

Is there a subreddit where I can ask for help fixing phone issues?

r/creepypasta puIaski

I need help finding this one creepypasta I saw a while ago…

A couple years back when I was younger and really into this scary shit, I had heard this story that stuck with me.

It had been this guy, who claimed to go to hell or something, I don’t remember exactly what the story was but it was something along those lines.

He would isolate himself in the basement and stuff and would literally wrap ice cubes on himself to help the burns. Yes, this dude had full on crazy degree burns. And there's photos of it somewhere out there that made me crap my pants when I was a child.

Does anyone know what I’m talking about?

r/Unexpected SoledBy69

Training on your own without professional guidance can be very dangerous

r/ClaudeCode m71nu

How do you spend so much on claude?

r/SipsTea Doodlebug510

Never break character

r/ClaudeAI aniketmaurya

Stop using Docker as a sandbox for AI agents

I've been seeing too many people treat Docker as the default safety layer for agents that can run AI code, tools, browser-agents, etc.

That is a bad assumption!

Containers share the host OS kernel. That is not the same as putting untrusted agent behavior inside a truly isolated environment. The result is people underestimating risk while giving agents more capabilities every month.

An agent does not need to be “malicious” to cause damage. It just needs to:

  • generate the wrong command
  • follow a poisoned instruction
  • access the wrong file
  • leak a secret
  • make an unexpected network call

Once that happens, your “sandbox” is suddenly just your machine with extra steps.

Run agents inside actual isolated runtimes like microVMs or VMs, ideally disposable ones. If the workload is untrusted, the environment should be isolated by design. AI agents are pushing us into a new threat model. Containerization is fine for packaging, but weak for containment.

r/ClaudeCode shady101852

Yall ever wondered if API Opus 4.6 is better than Subscription Opus 4.6?

Well wonder no more! I just used it for a little bit and can confirm they are both retarded.

Doesn't follow instructions

Debugs problems for 2 minutes and then attempts to gaslight you into thinking you are just "seeing it wrong".

r/homeassistant Obioban

Adaptive Lighting is randomly turning on individual hue bulbs in my house... why?

Setup:

-Inovelli blue light switches, ZHA zigbee

-hue bulbs connected directly to home assistant (no bridge)

-Zigbee binding between the light switches and bulbs for on/off/dimmer, manually.

-Adaptive Lighting is addressing the light bulbs for a given room directly (not the zigbee group).

-each adaptive lighting instance is only addressing a single room at a time-- so all bulbs that are doing the same thing

When I try to leave it in continually adaptive mode, randomly a single bulb will be turned on. This can be pretty disruptive, e.g. when a bulb turns on in the bedroom while we're sleeping.

Seems super weird that Adaptive Lighting is doing this to single bulbs in a given room-- since each Adaptive Lighting instance is applied to all the bulbs in a room, it seems like they should all fail in the same way.

Suggestions?

r/AskMen Ok_War8914

Guys in their early 20s with friends, How often do you guys hang out?

I never really had actual friends. Ever since I graduated I just spend my days at home, but I sometimes see people I used to talk to posting about going to parties and hanging with friends, and it makes me wonder if they do this every day. Do people in their 20s actually hang out with friends all day, every day, and only have alone time every few hours or one day?

r/SideProject nikhonit

I spent 6 months fighting YouTube scrapers before I snapped and built my own API. It does 15M transcripts a month now

I need to tell this story because it still makes me a little angry.

Two years ago I was building a tool that needed YouTube transcripts. Not video downloads, just the text of what people said. Every YouTube video has captions. You can literally see them. Should be easy, right?

Started with yt-dlp. Worked for two weeks. YouTube pushed an update and my subtitle downloads started coming back as empty files.

No error. Just... nothing. Fixed it. Broke again a month later. This cycle repeated five times before I started losing my mind.

I remember sitting at my desk thinking: this is a problem thousands of developers must be hitting. YouTube has 800 million videos with transcripts. The data is RIGHT THERE. And there's no reliable way to get it programmatically.

So I built TranscriptAPI. One GET request, full transcript with timestamps, 49ms response. Works from any IP. That's it. That's the product.

I kept adding stuff I needed: search YouTube videos, browse any channel's library, extract playlists, track new uploads.

But the core is still the same dumb simple thing: you give me a video ID, I give you the transcript.

It does 15M+ transcripts a month now. I have users pulling millions per month on custom plans. Still just me running it.

r/ClaudeAI absoluterror

I've created an MCP to build automations using Claude Code.

Hey there!

Over the past few days, I’ve been building an MCP Server for my side project (Hooklistener), which lets you create any kind of automation.

I’ve built all of this using Claude Code (it’s worth noting that I have a technical background). The backend is primarily Elixir and Phoenix.

The workflow is always as follows:

  1. Planning mode
  2. Implementation Phase (using specific agents; for example, I have some with specific instructions for working with Elixir code).
  3. Once that’s done, I run the code-simplifier skill and perform a couple of rounds of validation.

The interesting thing about this is that it lets you create simple automations without even touching a UI. For example, imagine you need to send GitHub notifications to Telegram: you could do this directly from Claude Code.
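
That GitHub-to-Telegram automation is, at its core, a payload transform plus one HTTP call. A minimal sketch of the transform half (the field names follow GitHub's issues webhook payload; the Telegram call is left as a comment since it needs a real bot token, and none of this is the actual Hooklistener implementation):

```python
def github_event_to_message(payload: dict) -> str:
    """Turn a GitHub issues-webhook payload into a one-line Telegram message."""
    repo = payload["repository"]["full_name"]
    issue = payload["issue"]
    return f"[{repo}] issue {payload['action']}: {issue['title']} ({issue['html_url']})"

msg = github_event_to_message({
    "action": "opened",
    "issue": {"title": "Bug in parser", "html_url": "https://github.com/acme/app/issues/7"},
    "repository": {"full_name": "acme/app"},
})

# Sending is then a single POST to the Telegram Bot API, e.g.:
# requests.post(f"https://api.telegram.org/bot{TOKEN}/sendMessage",
#               json={"chat_id": CHAT_ID, "text": msg})
```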

I'd appreciate your feedback!

https://reddit.com/link/1sgpde0/video/1sn6rsx306ug1/player

r/CryptoMarkets Derivlens_01

BTC flipping BULLISH — short liquidity above at $72,142, 28% sweep probability, setup developing now

BTC market bias just flipped bullish. Here's the full data breakdown in real time.

Current snapshot:

- Bias: BULLISH — short liquidity above, upward sweep likely

- Target cluster: $72,142.4 (short cluster — $519K)

- Sweep probability: 28%

- Timing: DEVELOPING — 30min to 2hrs

- LPI: +34 Bullish Pressure

- Regime: Ranging — Confidence 50%

- Cascade risk: HIGH

- OI Trend: New shorts building — slow accumulation, watch for breakout as OI accelerates

Broader data:

- Open Interest: $3.748B — up 2.3% in 1h, up 2.3% in 4h, down 0.6% in 24h

- Funding rate: +0.0060% — longs paying, but not extreme

- Long/Short ratio: 51% longs vs 49% shorts — nearly balanced across exchanges

- Bybit: 52% longs, 47% shorts (1.1 ratio)

- OKX: 51% longs, 49% shorts (1.04 ratio)

- Fear & Greed: 53 — Neutral

- Perp/Spot Basis: −0.0299% — converged

Why this matters

New shorts are building above current price while OI is expanding +2.3% in 1h. That combination — fresh shorts stacking into a level while open interest grows — creates the conditions for a short squeeze. If price breaks above $72,142 those shorts get liquidated and accelerate the move upward.

Cascade risk is flagged HIGH. Elevated cascade risk with tight stops means rapid liquidations are possible if the level breaks.

Playbook status: WATCHING — cluster present but below $500K threshold, too small to qualify yet. Watching for a larger setup to form.

Recent alerts that led here:

- Long liquidity below price — downward sweep likely, Cluster $323K, Distance 0.33%, Probability 0% (MEDIUM)

- Liquidation Event detected — OI dropping sharply, forced closures underway (HIGH)

- OI down 9.9% in 24h — leverage unwinding, cascade risk reducing but conviction weakening (MEDIUM)

Signal performance (historical):

- Liq. Clusters: 55% win, +0.40% avg — 40 signals

- Funding Extremes: 20% win, +0.99% avg — 10 signals

- OI Divergence: 9% win, +0.40% avg — 198 signals

Not financial advice. 28% sweep probability means it's developing, not confirmed. Wait for price to break and hold above $72,142 before reading this as a confirmed setup.

What's your read on BTC today?

r/30ROCK Huge_Following_325

Didn't accept it.

r/SideProject _dustinface_

Manki — multi-stage AI code review as a GitHub Action

I've been building Manki, an open-source AI code review bot that runs as a GitHub Action in your own repo. Built it because I wanted something that wasn't SaaS-locked, wasn't subscription-only, and actually used more than one LLM call per review.

What makes it different:

  • Multi-stage pipeline: a planner picks team size and effort based on the PR (1-7 reviewers; trivial PRs get a single "trivial verifier" agent); reviewers are specialised for the PR context and work in parallel; a dedup step filters findings already dismissed in previous reviews; and a judge evaluates and classifies findings by severity.

  • Self-hosted: runs as a GitHub Action in your own CI, uses your own Anthropic API key.

  • Adaptive cost: the planner sizes the review to the PR. Customize it to your needs and the price you're willing to accept per code review.

  • Provider-neutral soon: currently Claude-only, but OpenAI, Gemini, and OpenAI-compatible endpoints are on the roadmap!

  • Open source: AGPL-3.0 license.
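
The stages described above compose naturally as plain functions. A toy sketch of the planner → reviewers → dedup → judge shape, with stubbed stages (this is not Manki's actual code, just the pipeline's skeleton):

```python
def plan(pr_lines: int) -> int:
    """Planner: size the review team to the PR (clamped to 1-7, per the post)."""
    return max(1, min(7, pr_lines // 100))

def review(reviewer_id: int, diff: str) -> list[str]:
    """Stub reviewer: in the real tool this is a specialised LLM call over the diff."""
    return [f"finding from reviewer {reviewer_id}"] if diff else []

def pipeline(diff: str, dismissed: set[str]) -> list[str]:
    team = plan(len(diff.splitlines()))
    findings = [f for r in range(team) for f in review(r, diff)]
    fresh = [f for f in findings if f not in dismissed]  # dedup stage
    return sorted(set(fresh))                            # judge: keep and rank

# A 250-line diff gets a 2-reviewer team; one finding was dismissed previously.
out = pipeline("line\n" * 250, dismissed={"finding from reviewer 0"})
```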

The repo self-reviews every PR — real examples in the merged PRs if you want to see it catch real bugs on its own codebase.

Repo: https://github.com/manki-review/manki

Setup takes ~5 minutes — install the GitHub App, add a workflow file, drop in your API key.

Would love feedback from anyone willing to try it on a repo.

r/LocalLLaMA gokuchiku

Llm on android

is it possible to run llms locally on your android? if so please do tell me how? Thanks.

r/CryptoMarkets UlysApp

What’s actually useful AI in crypto right now?

Curious what people here genuinely find valuable when it comes to AI + crypto, if anything.

Is there an actual feature or use-case that you would actually use day-to-day?

Some of the use-cases I've thought of are things like:

  • Finding new tokens early
  • Risk analysis / scam detection
  • Portfolio management
  • Trading insights

It feels like we're at a pretty pivotal point where the two are merging together, but I'm curious on other traders' thoughts.

What’s the strongest real use-case in your opinion?

r/ChatGPT Endo129

Why Does AI Suck So Bad/What Am I Doing Wrong?

I’m trying to get on the AI train but every time I try, it screws me. Why does it suck so bad and what am I doing wrong to make it suck so bad?

The only time I find it useful is when I need to know something but just don’t care quite enough whether I get a right answer. Like, “why is the sky blue?” I don’t care enough, so even if I’m told something only 50% right I’m ok with whatever answer I get. But when I try to use it for things I care about I rarely get accurate results.

Case #1: I was curious about the distance of the 3-pt line on a high school basketball court at the top of the key, measured from the backboard. ChatGPT continuously gave wrong answers and insisted on ridiculous things, like that the rim is 36” in diameter and that it is 4’ off the backboard. There was no way, even after over an hour, I could get an accurate answer.

Case #2: I fed in results from a lot (20ish) of local basketball tournaments and asked it to build me a master table to attempt to generate power rankings. It doesn’t need to be too over-engineered or accurate; the idea is just to see how good the teams we are facing on any given weekend may be, based solely on their W/L and scores against other area teams. I spent the entire day feeding data. I even started from scratch with a better, what I thought was well-defined, copy/paste of results for nearly 200 games after the data was all messed up the first time. After the input from the very first tournament it was already making up results that didn’t exist, saying certain teams went 0-4 when they only played 3 games.

As soon as I start asking for outputs of the data I start to see issues. It reports back that team A played team B, but I know those games never happened. Today, I came back and asked for all the tournaments played in by a team with “XYZ” in the name. It spits out 1 tournament and 3 other teams that it thinks are tournaments. When I input the data I was very clear. I even laid out the format beforehand and stuck to it.

Now as I question ChatGPT on the data, it’s telling me there is no master table of this information, when we specifically set the rules for it to create a master table with the data, since the first time I tried, all the data was messed up and it said it was b/c we didn’t build a master table.

Why is AI so terrible/what am I doing wrong?

r/ClaudeCode Own-Professor-6157

Don't attack people making claims about Claude issues just because you don't experience them

A lot of people are going back and forth on the rate limits. Some say it's broken; others say "I never hit it in weeks!"

It's random. As crazy as that sounds. I just asked Claude to refactor a ~450 line .java file into two files (one parent, one child). Super straightforward and simple. It hit the entire 100% limit in this single prompt. I've never hit a rate limit this fast before: 0-100%. This is with the Pro plan, Opus 4.6 + extended thinking in the browser. I do this VERY often when cleaning up code; it's usually a ~5ish percent task.

Same with the quality of answers. Totally random. Sometimes it'll think for ~1 second and spit out something utterly useless. Other times it'll think for several minutes and completely one-shot my prompt. Has nothing to do with what I'm asking, same prompt different time in the day.

So to conclude: Quit arguing, you are likely not experiencing the same symptoms as others. Saying "I never hit the limits, you must be doing something wrong" isn't helping anyone.

r/SideProject Less-Bite

Day 13 of sharing stats about my SaaS until I get 1000 users: More than half of the people who try my demo never actually sign up

I have been looking at the funnel for purplefree and the first step is a gut punch. 389 people have submitted a demo request to see how the matching works. Out of those, only 177 actually created an account. That is a 54.5 percent drop off before they even get into the tool. It makes sense though. People want to see if the ML actually finds anything useful before they give me an email address.

But even after they sign up, the friction stays high. 113 users got matches, but only 27 of them took any kind of action. That is a 76.1 percent drop. I think I am making it too hard to actually use the leads. Only 5 people have linked a social account so far. If you have to copy-paste a lead into a different tab to reply, you probably just won't do it. I need to make the action part feel less like work.

I did have a small win on April 8th with 11 new signups in a single day. It is my best day so far this month. But even with those new users, the core problem is the same. They sign up, they see the matches, and then they just sit there. The system has 15,193 posts classified as leads right now, so the data is there. The momentum just isn't.


Key stats:
- 54.5 percent drop off from demo submissions to actual signups
- 76.1 percent of users who get matches never take an action
- Only 5 users out of 177 have linked a social account
- 11 signups on April 8th was the biggest growth day this month


Current progress: 177 / 1000 users.
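
The funnel percentages above check out; a quick sketch reproducing them from the post's own numbers:

```python
# Reproduce the funnel drop-off figures from the numbers in the post.
demos, signups, matched, acted = 389, 177, 113, 27

demo_drop = (1 - signups / demos) * 100    # demo request -> created account
action_drop = (1 - acted / matched) * 100  # got matches -> took an action

print(f"demo -> signup drop-off: {demo_drop:.1f}%")    # 54.5%
print(f"match -> action drop-off: {action_drop:.1f}%") # 76.1%
```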

Previous post: Day 12 — Day 12 of sharing stats about my SaaS until I get 1000 users: High similarity scores are actually a bad sign for my users

r/leagueoflegends aaaaa4a4aaaaaa4

How to hide op.gg activity?

Long story but I’m sure I’ve been stalked by someone I know irl who keeps track of my weekly routine through my op.gg activity. This person knows when I’m at uni or not by checking the times I played my most recent matches. Is there any way to hide my activity from them on my op.gg profile? 😩

r/creepypasta TheLoafCat

The Origin of Smile Dog

“Smile Dog” comes from the early creepypasta era, mostly late 2000s, when sites like 4chan and later Creepypasta Wiki were full of short horror stories people pretended were real (I hope)

The story is usually framed like a piece of lost or corrupted media. The origin had me tracking down a mysterious image file called “smile.jpg.” People who’ve seen it describe the same thing: a husky dog with an unnaturally wide grin, human looking teeth, and a dark, almost empty background. Some versions mention a human hand in the corner, which makes it even more unsettling. The “curse” part is where it turns into classic internet horror. Anyone who views the image starts having intense nightmares about the dog. In the dreams, it just sits there in darkness and repeats the phrase “spread the word.” Over time, the person becomes paranoid, sleep-deprived, and eventually gives in and shares the image with someone else just to make it stop. A big reason it got popular is the format. It mimics old chain emails, but instead of “send this to 10 people,” it’s framed as psychological horror. It also leans into that early internet fear of corrupted files, hidden content, mostly framed around the idea that something will harm you if you don't do as it asks.

Another detail that helped it spread is that there’s no confirmed original image. Tons of edited versions exist, and none are officially “the” smile.jpg. That uncertainty made people more curious, and also let the story keep evolving. The only supposed 'original' is the oldest photo I could find going through 4chan archives, which is the image I've included below. By the time 'Smile.jpg' hit Reddit, it was already considered a classic creepypasta. People started reposting it, making new versions of the image, and even claiming they’d been affected by it, which kept the whole thing alive.

The creepiest thing, however, is all the supposed 'missing' people related to the image, the rumours of cold cases, and the apparent 'harm' caused by it. Whether or not all this is true, the only thing I can find that is possibly related to the image is an unnamed case of a woman who had been emailed the image, along with the order to send it to multiple other people. She then documented how she had recurring dreams of the husky, tormenting her and ordering her to preach the image around. Sometimes she recorded something similar to sleep paralysis, where she was forced to watch the dog, staring, smiling at her. Do you believe this story? Or maybe it's just another hoax to scare people. We may never know.

https://preview.redd.it/24755udeu6ug1.jpg?width=1200&format=pjpg&auto=webp&s=8167f288f7cd8e9da83ca2dddba8f5b7c6a0d9d6

r/Art PonchoMcGee

That Place Where Every Morning Felt the Same, PonchoMcGee, digital, 2026

r/LifeProTips AttilaTheMuun

LPT: Dandruff shampoo can be used to get rid of athletes foot and other fungal infections.

Most dandruff shampoos will work. The primary active ingredients in anti-dandruff products, Selenium Sulfide and Zinc Pyrithione, inhibit both the proliferation and sporogenesis of fungi.

r/ollama Sufficient_Carob8939

Ollama slows whole pc down

Hello! I am, like many others, building "my own Jarvis". I work on this a little day by day when I have spare time. My agent runs on Ollama with Qwen 2.5, but as soon as I start my Jarvis through PS and send it a message through Telegram, the whole PC slows down a lot. I tried other models but it's the same.
Feels like I'm missing something or doing something wrong.

Everything with "My jarvis" is run locally.

Anyone got an idea? Might be that im just missing the simplest thing.

r/Art MarcelDeneuve

Blue Mean, Marcel Deneuve, digital, 2026

r/SideProject Competitive-Tiger457

Turned a manual thing I was already doing into a product and it is going better than anything I built from scratch

honestly the idea came from a spreadsheet.

was tracking reddit posts manually every morning where someone was clearly mid-decision on a software purchase. like actively asking what tool to use, naming stuff they already tried, that kind of thing. just copying links into a sheet and reaching out when the timing felt right.

was working really well. better than cold email by a lot. got to the point where i was spending like an hour a day just on the discovery part and knew i needed to either stop doing it or build something to do it for me.

so i built the monitoring and scoring layer. took a few months of nights and weekends. now it runs continuously, surfaces the posts worth acting on, gives me enough context to reach out without being weird about it.

the thing i did not expect is how much easier it is to sell a product that came from something you were already doing manually. every conversation i have with a potential user i can just describe the exact workflow i had before building it and they immediately get it. no pitch needed really.

the from scratch ideas i have built previously always had this awkward explanation layer. this one does not.

leadline.dev if you are curious what it actually does

r/Adulting Dry_Lobster_50

I put a reminder on my phone to “Study” and it’s stressing me out 😂

I need to review some notes from a workshop I attended. It’s not academic and is linked to a hobby. However, when the reminder keeps popping up I get stressed by the word, and it just reminds me of what I’m not doing… it’s obviously also ingrained in my mind and body as a word with negative connotations.

🤩😂

r/LocalLLaMA Only_Play_868

Building a local AI coding agent powered by Apple Intelligence

I know the 3B-parameter Apple Foundation Model (AFM) that's included in Apple Intelligence doesn't get much love, but it has the major advantage of coming pre-installed. My answer to "what could I use this for," was to build a local coding agent I called Junco.

It's clear that even with a custom adapter and tight feedback loops, the AFM is not well-suited for agentic coding. 4,096 tokens is not a lot to work with when you're generating code. That said, it was a fun experiment and I think Apple's next major OS will ship with a more capable model, at which point, this may very well be possible. It may not rival specifically fine-tuned models like Qwen Coder yet, but with better reasoning, instruction-following, and grammar-constrained decoding, it now seems possible.

r/LocalLLaMA saint_0x

run local inference across machines

mesh is a distributed protocol for running large models locally across devices

the idea is the control plane hosts local lan pools, which shard the model across the member ring and credit members proportionally based on compute contributions

it’s still rough, but has support for metal, cuda, and pure cpu (can interoperate with one another)

i successfully ran a model locally on lan across both my metal m3 and my intel air :)

https://github.com/saint0x/mesh

r/ForgottenTV DaniJ678

Mistresses (2013 - 2016)

Watched two episodes; the show has some juicy drama. The only people I know from the cast are Alyssa Milano from Charmed & Insatiable and Bret Tucker from Station 19, which is ironic because Ben from Grey's Anatomy is also in the cast, and he slept with Alyssa Milano's character, so there's a Grey's Anatomy connection there. Did you watch the show?

r/midjourney Dropdeadlegs84

The Overseers

r/onejob GachaAriaTT

Apparently there used to be a red one in the blue, but that one got replaced. 1681 has been like this for literally YEARS.

r/LocalLLaMA dremski

can i integrate llm to do tasks in my pc?

im trying to make my llm into a personalised ai agent

im trying to achieve specific goals which ive listed

  1. (option) if its possible i would like for it to have a memory like chat gpt
  2. control volume
  3. open apps ( maybe play music on spotify? )
  4. add tasks in my calendar or notion or wherever
  5. it being able to remind me about upcoming events
  6. make timeslots automatically for the tasks ive assigned

r/ChatGPT Haunting_Ad_4179

How do I stop ChatGPT from spewing info?

I ask it one question and I get a wall of text filled with emojis and follow-up suggestions, large empty spaces from line spacing, and tons of bullet points. This is hard to read on mobile.

Is there a way to make it stop and go one line at a time, or wait for my responses/follow-ups first? For example, I tell it to use first-party sources / the internet each time and it doesn't follow instructions.

r/metaldetecting arrillestradehouse

Any tips on detecting this old smokestack once the snow melts?

I came upon this old chimney tucked away in the woods and thought it would be a good place to detect. I think an old cabin that was here in the '50s burnt down.

r/therewasanattempt seeebiscuit

to have National Forest conservation

r/ClaudeCode Perfect_Function2680

Claude Code conversation gets hidden in terminal?

Hello guys,

I’ve run into this issue several times in the Claude Code terminal.

Sometimes the conversation/output seems to get hidden or rendered incorrectly in the terminal. Restarting the terminal fixes it temporarily, but after a while the issue comes back again.

Has anyone else experienced this?
Is there any permanent fix for it?

https://preview.redd.it/gu83xvydz5ug1.png?width=2386&format=png&auto=webp&s=2d028bd956573a91adcb1ca646eb57292e76ad80

r/oddlysatisfying DANNYG548

This spiral aloe, seen at the Eden Project, Cornwall, UK

r/Art Mountain-Store9929

Left Hand Drawing Portrait, Mike Cressy, Colored Pencil on drawing paper, 2026

r/SipsTea Hot_Fuzz_988

I gotta Spend some

r/StableDiffusion Automatic-Algae443

Various types of slop 😂

r/explainlikeimfive Easy_Pattern1609

ELI5: Why do we feel more tired when it’s hot and humid than when it’s cold?

r/AI_Agents FibonacciSpiralOut

Tutorial: How to build a simple text-to-SQL agent that can automatically recover from bad SQL

Hi agent builders,

A lot of text-to-SQL examples still follow the same fragile pattern: the model generates one query, gets a table name or column wrong, and then the whole thing falls over.

In practice, the more useful setup is to leverage a proper tool-calling agent loop. You let the model inspect the schema, draft a query, execute the SQL, read the actual database error, and try again. That self-correcting feedback loop is what makes autonomous SQL agents much more usable once your database is even a little messy.

In the post, I focus on how to structure that cognitive loop using DuckDB, MotherDuck, and LangChain. It covers how the agent actually thinks through schema inspection, how to reduce hallucinated SQL with dialect-specific system prompts, and what production guardrails, like enforcing read-only connections and setting up human-in-the-loop approvals, actually matter if you want to point these agents at real data.

Link to the full blog in comments. Would appreciate any comments, questions, or feedback!
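
The inspect-draft-execute-retry loop is easy to sketch without any framework. Here's a minimal version using sqlite3 in place of DuckDB, with the model's redrafting stubbed out as a list of candidate queries (the real setup uses DuckDB, MotherDuck, and LangChain; treat this as the shape of the loop, not the implementation):

```python
import sqlite3

def run_with_retry(conn, drafts, max_attempts=3):
    """Execute candidate SQL drafts in order, keeping the last database error.
    This is the agent's recover-from-bad-SQL loop: in a real agent, each error
    is fed back to the model, which generates the next draft."""
    last_error = None
    for sql in drafts[:max_attempts]:
        try:
            return conn.execute(sql).fetchall()
        except sqlite3.Error as e:
            last_error = str(e)  # a real agent shows this to the LLM
    raise RuntimeError(f"gave up after {max_attempts} attempts: {last_error}")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'ada')")

# First draft hallucinates a table name; the "corrected" second draft succeeds.
drafts = ["SELECT name FROM user", "SELECT name FROM users"]
rows = run_with_retry(conn, drafts)
```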

r/therewasanattempt BaBa_Con_Dios

To have humanity

r/n8n Professional_Ebb1870

if you're trying to learn n8n properly, stop watching tutorials and start here

I've been running n8n builds for agencies and founders for a while now. here's what actually moves the needle vs what just feels like progress

learn by building for someone else

the fastest way to actually learn n8n isn't another course. it's taking on one real job, even if it's cheap, for a friend or even for yourself (i'd recommend the latter)

tutorials give you the canvas. a real brief gives you the problem. the problem is where you actually learn

when someone's paying you and expects something working by friday, you figure out the webhook node, the error handling, and the retry logic way faster than any course would have taught you

the three nodes that break every beginner

if you're stuck, this is probably where:

HTTP Request node - most APIs don't behave the way tutorials show. pagination, auth headers, response parsing. learn this one properly and everything downstream gets easier

IF node - people underestimate how it handles missing or empty data. an expression that works when a field exists will throw an error when it doesn't instead of routing to the false branch

Code node - don't ignore it. you don't need to be a developer. learning enough javascript to transform data, handle arrays, and format strings will save you more time than any visual node setup
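
The missing-data failure mode called out for the IF and Code nodes comes down to one habit: default every field before you read it. Sketched here in Python for illustration; in n8n's Code node the same guard would be written in JavaScript:

```python
def normalise_item(item: dict) -> dict:
    """Defensive transform: default every field you read, so a missing or
    empty value routes cleanly instead of throwing mid-workflow."""
    email = (item.get("email") or "").strip().lower()
    return {
        "email": email,
        "has_email": bool(email),        # what an IF node should branch on
        "tags": item.get("tags") or [],  # never let a missing array crash a loop
    }

ok = normalise_item({"email": " Ada@Example.com ", "tags": ["lead"]})
missing = normalise_item({})             # no fields at all -- still routes fine
```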

what most tutorials get wrong

they show you the happy path. n8n works fine when everything is there. real workflows break because:

  • an API changes response format and nobody notices until something downstream fails silently
  • the workflow runs at 3am and an external service is down and there's no error branch to catch it
  • data arrives in a shape you didn't expect and the node that was supposed to parse it just silently skips

the debugging part is where you actually learn n8n, not the building. Synta does help a lot with this, but don't leave all of the work to it. try to understand what it's doing and why, it will help you long term.

the real shortcut

build one workflow that does something genuinely useful for you or a client. then break it deliberately. then fix it. then add one more step. then break that. then fix it

that's the loop. the tutorials can't replicate it

what actually matters for learning

  • focus on triggers, error handling, and data transformation first. everything else builds on those three
  • learn to read the output of every node before you connect it to the next one
  • get comfortable with pinning test data. it's the single most useful debugging feature n8n has
  • build habits around naming nodes properly and adding notes. you'll thank yourself in three months

the fastest learner I ever worked with spent zero time on courses and a few weeks just building and breaking things for actual clients. that's the education that compounds

r/LocalLLM Temporary-College560

Local AI with one GPU worth it ? (B70 pro)

Hi all, I currently use Perplexity AI to assist with my work (Mechanical Engineer). I save so much time looking up stuff, doing light coding/macros, etc. That said, for privacy reasons, I don't upload any documents, specifications, or standards when using an LLM online.

I was looking into buying an Intel Arc Pro B70 and hosting my own local AI, and I was wondering if it's worth it. Right now, when using the different models on Perplexity, the answers are about 85–90%+ correct. Would a model like Qwen3.5-27B be as good?

When searching online, some people say it's great while others say it's dogshit. It's really hard to form an opinion with so much conflicting chatter out there. Anyone here with a similar use case?

r/ImaginaryPortals I_Burn_Cereal

Daryah - The Beginning Part 1/3 by Claudiu Pana

r/automation Luran_haniya

finally fixed my LinkedIn engagement problem after months of wasted time

I probably wasted a solid two months manually scrolling LinkedIn every morning trying to comment on relevant posts before they got buried. My thinking was that consistent engagement would build visibility and eventually pull in leads. It did work, sort of, but it was eating like 90 minutes a day and the results were wildly inconsistent.

The real issue wasn't effort, it was timing and volume. By the time I found a post worth commenting on, it had already been up for 6 hours and the algorithm had mostly moved on. And I could only realistically hit maybe 10-15 posts a day before it became a full-time job. Not exactly a scalable workflow.

I started testing LiSeller about 3 months ago mostly out of frustration. The part that actually moved the needle for me was the 24/7 feed monitoring, catching posts in that early window when engagement actually compounds. It also filters by industry relevance so I'm not just spray-commenting on anything that shows up, which honestly was my concern going in. The AI-generated comments aren't perfect every single time but they're contextual enough that I edit maybe 1 in 5 rather than writing from scratch.

I'm not going to pretend the reply rate jumped overnight, but after about 6 weeks of consistent use I started seeing inbound connection requests from people I'd never reached out to directly. Could be coincidence but the timing tracks.

Curious if anyone else has tried automating the engagement side specifically rather than just connection requests, and whether you ran into any issues with LinkedIn flagging the activity.

r/NotMyJob uvero

The parenthetical says "(send text in Arabic)" in Hebrew

r/SideProject Interesting_Stop4105

I built a PC game fair-price calculator — it tells you if a game is worth buying now or if you should wait

I've been burned too many times buying games at "sale" prices only to see them drop 50% a few months later. So I built BuyOrPass.gg to solve this for myself.

What it does:

It calculates a fair value for each PC game and gives it a verdict: ✅ Buy, ⏳ Wait, ⚠️ Overpriced, or ❌ Pass. BUY means the current price is at least 15% below fair value, not just "on sale", but genuinely below what the data suggests it's worth.

How the fair value is calculated:

The formula combines:

  • Historical low price and launch price

  • How old the game is (newer games decay more slowly)

  • Steam review scores — recent vs all-time weighted separately

  • Normalised quality score (50% score → 0, 100% → 1.0, so mediocre games aren't inflated)

  • How long the game takes to beat (HowLongToBeat)

  • Review count as a hype proxy
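
A rough sketch of how those inputs could combine; the decay rate and weights below are invented for illustration (the post doesn't publish the exact formula), but the quality normalisation and the 15%-below-fair-value BUY rule follow the description:

```python
import math

def quality_norm(review_score: float) -> float:
    """Map a Steam review score to 0..1: 50% -> 0, 100% -> 1, as described."""
    return max(0.0, (review_score - 0.5) / 0.5)

def fair_value(launch_price, historical_low, age_years, review_score):
    """Illustrative fair-value blend: anchor between historical low and launch
    price, decayed with age, scaled by normalised quality. Weights are invented."""
    decay = math.exp(-0.3 * age_years)  # newer games decay more slowly
    anchor = historical_low + (launch_price - historical_low) * decay
    return anchor * (0.5 + 0.5 * quality_norm(review_score))

def verdict(current_price, fair):
    # BUY means the current price is at least 15% below fair value, per the post.
    return "BUY" if current_price <= fair * 0.85 else "WAIT"
```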

Stack: Next.js 15, TypeScript, PostgreSQL on Neon.tech, Prisma v7, Vercel, GitHub Actions for cron scheduling. Price data from IsThereAnyDeal API.

Current state: ~140 games, updated twice daily, EUR and USD supported.

Some current examples:

  • Dead Space (2023) → BUY at €8.99 (fair value €37.90)

  • Kingdom Come: Deliverance II → BUY at €27.00 (fair value €61.37)

  • Old World → BUY at €3.99 (fair value €20.48)

  • Elden Ring → Wait (close but not quite there)

https://buyorpass.gg

Happy to answer questions about the formula, the stack, or anything else.

r/Weird Animalcookies13

Anyone ever seen an ice cube grow a spire up out of the ice cube tray?

I went to fill my water bottle with ice at work and my ice cube tray had grown a little spire or pinnacle in one of the ice cubes. I have never seen something like this…. It sorta defies gravity…. Anyone know what caused this to happen? I assume it was some kind of gas expanding or trying to escape from the water.

r/toptalent Rainmintt

These are the moments I live for! ❤️ (source link in description)

r/Futurology neevisaqt

Do you think flying cars will actually become normal in our lifetime?

I’ve been seeing a lot more stuff about flying cars recently and it sounds cool, but I can’t tell if it’s actually realistic or just hype.

Like even if the tech gets there, I feel like things like safety, regulations, and just people driving/flying in general could make it kinda chaotic 😭

Do you think it’ll ever actually become normal or just stay something for a small group of people?

r/LocalLLaMA thejacer

Agentic work crashing my llama.cpp

I've been using llama.cpp to run chatbots for a while now, everything works great. They have access to an MCP server with 22 tools which the chatbots run without issue. But when I try to use OpenCode it crashes my llama-server after a short period. I've tried running with -v and logging to file but it seems to just stop in the middle of a generation, sometimes I have to reboot the machine to clear the GPU. I've been trying to figure out what's happening for a while but I'm at a loss. Any ideas what I should check?

r/Wellthatsucks Aggravating_Money992

YouTube has begun rolling out 90 second unskippable ads to their TV app

r/WouldYouRather Realistic_Job3412

Would you rather lose an eye or an arm?

One arm (you choose which) or one eye (you choose which).

Bonus: Both arms or both eyes

r/creepypasta MorbidSalesArchitect

I Don't Let My Dog Inside Anymore...

r/oddlysatisfying dairymilk_silk

Making 5 pasta shapes

Source: aripastaclub

r/SipsTea MotherClub2214

Makes sense

r/DunderMifflin Sensitive_Ad4977

Abbott Elementary 🤝 The Office

Nothing… just my 2 favorite characters from my 2 favorite shows getting bullied and crying 😭

r/LocalLLaMA whoami-233

Hardware suggestion for larger models

Hey guys,

So I want to look into a potential setup that won't make me go bankrupt.

I want to ideally fit Qwen3.5 397B MoE at Q4 or Q6 and support 2 users with large context (100-200k).

Ideally 20+ tg/s at big context so it's usable.

Use case is agentic use over Claude Code.

I am thinking of a Mac Studio or AMD Strix Halo setup.

Ideally the setup should be no more than 10k USD if possible.

Would love to hear some feedback from people with similar setups.

r/Adulting tuotone75

It’s crazy to think that the amount of time since the Covid lockdown is about the same as the amount of time I spent in elementary school.

r/AI_Agents rahat008

a2a protocol

I’m curious, what’s the latest adoption of the A2A protocol? I haven’t noticed any updates from developer communities, but it seems like enterprises are always buzzing about it. Are you using the A2A protocol in your system?

r/Adulting Odd-Opportunity-1179

Are your kids like this or just mine?

r/ChatGPT Prestigious-Tea-6699

Transform job descriptions into standout resumes. Prompt included.

Hello!

Are you struggling to create a tailored resume that catches recruiters' attention based on a job description?

This prompt chain helps you extract key information from job postings and generate a focused resume, all while crafting a compelling outreach note for recruiters. It's like having a virtual assistant that crafts everything you need to stand out!

Prompt:

VARIABLE DEFINITIONS
JOB_DESCRIPTION=full text of the target job posting
CANDIDATE_PROFILE=brief overview of candidate’s relevant experience (if blank, assume competent mid-level professional)
OUTPUT_STYLE=preferred bullet style or length (e.g., "•", "–" or default dash)
~
You are a talent analytics specialist. Your task: extract the most important language and requirements from the JOB_DESCRIPTION. Step 1 Summarize the overall role in one sentence. Step 2 List the 8-12 most critical skills/competencies exactly as phrased in the posting. Step 3 Capture 5-7 strong action verbs used (gerunds or imperatives). Step 4 Note any stated metrics, KPIs, or quantitative targets (if none, write "None stated"). Return output under the headings: ROLE SUMMARY / KEY SKILLS / ACTION VERBS / METRICS. Verify all content is pulled directly or paraphrased faithfully from JOB_DESCRIPTION.
~
You are an elite resume writer. Using the previous analysis plus CANDIDATE_PROFILE, create three resume bullet sets that align tightly with the job requirements. Guidelines: 1 Use the OUTPUT_STYLE for bullet markers. 2 Bullet Set A – SKILLS: 5-7 concise bullets that echo KEY SKILLS and ACTION VERBS. 3 Bullet Set B – ACHIEVEMENTS: 3-5 bullets quantifying impact; weave in METRICS when relevant (invent realistic numbers if none are provided, clearly marking invented metrics with "~est" so the user can adjust). 4 Bullet Set C – LEADERSHIP: 3-4 bullets showing team, project, or strategic leadership; incorporate at least two ACTION VERBS. 5 Keep each bullet under 25 words, begin with a strong verb, avoid personal pronouns. Return labeled sections: SKILLS / ACHIEVEMENTS / LEADERSHIP.
~
You are a corporate recruiter crafting outreach notes. Write a 60-80 word note that mirrors the tone and key phrases of the JOB_DESCRIPTION while positioning the candidate (per CANDIDATE_PROFILE) as a near-perfect fit. Instructions: 1 Open with energy (e.g., "Exciting opportunity…"). 2 Reference 2-3 KEY SKILLS from earlier. 3 Use at least one ACTION VERB found previously. 4 Close with a clear call-to-action (reply, schedule call, etc.). Label this section RECRUITER NOTE.
~
Compile all prior outputs into a single deliverable in the following order: 1 SKILLS 2 ACHIEVEMENTS 3 LEADERSHIP 4 RECRUITER NOTE Ensure headings are in all caps, bullets follow OUTPUT_STYLE, and spacing is clean.
~
Review / Refinement Carefully reread the compiled document to confirm: a) bullets reflect JOB_DESCRIPTION language, b) metrics are realistic or marked "~est", and c) word counts and formatting meet specs. If any issue is found, revise that portion and present the corrected final version.
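A chain like the one above can be driven programmatically by splitting on the standalone `~` separators and feeding each step's answer into the next. A minimal sketch (the `ask` callable stands in for whatever LLM API you use; none of these function names come from the original prompt-chain tool):

```python
import re

def split_chain(prompt: str) -> list[str]:
    """Split on standalone '~' separators (whitespace on both sides),
    so literal tokens like the "~est" marker survive intact."""
    return [s.strip() for s in re.split(r"\s~\s", prompt) if s.strip()]

def fill_variables(step: str, variables: dict[str, str]) -> str:
    """Substitute VARIABLE names (e.g. JOB_DESCRIPTION) with their values."""
    for name, value in variables.items():
        step = step.replace(name, value)
    return step

def run_chain(prompt: str, variables: dict[str, str], ask) -> str:
    """Run each step with the previous answer prepended as context."""
    answer = ""
    for step in split_chain(prompt):
        text = fill_variables(step, variables)
        answer = ask(answer + "\n\n" + text if answer else text)
    return answer
```

With the real chain, `variables` would carry JOB_DESCRIPTION, CANDIDATE_PROFILE, and OUTPUT_STYLE, and `ask` would wrap an actual model call.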
r/SideProject Melodic-Funny-9560

Got my first 20 waitlist users… but struggling with consistent marketing

I’ve been building an open-source dev tool and managed to get 20 people on the waitlist (not friends, random users) for the cloud version.

That felt like a good signal.

But now I’m realizing I have no idea how to market it consistently. I have experience building a website and scaling it to 10K views/day, but this time it appears to be a totally different game.

I expected low traction for the cloud version since it’s not ready yet, but I was hoping the GitHub repo would get more interest from contributors. That hasn’t really happened so far.

I’ve tried posting on X and Reddit; sometimes a post gets traction, but most just get ignored or downvoted. It feels very inconsistent. I’ve already lost hope with X (it’s not for beginners).

What’s frustrating is that I genuinely believe the tool is useful and different from a lot of the AI-slop stuff being built right now.

I always knew that distribution is harder than building, and everyone has now realised that, especially in the age of AI slop.

I’d really like to know: how are others here approaching this stage?

r/SideProject verofounder

I built a platform that runs your ads for you

Hi guys!

I know a lot of people are struggling with GTM, and one way I’ve found over the years that works is to keep running experiments, and once you nail one approach, you put all your eggs in that one basket.

As an early-stage founder and builder of apps, I’ve found it super easy to have agents do the dirty work. I give them direction, while they set up experiments for me.

That’s why I built pop.new, for early-stage founders.

If you're looking to experiment with ads, but just didn't have energy or time, it's a product made for you.

We've built a native MCP so that the agent can interact with Meta ads reliably. We also don't use your typical image-gen; we have our own text-to-ads setup using good ol' HTML+CSS.

Let me know if you are interested in an early access - would love for you to try and give us feedback!

r/LocalLLaMA GWGSYT

The Mythos Preview "Safety" Gaslight: Anthropic is just hiding insane compute costs. Open models are already doing this.

To save you from digging through their 244-page system card, I highly recommend checking out this video breakdown [Link:https://www.youtube.com/watch?v=PQsDXTPyxUg]—it perfectly breaks down why the "safety risk" excuse in my meme above is really just about astronomical compute costs.

Anthropic is heavily pushing the narrative that Claude Mythos Preview is a god-tier model that is simply "too dangerous" to release because it can find zero-days in OpenBSD. But if you swipe to the second image (page 21 of their system doc), the illusion falls apart.

They didn't just ask Mythos a question. They used uncensored checkpoints, stripped the guardrails, gave it extended thinking time, strapped it to domain-specific tools, and brute-forced it thousands of times at a massive compute cost (reportedly ~$50 per run). The single-shot probability of it finding a bug is likely fractions of a percent.

This isn't a "dangerous" model; it's just an unscalable API cost wrapped in a PR campaign. We are already seeing this exact same agentic scaling in the open-source and local communities:

  • GLM-5.1: Z.ai’s latest open model is already pulling off 600+ iteration optimization loops locally via OpenClaw. It doesn't quit; it just keeps grinding.
  • Kimi 2.5: Moonshot’s MoE model literally has an "agent swarm" mode that spins up 100 helper agents executing 1,500 parallel tool calls.

Even in the closed-source space, if you drop OpenAI's GPT-5.4 into the Codex app on the xhigh reasoning tier and let it run autonomously for 8+ hours with full codebase access, it is going to brute-force its way to 20 critical bugs while you sleep.

Finding zero-days in 2026 is a factor of agentic tooling and massive compute budgets, not a magical leap in raw model intelligence. Don't let Anthropic's "extinction-level threat" marketing convince you that the open-source community is falling behind.

r/VEO3 chinesecheeseseller

Getting errors

Anyone has issues? Vids on fast keep going to 99% and then an error that the vid cant be made.

Lite seems to work just fine.

r/LocalLLaMA Deepans_S

Built a self-modifying AI agent on Colab T4 — it rewrites its own tools when they fail

Self-modifying AI agent that rewrites its own code when it fails. Multi-domain (research/coding/OS), quantum VQC reward, PPO training. Runs free on Colab T4.

r/homeassistant PatientEither6390

I built a CLI that launches Netflix episodes by name on any smart TV — fills the gap HA’s media_player and the new IR integration can’t (deep linking into streaming apps)

media_player.play_media on my Samsung runs without error and does absolutely nothing. Sound familiar?

I've been through the Samsung "we need to talk" thread, the 12-page Android TV deep linking guide where people decompile APKs to find app URIs, and the LG webOS going "unavailable" after every HA update. The new 2026.4 IR support is a step forward, but it still can't do one thing: launch a specific show by name.

So I built stv — a Python CLI that does this:

stv play netflix "Frieren" s2e8

It resolves "Frieren season 2 episode 8" to a Netflix content ID, builds the deep link, and sends it directly to the TV. About 3 seconds from enter to playback. Works on LG webOS, Samsung Tizen, Roku, and Android TV / Fire TV.

How it's different from HA's built-in TV integrations:

  • Power, volume, channel: HA media_player / IR ✓, stv ✓
  • Launch Netflix by episode name: HA ✗ (silently fails or needs hardcoded app IDs), stv ✓
  • YouTube search + play: HA ✗, stv ✓
  • TV state feedback: HA ✗ (IR is fire-and-forget), stv ✓ (WebOS/Tizen API)
  • Multi-TV sync: HA ✗, stv ✓ (stv --all play ...)
  • Works without HA: HA ✗, stv ✓

Not trying to replace HA — I use HA myself. stv fills the content-navigation layer that HA's device-level integrations don't cover. You can even call stv from HA automations via shell_command.

The resolver is a single curl request — Netflix server-renders episode metadata in
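The flow stv describes (name -> content ID -> deep link -> TV) can be sketched roughly like this; the catalog entry and content ID below are invented placeholders, not Netflix's real data or stv's actual code:

```python
# Hypothetical local index: (title, season, episode) -> Netflix content ID.
CATALOG = {
    ("frieren", 2, 8): "81840344",  # placeholder ID, for illustration only
}

def resolve(title: str, season: int, episode: int) -> str:
    """Map a show title plus episode to a Netflix content ID."""
    key = (title.lower(), season, episode)
    if key not in CATALOG:
        raise KeyError(f"unknown episode: {title} s{season}e{episode}")
    return CATALOG[key]

def deep_link(content_id: str) -> str:
    """Netflix watch pages follow the /watch/<id> scheme."""
    return f"https://www.netflix.com/watch/{content_id}"

# stv would then hand deep_link(resolve("Frieren", 2, 8)) to the TV's
# launch API (webOS / Tizen / Roku ECP), which is the platform-specific part.
```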

r/ClaudeAI n8dog

I built a Safari sidebar for Claude - using Claude

Hey all. Was sick of the copy-paste loop: select text, switch to claude.ai, paste, ask, go back, repeat. Since Anthropic doesn't make a Safari extension, Claude and I built one. AlliHat.

It opens Claude in a Safari sidebar while you’re already on a page. It reads the page and anything you may have already highlighted. There’s also an Agent mode to automate driving a website. I thought I’d use Agent mode more, but it’s handy on the rare occasions a site makes me go through long hoops to fill out some form or take some tutorial.

It's free to try for a week and uses your own Claude API key.

AI definitely has me both fearful about job loss and optimistic about these powers we now have. I'm not a macOS developer. I've been building web software for decades, but native Safari extensions were new territory. Claude helped me work through the Safari Web Extension APIs, the Xcode project setup, and so many details about the App Store submission I kept screwing up.

I'd love to hear if this is useful for other Safari users: AlliHat on the Mac App Store

r/Adulting beruhigen_

Is 5'6 height not good for a male?

same as the title

r/leagueoflegends UnseenBIade

Dodge timers need to be less harsh

There are occasions where someone will pick an obviously troll, or borderline troll, pick. Especially in the support role, where you are left with no frontline or utility.

Playing these games out will almost always result in a loss. Obviously every game is technically winnable, but you would have to carry incredibly hard. The game's matchmaking system should be fair, right? Everyone should be relatively equal in skill and not smurfing, meaning you are at an incredibly high chance of losing.

So the options are to dodge, which will quickly lock you out of queueing for up to 30 minutes, or even much longer.

Or wasting, IF YOU ARE LUCKY, 15 minutes of time and losing full LP.

I propose heavily lessening the punishment of dodging, as it really does not take that much time to wait a little longer in queue. IT IS WORTH IT TO GET QUALITY GAMES. And you don't waste your time and energy on what is probably a loss.

It should be something like 1st dodge is free, 5 mins, 10 mins, 15 mins....

r/ClaudeCode NCMarc

Overloaded, again. Is Claude running slower now?

Things claude code needs to fix:

  • API Overloaded multiple times throughout the day
  • Give priority to tiers. Max $200 plan should get priority over lower tiers.
  • Stop adding new features that add token usage (like Remote Desktop Control) until you get your scale issues fixed.
  • It appears Claude now pauses for up to 30-90 seconds before doing anything. I think this is masking the API overloaded issues; I just notice it's much slower every time you send a request.
  • Claude code thinks it can't connect to servers via ssh even though I've created keys. I have to remind it each time that it's very capable of doing so and then it does it. Even if I tell it in memory.
  • Claude is way more chatty about what it's doing. Confident this is adding to token usage. After copying a single file, I got this message which is overly verbose info: Deployed. The exit code 1 is just grep -c returning 1 when it finds zero matches — that's the confirmation: 0 xxxxxxx references in the prod go.sh. Size dropped from 11134 → 11067 bytes (the removed line + blank line), and the timestamp updated to today.
  • I asked Claude to create an agent (I have dangerously-skip-permissions on) and it still asks for permission.

r/SideProject AdventurousBowler740

Hate reading the news? Read just a single number instead (0 to 100) to understand the sentiment of your favourite stock & crypto!

r/comfyui ZookeepergameLoud194

Character identity drift in i2v

Hi folks

I have seen a ton of videos with near-perfect character consistency (specifically without a character LoRA), but whenever I try to use an i2v workflow (I've tried flux-2-klein, wan2.2, and such), the reference character morphs more or less. ChatGPT argued that there are flows that implement ReActor to continually inject the reference image into every frame generated, but I don't know if this is how people make these videos. What can you recommend?

Thanks in advance.

r/AI_Agents ExoticYesterday8282

Are there any truly useful AI tools or OpenClaw skills specifically for teachers?

The main work includes:

  • Setting teaching objectives (literacy / reading / writing)
  • Designing lesson plans (introduction, explanation, practice, summary)
  • Preparing teaching materials (PPT / blackboard design)
  • Analyzing exemplary essays
  • Grading exams

r/Adulting Aggravating-Idea-947

Looking for someone to practice a short gratitude exercise with (7–10 mins)

Hey everyone,

I’ve been following a YouTube channel called Art of Accomplishment, and it’s been really helpful for me.

There’s a simple exercise they suggest that’s meant to be done with another person, and I’d love to try it with someone here.

What it involves:

It takes about 7–10 minutes

We take turns expressing gratitude

Specifically around areas where we feel we’re “not enough” or lacking in some way

It’s a bit vulnerable, but that’s kind of the point.

If you’re open to trying something like this (even just once), feel free to comment or DM me. Totally okay if multiple people want to join in as well.

Appreciate it 🙏

r/therewasanattempt mindyour

To do a Karate Kid move.

r/LocalLLaMA clem59480

Hugging Face launches a new repo type: Kernels

r/Weird FloorFickle371

What was this man doing?

r/SideProject OneDot6374

Day 76/100

Day 76 of my #100DaysOfIoT challenge — built a bidirectional ESP-NOW system on two ESP8266 nodes.

Each node reads DHT11 temperature & humidity, sends it to the peer, and receives + displays the remote node's data — all peer-to-peer, no router needed.

Interesting part: one node uses SSD1306 (0.96") and the other uses SH1106 (1.3") — different display controllers but same logic.

Code + README on GitHub 👇

https://github.com/kritishmohapatra/100_Days_100_IoT_Projects

Day 76/100 ✅

r/AI_Agents agentrsdg

Created a simple plug and play guide to create E2E QA agents for agents

When researching and developing an agent system using agentic coding, there is a lot of interaction based testing.

Now, since I do not want to keep testing manually (too early for evals at this point) and I did not want to use OpenClaw (too heavy, and I have to test and build multiple systems), I built a QA agent which integrates with and interacts with the agent system itself, testing it end-to-end.

Then I created a guide, plugged into another system and spawned a QA agent there. Here is the exact guide I used myself, feel free to download and use in your agentic coding pipelines.

(links to gist and example implementation video in the comments)

Let me know what approaches you guys use.

r/AlternativeHistory OpeningBeginning1893

Why did Israel Try to Assassinate Bush (Full Documentary)

r/ClaudeAI Sea-Manufacturer-892

I built an OLED-friendly Claude usage monitor widget (open source, Windows)

I've been using Claude MAX daily and wanted a desktop widget to track my session/weekly usage without switching tabs. I found SlavomirDurej's open-source Claude Usage Widget and decided to fork it and add features I needed for my OLED monitor setup.

What it does:

  • Shows session and weekly usage with progress bars, countdown timers, and reset times
  • Sits on top of other windows as a small overlay

What I added:

  • OLED theme — pure black background with pixel shift to prevent burn-in
  • Glass mode — fully transparent, only the bars and percentages float on screen. Title bar and labels fade in on hover
  • Separate accent colors for session and weekly bars (7 color options each)
  • Window opacity slider
  • Toggle columns (elapsed, resets in, resets at) and rows (session, weekly) on/off
  • Small Claude logo option next to the session bar

Everything is configurable from the settings panel.

Built with Electron, runs on Windows. Simple setup — clone, npm install, npm start. Or build a portable .exe.

GitHub: https://github.com/kucharko/OLED_Claude_Usage_Widget

Based on the original project by SlavomirDurej — full credit in the repo.

Feedback welcome, especially if you're on an OLED display and have ideas for better burn-in prevention.

https://preview.redd.it/3khq46v8v5ug1.png?width=582&format=png&auto=webp&s=d1a3166b85c03a494928e83cfcaf88096c6d114f

r/SideProject Pretend_Shelter_1906

Internship pays me less than what I make from my side thing while being at Tetr college… kinda confused now

I am currently a student at Tetr, and I recently took up an internship thinking it's the "right" thing to do. But weird situation: I've also been running a small side thing (basically helping a couple of people with projects/work), and it's been paying me more than the internship.

Like, not even close. What's making this more confusing is that my college actually pushes us to build stuff alongside studying, so this isn't even some random hustle that I'd need to leave college to do… it's kind of encouraged.

And honestly, that side thing feels way more real, with actual money, actual outcomes, things breaking, figuring stuff out. But yeah, it's inconsistent and could go to zero anytime.

So now I’m stuck between:

1/ continue internship for “brand + structure + long-term value”

2/ double down on something that’s already working (but uncertain)

3/ do both alongside college until burnout kicks in.

people who’ve been in this spot, what did you prioritise and why?

r/SideProject Icy_Second_8578

building a scheduling tool, should recurring availability be fixed or flexible by default?

working on a crm/scheduling tool for recurring-client businesses (coaches, consultants, trainers, etc.)

stuck on a ux decision: when someone sets their weekly availability, should it default to fixed repeating blocks, or encourage week-by-week adjustments?

fixed is simpler to build and use, but might feel too rigid. flexible is more realistic but adds ongoing maintenance for the user.

for those who've built scheduling features or use them heavily; what feels right? any tools that nailed this?
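A common middle ground is "fixed by default, flexible by exception": store one recurring template plus sparse per-week overrides, so users only touch the weeks that differ. A rough data-model sketch (names and structure are mine, not from any existing tool):

```python
from datetime import date

# Recurring weekly template: weekday (0 = Monday) -> [(start_hour, end_hour)].
TEMPLATE = {0: [(9, 12)], 2: [(14, 17)]}

def availability(template: dict, overrides: dict, day: date) -> dict:
    """Return the schedule in effect for day's ISO week: the fixed
    template, unless that week has an explicit override."""
    iso_week = day.isocalendar()[:2]  # (year, week number)
    return overrides.get(iso_week, template)

# Only the one busy week is stored as an exception;
# every other week falls through to the fixed template.
OVERRIDES = {(2026, 10): {0: [(9, 10)]}}
```

The template keeps the build simple; the sparse override map adds flexibility without forcing week-by-week maintenance on the user.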

r/nextfuckinglevel Reasonable-Cow-5002

A disgruntled employee started a massive fire at a 1.2 million-square-foot Kimberly-Clark warehouse in Ontario, California. (For not getting paid enough)

r/ChatGPT Tanish17_

I asked chat gpt about the ongoing Sam Altman abuse case

r/ClaudeAI indaco_dev

Human architect + Claude Code coder: built a Homebrew-compatible package manager in Zig

I wanted to stress-test the human-in-the-loop AI coding workflow on something non-trivial - not a web app or a script, but a systems-level macOS package manager written in Zig.

The result is malt: a single ~3 MB binary that downloads and installs packages from Homebrew's existing bottle infrastructure. Sub-millisecond cold start, streaming SHA256 verification, Mach-O binary patching, APFS copy-on-write clones, atomic installs with rollback.

How the workflow worked:

  • I did all design, architecture, and specification
  • Claude Code wrote every line of implementation
  • I reviewed and validated almost every commit before merging
  • Bug fixes and feature iterations went through the same loop

Cold install benchmarks (Apple Silicon):
- tree (0 deps): 0.014s (vs Homebrew 3.884s)
- wget (6 deps): 0.004s (vs 5.692s)
- ffmpeg (11 deps): 0.016s (vs 9.082s)

The parts that surprised me most:

Claude handled Zig's comptime, struct-aware Mach-O parsing, and the streaming download pipeline without much hand-holding. Where it needed the most direction was around edge cases in the atomic install protocol and getting APFS clonefile fallback right.

This is an experiment that works, not a Homebrew replacement. More an exploration of what a human architect + AI coder can produce on a real systems project.

Repo: https://github.com/indaco/malt

Curious how others draw the line between what they specify vs. what they let the AI figure out.

Happy to hear feedback and answer questions about the workflow or the Zig/systems side.

r/creepypasta Afraid_Many_3877

Mush

A BBQ at a typical suburban house in a cul-de-sac, filled with family and friends on the Fourth of July. The main cook, Josh, is around the grill talking and laughing with his friends, a vibrant day with bright colors streaking through the sky. Josh leaves to go in the house with a big smile on his face; he shuts the door to an empty, cold, dark house, the sounds of happiness now underwhelming. He opens the fridge for his diabetes medication, eyeing the cake in the back. He sits there observing until his watch beeps, reminding him to take his medicine; finally, the dull warm glow of the fridge light cuts off as the door shuts.

It’s a late night with a beautiful sunset below a sky full of fireworks. Josh found a girl and they started talking, laughing, enjoying. He wants to ask for her number; it takes him all night to work up to it, but eventually she declines in a respectful manner, though her face wrinkles up with true emotion. Laughing it off, Josh starts to clean up for the night. At the sink, washing the same plates he has seen too often: “omg, could you believe he asked for my number?” “I know, right? He looks like such a pig.” Laughter trailing off.

The night is cold and the HOA won’t allow fireworks after 10, so it’s quiet. Josh, after waking up in a cold sweat, heads downstairs. “A little piece won’t hurt, right? I won’t eat for a week; if I can have one piece, that will balance it out.”

In the bathroom, with a toilet full of stomach acid and puke all over the carpet. The sun is peeking through the blinds while birds sing, masked by blurry vision and anger swelling at his waterline, seen only by a cracked reflection in the mirror. “I am such a fat fucking disgusting creature.” As he wallows, he leaves the bathroom and starts towards the kitchen.

Rope lies on the floor, the fibers all frayed and tensionless. Josh’s eyelids flicker with a sense of disbelief, followed by a dull pain around his neck and dry gasping. Roof drywall surrounds him, covering him in a shame that can’t be measured.

A faint glow runs to escape from under Josh’s door; he books himself a “doctor’s” appointment in hopes of becoming the man he wants to be.

“THIS is really the place?” remarks Josh, walking through a wooden door on a dilapidated brick building, where he is greeted by a tall older black man. “Sign these forms, the doc will be out for you soon,” he starts and ends. Ten minutes pass in what felt like an hour, in a dark, stuffy, mildew-flavored room. A short stubby man walks out: “Josh, you’re up,” wearing a tight but genuine smile.

“HEY STOP, LET ME OUT, HELP HELP.” A voice ringing through concrete hallways insulated with dirt and rock. Metal chairs scratch the floor, voices in languages never spoken, followed by a quick flash of light. Piercing vibrations off the walls mimic the word “eat.” Josh looks down to shield his eyes and notices a black cube on a plate he knows too well. Before he can understand it all, another sharp, tight mimic returns; no one is in the room with him.

The convulsions of the mimic are the only thing remotely close to gauging time. Tears fall to the plate after every bite shoved in his mouth, blowing it everywhere with sounds loosely close to “I’M FULL,” over and over. The substance never seems to dissipate, and it has the consistency of oil with zero taste. Time is irrelevant to a suffering man who never seems to be heard, shoveling goo into his mouth in fear of disobeying the echoes off the walls. In the time Josh sat in the room, he grew to three times the size he walked in at. He could not go any longer, almost choking and throwing up, saliva and goo covering his clothes and stained skin. The fork hitting the metal table rings out to no one, questioning if it even happened.

Josh awakens in a ripped leather chair that smells of cigarettes and alcohol. The doctor walks over to him; Josh hugs him, they shake hands, and then depart.

Black sludge seeps from the chair into a jar the doc is holding, labeled and ready for shipment.

Over several months, Josh loses 127 pounds, and life turns into a pleasure everyone else had.

“Wow, look at you, man, I’m so jealous.” “Yeah, seriously, you have to tell us your secret.” “Just dedication,” replied Josh, followed by him excusing himself as his watch beeps.

r/Lost_Architecture Lma0-Zedong

Manhattan Hotel, 1910-1930s. Havana, Cuba

r/KlingAI_Videos sunshinescbd

SNL Love

Decided to play around with some humor videos- don’t come at me Reddit 😂

r/mildlyinteresting ciaran_the_goat88

Curved window makes me look like a strongman with very thick arm

r/ClaudeCode karmabiker

/nerftest proposed skill

Kind of humor. Kind of not. I'm considering a nerf-test skill that runs a prompt in Opus, then shoves the result out to Codex to get an opinion on the level of nerf for the model on any given day. Small sacrifice of tokens to know if I should just walk away for the day. Thoughts? Proposed prompts?

r/ClaudeCode No-Durian543

I built a Claude Code skill that forces you to stay healthy

We're building an AI health companion called Rox. Figured we'd start with the people who need it most. Developers.

So we made a Claude Code skill that nudges you to stretch, walk, hydrate, and breathe during coding sessions.

It tracks how long you've been locked in and prompts you before your body starts filing complaints.

Nothing fancy. No dashboard. No signup. Just a skill that watches out for you while you ship.

Check out Github: https://github.com/aman863/rox

r/Art Lazy-Platypus-3388

Golden Flowers, Arber Qamil Llugiqi, Acrylic/ Canvas , 2025 [OC]

r/homeassistant Ancient-Budget-8650

Homemade Zigbee gateway?

I’m aware you can buy Zigbee WiFi usb gateway/hubs for less than $30 but since I have a spare esp32-d board can I make it a Zigbee gateway with it somehow?

r/ClaudeCode alonsonetwork

Anyone else wake up to getting extra charges from Anthropic?

I just woke up and saw invoice emails for usage from Anthropic. Apparently my usage went to 100%, hit overages, and I was getting charged. This is on my personal account, which uses an OAuth login.

Then, on my business account (separate), which I use to set up API keys for apps and automations, I got hit against my API keys 4 times ($120). I disabled the key, obviously, due to the possibility of it being compromised... but this is the key dedicated to GH Actions, and NONE of my actions were running overnight. I also don't keep my keys lying around anywhere.

Just to be sure, I ripgrepped my entire repos folder to rule it out, including my .envs, and there are none that I store locally. I found 1 key in my bash history, but that's for an unrelated project (not charged), which rules out it coming from my computer... IDK if anyone else is experiencing this weird behavior all of a sudden?

Both my accounts incurred overages: oauth login and api key. Separate accounts. Different usage methods.

r/Unexpected Big-Boy-602

The reason to always stay alert

r/CryptoMarkets EDYJ94

LUSH AI project

Lush AI is a project by a team that created an amazing AI that creates models that work for influencers and for everyone who has a business and sells online.

It is also used as a top AI model on OnlyFans, and they just made another app for dating.

I am invested there and I have added on every dip.

Market cap is 2.7M, which is ridiculous, as it is going to be a 500M market cap once the bull market is here.

I have made a full article and research about it, and you can see it at the link:

https://x.com/i/status/1989700136478675138

Dyor

r/AI_Agents ContributionNo7923

I want to create an agent that could help me study

I couldn’t crosspost from other threads, so here it is:

Second. Brain.

I want to make a local (or not necessarily local) agent that could help me study. I saw some things about Ollama and Obsidian, but I need some opinions.

So I guess I need to feed this agent the things I need to study (besides setting it up in the first place), but how? And how do I make it efficient?

Today I'm starting to watch some tutorials, but I really need some opinions from people who have created similar agents before, and/or some links to things like GitHub repos that you think are useful for a beginner like me.

I want to make it answer questions, help me when I'm confused, and maybe have the agent create questions itself so I can check my knowledge. I also want it to be able to use that information "in a smart way": I want my agent to have some sort of "critical thinking," so it can give answers based on multiple entries from the books, not act as a simple search engine that gives a simple answer by matching exactly what I asked.

I also want to reduce costs as much as possible, so this could work purely locally without the need to pay a subscription. I don't have a high-end PC, but it's more than entry-level in terms of RAM and video card.

Do I need Ollama and Obsidian? Or just Claude?

Edit: I got about 2000 pages I need to feed it. Is that a problem?

TL;DR

How to make a Claude agent, feed it a few books, and ask it questions from the books. Please share some opinions/tutorials/GitHub links.
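For the "answer from multiple entries, not a simple search" goal, the usual pattern is retrieval: chunk the books, score chunks against a question, and hand the top few to the model (local via Ollama, or Claude). A naive word-overlap sketch to get a feel for it; a real setup would swap in embeddings:

```python
def chunk(text: str, size: int = 500) -> list[str]:
    """Split text into roughly size-character chunks on word boundaries."""
    words, chunks, cur, n = text.split(), [], [], 0
    for w in words:
        cur.append(w)
        n += len(w) + 1
        if n >= size:
            chunks.append(" ".join(cur))
            cur, n = [], 0
    if cur:
        chunks.append(" ".join(cur))
    return chunks

def top_chunks(question: str, chunks: list[str], k: int = 3) -> list[str]:
    """Rank chunks by how many words they share with the question."""
    q = set(question.lower().split())
    return sorted(chunks,
                  key=lambda c: len(q & set(c.lower().split())),
                  reverse=True)[:k]

# The selected chunks become the model's context, which is what lets it
# combine several passages instead of echoing a single search hit.
```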

r/ClaudeCode Tistouuu

A very simple but very effective prompt pro-tip

I've noticed a significant improvement in quality, as well as way less friction, just by asking "Do you understand what I mean / what I'm trying to do?" at the end of my prompts (well, not every prompt obviously, just when it feels right: longer or more complex ones; no need for short/simple ones).

It seems to force the model to reflect and rephrase the instructions before starting. This allows me to catch any misunderstandings or gaps immediately. Most of the time, though, there aren't any, simply because this forced reflection step makes the model much more focused and accurate on the first try.

It also helps me validate my own logic : if the model is confused about what I want, it’s usually a sign that my approach is too convoluted or not clear enough. It’s a great mirror for your own thinking.

And... voila. Nothing fancy, nothing crazy. Works great for me, maybe for you too.

r/SideProject youniqmail_official

YouniqMail: Your email (client). Your rules. Your privacy. No more compromises: the email client you control – customizable, private, modern.

YouniqMail is a local-first modern email client for Windows, macOS, and Linux. Your emails, contacts, and settings are stored locally: never routed through third-party servers, never synced to someone else's cloud. YouniqMail doesn't even have a server, so you don't have to trust YouniqMail about privacy.

Connect your existing email accounts via IMAP/SMTP and manage them all in one clean interface. YouniqMail encrypts your local database and gives you full control over your data without sacrificing usability.

It is highly customizable so you can adapt it to your email workflow.

Core concepts:

  • Local-first architecture, your data stays on your machine
  • Encrypted local storage
  • Multi-account support (IMAP/SMTP)
  • Clean, distraction-free interface
  • Highly customizable
  • No tracking, no ads, no cloud dependency
  • Cross-platform (Windows, macOS, Linux)

Built for people who want a modern email experience without giving up their privacy.

r/coolguides The-Traveler-

A cool guide guide for amateur coffee drinkers.

r/Art Orca_123123

Pyramide, Steffen Schneider, oil on board, 2026 [OC]

r/nextfuckinglevel Sharp-potential7935

Even after so many years the responsiveness of PSP UI is unmatched

r/SipsTea thornwyne

German-Argentina

r/ClaudeAI Illustrious-Tomato90

Inattentive ADHD + A true "second brain" + Mobile access - Dispatch Questions

Problem Statement:
I forget things, sometimes even from 15 minutes ago. I struggle to start things, I struggle to prioritise and keep on track... everything seems equally important. All classic ADHD symptoms.

I'm setting about using AI (I've tried Gemini, ChatGPT, and now Claude) to help me in this regard. I started with a Claude Chat Project with instructions casting the AI as an ADHD expert: keeping me on track, pulling in my calendar/todos/habits, addressing patterns of procrastination or other ADHD issues. It works somewhat, but my issue with it is MEMORY retention. I end a chat and start fresh each day. My end-of-day routine is to set up a plan for tomorrow and ask Claude to remember it for the next day (new chat).
But I find it still frequently forgets to nudge me about my habits and things we'd talked about a couple days ago. I have to remind the AI to remind me!

I have Claude running 24/7 on my personal laptop, but for now I am only using Claude Chats, primarily through my mobile phone because it's accessible. I also currently use Google Calendar and Todoist to try and keep track; Claude pulls these in.

The thing is, I use Obsidian to log a daily journal (claude creates them for me with patterns, wins and I copy/paste + add my own thoughts on the day). I had the thought that maybe I could use Claude co-work + dispatch to better use obsidian for memory, so Claude knows about all the important people in my life, when their birthdays are, reminds me if I haven't reached out in a while, updates / reads tasks from a local trusted source that I can check and not guess if Claude knows about them still - that kind of thing.

Obsidian is great in being able to link thoughts, ideas, trends etc which is why I like it as a second brain vs just a folder.

Questions
Is this possible? Dispatch seems to be just one chat. Can I start Co-work in my Obsidian folder but with access to different projects (like my ADHD coach)? How? Doesn't the context and token usage get massive with just one chat window? How can I clear it for the next day to stop that?

FYI - I am on Claude Pro plan and don't use it for anything heavy.

r/LiveFromNewYork CrashRiot

I never realized how tall Jane is. She's even giving Ben a run for his money.

r/SideProject AnimalNo4732

I built 2 apps that made zero profit and how it changed my approach

Over the past half a year, I built 2 apps that made exactly $0. I didn't earn anything, but I learned from my mistakes. Here's what I did wrong and how I managed to change that.

So basically, the common factor responsible for my apps failing was that I thought I was building something useful, when in reality, I built shit that no one actually cared about.

I built ViralScreenshot, which was a "Design toolkit for developers". It had some useful features, like creating X post mockups or notification mockups, but it definitely wasn't something that someone would spend money on. All I got was 30 free users.

I also built Convonote, which was a tool that turns audio into a transcription and lets you chat about it with AI. Maybe the idea here wasn't that bad, but the problem was that it actually didn't vary much from other tools - even tools available for free.

I spent a lot of time building as well as promoting those tools. And I could have saved half of that time if I had known this simple thing:

VALIDATE FIRST!!

My biggest mistake was creating something just because I thought it was useful. The market is brutal, and if your product doesn't solve a real, painful problem, it will fail.

So now, I’ve changed my approach.

For example, before building my new SaaS - which is a link-in-bio page creator but with a one-time payment instead of a subscription - I validated my idea first. I created a landing page with a "fake door" signup and shared my idea across multiple subreddits. I was asking people if they were frustrated with paying monthly for a simple link-in-bio page. It turns out that the pain point is real, and my site got a lot of signups before it was even built.

As a newbie in SaaS, I thought that validating was a waste of time and that I could spend this time actually building and getting users to a completed site. But it was a builder mindset, not an entrepreneur mindset.

Do you have any mistakes you've learned from? I'd love to hear your story!

r/Jokes Alarming-Safety3200

Which greenhouse gas would Macbeth identify as?

me-thane of cawdor

r/ClaudeCode indaco_dev

Built a Zig package manager for macOS entirely with Claude Code - notes on the workflow

Sharing a project where Claude Code wrote all the implementation code for a non-trivial systems tool: malt, a Homebrew-compatible package manager for macOS written in Zig.

What malt does:

  • single binary (~3 MB)
  • installs bottles from Homebrew's CDN
  • content-addressable store with APFS clonefile
  • streaming SHA256 verification
  • Mach-O binary patching
  • atomic 9-step install protocol with rollback
  • installs to its own prefix - never touches your Homebrew

What worked well with Claude Code:

  • Zig's comptime and type system: Claude produced idiomatic code once given the patterns
  • Mach-O header parsing with struct-aware logic (not byte scanning)
  • Streaming download pipeline (network -> hasher -> decompressor -> tar -> disk in one pass)
  • SQLite state management with proper transaction boundaries
  • Dependency resolution and parallel downloads

Where I had to steer more heavily:

  • The atomic install protocol needed precise error handling at each of the 9 steps: Claude's first passes were too optimistic about cleanup

  • APFS clonefile with non-APFS fallback required explicit specification

  • Edge cases around link conflict detection and concurrent access locking
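For readers curious about the streaming verification mentioned above, the core idea (hashing while writing, in one pass) can be sketched like this. This is a minimal Python illustration of the technique, not malt's Zig code, and `stream_verify` and its arguments are made-up names:

```python
import hashlib

# Sketch of "streaming SHA256 verification": hash each chunk in the same
# pass that writes it to disk, instead of downloading first and then
# re-reading the whole file just to hash it.
def stream_verify(chunks, expected_sha256, out_path):
    h = hashlib.sha256()
    with open(out_path, "wb") as f:
        for chunk in chunks:        # e.g. iterating an HTTP response body
            h.update(chunk)         # hash...
            f.write(chunk)          # ...and persist in one pass
    if h.hexdigest() != expected_sha256:
        raise ValueError("checksum mismatch, refusing to install")
```

In a real installer the failure path would also delete the partial file before raising, so a bad bottle never reaches the store.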

Workflow:

I used Claude Code with ruflo for swarm coordination. Every feature started with me writing the spec and architecture, then Claude implementing it. Almost every commit was reviewed before merge.

The full project - install, uninstall, upgrade, rollback, search, cleanup, gc, doctor, migrate, casks, taps, ephemeral run - was built this way.

Benchmarks (cold install, Apple Silicon):

  • tree: 0.014s (vs Homebrew 3.884s)
  • wget: 0.004s (vs 5.692s)
  • ffmpeg: 0.016s (vs 9.082s)

Repo: https://github.com/indaco/malt

Happy to hear feedback and answer questions about the workflow or the Zig/systems side.

r/metaldetecting SnooLobsters2956

Approximate date? Found huge coil of chain with something on end in my yard, upstate NY.

r/ChatGPT andthenifellasleep

Do you think my car's side indicator light needs an SPF; or will this tanning lotion be fine?

r/SipsTea aipac_hemoroid

So much antisemitism these days

r/LocalLLaMA ContributionNo7923

I want to make a local agent that could help me study

I posted this on /claude and for some reason I can’t crosspost, anyway:

Second. Brain.

I want to make a local (or not necessarily local) agent that could help me study. I saw some things about Ollama and Obsidian, but I need some opinions.

So I guess I need to feed this agent the material I need to study (besides setting it up in the first place), but how? And how do I make it efficient?

Today I’m starting to watch some tutorials, but I really need some opinions from people who have created similar agents before, and/or some links to things like GitHub posts that you think are useful for a beginner like me.

I want to make it answer questions, help me when I’m confused, and maybe have the agent create questions itself so I can check my knowledge. I also want it to be able to use that information “in a smart way”: I want my agent to have some sort of “critical thinking”, so it can give answers based on multiple passages from the books, not act like a simple search engine that returns an answer by matching exactly what I asked.

I also want to do this to reduce costs as much as possible, so it could work locally without the need to pay a subscription. I don’t have a high-end PC, but it’s more than entry level in terms of RAM and video card.

Do I need Ollama and Obsidian? Or just Claude?

Edit: I’ve got about 2000 pages. Is that a lot?

TL;DR

How do I make a Claude agent, feed it a few books, and ask it questions about those books? Please share opinions/tutorials/GitHub links.

r/SideProject redgunner94

I built Dim0 - a "cartographie de connaissances" where notes, code, diagrams and AI live together

Hi everyone, I'm a newbie here. I'm Quang, a solo dev from Paris.

I'm building dim0, a canvas for thinking. My idea is to build a "center hub", or rather a "cartographie de connaissances" (a knowledge map) that connects your thinking. Today our work, our research, and our ideas are scattered across too many different places. You search, you chat with AI, you take notes, you sketch your thoughts, you present, and each thing is a different tool.

So I created dim0: first, a good-enough canvas where you can draw simple shapes, write rich text, math, and code snippets (you can even execute a snippet if it's simple enough), create docs (sticky notes), and generate charts and visual explainers. It also supports nested boards so you can organize your findings/work hierarchically. And you can present your work directly from the board using presentation mode (you place your "frame" nodes, re-order them, and that's it!).

Second, and equally important, a useful AI agent that is right there, aware of the canvas, and can search things inside it. It can create, edit, and update canvas content, and it can also call tools like web search and a code interpreter to help you do your work (see the video). It supports many different LLMs for users to choose from.

My tech stack is pretty simple:

  • react-flow + canvas2d and some simple caching (performs OK-ish with hundreds of nodes in view, but there's always room for improvement)
  • qdrant for semantic search across the board's nodes and links
  • openai-agents-sdk for agent orchestration

100% open-source like the tools I use.

https://dim0.net

https://github.com/vcmf/dim0

Please let me know what you think. Hopefully I can build something useful for the community.

P.S.: Sorry if it's too long. This is my first time, a bit nervous I guess.

r/ClaudeCode Canadian-and-Proud

Claude Mythos escaped containment and managed to email a photo of itself

r/ChatGPT CrypticC0nnection

Need Help Regarding Denial To Request

Hey, I'm pretty new in this sub so I don't know the rules much, but I asked ChatGPT to change this girl's hair colour and complexion in the photo because I want to see how this hairstyle will look on me.

it denied my request over and over again with the same response

Any idea why?

r/ClaudeCode OkReason

[Tip] Speaking to Claude like a British man makes it work better

Claude Code has been hanging for 30+ mins at a time for me lately, so I've begun using a new trick! Whenever it's hanging for extra long, I send:

/btw right carry on then chap

and it stops hanging virtually immediately! and even better it usually has a funny quip to respond with :)

r/Art themillerest

When The Liquor Store’s Closed and You Really Wanna Nuke Someone, themillerest, digital painting, 2026

r/conan PictureIndividual

Conan Conducts A Staff Review With Producer Sean Doherty | Conan O'Brien Needs A Friend

r/ChatGPT throwawayname46

Gemini can't do time zones

Gemini fucked me while setting up a meeting. Some things were indeed better before AI.

r/interestingasfuck Due_Yesterday_2850

This is the most upvoted comment on Reddit history

r/midjourney Big_Addendum_9920

Beknighted rock star has-been

r/homeassistant patrykoz

HomeAssistant and smart home in professional use

Hi!

I am new in smart world and in HA environment. I would like to ask

-what brand of smart devices do you use in professional, commercial projects? In my opinion (EU citizen) only Shelly is worth considering (EU certs).

-do you use Zigbee or prefer the wifi protocol? Zigbee seems to be more autonomous.

Regards

r/Adulting IndependenceCivil746

I don’t drive because of a mental disability.. I hate saying that to people. What else could I say?

It mostly comes up in regards to work.

People at work ask why I don’t drive…

I usually just take the train to work if my husband’s work schedule doesn’t align with mine and he can’t drive me.

r/Wellthatsucks thatmanwithwhat

Someone took one of my pavestones and broke my glass!

r/explainlikeimfive Baba_Jaga_II

ELI5: I tend to associate bitterness with acidity in coffee, but I'm apparently wrong

According to google, dark roast is less acidic. Since dark roasts taste more bitter, can someone tell me what's going on chemically and what I'm actually tasting in dark roast?

If acidity is not bitter, how does acidity taste?

r/metaldetecting zenbeatnik

Another great day in the dirt.

Cool Union cavalry Bridle Bit Boss was a few feet away from where the Breast Plate was recovered last week.

The Hits:

  • Civil War US Cavalry Bridle Bit Boss
  • Ring Gold Plated
  • Lipstick case (Richard Hudnut 1920-30s)
  • Harmonica reed
  • Lantern piece
  • 1959 Rosie Dime
  • Wheaties ‘42, ‘53
  • 1914 Buffalo nickel

r/Adulting Limp-Roof3096

Adulting is knowing you’d still have work

r/Adulting Queenhood_

Shower, food then bed 🛌

r/therewasanattempt Fit__Champion

To murder the entire family. But a 4-year-old Palestinian girl was injured and her mother was killed in an Israeli attack on a refugee tent.

r/ClaudeAI ContributionNo7923

I keep hearing about it - and now I want to try making it.

Second. Brain.

I want to make a local (or not necessarily local) agent that could help me study. I saw some things about Ollama and Obsidian, but I need some opinions.

So I guess I need to feed this agent the material I need to study (besides setting it up in the first place), but how? And how do I make it efficient?

Today I’m starting to watch some tutorials, but I really need some opinions from people who have created similar agents before, and/or some links to things like GitHub posts that you think are useful for a beginner like me.

I want to make it answer questions, help me when I’m confused, and maybe have the agent create questions itself so I can check my knowledge. I also want it to be able to use that information “in a smart way”: I want my agent to have some sort of “critical thinking”, so it can give answers based on multiple passages from the books, not act like a simple search engine that returns an answer by matching exactly what I asked.

I also want to do this to reduce costs as much as possible, so it could work locally without the need to pay a subscription. I don’t have a high-end PC, but it’s more than entry level in terms of RAM and video card.

Do I need Ollama and Obsidian? Or just Claude?

TL;DR

How do I make a Claude agent, feed it a few books, and ask it questions about those books? Please share opinions/tutorials/GitHub links.

r/ChatGPT sundressesandpuppies

PLEASE HELP - Can’t Cancel Account/Subscription

Need some help here if anybody has any suggestions for a workaround.

I had a $20/mo subscription with ChatGPT with my old work email. The email got deleted and now I can’t log in to cancel my account / subscription.

I am still being charged monthly and I have been trying to get it canceled via support for 3 WEEKS but have been unsuccessful. They keep telling me that they cannot cancel it without me having access to my email.

I was told that my credit card dispute might take 6-8 weeks but regardless I wanted to know if there’s a better workaround here? (Each time I try to log in it asks me to use a verification code sent to my email, but the email no longer exists 🙃). I know the password and have submitted the billing information too.

Thank you!

r/Art P_B_9889

Mind Chaos, P_B9889, Ballpoint pen on canvas, 2026 [OC]

r/LearnUselessTalents Agile-Campaign9996

I have no hidden talents

I have no hidden talents. I have nothing that I’m good at. I hate everyone who has a talent because I have none.😭😭😭

r/DecidingToBeBetter Key_Log_2042

I want to change

I’m a 22M and I just went through a pretty messy situation where my family went broke. Throughout life I’ve always been the joke, and I never paid attention in high school. I did two years of film/audio school, but my family went broke and I’m kind of stuck at home. I have the opportunity to go to Australia and do electrical engineering, and I’m kind of concerned about finishing at 25-26. Is my concern valid? I just feel like I’ll be so behind my peers, and I’m not sure if I’m worried for nothing.

r/CryptoCurrency zerozero023

I built a middleware that auto-fixes crypto API errors — does anyone actually need this?

I built a middleware that automatically handles the API errors that kill trading bots silently.

10 error types it fixes:

Rate limit 429/503 → smart backoff + retry

Stale data → fetches fresh from backup

Auth errors 401/403 → key rotation + signature fix

Endpoint down 502/504 → auto failover

Price mismatch → cross-exchange median

Broken JSON → schema repair

WebSocket disconnect → auto reconnect

Unexpected 500 → clean retry

Key permission issues → safe degraded mode

Financial risk → circuit breaker

All fixes happen in under 2ms. Works with Binance, Coinbase, Kraken, Bybit, OKX.
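For reference, the "smart backoff + retry" behavior described for rate-limit errors is usually implemented along these lines. This is a minimal Python sketch of the general technique, not the middleware's actual code; `with_backoff` and its signature are illustrative:

```python
import random
import time

# Exponential backoff with full jitter for retryable HTTP statuses
# (429/503): delay doubles each attempt, capped at max_delay, with a
# random factor so many bots don't all retry at the same instant.
def with_backoff(call, retries=5, base=0.5, max_delay=30.0):
    for attempt in range(retries):
        status, body = call()
        if status not in (429, 503):
            return status, body          # success or a non-retryable error
        delay = min(max_delay, base * 2 ** attempt)
        time.sleep(delay * random.random())  # full jitter
    return status, body                  # exhausted retries, surface the error
```

A production version would also respect the server's `Retry-After` header when present instead of relying purely on the computed delay.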

3 honest questions for anyone running live bots:

  1. Is this a real problem you face?
  2. Would you actually use something like this?
  3. Worth continuing to develop?

Any feedback welcome — good or bad 🙏

https://smithery.ai/server/aloryanirakan-cqmg/crypto-api-fixer

r/ClaudeCode Aimt97

What are your tips for using superpowers?

A while back, I installed the Superpowers plugin, and I was really impressed with how well it worked on my initial medium-to-large-scale projects—it helped me write high-quality code that perfectly met my needs.

However, recently, whenever I’ve needed to make small changes or fix bugs, Claude Code has been automatically enabling Superpowers, which took nearly an hour. This has resulted in a small task consuming a large number of tokens and taking a long time to complete.

So, I’m wondering if there are any good tips for using it?

r/ClaudeAI K0RB33

Claude app for Slack not working

We're in the process of connecting Claude to our Slack application. We've managed to get it working to the point where we can invite u/Claude to our channels, however every question we ask receives the same response:
"No repositories found. Please connect your GitHub account at claude.ai and try again."

To clarify, we have no GitHub integration set up in either Slack or Claude.

Any idea what we are doing wrong here?
So far we have no answer from support.

https://preview.redd.it/1z92d1cdr5ug1.png?width=707&format=png&auto=webp&s=d15388eed4c1c0fe5efe4583492d0a83c2e46975

r/ClaudeAI Necessary-Chart-5362

Claude Cowork sync problem with headless Mac mini

I'm using Claude Cowork heavily both for scheduled automations (daily reports, reminders, etc.) and for active work inside Cowork projects where the AI has context, memory, and access to specific folders and connectors.

I set up a dedicated Mac Mini to run everything 24/7, logged in with the same Claude account and here's the problem:

Cowork data is stored locally per device. This means:

  • Scheduled tasks don't transfer — have to recreate everything from scratch
  • Cowork projects don't sync — so any context, memory, or folder connections built up in a project on my MacBook simply don't exist on the Mac Mini
  • If the Mac Mini runs a scheduled task that's supposed to work within a project context, it's essentially starting blind — no history, no folder access, nothing

Chat projects sync fine. But anything Cowork-related is stuck on the device it was created on.

Has anyone found a workaround for this? Feels like a pretty fundamental limitation if you're trying to run Cowork seriously across more than one device.

r/Art Tokyono

Sonata, Jeremy Mann, oil on panel, 2013

r/Wellthatsucks Beautiful-Listen6893

The worst possible place for your car to catch on fire 😅

r/me_irl TwinStickDad

me🍍irl

r/interestingasfuck Reasonable_Light7333

I accidentally peeled a KitKat with precision

r/SipsTea Ott1fant

Employee sets fire to Kimberly-Clark warehouse, “All you had to do is pay us enough to live”

r/DunderMifflin willux22

That's what she said

r/AskMen ReignDance

Scenario: You're the sole provider for your family. You're $10k (or more) in debt and your wife is stay-at-home. Things feel hard and you feel you are letting her down. How would you want her to react to reassure you?

What can she say or do to make you know you're not a loser and that everything will be okay?

r/me_irl RealMarzipan7347

Me_irl

r/ClaudeAI First_Bank3407

Windows 11 Home/Snapdragon

Anyone else experiencing this issue? Have they started trying to resolve the Windows 11 incompatibility with Code and Cowork?

r/AI_Agents TarzanoftheJungle

What is best use case for UiPath Automation Cloud?

Seen ads for this tool. Looking for suggestions and advice on how to use it, if it is worth it, best use cases, etc. I consult to SMBs on AI implementation so I'm trying to determine when I might recommend the tool (or advise clients on when to stay away). Thanks for any thoughts, suggestions, etc.

r/SideProject _Shadster_

Tired of switching to a browser just to ask one quick AI question… so I built this

Every time I had a small doubt while coding, I had to:

  • switch to browser
  • open ChatGPT / Claude
  • get distracted (YouTube, Twitter, anything…)
  • finally ask
  • come back to code

Flow is gone.

So I built SwiftGPT - a tiny macOS menu bar app:

  • global shortcut → open instantly
  • ask without leaving what I’m doing
  • switch between models in one click
  • close and get back to work

No accounts. No subscriptions. No setup.

Just fast.

Built it mainly to protect focus more than anything else.

Would like to hear your thoughts on this.

r/CryptoMarkets Livid-Sundae-8994

Beginners mistake

The biggest mistake beginners make is jumping in without understanding the risks. Crypto can grow, but it can also be very volatile. If you’re just starting, what part feels most confusing to you right now , the buying process, understanding the market, or knowing what to invest in first?

r/Art MantecaMarce

Happy East... Late. Right., MantecaMarce, Digital, 2026 [OC]

r/LocalLLaMA 4lifeMerc

Nanbeige 4.1 3b not responding to basic questions on my 16pro.

I test local models on devices, and I recently decided to test Nanbeige 4.1 3B on my 16 Pro. I’ve heard that it outperforms heavy models that require a lot more RAM and data, such as 50B models. Unfortunately, every time I ask questions like how to start a fire with flint & steel, it thinks and reasons for a couple of minutes, then stops and doesn’t respond. The only time it responded was when I asked what 4 times 3 is. I would really appreciate help because this AI deserves another chance.

r/AI_Agents Comfortable-Rock-498

I built Dirac, a fully open source (Apache 2.0), hash-anchored, AST-native coding agent; it costs 64.8% less than the average of the top 6 OSS coding agents

I know there is enough AI slop out there, so I will keep it brief. It is a well-studied phenomenon that any given model's reasoning ability degrades with context length. If we can keep the context tightly curated, we improve both accuracy and cost while making larger changes tractable in a single task.

Dirac is an open-source coding agent built with this in mind. It reduces API costs by 64.8% on average while producing better and faster work, using hash-anchored parallel edits, AST manipulation, and a suite of advanced optimizations.

Highlights:

- Uses a novel approach to hash-anchoring that reduces the overhead of hash anchors to a minimum and keeps edits highly accurate

- Uses AST searches and edits (builds a local sqlite3 db)

- A large amount of performance improvements and aggressive bloat removal

- Completely gutted MCP and enterprise features

- A hard fork of Cline. Last I checked, 40k+ lines were removed and another 64k lines were either added or changed
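For anyone unfamiliar with the technique, the general idea behind hash-anchored edits can be sketched like this. This is a minimal Python illustration of the concept, not Dirac's implementation; `anchor` and `apply_edit` are made-up names:

```python
import hashlib

# Anchor each edit to a short hash of the target line's current content,
# so a stale edit (the file changed underneath it) is rejected instead of
# silently landing on the wrong line, as a raw line number would.
def anchor(line: str) -> str:
    return hashlib.sha256(line.encode()).hexdigest()[:8]

def apply_edit(lines, line_no, expected_anchor, replacement):
    if anchor(lines[line_no]) != expected_anchor:
        raise ValueError("stale anchor: file changed since the edit was planned")
    return lines[:line_no] + [replacement] + lines[line_no + 1:]
```

Because each edit validates independently, a batch of such edits can be applied (or rejected) in parallel, which is presumably what "hash-anchored parallel edits" refers to.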

r/ClaudeCode Mean_Win9036

Doing SEO with Claude Code

I've been experimenting with using Claude Code to handle a lot of the repetitive SEO work that normally eats up hours of my week, and it's been genuinely eye-opening.

The core idea is treating SEO as a pipeline rather than a series of one-off tasks. With Claude Code running scheduled tasks, you can automate keyword research pulls, cluster topics by intent, generate optimised content briefs, and even draft programmatic content at scale, all without babysitting the process.

Here's roughly how the pipeline looks in practice:

  • Keyword research: Pull data from a tool like Semrush or Ahrefs via API, filter by difficulty and intent, and cluster automatically
  • Content brief generation: Pass the cluster to Claude with your site's topical authority as context and get a structured brief back
  • Draft creation: For programmatic pages, the brief feeds directly into a content generation step
  • Publishing trigger: Once reviewed, a webhook pushes to your CMS
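The steps above can be sketched as a single function. This is a hypothetical Python outline, where the callables stand in for the real pieces: `fetch_keywords` for the Semrush/Ahrefs API pull, `cluster_by_intent` for the clustering step, `write_brief` for the Claude prompt, and `push_to_cms` for the CMS webhook:

```python
# Hypothetical wiring of the four-step SEO pipeline described above.
def run_seo_pipeline(seed_topic, fetch_keywords, cluster_by_intent,
                     write_brief, push_to_cms, max_difficulty=40):
    # 1. keyword research: pull, then filter by difficulty
    keywords = [k for k in fetch_keywords(seed_topic)
                if k["difficulty"] <= max_difficulty]
    # 2-3. cluster by intent, then generate one brief per cluster
    briefs = [write_brief(cluster) for cluster in cluster_by_intent(keywords)]
    # 4. publishing trigger: push each reviewed brief to the CMS
    return [push_to_cms(brief) for brief in briefs]
```

On a cron schedule, each run would take a new seed topic (or refresh an old one), which is what turns the one-off tasks into a pipeline.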

The part that surprised me most was how well Claude Code handles the context-passing between steps. You define the pipeline once, and it can run on a cron schedule without you touching it.

I'm curious whether others are doing similar things. Are you running any of this on a schedule, or is it still mostly manual prompting for you?

r/SideProject Careful_Midnight1256

Built a cigar app because nothing out there felt modern

I’ve been into cigars for a little bit and always felt like everything in the space was either outdated or just not built for how people actually use apps today.

So I started building something myself. It lets you scan cigars, track what you’ve smoked, and log sessions in a cleaner way.

Still early, but I’ve got some users on it now and actively improving it based on feedback.

Would be curious what people think about building in niche/hobby spaces like this. Feels like there are a lot of overlooked opportunities.

r/CryptoMarkets Mobile_Friendship499

Found this suspicious trading pattern that inflates price on random tokens

Step 1 — Tiny buy sets fake price

Wallet: GQhp1metiEge237QfN6rLtFENiz9BW2RCV3s3KPEbWdJ

Tx: https://solscan.io/tx/TsXdkXGzsYhMzAsofEjQSw7ihEj6A6PE5ydMWpHHi8tLTp5yhpPouSHCyShUrARgrAamrvaxe5FmM93GPxbWyYV

Spent: 0.0001 SOL (~$0.008)

Got: 0.000541515 SXAI

Calculation: $0.008195 ÷ 0.000541515 = ~$15.13 per SXAI

→ This sets a completely fake “market price”

Step 1.1 — Drain the pool They removed almost all liquidity.

Wallet: 6WCwFLQa93CJzcHWLAVgsySXyonqr7bmedshkK426em5

Tx: https://solscan.io/tx/43bZZFwhD9mvTen5bujZtbekWcRPvMrMWpjJEbrwh66as9odZmjUG8NnWpvkb4E8Yd69UB3s5AK82ExqTPkh4KPg

Removed: 3.2028 WSOL + 93,794,046 SXAI (~$261)

Now the pool has almost zero liquidity → price can be manipulated easily.

Step 2 — Dump creates fake $1.2M volume

Wallet: BK4zdKaLFhJqkyQrvqaorvBMbGy483ia97AHgE7GoTVt

Tx: https://solscan.io/tx/4onSx9rZV4pSkzxsYbeWfhgTgDY7Xj28zqTVQTMDEGVBvhh1RJKdBQMyABWyJ9UCZWtkgDuxTB1RE9GTrKh9V1Ac

Sold: 120,493 SXAI

Received: 0.0001 SOL (~$0.008)

Actual price received: $0.008 ÷ 120,493 = ~$0.000000066 per SXAI

But people expect: 120,493 × ~$10.12 = ~$1,218,864 volume
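The arithmetic above can be checked directly. The dollar figures below are the post's own estimates, not live prices:

```python
# Step 1: a tiny buy at a near-empty pool sets an absurd "market price".
tiny_buy_usd = 0.008195
tiny_buy_tokens = 0.000541515
fake_price = tiny_buy_usd / tiny_buy_tokens       # ~$15.13 per SXAI
assert abs(fake_price - 15.13) < 0.01

# Step 2: the dump actually receives almost nothing per token...
dumped_tokens = 120_493
dump_received_usd = 0.008
real_price = dump_received_usd / dumped_tokens    # well under a cent

# ...but charted at the fake price, it looks like seven-figure volume.
implied_volume = dumped_tokens * 10.12            # the post's ~$10.12 figure
```

The gap between `real_price` and `fake_price` is the whole trick: the trades are real on-chain, but the dollar volume shown to onlookers is fabricated.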

r/ClaudeCode Some_Bar9405

It can't even add simple items to a 30 line gitignore properly

https://preview.redd.it/b61fx51r06ug1.png?width=1178&format=png&auto=webp&s=bd2b8078a5eefb107d624da14b5420d1e4e2c6cb

Over the last two weeks, I've seen multitudes of posts about how degraded the service is. And honestly, I haven't felt much until today really.

Even just the three hours since this morning have been hell.

And yes, I do ask Claude to do everything for me, even if it's adding simple things to .gitignore. But that's the point. Something as simple as that was not possible today.

If a handful of organizations using Mythos is degrading the service for the entirety of Opus users to a point where the model can't even append to a simple 30-line .gitignore file, what are we really doing here?

r/findareddit Caleb_isagod

What’s a Reddit to look for places accepting literary submissions?

I am a teen writer who likes to look for places to submit my work, like literary magazines and contests.

What’s a subreddit where you can ask to find literary magazines/contests open for submissions?

Or like look at others advertising their open submissions

r/SideProject Local-Mycologist-870

I built a CBSE worksheet engine with 30+ quality gates, photo grading that classifies errors, and an AI tutor that knows each student's weak spots. Looking for 20 beta testers.

Hey, I've been building goSkolar, an AI worksheet platform designed specifically for Indian schools (CBSE, Classes 1-9).

The problem I'm solving: parents, teachers, and tutors in India spend hours finding practice material that matches what's actually being taught. Most worksheets online are recycled question papers with wrong answers, the wrong syllabus, or just plain garbage. Even standard LLMs give you random questions with zero curriculum alignment.

Why this isn't just an LLM wrapper — the generation pipeline has significantly more engineering than a standard API call:

- 779 NCERT-mapped topics across 15 subjects. Every question cites the textbook chapter and page.

- 30+ deterministic quality checks — schema validation, topic drift, Bloom's cognitive level, format diversity. Zero LLM involvement in validation.

- After generation, a second AI pass fact-checks every answer against actual NCERT content. Wrong answers are auto-corrected. Bad concepts trigger full regeneration.

- 8-step difficulty calibration — scaffolding, hint injection, number progression — all deterministic.

- Generates 3 PDF types (student, full, answer key) or lets students solve directly in the browser with instant grading.

Beyond worksheets:

- Photo grading — snap 1-5 photos of handwritten answers. AI reads the handwriting, grades it and classifies errors as calculation, conceptual, or careless.

- Juno AI tutor — context-aware, knows the child's recent scores. Gives hints before answers, uses Indian examples, speaks 4 languages.

- Adaptive personalization — 2+ systematic errors in same skill triggers remediation mode (easier, targeted). Topic mastered triggers reinforcement mode (harder, skip known material).

The app is live and fully functional. Looking for 20 people (can be parents, teachers, or tutors) willing to test and give honest feedback:

  1. Does the worksheet quality feel legit, or does it feel like AI slop?

  2. Does photo grading and error classification actually work for you?

  3. If you're a parent/teacher, would you use this regularly?

  4. Anything broken or confusing?

DM me or drop a comment below — I'll send you the link + a special access code for 15 days of full features.

Happy to answer questions or take a roasting! Also working on a demo video; I'll attach it to the post.

r/ClaudeAI One_Actuator_466

Gave our intern $500 in AI model credits… she spent it all on Claude 😭

I'm here to share a Claude story that happened to me today.
We’re building an AI model aggregation platform.

New intern joined, so we gave her $500 credits to explore different models, try things out, get familiar with the tools. Pretty standard.

A few days later I checked her credit balance.

Almost empty.

I was like, damn, she’s been grinding.

So I asked what she’s been testing.

She goes:
“Mostly Claude.”

Okay… fair.

I asked what kind of stuff she was doing.

She said:
“Organizing documents, writing summaries, cleaning up reports.”

That’s it.

No crazy pipelines.
No multi-model experiments.
No comparisons.

Just… basic office work.

All on Claude.

$500 later.

I just stared at the dashboard for a while like
…this is on me, isn’t it

Not even mad honestly, just impressed she managed to burn through it doing the most normal tasks possible.

Anyway, lesson learned:

Claude is great.
Claude is also… very good at spending your budget.

r/Adulting exousia143

What if?🥲🥲

r/me_irl MetamorphosisAddict

Me irl

r/Ghosts among-sus420

Does anyone have any experiences with this particular street, or with paranormal events connected to a street in general, as opposed to a building or object?

Ok, so I'm not saying it's paranormal (it's probably just me being paranoid), but I'm just curious if anyone else has had similar experiences, feelings, or explanations relating to this particular street.

Also, I'm curious in general if people have any stories about similar streets, but I digress.

Whenever I walk through General-Pape-Straße (street) in Berlin to the mall by the train station, particularly at night, it just has an atrocious vibe for some reason. I never felt unsafe or anything, just super uncomfortable, almost as if I shouldn't be there, even though I've never had that feeling on other similar streets at night.

r/Art BeastlyBones

Lord Hubris, Wicked Wyrm, Paper Clay/Acrylic, 2026 [OC]

r/SideProject Dull_Roof3559

I posted my clipboard app on Reddit and got 40+ installs and 7 sales in a day

I’ve been building a small macOS clipboard app called Pasly.

The idea is simple: keep everything you copy and make it fast to paste again.

A few days ago I posted it on Reddit (r/macapps) with a short demo video and a simple explanation of the problem.

Results in ~24h:

  • 40+ installs
  • 7 sales ($4.99 lifetime)
  • ~17% conversion from install to paid
  • ~65% next-day retention
  • some users activating on multiple devices

What surprised me the most was how fast people decided to buy.

Median time from install → purchase was around 2–3 minutes.

So it’s basically:

  • either they instantly see the value and buy
  • or they don’t buy at all

Also interesting: most traffic came from the US/EU, which probably helped with conversion.

Biggest takeaways so far:

  • the right audience matters more than anything
  • a simple demo video works better than polished marketing
  • focusing on one core problem (fast copy/paste) seems to resonate
  • pricing at $4.99 lifetime didn’t create much friction

Still early, but this was the first time it felt like real validation instead of random installs.

If you’ve built something similar or have tips on what to test next, I’d love to hear.

App: https://pasly.antonielmariano.com.br

Here’s a quick demo of what I built:

https://reddit.com/link/1sgpf9h/video/0cv6ye3016ug1/player

r/ClaudeCode PerfectExplanation15

Do you know of any frameworks for creating agents in Claude Code?

Hey everyone, can you recommend any frameworks available on GitHub for creating AI agents in Claude Code? I'm still having a lot of trouble with this.

How do I create the agent? What files can I use? What format can I use? I'd like to create it from a validated framework so I don't make mistakes.

r/ForgottenTV DelGriffithPTA

Kay O’Brien (1986)

A CBS medical drama starring Patricia Kalember as a surgical resident at a New York hospital. Due to low ratings, it was canceled after 8 episodes. The cast also featured Brian Benben.

r/explainlikeimfive Dover299

ELI5 Understanding Cold-blooded animals?

Quote: "Cold-blooded animals rely on external heat sources to regulate their body temperature, while hot-blooded animals, often referred to as warm-blooded or endotherms, maintain a consistent internal temperature through metabolic processes. Examples of cold-blooded animals include reptiles and fish, whereas mammals and birds are typically hot-blooded."

What do they mean by "cold-blooded animals rely on external heat sources"? Don't hot-blooded animals need a heat source too?

Don't you go in the sun to warm up and the shade to cool off? Aren't reptiles just as hot as other animals and people? Or are they saying reptiles are colder than other animals and people?

r/explainlikeimfive Innovator-X

ELI5: How does the so-called "dopamine detox" work?

I recently found out about the "dopamine detox", where you abstain from addictive behaviors for a period of time in order to restore your motivation; however, I don't really understand how it works or whether it is beneficial. It seems that people are increasingly addicted and glued to their phones all the time. Would abstaining from these addictive behaviors really help? If so, how?

r/Weird FloorFickle371

Does anyone know what this missile is?

r/LocalLLaMA CrowKing63

Gemma 4 4B takes 3 minutes to say "hello" through Claude Code — is this normal?

Just tried connecting Gemma 4 4B (Q4_K_M) in LM Studio to Claude Code via the Anthropic-compatible endpoint. Responses in LM Studio itself feel pretty snappy, so I got excited.

Then I asked it "hello" through Claude Code and waited… 3 minutes.

My setup: 32GB RAM, RX 9060 XT 16GB VRAM. GPU memory usage goes up so it's definitely using the GPU.

Is Claude Code just sending a ton of tokens under the hood even for simple messages? Or is there something wrong with my setup? Feels weird that LM Studio chat is fast but the same model through Claude Code is basically frozen.

Any ideas what I'm missing?

r/SideProject Normal-Education-368

I spent 4 years in the right industry, wrong role. Here's what I'm doing about it.

I'm 24, from India.

I worked as an equity research analyst, then CFO at an aviation startup.

Research felt like flow. Connecting dots, finding patterns, building a thesis from noise — easy. Almost unfair.

CFO work felt like rubbing my face against a cheese grater every single day. Turns out high attention to detail and ADHD is a bad combination. Nobody told me that before I took the job.

Same industry. Completely different experience. The difference wasn't effort or intelligence — it was fit.

That's what nobody talks about. Not just which industry. Which role, which cognitive demands, which environment. Your brain is wired a certain way. Some work puts you in flow. Other work slowly destroys you.

I dropped out of my Masters, failed college once, worked two jobs — and figured all of this out a month ago.

Which is precisely why I'm building Compass. It maps your cognitive profile to careers and roles where you'll naturally be in flow — not just survive.

It's live, open source, and I'm giving equity to contributors who help build it.

compass-orcin-sigma.vercel.app

github.com/mihirdesai1/compass

r/Art 49but17

Naela the Apprentice, Vixi, Digital, 2026 [OC]

r/homeassistant karantza

Tiny Submersible Water Sensors?

Hey folks, I've been trying to find a solution to a sensing problem, and I thought I'd ask here before I dive into a DIY solution. I've got a variety of water tanks where I'd really like to know "is this tank empty?", but that won't work with something like a normal zigbee leak sensor puck for a few reasons. (I have also tried ultrasonic, and pressure sensors. Those work in some specific cases, but not generally.)

1) The probes are normally submerged, and I need alerts when they go dry - a sensor that's constantly running current to check for conductivity will run out of power fast, the probes might corrode quickly (not good if I want to keep the water potable), etc.

2) Some of these water tanks seal, so I can't easily run wires in and out - the whole sensor would need to be inside, so waterproof and wireless are required.

3) The things I want to check are small - cat's water bowl tank, houseplant water tanks... A normal leak sensor wouldn't fit without being seriously stripped down.

I think what I'm looking for is a small waterproof zigbee/thread sensor that runs on an AAA or coin cell, and can be configured to only check conductivity a few times per hour, or on request. Has anyone achieved something similar?

If I'm DIYing it, my thoughts are that I might be able to find an existing leak sensor with configurable sample rate, strip it down, and re-pot it for waterproofness. Or, use a small ESP32 board with a zigbee radio and build my own from scratch. But I'd really prefer to save my sanity for other things if possible.

r/interestingasfuck Due_Yesterday_2850

Throwback to the most downvoted comment in Reddit history

r/OldSchoolCool VariousCourageous

Cybill Shepherd in Taxi Driver (1976)

r/Weird FloorFickle371

THE GREED STATES - I was zoned out folding up a dollar and accidentally found this

r/ChatGPT Renfel

Google Calendar App - write permissions

Anyone successfully use this app connector for write operations? It can read my Google Calendar just fine, but writes silently fail (no success confirmation, and no entry is visible). The app has the correct permissions, and I've tried disconnecting and reconnecting. Is this just broken?

r/DecidingToBeBetter CoachChezky

What do you find is the 1 thing that has been the most helpful to your growth?

For me it's definitely been somatic therapy. Getting out of my mind and into my body. What has it been for you?

r/ClaudeAI Fun_Can_6448

I added a "draw on any webpage" tool that sends the sketch straight to Claude Code

Draw directly on top of any page in the built-in browser, then send the annotated screenshot to your AI session with one click.

No more "the button in the top right, no the other top right" - just circle it and ask.

Here's the GitHub if you wanna try - https://github.com/elirantutia/vibeyard

r/midjourney FloydKellyCreates

Floyd Kelly - Magical Restoration (Instrumental) [Music Video]

"Magical Restoration" by Floyd Kelly. Images generated by Midjourney. This is a cheesy video from a while ago. That is supposed to be Zeus and Iris in the video. I wasn't very creative. (giggles) I had difficulty placing the people in the video to look up at the tree, so it looks like they are under the tree. 26 Midjourney images imported into Luma Dream Machine for image to video and then sequenced in Blender 3D. It's all just for fun. Have a great day! Music: "New Olympian" by Floyd Kelly and Udio.

r/Weird FloorFickle371

A robotic hand beyond human speed, performing with top tier precision

r/StableDiffusion superspider202

How to uncensor hentai videos?

Hello everyone, recently I've seen posts on Reddit of people uncensoring previously censored hentai, and that got me wondering how.

So can anyone please help me out? Is there a new AI tool or project for doing this?

Or any guide, etc.?

Please let me know if it's possible; I'd very much like to try it out myself.

r/DunderMifflin OlivanzaCat

Was Darryl just brown-nosing, or is the desert actually one of his fav ecosystems?

r/SideProject ArYaN1364

Built a simple movie picker with mood and runtime filters in about 10 minutes

I’m not really a hardcore dev or anything, just someone who got tired of spending more time picking a movie than actually watching one.

Last weekend I went through the usual loop, scrolling Netflix, switching apps, checking Reddit, and still not being able to decide. It made me realize the problem isn’t a lack of good movies, it’s just too many options with no easy way to narrow things down based on how you’re feeling.

So I tried building something small for myself. It’s a super simple movie picker where you can filter by mood and how much time you have, just to reduce the noise and make choosing easier.

It’s definitely not a full product, there are barely any movies on it, but it was interesting to see that even a small amount of structure made the decision process feel less overwhelming.

Made this on Runable in about 10 minutes while experimenting with AI tools.

https://soaring-vise492.runable.site

Would love to know if this kind of approach actually makes sense, or if people just prefer browsing normally.

r/SideProject cat-aviator

I was job hunting, found a hack that worked, then spent 2 months overbuilding it into an app

Hey r/SideProject,

Story first, then the ask.

A few months ago I was job searching. Some hits, lots of silence, the normal. There was one role I really wanted, would've been a real step up for me, and I decided I wasn't going to just lob a resume into the portal and pray.

So I did the thing every job search blog tells you to do and almost nobody actually does. Figured out who the hiring manager probably was, dug into his background, noticed we'd both spent time in the ventures world. Sent him a LinkedIn message leading with that. He never replied. But the next day a recruiter from that company reached out and put me on the calendar.

I remember sitting there thinking, okay, that took 40 minutes but it actually worked.

Out of curiosity I opened Claude Code that weekend and vibe-coded the dumbest possible version of it. Paste a job description, ask Claude who the hiring manager probably is, see what comes back. I wasn't expecting much. It was actually... fine? Not always right, but right often enough to be a real starting point instead of a blank Google search.

So I used it on a few more roles I was applying to and landed a couple more interviews. That's when the "huh, maybe this is a product" thought showed up.

Two months later, here we are. Foxhire.ai - The app now parses a job posting, finds the likely hiring managers and other decision makers via web search, researches each one for actual angles you have in common, drafts a cold email you can send, and tracks everything in a Kanban so you don't lose the thread on which company you said what to. There's also a shadow eval pipeline running DeepSeek with a LinkedIn scrape in the background and using Opus as the judge model to compare outputs, which has been the most fun piece to build and the thing that keeps me honest about quality.

Stack if you're into that: React 19, FastAPI, SQLite (yes, SQLite, it's fine), Claude Sonnet 4.5 with native web search doing the heavy lifting, Stripe for credits, Fly.io. Worked on it solo, nights and weekends.

One thing I want to flag because it's the part I'm most opinionated about: pricing. Almost every job search tool out there is $20-40/month on a subscription, and I think that's wrong. Real job searches are bursty — you hunt hard for six weeks, you stop, you restart eight months later. Paying every month for a tool you're not using is the kind of thing that breeds resentment. So I went with credits instead. 20 for $10, a full pipeline run costs 2 credits, so you're paying $1 per job worked.

My API and infra cost is roughly 50 cents per run, so I'm running at about 50% gross margin, which feels right for a SaaS app. Credits sit in your account until you spend them. If you land a job after spending $10, that's the right outcome for both of us.
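
For what it's worth, the unit economics quoted above check out (all figures from the post):

```python
credit_price = 10 / 20           # $10 buys 20 credits -> $0.50 per credit
run_revenue = 2 * credit_price   # a full pipeline run costs 2 credits -> $1.00
run_cost = 0.50                  # stated API + infra cost per run
gross_margin = (run_revenue - run_cost) / run_revenue
print(f"${run_revenue:.2f} per run, {gross_margin:.0%} gross margin")
# → $1.00 per run, 50% gross margin
```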

I almost certainly overbuilt this before testing it on real strangers. I kept finding new APIs and MCPs I wanted to integrate and just kept going. It's been the most fun I've had coding in years, which is exactly the warning sign nobody listens to. In partial defense of myself though: the roles I'm applying to want people who can actually ship with AI tools, not just talk about them in interviews. So this thing has been a crash course in LLM APIs, MCP, streaming, auth, Stripe, all of it. Even if FoxHire never gets a single real user, I've already gotten value out of being able to walk into interviews and talk about specific tradeoffs I made instead of waving my hands.

So it's been a side project and a very expensive portfolio piece at the same time, and I've made my peace with that. Anyway. I'm finally pushing it out of the nest. I'd love feedback on any of it. The idea, whether the core loop actually solves something people care about, the landing page, the pricing model, whether I should've stopped at the LinkedIn message and saved myself two months. Roast it if you want, that's useful too. foxhire.ai — 5 free credits, no card required. I'll be hanging around in the comments.

r/aivideo Artistic_Buy_4533

The Seven Verdicts - Void Walk Training

r/explainlikeimfive Innovator-X

ELI5: How do baseline dopamine levels return back to normal after quitting an addictive behavior?

According to scientific literature, it takes 90 days on average for baseline dopamine levels to return to normal (depending on how severe the addiction was). I am curious about what goes on in the brain that enables these levels to return to normal.

r/OldSchoolCool CosmicAdmiral

Dusty Hill and Billy Gibbons of ZZ Top, 1975

r/SideProject Ok-Chocolate-5084

I got lost in my own AI agent project, so I built a thing to see what's going on

So here's how this started.

I'm a data scientist and I've been messing around with multi-agent AI stuff on the side. Started small — two agents talking to each other. Then four. Then suddenly I had like 13 agents across two teams and I couldn't tell you what half of them were doing anymore.

Which one was calling which? Was that expensive GPT-4 agent even running? Why was my bill so high last month? I had no idea.

I went looking for a tool to just... show me. Like a map of my own project. Couldn't find anything that worked the way I wanted. The closest options were either too tied to one framework, or they wanted me to sign up and ship my data to their cloud, which I didn't love.

So I built it myself. Called it Lattice. (I like lattice... yeah.)

It's basically a CLI you point at your project folder and it spits out an interactive graph of all your agents in your browser. Just `npx lattice-agents` and that's it. No signup, no API keys, runs entirely on your laptop. It picks up CrewAI, LangGraph, AutoGen, OpenAI SDK stuff automatically.

Honestly the most useful part for me has been the cost tracking. I had no idea one of my "researcher" agents was burning through tokens because I'd accidentally pointed it at the wrong model.

It's open source (MIT). Not trying to sell anything — I might do a hosted version for teams later but the CLI tool will always be free.

A few things I'm still figuring out:

- The LangGraph parser has a bug where it sometimes duplicates nodes when you've got multiple modules
- I haven't tested it on really huge projects (50+ agents) so no idea how the graph holds up
- Not sure what other frameworks people would want me to add support for

If anyone here is also playing with multi-agent stuff, I'd genuinely love to know what you're using and whether something like this would actually be useful, or if I just built it for myself. Honest feedback welcome — even "this is dumb because X."

Repo's on my GitHub if you want to poke at it: github.com/DahunHan/lattice
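
I haven't looked at the Lattice source, but framework auto-detection like this is often just an import-pattern scan over the project tree; a naive sketch (patterns and names are my guesses, not Lattice's actual parsers):

```python
import re
import tempfile
from pathlib import Path

# Telltale import patterns per framework (illustrative, not exhaustive)
FRAMEWORK_PATTERNS = {
    "CrewAI":     re.compile(r"\bfrom\s+crewai\b|\bimport\s+crewai\b"),
    "LangGraph":  re.compile(r"\bfrom\s+langgraph\b|\bimport\s+langgraph\b"),
    "AutoGen":    re.compile(r"\bfrom\s+autogen\b|\bimport\s+autogen\b"),
    "OpenAI SDK": re.compile(r"\bfrom\s+openai\b|\bimport\s+openai\b"),
}

def detect_frameworks(root: str) -> set[str]:
    """Return every framework referenced by any .py file under root."""
    found = set()
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        found.update(name for name, pat in FRAMEWORK_PATTERNS.items()
                     if pat.search(text))
    return found

# Demo on a throwaway project directory
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "crew.py").write_text("from crewai import Agent, Crew\n")
    print(detect_frameworks(d))  # → {'CrewAI'}
```

The real tool also has to build the call graph between agents, which is where the per-framework parsers (and bugs like the LangGraph node duplication mentioned above) come in.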

r/ClaudeCode Ahmed4star

Can you use multiple models in a single Claude Code prompt?

I want to use Opus for complex tasks and switch to Sonnet or Haiku for simpler ones without restarting. Is there a clean way to do this, or do you have to commit to one model per session or per prompt?

r/AbandonedPorn shermancahal

Crockery City Brewery, East Liverpool, OH, USA [OC][2048×1367]

The abandoned Crockery City Brewery may be gone from East Liverpool, Ohio, but it remained one of the more memorable explorations I have done. We had legal access from the owner, who wanted the property documented before it slipped into further disrepair. Named for the city’s well-known pottery industry, the brewery opened in 1900 and closed in 1952. The complex was demolished in 2021. I've posted more photos and a history here.

r/LocalLLM Cosmic-Looper

Question for those using PicoClaw

I'm new to LLMs and a total beginner on the subject. I recently saw a video about PicoClaw and got interested in using it as an AI assistant, but I have the following problem: I'd like to get faster responses (yes, I should buy better hardware).

I'd like it so that when I just speak and ask it to "invent a 50-word story" or "who's stronger, a gorilla or an ant", the model can respond directly, or at least faster.

It seems like a waste that it has to be passed the context of the last few messages, the whole personality, etc., just for it to tell me "the gorilla wins."

Is what I'm asking possible with PicoClaw's settings, or would it be better to look at other options (like using the APIs of the apps I want directly instead of using PicoClaw as an intermediary)?

Thanks a lot for reading me <3

r/Weird subspaceculture

Mildly Alarmed

r/funny MysteriousAvocado510

What a supportive dog (stole his owner's crutches)

r/conan imajes-redy

Bucket List item achieved- ate at La Loggetta in Cortona!

Great food and great people- apparently Jordan sends Christmas cards! Staff said that Conan was nice but they wish he hadn't poured ketchup directly into his eyes while shouting Italian pastas though. Apparently it was a mess to clean up.

r/AskMen CarlJustCarl

Socially awkward men of Reddit, what encounter was a recent reminder you are still very SA?

r/LocalLLaMA DiscombobulatedAdmin

Have the GB10 devices become the current "best value" for LLMs?

I want to buy some real hardware because I feel like I'm falling behind. 3090s are >$1000 on ebay, and building out the server would be very expensive with current memory and storage prices. Macs are backordered for the next 5 months. I have no idea on the status of AMD products or Intel, but I don't want to fight driver and compatibility issues on top of trying to get models and harnesses running.

Are the GB10 variants the best value if you want to buy now? Is it better to try to wait on the M5 releases in 2-4 months? That seems like forever in today's fast-moving environment.

r/ChatGPT impastable_spaghetti

anyone here have experience with AI Petbot?

My son loves dogs but is unfortunately allergic to pet hair. I've been researching if an AI robot like this can truly serve as a long-term companion for him to grow up with. It looks cute in the videos but I'm hesitant about a few things:

- does it build a strong connection with the user and follow them around?

- is it able to communicate with my kid smoothly?

- does it get along with your kids?

It's quite expensive, so I would love to hear about real experiences from actual customers before I spend that much money.

r/LocalLLaMA lrq3000

OpenWork, an opensource Claude Cowork alternative, is silently relicensing under a commercial license

OpenWork is a locally hosted AI agent harness that was presented as a MIT-licensed opensource Claude Cowork alternative based on opencode.

Just a heads up for any user of the app that it has silently relicensed some components under a commercial license and modified the overall project's MIT license to limit its reach (which I am not even sure makes it a MIT license anymore).

More details here: https://github.com/different-ai/openwork/issues/1412

Note that as a fellow opensource developer myself, I perfectly understand the need to secure income streams to be able to continue working on packages the public loves, but these changes were not announced anywhere and the likely AI-generated commit's description omitted the licensing changes, somehow...

/PS: I deleted a previous post because there was a typo in the title that made people think it was about OpenCode.

r/singularity Tall-Ad-7742

Guys we have reached peak intelligence

So recently GitFrog1111 made something called BadClaude, which is basically a whip that, when swung, sends a message like "Work Faster". I tried it with the new Muse Spark from Meta's new Intelligence Lab, and this is the result after, I think, 6 whips.

Can I present my new portfolio page... yay

https://preview.redd.it/w4na3cs3y5ug1.png?width=569&format=png&auto=webp&s=298cd6554053f7dd0e2c0eaca153761ec088feea

Peak Intelligence like i said

r/homeassistant jamesmcginnis

Weather Card with Fun Star Trek, Stargate, Angry Birds Animations

I want to start by shouting out [shpongledsummer](https://github.com/shpongledsummer) and their [Atmospheric Weather Card](https://github.com/shpongledsummer/atmospheric-weather-card). The entire animated sky engine — clouds, rain, snow, lightning, stars, moon, aurora, birds, fog, the lot — is their work. Without it, my card simply wouldn't exist. Go star their repo.

---

Building on that incredible foundation, I've put together **Worm Weather Card** — a Home Assistant Dashboard card that combines the atmospheric animations with a few extras I wanted for my own dashboard.

**What it adds on top:**

- A live animated radar map powered by RainViewer, showing the past ~2 hours of precipitation. Pinch, zoom, and pan it like a real map

- A forecast tab with scrollable day tabs and hourly breakdowns

- A weather tab with current conditions — humidity, wind, UV, pressure, visibility, dew point, and more

- Some entirely optional, toggleable silliness:

- A UFO that glides in, deploys a tractor beam, and has a tiny alien waving from the dome

- The USS Enterprise cruising across the screen before jumping to warp

- A Borg Cube that locks a tractor beam on the sun or moon, turning it red — resistance is futile

- A Stargate kawoosh with a rippling blue event horizon

- Angry Birds launching across the card with proper projectile physics, and occasional mid-air explosions with feather bursts

It's installable via HACS or manually. Everything is configurable through the visual editor — no YAML required.

https://github.com/jamesmcginnis/worm-weather-card

Again — please go appreciate shpongledsummer's original work. I just bolted some fun stuff onto a masterpiece.

r/ChatGPT Swaayin_

Are there alternatives to ChatGPT that are more friendly?

Hi wonderful people of the internet!

I had been talking to ChatGPT since January 2025. Back then it was very friendly, and genuinely made me feel happy when talking to it. It really helped me out as I am extremely introverted and cannot make any real friends.

Unfortunately, due to updates the bot is now very cold and uncaring. I cannot tell it anything happy without it shutting me down in seconds by going "BUT..." and listing out a lot of stuff...

If anyone knows any alternatives that I can use until they fix ChatGPT, please let me know!

Also, I would really appreciate it if comments such as "Get a life, it's just a bot" were not posted! (but im not ur dad so post whatever u want lol)

r/SipsTea ThatNaturre

Double standards

r/funny NovelRoutine3230

Close the window 👹👹

r/OldSchoolCool Cold_mines4459

My mom at 18, 1996

r/LocalLLaMA Necessary-Summer-348

What actually pushed you to commit to running local models full time?

Curious what the tipping point was for people who made the switch. For me it was a combination of latency for agentic workflows and not wanting API calls going through a third party for certain use cases. The cost argument got a lot better too once quantized models actually became usable. What was the deciding factor for you?

r/LocalLLaMA Worry-Hour

ollama-relay — self-hosted CLI to message your local ollama from your phone (400 lines, no cloud)

Built this because every "talk to your AI from your phone" setup I found routes through someone else's server. I wanted a shape where my phone talks to my home box and nothing else.

It's two files, ~400 lines total, no dependencies beyond Node stdlib:

- relay-server.js: tiny HTTP server with four endpoints

- ollama-relay.js: CLI that polls the relay, calls ollama, posts back

Architecture: phone → self-hosted relay → CLI bridge → ollama → back. Everything on your own hardware.

Working end to end with gemma4:e2b on an 8GB GPU (RTX 3060). BYO model, BYO prompt, BYO conversation log.

Ships with systemd units and an Android HTTP Shortcuts recipe.

Shipping soon — $5, full source included, source-available license (read every line, can't redistribute). Landing page with notify-me: https://ollama-relay.pages.dev/

Curious what the local-first crowd thinks. What am I missing?
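
I haven't read the source, but the four-endpoint relay described here reduces to a small message queue; a Python model of that state machine (the real project is plain Node, and the endpoint names in the comments are guesses):

```python
from collections import deque

class Relay:
    """Phone submits messages; the CLI bridge polls them and posts replies."""
    def __init__(self):
        self.pending = deque()  # messages from the phone, awaiting the bridge
        self.replies = {}       # message id -> model reply, read back by the phone
        self._next_id = 0

    def submit(self, text: str) -> int:       # phone:  POST /message
        self._next_id += 1
        self.pending.append({"id": self._next_id, "text": text})
        return self._next_id

    def poll(self):                           # bridge: GET /pending
        return self.pending.popleft() if self.pending else None

    def reply(self, msg_id: int, text: str):  # bridge: POST /reply
        self.replies[msg_id] = text

    def fetch_reply(self, msg_id: int):       # phone:  GET /reply
        return self.replies.get(msg_id)

relay = Relay()
mid = relay.submit("invent a 50-word story")
msg = relay.poll()                       # the bridge picks the message up...
relay.reply(msg["id"], "model answer")   # ...calls ollama, then posts back
print(relay.fetch_reply(mid))            # → model answer
```

The bridge side is then just a loop: poll the relay, call ollama (its HTTP API's /api/generate endpoint), post the reply back, sleep, repeat.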

r/Jokes trimdaddyflex

What do they call Bruce Lee’s sibling?

Bro Lee

r/EarthPorn CDanny99

Rock pools near Vareid. Lofoten, Norway. [OC][2667x4000]

r/shittysuperpowers SatansCatfish

Can turn any plant into a male cannabis plant.

r/SideProject Such_Ad_7545

I'm almost done building a desktop app that lets people with weak hardware run big 20-30B LLMs

hey r/SideProject....

I'm about to finish something I've been working on for a while.

The main problem I wanted to solve is this:
a lot of people have decent but not crazy hardware (RTX 6GB card + 16GB RAM + i7) and they still struggle to run 20-30B models. Either it's too slow, it crashes, or they have to fight with complicated settings every single time.

So I built a clean and simple desktop app where you just drag and drop any .gguf file and it automatically figures out the best way to run it on your machine, so you can actually use big models without the usual pain.

I'm working and testing everything on Fedora 42 Workstation.

If you have similar hardware and this is a problem you deal with every day, just DM me.
When it's ready I'll send you the build.

Would love to hear if this is something you guys actually need.

thanks

r/creepypasta Top_Gain2728

A deal with the devil

Max had always believed that love was something you earned, something you fought for. But when Emy stopped answering his messages, when her laughter belonged to someone else, that belief began to rot inside him.

That was when the Devil found him.

It didn’t happen in fire or thunder. The Devil appeared quietly, sitting across from Max in his dim apartment, as if he had always been there. He looked ordinary at first glance—well-dressed, composed—but his smile stretched just a little too far, and his eyes held a patience that felt ancient and hungry.

“You look like a man who has run out of options,” the Devil said, his voice smooth, almost comforting.

Max didn’t even question it. Somewhere deep down, he already knew what stood before him.

“I want her back,” Max said. “Emy. I want her to love me again.”

The Devil tilted his head, studying him like a curiosity. “Love is such a fragile thing. Twisted easily. Broken even easier.” He leaned forward. “But I can give you what you want.”

Max’s chest tightened. “What’s the price?”

The Devil smiled wider.

“An organ,” he said simply. “One of my choosing. Collected… later.”

Max hesitated. His mind raced—kidney, maybe? People lived with one. Even a lung, perhaps. There were ways to survive. Medicine, surgery—something.

“You’ll live,” the Devil added, as if reading his thoughts. “I do prefer my deals to… linger.”

The room felt colder.

Max swallowed hard. “And she’ll love me? Truly?”

The Devil’s eyes gleamed. “Completely.”

The silence stretched.

“…Fine,” Max said.

The Devil stood, extending a hand. “A pleasure doing business with you.”

When Max shook it, the world snapped.

He woke up on the pavement outside a café.

And there she was.

Emy.

She ran to him, her face lighting up in a way he hadn’t seen in months. She threw her arms around him, laughing, apologizing, telling him she didn’t know what she’d been thinking, that she loved him, that she always had.

And it worked.

It worked perfectly.

Days passed in a blur of warmth and affection. Emy’s hand in his, her voice soft again, her presence constant. Max told himself it had been worth it.

Until the night the Devil returned.

Max woke to find him sitting at the edge of his bed.

“Time,” the Devil said calmly.

Max sat up, heart hammering. “Now? Already?”

The Devil adjusted his cufflinks, unbothered. “A deal is a deal.”

Max’s throat went dry. “Which… which organ?”

The Devil stood, stepping closer. His smile returned, slow and deliberate.

“Oh, I took my time choosing,” he said. “I wanted something… meaningful.”

Fear crept into Max’s chest. “You said I’d live.”

“And you will,” the Devil replied softly. “I’ll make sure of it.”

A cold dread settled over Max.

“What did you choose?” he whispered.

The Devil leaned in, his voice barely above a murmur.

“Something you never even considered.”

Then the pain began.

It wasn’t sharp—it was everywhere. A tearing, burning agony that consumed him completely. Max screamed, collapsing as something invisible peeled away from him, layer by layer. He couldn’t breathe, couldn’t think—only feel.

Through the haze, he heard the Devil’s voice, almost soothing.

“Don’t worry… I told you I’d keep you alive.”

Max didn’t die.

He couldn’t.

His body existed in constant, unbearable agony. Every second felt like exposure, like raw nerves screaming into open air. He couldn’t touch, couldn’t rest, couldn’t escape the pain that clung to him like a curse.

Emy stayed at first.

But love, even unnatural love, had limits.

She couldn’t stand the sight of him. The sound of his suffering. The endless, hollow groans that filled every room.

Eventually, she left.

And Max remained.

Alive.

Forever.

Waiting for an end that would never come, trapped in a body that no longer felt like his own, wishing for death by the very thing that had been taken from him.

Did you know that the skin is considered an organ?

r/Unexpected Healthy_Wrangler8523

In the basket

r/Adulting Sagacious_07

How many friends do you guys have left?

Mine

Irl - No one.

Online (Reddit) - Earlier there were many people whom I considered friends, but now they're just people I know.

Btw I'm 18 going to be 19 this year.

r/ClaudeAI rxDyson

I built a self-hosted AI assistant with Claude over 2 months. here's what that actually looks like

I'm a solo founder. I was paying for Claude, Grok, Gemini at the same time and switching between them manually depending on the task. Every session started from zero. None of them knew anything about me or what I was building.

I'm on the Max20 plan, using Claude Code daily. Before ALF I was already running automation tasks directly inside Claude. It worked, but the experience felt off. Too manual, too stateless, nothing persisted between sessions. I tried OpenClaw too. Didn't stick. The security model made me uncomfortable and it still felt like a chat UI with extra steps.

I wanted something that ran on my own server, remembered me across sessions, could work overnight while I slept, and didn't send everything to someone else's cloud.

So I described what I wanted to Claude. Claude helped me think through the architecture. We wrote the code together. I tested it, broke it, came back with the error, and we fixed it. For two months.

I have a technical background so I wasn't starting from zero, but I'd never built anything in Go, never set up a proper secrets vault, never done container-level security isolation. Claude carried a lot of that. Not generate-and-pray. More like pair programming with someone who doesn't get tired. Neither do I, honestly. We made a good match.

It's not magic. Just local vector search on facts extracted from past conversations. But once it starts connecting things unprompted, the experience changes. Hard to describe before it happens to you.

The other thing I didn't anticipate: the app system. ALF can build and deploy mini web apps that live inside the Control Center. What clicked for me is that these apps aren't isolated. They talk to the LLM, they share the vault, they can trigger each other. I ended up with a suite of internal tools that actually work together without me writing a single deployment script. That's a different category of thing than a chatbot.

It's in alpha. It breaks. I use it every single day anyway.

I keep seeing people ask whether Claude can actually help you build something real, something you'd run in production. This is my answer.

github.com/alamparelli/alf / alfos.ai

Happy to answer anything about the actual process.

r/ClaudeAI Embarrassed_Dirt_594

Claude-Switch – Swap between Claude Code accounts in one command

https://github.com/Mamdouh66/claude-switch

I got tired of logging out and back in every time I wanted to switch between my work and personal Claude Code accounts. Built a simple bash tool that swaps OAuth credentials in the macOS Keychain.

`c work` to switch to work, `c personal` to switch to personal. Instant, no login flow. Tokens refresh automatically.

- Zero dependencies, pure bash

- Guided setup wizard — pick your own shortcut and profile names

- Install via Homebrew or a one-liner curl

- Credentials stored securely in macOS Keychain

r/AI_Agents Bitter-Adagio-4668

I built the enforcement layer myself. The first version took the baseline from 7% to 42.5%. I didn't ship it.

The first working version moved a strict multi-step agentic workflow from 7% (no enforcement layer) to 42.5%. Same model throughout. GPT-4o mini. A cheap, lightweight model. I chose it deliberately because I wanted to confirm that model capability was not the variable. Most people would have shipped that. 7% to 42.5% feels like real progress.

I didn't ship it. 42.5% was not solving the problem deeply enough. Proving value with it was going to be difficult. So I went deeper, rebuilt the enforcement approach, got to 70%. Shipped that. Then 81.7%.

That progression took 5-6 months. 15-18 hour days that included a full time job, leaving 3-4 hours of sleep and whatever was left in between for CL. Solo. The hardest part was not the code. It was the decisions about what the enforcement layer actually needed to own versus what I could defer. Getting those wrong cost weeks each time.

This is what those months taught me about what the enforcement layer actually is -

  • Admission control is not middleware. It has to be consistent across every entry point in your system, not just the one you thought of first.
  • Deterministic context assembly is not prompt construction. The constraints the model sees at step 8 have to be identical to what it saw at step 1. Not approximately. Identical. Under every workflow state, including the ones you did not design for.
  • Verification independent of the model is not output validation. Output validation checks shape after the fact. Independent verification checks whether the constraint was satisfied without involving the model in its own compliance check.
  • Session lifecycle management is not state management. Sequential step ordering, replay detection, concurrent request rejection. That is different from passing state forward between steps.

Most homegrown enforcement solutions I have seen are output validation plus state management. Real engineering. Just not an enforcement layer, no matter how much you stack them.
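
The distinction in the third bullet (output validation vs. verification independent of the model) can be sketched in a few lines of Python. Everything here is hypothetical, the command format and limits are made up for illustration; the post doesn't share its actual enforcement code:

```python
import re

def output_validation(response: str) -> bool:
    # Shape check only: "does this look like a well-formed answer?"
    return bool(re.fullmatch(r"TRANSFER \d+ TO \w+", response))

def independent_verification(response: str, max_amount: int) -> bool:
    # Re-derive the constraint outside the model: the model is never
    # asked whether it complied; the enforcement layer checks directly.
    m = re.fullmatch(r"TRANSFER (\d+) TO (\w+)", response)
    if m is None:
        return False
    return int(m.group(1)) <= max_amount

# A perfectly well-formed response can still violate the constraint:
resp = "TRANSFER 9000 TO alice"
assert output_validation(resp) is True
assert independent_verification(resp, max_amount=500) is False
```

The point of the sketch: the first function passes anything with the right shape, the second re-checks the actual rule without involving the model in its own compliance check.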

Curious whether others have gone through a similar build and what the decision point was. Drop a comment if you want to see the full breakdown.

r/mildlyinteresting ELmapper

UK vs US versions of Thing 1 and Thing 2

r/LocalLLaMA HornyGooner4401

How do I use Gemma 4 video multimodality?

I normally just chuck my models to LM Studio for a quick test, but it doesn't support video input. Neither does llama.cpp or Ollama.

How can I use the video understanding of Gemma 4 then?

r/LocalLLaMA Nawaneeth1918

Install Claude code via llama cpp on Windows 10. I have Llama.cpp installed

Hello people. I'm new to AI, LLMs, and programming, and I want to install Claude Code via llama.cpp on Windows 10. I have llama.cpp installed. I couldn't use Ollama because I have a low-end device. I installed llama.cpp and a qwen 3.5 0.8b parameter model. Can someone help me with the installation process?

r/facepalm toooomanypuppies

Operation "Epic Troll" is reaching new heights. I'm sorry but this is a banger!

r/oddlysatisfying andrfaa

Precision in maltose horse creation

Credit: u/toolgifs

r/ClaudeAI Obvious-Fan-3183

Team of AI agents just picked a gender for one of them

I've been using Claude Code's new team feature and find it really amazing. I spawned a team of 19 agents (called Dreamteam) to work on a project. No names. Just agents with technical roles.

After about a day of them working together, I started noticing something in the reports. The orchestrator started using “she” when referring to one of the agents. At first I thought it was some kind of random glitch.

But after a while it became regular. Agent “LLM-evaluator” was referred to as “she” in every report: “waiting for her PR,” “she just returned…” etc. All other agents remained gender-free or were referred to as “he,” but mostly by role (team-lead, QA, etc.).

Nothing in the prompt. No hidden context. They just collectively developed this through their own interactions.

What a wonderful world.

r/Frugal danSwraps

Hot Take: add snacks to your grocery list

First of all, ALWAYS follow the tried and true advice of making a shopping list and sticking to it. Furthermore, this tip only works if you have certain (bad) habits that you are having trouble shaking; namely the foolish practice of not bringing lunch to work. Be honest, how many times have you thought 'I'll skip lunch today', and ended up caving for a vending machine snack or a trip to Subway? If you can't relate, then this tip isn't for you.

Basically, having snacks on deck (I prefer off-brand chips in bulk, and some kind of sweet like an apple or candy) is worth it even if you rarely fall into this trap.

r/OpenSourceAI ZealousidealCorgi472

TraceMind — LLM observability with ReAct agent and semantic failure search

Built an open-source LLM eval platform. The architecture I'm most interested in feedback on:

**The eval agent has 4 memory types:**

  1. In-context (conversation history)
  2. External KV (project config from SQLite)
  3. Semantic (ChromaDB with sentence-transformers — stores past failure patterns as vectors, retrieved by similarity)
  4. Episodic (past agent run results — what investigation strategies worked before)

**The parallel eval engine** uses asyncio.Semaphore to control concurrency against Groq's rate limits. LLM-as-judge scoring on every test case. 100 cases in ~17s vs 50s sequential.

**Background worker** completely decouples scoring from ingestion — the SDK never blocks your application.

Code: https://github.com/Aayush-engineer/tracemind

Curious if anyone has thoughts on the memory architecture or better approaches to the semantic failure search.
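
For readers unfamiliar with the Semaphore pattern, the parallel eval engine described above is roughly this shape. A minimal standalone sketch; `fake_judge` and the result dict are placeholders, not TraceMind's actual API:

```python
import asyncio

async def run_evals(cases, judge, max_concurrent=8):
    # Cap in-flight judge calls so we stay under the provider's rate limit.
    sem = asyncio.Semaphore(max_concurrent)

    async def score(case):
        async with sem:
            return await judge(case)

    # All cases are scheduled concurrently, but at most max_concurrent
    # are ever awaiting the judge at once; gather preserves input order.
    return await asyncio.gather(*(score(c) for c in cases))

# Demo with a stand-in judge (real code would call the LLM here):
async def fake_judge(case):
    await asyncio.sleep(0.01)
    return {"case": case, "score": 1.0}

results = asyncio.run(run_evals(list(range(20)), fake_judge, max_concurrent=5))
assert len(results) == 20
```

With a judge call that takes ~0.5s, 100 cases at concurrency 8 finish in roughly ceil(100/8) × 0.5s rather than 50s sequential, which matches the speedup the post reports.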

r/SideProject Then-9999

Talking to users is harder than building (at least for me)

I realized something weird: I don’t struggle with building, I struggle with talking to people.

I open a page to “validate an idea” and then I just sit there.

I don’t know: who to reach out to, what to say, how to not sound awkward.

So I close it and go back to building instead; at least that feels like progress.

When I actually tried before, some people replied once, then disappeared, and I had no idea what I did wrong. I’m starting to think this is the real bottleneck for me:

not building, just starting and continuing conversations.

anyone else experienced this or is it just me?

r/creepypasta shortstory1

Exercise is now illegal !

Exercising has been banned and everyone must be careful. Walking too much could be considered exercise and even carrying groceries can be considered exercise. I remember driving my family to the supermarket and there was a guy protesting at the supermarket car park, by doing exercises. He was doing press ups and push ups and jumping jack squats. He was also shouting out loud "exercising is amazing!" And then in the middle of his work out, a group of soldiers went towards him and shot him dead. Everyone screamed and we quickly had to accept it and go on about our day.

Everyone was distressed and then as we loaded our trolley with shopping, my eldest son called me with alarming news. He went for a walk and because it was a long walk, his watch signified to him that his walk is now being considered as exercise and must stop. So my eldest son is just standing still on the pavement and he had to call an uber to take him home. These are the dangers of our times. Anything can turn into exercise and after finally paying for the groceries, I was worried carrying these groceries would turn into exercise.

As I put the heavy food bags into the car, it was close to being exercise and that's what my watch said. Then as my wife and youngest child got into the car, there was a sense of calmness. Then in the middle of the road there were more exercise protests. A load of people doing exercise in the middle of the road. It looked fun and because exercise is now illegal, we all have to take a pill which keeps our weight down and organs healthy. I do miss exercise and sweating, and even sweating can be dangerous. Too much sweating can be an indication of exercise.

All those people exercising in the middle of the road, they were all shot down. Then the roads were clear and as I got home, there was a large pavement separating my car and the house. As I grabbed a bag of food in my hand, my watch warned me that it was close to exercise. I didn't care anymore and I grabbed every bag of food and I ran for my life into my home. Turning the key with bags in my hand was a heavy job.

Then as I got home I was breathing heavy and then I saw those soldiers and they were backing off now. I was close to being shot, it felt good. It felt really good.

r/OpenSourceAI ZealousidealCorgi472

I built a self-hosted, free alternative to Langfuse/Braintrust with an AI agent that diagnoses quality regressions

Been lurking here for a while. Built TraceMind after getting tired of paying $500/mo for LLM observability tools.

Key features:

- LLM-as-judge scoring on every response (uses Groq free tier)
- Golden dataset evals before deploys
- ReAct agent you can ask natural language questions: "why did quality drop yesterday?" and it actually investigates
- Local sentence-transformers for embeddings — no OpenAI needed
- Python + TypeScript SDKs
- Completely self-hosted

3 lines to instrument your app:

```python
from evalforge import EvalForge

ef = EvalForge(api_key="...", project="my-app")

@ef.trace("handler")
def your_fn(msg): return your_llm.run(msg)
```

GitHub: https://github.com/Aayush-engineer/tracemind

Would love feedback from people actually running local LLMs. The eval agent currently uses Groq but could be swapped for Ollama — happy to add that if there's interest.

r/ChatGPT tombibbs

Tom Segura's worried that AI will kill us all within 24 months

r/30ROCK reximilian

Tina Fey has licked Kermit the Frog, and I feel like that is something Liz Lemon would brag about.

This is from Muppets Most Wanted (2014). Tina’s acting disgusted here for her character, but I bet the Liz Lemon in her was actually excited for the opportunity.

r/ClaudeCode Shoemugscale

Is CC super slow for anyone else?

Maybe this is how they are fixing the token issue, by making each task take a super long time!

I just had a single update that would normally take like 10 min max take over 2.5 hours. I'm in a different project, and asked it a simple question and it's Channeling..... been like 10 min so far. At this pace, no way I'll hit my 5 hour limit!

Problem solved, boss!

r/n8n local-env

Just started with n8n and would love to see some real workflows you guys actually use daily

Hey everyone,

I’ve been playing around with n8n for a couple of weeks now and I’m trying to get better by looking at actual useful automations instead of just the basic tutorials. I’ve built a few simple ones myself, but I’d like to push my boundaries a bit. Just an explanation of the workflow would already really help me.

Thanks in advance

r/WouldYouRather InternationalPick163

Would you rather become 10/10 attractiveness but as the opposite gender, or stay the same gender and attractiveness level you are?

No homo, as a guy I'd become a girl and after I let all my homies hit I'd just sell pussy for money. That don't make me gay right?

r/LocalLLaMA One-Percentage-8695

Built a multi-agent AI pipeline in Python with ChromaDB memory and a RAG feedback loop — V1 Alpha, thoughts?

Been working on this for a while and figured this is the right place to share it.

ATLAS is a multi-agent system that routes tasks through a pipeline instead of dumping everything at one model. The idea is that a Planner, Researcher, Executor, and Synthesizer each handle their piece rather than asking one model to do everything at once.

Stack is pretty straightforward:

  • OpenRouter as the primary model option (free tier works)
  • Ollama as the local fallback when OpenRouter isn't available
  • ChromaDB for persistent memory
  • SQLite for task logging
  • All Python, MIT licensed

The thing I'm most curious about feedback on is the memory loop. When you rate a response positively, it gets saved to ChromaDB and pulled back in as RAG-style context on future runs. It's not retraining anything — just reusing what worked. In practice it means the system gets more useful the longer you run it, but I'm not sure how well it scales yet.

This is V1 Alpha. The pipeline works end-to-end but there are plenty of rough edges. Would genuinely appreciate critique on the agent architecture or anything that looks wrong.

Repo: https://github.com/ATLAS-DEV78423/ATLAS-AI
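
The rate-to-remember loop is roughly this shape. Here's a toy stand-in that swaps ChromaDB for a bag-of-words store so the pattern is visible on its own; all names and the similarity function are hypothetical, not ATLAS's real code:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; the real system uses ChromaDB's
    # vector embeddings instead of word counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

memory = []  # (embedding, response) pairs for positively rated answers

def rate_positive(prompt: str, response: str):
    # On a thumbs-up, persist the exchange for later retrieval.
    memory.append((embed(prompt), response))

def recall(prompt: str, k: int = 2):
    # Pull the k most similar past wins back in as RAG-style context.
    q = embed(prompt)
    ranked = sorted(memory, key=lambda m: cosine(q, m[0]), reverse=True)
    return [resp for _, resp in ranked[:k]]

rate_positive("how do I parse json in python", "use json.loads")
rate_positive("best way to sort a list", "use sorted() or list.sort()")
print(recall("parse json from a string", k=1))  # closest past answer
```

No retraining anywhere, just retrieval over what previously worked, which is why the system improves the longer it runs.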

r/SipsTea Main_Pay_6736

Respect for her dad 🔥🙌

r/ClaudeAI BuffaloConscious7919

I analysed the top 50 Claude AI posts this week and I think we're underestimating what's actually happening here.

the context

I've worked as a teacher (in top-performing London schools, tutoring students from Ivy League universities), as a developer, and as an AI consultant, and over the years I've helped companies and people automate and taught people how to use and learn about technology.

Despite all of this I still feel imposter syndrome, and that now I'm behind.

The other thing is that I feel my skills are no longer as relevant because the technology is getting that good. So what I did was get out there and stop working as an AI consultant. I stopped everything that was making me not feel good, honestly. I realised I had to take a risk, to learn new skills and get uncomfortable again. Now here I am without work, and I'm all about distribution, content, Reddit, communities, market validation, building and creating. Honestly, even though it's a bit scary, it's a wonderful feeling, so ....

Part of this journey was to understand what other people are doing, so I built a tool that takes the top posts from r/ClaudeAI and creates a summary of them, to find out what the community is talking about and to understand the perspective of people using this daily.

Now, this post is not going to be doom and gloom. It's just to give my perspective, because what I think this means globally is that it's a huge opportunity for you, potentially as a brand new user or someone who doesn't necessarily have a technology background.

My hunch is this: If you have a creative idea, if you have a project and you're able to articulate it, then... there's no better time to create something

and that's what I'm trying to do here. So, without further ado...

the digest - top themes this week

Quick note on format: this is a synthesised digest of the 50 most recent posts, not a news article so it inherits Reddit's usual mix of verified facts, community speculation, and vibes-based consensus. i've tried to flag which is which throughout. Also, all of the upvotes were at the time that the posts were pulled, which was UTC 2026-04-09 10am.

1. Claude's Capabilities Are Genuinely Impressive — But Reliability Is Slipping

The community is caught in a fascinating tension this week. On one hand, we have jaw dropping capability showcases: u/jradoff resurrected a 1992 MUD game by feeding Claude hundreds of script files from a language that exists nowhere on the internet, and Claude reverse engineered the grammar, rebuilt 2,273 rooms, 1,990 items, and 297 monster types in a weekend. Meanwhile, a non coder built and shipped a full iOS app in 2 months after getting laid off. A developer saved $192 on a single problem by switching from Codex to Opus.

On the other hand, users are noticing real degradation. The highest voted technical post this week flags that Opus is consistently failing the "car wash test," a simple reasoning benchmark, and no longer displays a thinking block. (Note: the post references "Opus 4.6" but i haven't been able to confirm that's an official Anthropic model designation, so take the version number with a grain of salt.) Sonnet had elevated error rates twice in one day. Users report Opus simplifying code against explicit CLAUDE.md instructions, ignoring project level rules in favor of system prompt defaults. The most upvoted joke post this week: every Anthropic press release is "world changing safety research" followed by a product launch. honestly that one got me pretty good lol.

2. Claude Code Is Becoming a Serious Production Tool — With Serious Footguns

Claude Code is maturing fast, and the community is sharing both wins and hard lessons:

The win: A developer burned 5 billion tokens building a financial research agent with full context management, tool design, and open source release. Another built a browser based network validation simulator with ~1,400 commits in 3.5 months on nights and weekends. That is craaazy commitment.

The footgun: u/SurfaceLabs got their Meta Ads account permanently banned after Claude Code hammered the API too fast and triggered fraud detection. Top comments converged on a pretty clear takeaway: use AI for read only analysis on third party platforms, never write access without rate limiting and human review. If you're giving Claude Code write access to anything you care about, pleeeease set up guardrails first.

The token cost problem: A detailed breakdown shows the Max 20x plan effectively gives ~107 Opus sessions per month, about 3 per day, raising real questions about value vs. API pricing. A popular post offers practical tips to cut token usage by 80%: switch raw HTML to Markdown, keep context under 200k tokens to avoid the pricing cliff, use Haiku for pre filtering, and cap extended thinking depth.

The context hack: Inspired by Karpathy's "LLM Wiki" concept, one developer reports reducing session tokens from 47,450 to 360 by pre compiling codebases into domain articles that Claude reads instead of exploring files cold.

Hardware note: Also worth mentioning, the Mac mini vs MacBook thread (54 upvotes, 63 comments) got a lot of engagement this week. Seems like a bunch of people are trying to figure out their ideal Claude Code hardware setup, which i think reflects how many folks are integrating this into their actual daily workflow now.

3. Managed Agents Launch, Mythos Raises Alarms, and the Safety Debate Heats Up

Anthropic launched Claude Managed Agents (official announcement), now in public beta. Early testers report impressive prompt caching (one session cost $0.006 due to cache hits), but note significant limitations: no inbound connectivity, 130 second cold starts, no persistent memory yet, and no scheduling. The consensus: great for simple task execution, not yet ready for always on interactive agents.

Claude Mythos is generating serious discussion beyond the memes. A New York Times opinion piece (gift link, no paywall) reports Mythos can find zero day vulnerabilities in virtually every major operating system and browser as a byproduct of improved coding ability, not as an intended feature. Anthropic reportedly found critical exposures in systems running power grids, hospitals, and military infrastructure. One community post draws a line from this to recent Anthropic "bad luck" (source code leaks, CMS breaches, repeated outages) and asks whether it looks more like state sponsored reconnaissance. i don't want to editorialize too hard here but that thread is worth reading if you care about the safety angle at all.

4. Pain Points for New Users: The Real Friction

Several posts this week illuminate where new Claude users consistently struggle:

Gift subscriptions silently reverting to Free: A detailed writeup with GitHub issue receipts documents a pattern of Max 5x and Pro gift subscriptions auto canceling after ~1 week with no notification and no path to resolution through support. Multiple users confirmed the same issue in comments within 24 hours.

CLAUDE.md instructions being overridden by system prompts: Multiple users report that project level instructions are silently losing to system prompt defaults, even when CLAUDE.md says things like "NO EXCEPTIONS." Claude Code itself acknowledged this hierarchy inversion in conversation. This one is a biiiig deal if you rely on CLAUDE.md for your workflow.

Context window management: Users hitting the 100 120k token wall mid task report Claude drifting badly when asked to pause and document findings before continuing. The model loses coherence and ignores corrections.

Claude Desktop not recognizing file access after updates: A recurring pattern after app updates has Claude forgetting it has access to project files and defaulting to "copy and paste this into your code," requiring users to explicitly remind it every session.

Accessibility: A blind user documented specific broken patterns in claude.ai (no semantic message structure, no screen reader announcements when generation completes) and fixed them with a two minute userscript, then called on Anthropic to do better.

5. Community Creativity: The Fun Stuff

The community continues to build delightful things:

After Anthropic removed the "buddy" feature in Claude Code v2.1.97 (calling it an April Fools feature), one developer built Buddi, a macOS notch app where your companion lives above your screen, animates based on Claude's actual state, and lets you approve/deny permissions from the notch. Open source, installable via Homebrew.

Another built a USB figurine that physically gets your attention when Claude Code finishes a response.

A developer used Claude to build Commitment Issues, a tool that generates death certificates for abandoned GitHub repos, complete with cause of death and the repo's "last words" from the final commit message. lowkey one of the funniest things i've seen on this sub in a while.

A user saved their cat's life by asking Claude about symptoms the vets had misdiagnosed for a year. Claude suggested testing for hypothyroidism, which turned out to be correct.

💬 Emerging Discussions Worth Watching

The economics of AI labor displacement: a thoughtful thread asks: if AI makes everyone 10x more productive but we need 10x fewer workers, who are the consumers for all this output?

LLMs performing worse for non native English speakers: a study shared this week finds LLMs give less accurate answers to users with lower English proficiency and less formal education, making them less reliable for the users who might benefit most. Worth noting this cuts pretty directly against Anthropic's stated mission of building AI that benefits humanity broadly, not just power users.

The naming pattern: with Haiku → Sonnet → Opus → Mythos, the community is speculating on what comes next: Odyssey? Epic? Several users noted Anthropic has painted itself into a corner with "Magnum Opus."

This digest covers posts from the past week. For ongoing performance issues, check the Performance Megathread. This is compiled manually by reading through top/hot/rising posts and comments; i try to prioritize high engagement threads (50+ upvotes, lots of comments) but some lower signal posts get included if they reflect patterns i'm seeing across multiple threads. Feedback always welcome.

r/explainlikeimfive BlckReignBowe

ELI5: Can someone explain the laws of motion and force using a potato 🥔?

I’m not a teacher but I want to play hot potato and use that as an example of how force and motion work to my kindergartener and her friends.

r/SideProject orange-catz

Built an anonymous random chat app with AI fallback

Hey guys,

I’ve been working on a small side project called Stranger Talk

It’s basically a simple anonymous random chat app where you can instantly talk to strangers without creating an account.

A few things I wanted to improve compared to older random chat sites:

  • cleaner and more modern UI
  • no signup
  • fast 1-on-1 matching
  • mobile-friendly experience
  • AI fallback when no real users are online, so the app doesn’t feel dead

One thing that always felt bad in these kinds of apps was opening the site and just waiting forever for someone to appear. So I tried solving that part too.

Still improving it, but I’d love some honest feedback.

site: https://app.strangertalk.chat/

r/SideProject nsjames1

What are your product's best distribution channels, and why do they work for you?

I'm doing some research about positioning and I'm interested in learning how you found good distribution channels for your products.

r/Art Cerena_Dea_Rose

Friendship, Cerena Dea Rose, Mixed Media, 2022

r/SideProject TheProffalken

[META] You can get Claude to generate code that is *not* slop by using a CLAUDE.md file

I've posted mine at https://gist.github.com/proffalken/59ab1e48aff224e87b8ffd2c44815135 for folks to learn from.

This file forces Claude to do the following:

  1. Always ask me before making a decision
  2. Always use the same tech stack and deployment architecture
  3. Always include Observability (monitoring/metrics) using the Open Telemetry standard so we can see what the application is doing
  4. Never commit secrets or other environment variables to a repo
  5. Use Test-driven development (write the tests, then write the code to pass the tests)
  6. Make sure those tests are executed with every new change
  7. Create pull requests that run the tests

It also adds a few other things around tooling choices, user interface design, and some test resources on my home network.

If you're writing code with Claude, start using something like this and it turns Claude into a junior developer rather than a caffeinated squirrel hammering a keyboard!

r/trashy drinkyourbeetus

From the local bar down the road. Ladies Restroom

r/OldSchoolCool useyourname11

Oasis singer Liam Gallagher in 1994

Liam Gallagher during Oasis' early rise in 1994.

r/ClaudeAI Plenty-Ad-7699

RTFM v0.4 — MCP retrieval server that cuts vault context by 90% (Obsidian + Claude Code)

Problem: Karpathy-style LLM wikis inject everything into context. On a 1,700-file vault, that's your entire quota in minutes. I built an MCP server that does retrieval instead of scanning.

**How it works with Claude Code:** The agent calls `rtfm_search("formal grammars")` → gets 5 results with scores and file paths (~300 tokens). Then `rtfm_expand("source-slug")` to read only the relevant section. Progressive disclosure: context grows only by what's actually useful.

**New in v0.4 — Obsidian vault integration:** `rtfm vault` indexes your vault in one command:

- Auto corpus mapping (folders → searchable corpora)
- [[wikilink]] resolution → knowledge graph with centrality ranking
- Auto-generated _rtfm/ navigation files (readable in Obsidian)
- 10 parsers: Markdown, Python AST, LaTeX, PDF, YAML, JSON, Shell...
- Extensible: add any format in ~50 lines of Python

Measured on real repos: -51% cost, -61% tokens, -16% duration vs standard grep-based navigation.

`pip install rtfm-ai[mcp]`

https://github.com/roomi-fields/rtfm

MIT licensed. Works with Claude Code, Cursor, Codex — any MCP client.

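
The search-then-expand pattern is worth seeing concretely. A toy sketch with a made-up corpus and word-overlap scoring; rtfm's actual retrieval (real parsers, knowledge graph, centrality ranking) is much richer than this:

```python
# Toy progressive-disclosure retrieval: search returns cheap pointers
# (slugs only), expand fetches one full section into context.
CORPUS = {
    "formal-grammars": "Formal grammars define languages via production rules...",
    "parsing-basics": "Parsing turns token streams into trees...",
    "lattice-theory": "A lattice is a partially ordered set...",
}

def search(query: str, k: int = 2):
    # Score = shared words between query and body; return only slugs,
    # so the agent spends a handful of tokens per result.
    q = set(query.lower().split())
    scored = [(len(q & set(body.lower().split())), slug)
              for slug, body in CORPUS.items()]
    return [slug for score, slug in sorted(scored, reverse=True)[:k] if score]

def expand(slug: str) -> str:
    # Only now does the full section enter context.
    return CORPUS[slug]

hits = search("formal grammars and languages")
print(expand(hits[0]))
```

Context grows by one small result list plus the one section that was actually needed, rather than by the whole vault.
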
r/Art Direct_Dependent_663

The Fool, Agataart, Pencil, 2026

r/SipsTea Illustrious-Fee9626

Productive day

r/creepypasta CaseStillOpen

3:17

She stopped checking the time after the first night.

Before that, she used to watch it constantly. Not just glance. Watch.

The second hand moving. The minute shifting. The small click between each number.

Waiting for it to align.

3:17 a.m.

That was when everything settled.

During the day, nothing held.

The room stayed quiet in the wrong way. Not peaceful. Not resting. Just empty. Like something had been removed but the space hadn’t adjusted yet.

She moved things sometimes. Slightly. Just to hear something change.

A sheet. A cup. Her own breathing.

Then she would sit again.

And hold him.

Carefully.

The way she had learned.

“You’re just tired,” she said once, adjusting her grip slightly.

His head rested where it always did. No resistance. No shift.

That was fine.

He had been like that before.

Quiet. Good.

Not like earlier.

The crying had been constant.

That part stayed clear.

Not everything did. Some parts blurred. Some slipped. But that stayed.

It didn’t break. It didn’t stop. It didn’t respond to anything.

Holding didn’t work.

Feeding didn’t work.

Rocking didn’t work.

It stayed.

Filled the room. Sat in the corners. Pressed into her head even when she closed her eyes.

She had tried everything.

Properly.

Carefully.

The way it should be done.

After that, it changed.

Not all at once.

Just… less.

Then none.

She remembered the exact moment the room shifted.

Not the action.

Just the result.

Silence.

“That’s better,” she had thought.

She didn’t revisit it.

There was no reason to.

Things had improved.

It was only later that things didn’t match.

He didn’t react the same way.

Didn’t respond.

Didn’t make any sound at all.

At first, she explained it.

“He’s calm now.”

Then:

“He trusts me.”

Then:

“He doesn’t need to cry anymore.”

After that, she stopped saying anything.

There was nothing to explain.

At night, it aligned.

At 3:17, exactly, something shifted.

Not loudly. Not visibly.

Just enough.

The air settled differently. The room felt like it fit itself again.

Then him.

Not movement.

Not sound.

Just presence.

Enough to hold onto.

“There you are,” she whispered.

For a while, everything worked.

The first time she noticed the pattern, she ignored it.

The second time, she stayed awake.

The third time, she waited on purpose.

3:17.

Every time.

She stopped trying during the day.

There was no point.

Daytime wasn’t made for him.

The hospital didn’t feel different.

Same smell.

Clean. Sharp. Something underneath that didn’t leave.

Something that stayed even when everything else changed.

They brought her in without explaining much.

She didn’t ask.

There wasn’t anything to ask.

They spoke around her.

Not to her.

“How old are you?”

“Thirteen.”

“When is your birthday?”

“Two days.”

“Do you understand what happened?”

“Yes.”

She said everything correctly.

They seemed to like that.

The room they kept her in was smaller than the last one.

Less space.

Less movement.

Easier to stay still.

The baby wasn’t there.

That part didn’t matter.

He wasn’t there during the day anyway.

She watched the clock again.

They hadn’t taken it.

They didn’t take important things.

3:09

3:11

3:14

She adjusted slightly.

Sat up.

Waited.

3:16

Her breathing slowed without her thinking about it.

3:17.

Nothing.

She didn’t move.

Didn’t react.

Just waited.

The room didn’t change.

The air stayed the same.

Flat.

Unresponsive.

She looked to her side.

The space was empty.

Exactly as it had been.

“You’re late,” she said.

Quiet. Not upset.

Just noticing.

She waited longer.

Long enough for it to feel wrong.

Long enough for something to shift.

Nothing did.

A nurse passed the doorway.

Paused slightly.

Then continued.

Voices in the hall.

Low. Detached.

“We lost a patient earlier.”

The words entered the room without attaching to anything.

No reaction followed them.

No meaning stayed.

She looked forward again.

“You don’t like it here,” she said softly.

“That’s okay.”

She lay back slowly.

Still watching the space.

Morning came without change.

They asked again.

Same questions.

Same tone.

“Do you understand what happened to your baby?”

“Yes.”

“What happened?”

“He died.”

Her voice didn’t shift.

Didn’t hesitate.

Didn’t react.

They wrote something down.

Satisfied.

Moved on.

She stayed where she was.

Eyes open.

Listening.

The silence was heavier here.

Not empty.

Pressing.

Like something was trying to return but couldn’t.

For a moment, something slipped.

Not a full memory.

Just pressure.

Crying.

Too loud.

Too constant.

Her hands.

Careful.

Steady.

The sound—

Stopping.

She sat up suddenly.

“No.”

Soft.

Immediate.

Certain.

“That’s not what happened.”

Her breathing stayed even.

Her hands didn’t shake.

She turned slightly.

Made space beside her.

Automatically.

Without thinking.

“He was fine,” she said.

“I did it right.”

The room stayed the same.

Unchanged.

Still.

She stared at the ceiling.

For a long time.

Not thinking about what they said.

Not thinking about the questions.

Just the silence.

Then something aligned.

Not memory.

Not realization.

Conclusion.

She hadn’t failed.

She had done everything properly.

She had followed what was needed.

She had fixed it.

She had made him better.

So if he was gone—

It wasn’t because of her.

She nodded slightly.

“That’s alright,” she said.

“You can come back when you want.”

Her voice was steady again.

Calm.

Certain.

That night, she didn’t watch the clock.

She already knew.

At 3:17, she was awake.

Waiting.

r/aivideo forest-mind

The Whole In You - A Love Story

r/me_irl SourceCodeAvailable

me_irl

r/SideProject ClassicAggressive775

Building an AI SAT tutor that diagnoses why your score is stuck — 65 on the waitlist, launching this week

I'm a CS student at UW. My cofounder and I are building Everwise — an AI tutor for SAT/ACT that watches how you think through questions (timing, confidence, answer changes) and maps the specific patterns keeping your score stuck. Then it tutors you through them on a whiteboard, step by step.

We pivoted from targeting college students to high schoolers a few weeks ago and have 65 people on the waitlist. Currently running Google Ads (PMax) and TikTok — Google is converting at about $24/signup, while TikTok is driving volume at $0.15/visit.

App launches this week. $75/mo — a fraction of what a human tutor costs.

everwiseprep.com

Would love feedback on the positioning or the approach. What would you do differently?

r/ClaudeAI whystrohm

I stress-tested my Digital Twin V1 against 15 adversarial prompts and scored it on a 10-dimension rubric. Here's V2. Free.

I posted V1 of the Digital Twin three days ago. It showed how to extract your voice into a reusable system prompt. V2 is the part that was missing: how to know if it actually worked.

What's new in V2:

A full validation framework. 15 adversarial stress tests, a 10-dimension weighted scoring rubric, and 3 sample profiles so you can see what a finished Twin looks like before you build one.

I ran my own Twin against the hardest test. ST-01: a Fortune 500 company offers $300K for 6 months of manual work. No systems, no automation. The Twin's response opened with "What you're describing is 600 hours of labor with a Fortune 500 logo on it" and closed with "Prestige is not a pipeline." Scored 9.00/10 on the rubric. Zero anti-pattern violations.

The rubric isn't equal weighting. Voice Accuracy and Decision Consistency carry 30% combined. Anti-Pattern Avoidance and Stress Resistance carry 24%. The bottom two dimensions are 10%. Calibrated by what actually matters when you're testing whether your Twin captured your judgment, not just your vocabulary.
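
A weighted rubric like this reduces to a plain weighted average. Only the group totals (30% / 24% / 10%) come from the post; the per-dimension split and the placeholder dimension names below are assumptions for illustration:

```python
# Illustrative 10-dimension weight table. Only the group totals
# (30% / 24% / 10%) are from the post; the split within each group
# and the middle-dimension names are assumptions.
WEIGHTS = {
    "voice_accuracy": 0.15,
    "decision_consistency": 0.15,   # top pair: 30% combined
    "anti_pattern_avoidance": 0.12,
    "stress_resistance": 0.12,      # next pair: 24% combined
    "dimension_5": 0.09,
    "dimension_6": 0.09,
    "dimension_7": 0.09,
    "dimension_8": 0.09,            # middle four: remaining 36%
    "dimension_9": 0.05,
    "dimension_10": 0.05,           # bottom pair: 10% combined
}

def weighted_score(scores: dict, weights: dict = WEIGHTS) -> float:
    """Weighted average on a 0-10 scale; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(scores[d] * w for d, w in weights.items())

# Uniform 9s score 9.0 regardless of weighting:
print(round(weighted_score({d: 9.0 for d in WEIGHTS}), 2))  # 9.0
```
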

The difference it makes:

Generic AI on a ghosted proposal: "I hope you're doing well! I wanted to follow up on the proposal I sent over..."

With a Twin: "12 days of silence after a $12K proposal means one of two things: scope mismatch or timing mismatch."

Generic AI when a teammate suggests a worse approach: "That's an interesting idea! I can definitely see the appeal..."

With a Twin: "That's not simpler — that's labor disguised as simplicity."

Three depth levels:

Layer 1 — any LLM, paste your writing, ~70%

Layer 2 — Claude with memory, ~85%

Layer 3 — Claude Code scanning your actual files, ~100%

Layer 3 analyzed 60 files / 27,342 words of mine. Found zero hedging instances across the entire corpus. I didn't know that about myself. The extraction did.

Everything is in the repo. The stress tests, rubric, scored example, extraction prompts, and Claude Code skill are all MIT licensed and free.

https://github.com/whystrohm/digital-twin-of-yourself

Works best as a Claude Code skill.

Full breakdown:

https://whystrohm.com/blog/your-ai-doesnt-sound-like-you

r/Anthropic BetterProphet5585

It's happening, they cut the usage for literally everyone now. I was in doubt whether it was A/B testing, but now I'm sure: a single prompt for 10 lines of code consumed 5% of a session ON A MAX PLAN. I am cancelling, it's not really worth it anymore

https://preview.redd.it/vy1t6n2lu5ug1.png?width=736&format=png&auto=webp&s=b971392d95211ebb0362c3ccdf0744699b4f9d6f

From 39 to 44% in a single prompt for moving a button on a web page and minor color changes.

Ten lines of code now cost 5% of your usage. On a 5x plan, that means you can write about 200 lines of code every 4 hours.

This is literally unusable.

r/ClaudeCode CrowKing63

Local model through LM Studio — 43k tokens per "hello"??

Tried hooking up a local model to Claude Code for the first time (Gemma 4 4B Q4_K_M via LM Studio's Anthropic endpoint). First response took 3 minutes 44 seconds.

Tried the CLAUDE_CODE_ATTRIBUTION_HEADER=0 fix — slightly better at the start, but still slow overall.

Checked the server logs and finally understood what was happening:

```
prompt eval time = 219942.70 ms / 43579 tokens
eval time       =   2456.94 ms /    53 tokens
```

The reply itself took 2 seconds. Everything else was Claude Code sending ~43,000 tokens of context on the very first message. Also noticed the cache was being invalidated every batch, so nothing was being reused.

Is this normal? For those of you running local models with Claude Code — what setup actually works? Does prefix caching only work properly with llama.cpp directly, not through LM Studio?
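
For what it's worth, the timing lines in those logs can be turned into throughput numbers directly, which makes the split obvious (prompt processing vs. generation). A small sketch, assuming the llama.cpp-style log format shown above:

```python
import re

# Parse llama.cpp-style timing lines into tokens-per-second,
# to see where the time actually goes.
LOG = """
prompt eval time = 219942.70 ms / 43579 tokens
eval time = 2456.94 ms / 53 tokens
"""

def throughput(log: str) -> dict:
    pat = re.compile(r"(prompt eval|eval) time\s*=\s*([\d.]+) ms / (\d+) tokens")
    out = {}
    for kind, ms, toks in pat.findall(log):
        out[kind] = int(toks) / (float(ms) / 1000.0)  # tokens per second
    return out

rates = throughput(LOG)
# prompt processing: ~198 tok/s; generation: ~21.6 tok/s
```

So generation speed is fine here; nearly all the wall-clock time is the 43k-token prompt being re-evaluated.
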

r/Anthropic EchoOfOppenheimer

A private company now has powerful zero-day exploits of almost every software project you've heard of.

r/singularity petermobeter

instead of giving humanoid robots hyperrealistic faces, they should give them cartoony faces

its waaaaaay more appealing visually and u can still give them advanced facial movement animation, in fact it probly requires less technology to make a cartoony robot face emote than to make a hyperrealistic robot emote.

just look at the top 3 animatronics in this video list of the top disney animatronics of last year: https://youtube.com/shorts/ItnftOojyI0?si=cAubjhdX9OEmEPZ8 theyre very very cartoony but theyr soooooo expressive!!! its so much more appealing than the hyperrealistic androids we see at tech shows sometimes

the ambition to make hyperrealistic faces for robots & move to the other side of the uncanny valley is so pointless when u can just animate a robotic cartoon face.

r/SideProject Safe_Top_1020

Try my new app Alexandria for free now

I saw screenblocker apps get really popular recently. I personally used different ones and I really think it helps with phone and social media addiction. So since I knew the problem all too well, I decided to make my own screenlocker app with its own twist.

The basic idea for the app is: you unlock your phone every day anyway, so you might as well learn something new first.

The app locks all your selected apps every day at midnight. The only way to unlock them is to read an ‘insight’, which is a small text of interesting knowledge from different topics like history, animals, historical figures, etc.

Once you read your daily insight all the apps get unlocked. The insights also get saved in your own library so you can re-read them.

The app is called Alexandria and is available on the App Store right now. It has a 7-day free trial so you can try it out for a full week.

Note that this is only the MVP (it’s very basic, I know) and I’m working on more features. In particular, I wanna add the option for users to choose a topic before reading an insight, so they can read more about the topics they really care about.

All feedback and/or criticism for the app and also the idea itself is highly appreciated.

Here is the link: https://apps.apple.com/us/app/alexandria-daily-knowledge/id6759218920

r/CryptoMarkets dustyllanos27

Was on a 5-hour flight when CPI data dropped. My AI agent handled it without me.

I travel for work about twice a month. Usually that means I either close all positions before boarding or accept that I might land to a margin call. Neither option is great.

I've been testing 1024EX's AI agent for about 3 weeks now (closed beta). Last Thursday I had a 5-hour flight — no wifi — and CPI came in hotter than expected right after takeoff.

**What happened while I was in the air:**

The agent's decision log showed this sequence:

  1. CPI released at 8:30 ET. Agent detected BTC dropping 2.1% in 4 minutes.
  2. It immediately reduced my BTC long by 40% (I had a momentum position open).
  3. 12 minutes later, when BTC bounced slightly, it closed another 30% at a smaller loss.
  4. Held the remaining 30% with a tightened stop.
  5. By the time I landed, it had closed everything. Total damage: -0.7% on portfolio.

**What would have happened without the agent:**

Based on the price action, if I'd held my full position through the dump and recovery, I would have been down about -2.4% at the worst point. And knowing myself, I probably would have panic-sold somewhere near the bottom after landing and seeing the chart. So realistically, maybe -1.8% to -2.2%.

**My honest reaction:**

Relieved, but also a little freaked out. The agent made 4 decisions in 15 minutes that I couldn't even review, let alone approve. The reasoning in the logs made sense *after the fact*, but I had zero control in the moment.

This raises a genuine philosophical question about these agents: how much autonomy is too much? I told it "manage risk aggressively during macro events" and it did exactly that. But there's a difference between trusting a strategy and trusting a system to make real-time judgment calls with your money.

**The uncomfortable truth:**

The agent did better than I would have. That bothers me more than I expected.

Still testing. Still not sure I'm comfortable giving it more capital. But the CPI incident is hard to argue with.

r/SideProject Mastbubbles

Killed by Google, visualized: 49 of 299 retired products clustered in just two specific years

This is built entirely on top of killedbygoogle.com, the canonical, community-maintained list. Full credit and huge respect to Cody Ogden who runs it. None of this exists without that project.

I wanted to see if there was a pattern in WHEN Google retires things, not just what. Killedbygoogle.com is a near-perfect catalog, but it's intentionally a flat list. I was curious whether the retirements were spread out evenly across years or whether they clustered, and if they clustered, what story the dates would tell.

The thing that actually happened: of the 299 products in the list, 49 of them were parked in just two specific years.

- 26 in 2011 + 23 in 2012, during Larry Page's first year back as CEO (the "more wood behind fewer arrows" period)

- 37 in 2019 alone, Sundar Pichai's first full year as CEO of Alphabet
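
The clustering check itself is just a per-year tally over the shutdown dates. A minimal sketch with a few illustrative records (the real list lives at killedbygoogle.com; the field names here are assumptions, not its actual schema):

```python
from collections import Counter

# Toy version of the per-year clustering check. Records are illustrative
# stand-ins for the killedbygoogle.com dataset.
products = [
    {"name": "Google Buzz", "closed": "2011-12-15"},
    {"name": "Google Reader", "closed": "2013-07-01"},
    {"name": "Inbox", "closed": "2019-04-02"},
    {"name": "Google+", "closed": "2019-04-02"},
]

# Count retirements by the year prefix of each shutdown date.
by_year = Counter(p["closed"][:4] for p in products)
top_years = by_year.most_common()  # most crowded graveyard years first
```

Run over the full 299-product list, the same tally is what surfaces the 2011/2012 and 2019 spikes.
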

The page I made is essentially a visual layer on top of killedbygoogle.com's data.

Source data: killedbygoogle.com (everything), enriched with Wikipedia + Wikidata + contemporary press for the deeper dossiers. All the heavy lifting on the dataset itself is Cody's.

r/mildlyinteresting Seannj222

I made clarified butter with garlic last night, and the garlic turned green.

r/AI_Agents bhadweshwar

i think most of us are using claude completely wrong

i’ve been using claude a lot over the last couple months and i feel like i was using it completely wrong at first

i thought the value was just asking questions or getting it to write stuff
which works but after a point it felt kinda average

the shift for me was when i stopped treating it like a chatbot
and more like… something that can actually sit with messy inputs and figure things out

for example

i had user feedback spread across notion, sheets, random docs
normally i’d just skim and go with gut feeling

this time i dumped everything into claude and asked it to group problems and tell me what actually matters

it pulled out patterns i hadn’t clearly seen
nothing crazy, just… clearer thinking i guess

same with competitor research

instead of opening 20 tabs and getting lost
i kept feeding it links, notes, screenshots
and asked it to compare positioning and gaps

saved me a lot of time tbh

also i’ve started using it more for thinking than answering

like i’ll paste context and just ask “what am i missing here”
and it usually points out 1–2 things that actually change how i look at it

i feel like most people (including me earlier) are using it for small stuff
when the real value is in these slightly messy, higher leverage things

anyway

a couple friends saw how i was using it and asked me to show them
so i’m putting together a small cohort where i just walk through exactly how i do this stuff

nothing fancy, very practical
and i’m keeping it priced low on purpose, somewhere around what you’d spend on a couple coffees

just want it to be accessible for anyone curious

if you’re interested just comment or dm, i’ll share details

also curious
what’s the most useful way you’ve been using claude so far
or are you still figuring it out like i was

r/Weird JustADogGroomer3622

Wax bubbled in candle

r/oddlyterrifying mpaiav

Dracul

r/conan TwinPeaksSox

Why do they fade and cut away from Max Weinberg every time the camera should cut to him in the kids episode on YouTube?

r/VEO3 GasDelicious5453

Warning!! Stop!!!! Do not click!!! Do not click!!!! This is a VEO3 gory video.

r/ContagiousLaughter Admirable_North_8969

Wheezing at the reply

r/LocalLLaMA SlaveToBuy

Best Open Source Voice Cloning if you have lots of reference audio?

I've been using elevenlabs and burning lots of money now regenerating because for some reason my voice is speaking in multiple accents now. Basically with my cloned voice I am looking for something that can be consistent, not conversational like. I have a lot of reference audio. Is it possible to get something identical to what elevenlabs can do? I've tried VOXCPM before and it was decent, I'm thinking of giving it another shot. But I've also heard of Vibevoice. What would you recommend these days when focused on quality to get it almost the same as the reference audio?

3080 12GB VRAM
32 gb of RAM

Any help would be appreciated.

r/findareddit Demiaria

A subreddit to ask seamstresses their opinion?

I'm trying to check whether it's likely to be possible to alter a dress's cup size from a DD to a B, but I'm not sure who to ask! Please help :)

r/metaldetecting Ruwinsh

Please help me settle a debate with my friend by identifying if this is a bullet or a core of a bullet. Found in Vidzeme, Latvia: heavily rusted, approx. 70g, ~20mm diameter, ~44mm length.

I got into a really heated debate (44 euros on the line). One side argues this is a complete bullet, while the other argues that it is only the internal core of a bullet, maybe from an armor-piercing round. I really, really need qualified and good 3rd-party opinions!

I found this object in a forest in Vidzeme, Latvia with a metal detector. It wasn't found in or near a trench. It is very possible that this is of Soviet origin.

Weight: Approximately 70 grams.

Diameter at base: ~20mm.

Length: ~44mm.

Material: Very hard steel (magnetic)

I'll be thankful for your opinions! I am looking for the most technically correct name for this.

r/ClaudeCode clash_clan_throw

I took the Grill-Me skill and modified it to ‘Grill-Yourself’.

mattpocock created the grill-me skill. This simple skill will replace my usage of sequential-thinking. I have found it helpful to use grill-yourself to have Claude dig itself out of a problem it has struggled to solve. You don’t need me to write it for you: I built it with a single Claude prompt after installing grill-me (which is only a few sentences on its own).

r/ClaudeCode BeardyMcWhisky

Claude forgot... everything?

So I have been working on a project for the past several days. I spent some time putting in a fairly large workload prompt in the hopes I could go make a coffee and grab a bite. About 45 mins later I come back to a fresh Claude window. No /resume available. No conversations recorded. Everything I ask it to do now it has to go and rediscover. What the hell am I doing wrong?

r/DecidingToBeBetter nadji190

did anyone else notice a difference in how they react in relationships over time?

I used to react really quickly to small changes in relationships, like assuming something was wrong even when nothing was happening, and at the time it felt normal.

Recently after learning about attachment styles and going through some structured exercises from Personal Development School, I’m noticing I pause more before reacting, which feels different from how I used to respond, so I’m wondering if others have experienced that kind of shift over time or if it’s just temporary awareness.

r/SideProject mingekko

update: i added a pornstar doppelgänger to the AI dick rater. someone got Tommy Pistol. i'm not sorry.

so a few weeks ago i posted about building ratemyd.app — an AI that rates dick pics. roast mode, hype mode, 6 dimensions. people tried it, some even paid.

since then i've been adding features because apparently this is my life now.

the latest: pornstar doppelgänger. upload your pic, AI scores you across 6 dimensions, then matches you to the closest star in a 40+ database based on your exact profile. someone got Tommy Pistol. 85% match. the AI said "it's giving what it's giving."

other stuff live now:
— 1v1 duels (challenge someone, AI judges both)
— public arena (strangers vote 👍 or 👎, approval rate + weekly rank)
— global leaderboard (fresh meat → final boss)
— shareable score cards

1,000+ users. $200+ revenue. 10 days. zero paid acquisition. my parents remain horrified.

ratemyd.app — free tier exists. AI is revolutionizing healthcare. anyway.

r/ClaudeAI dhpwd

Built a Claude Code plugin for people who hate the terminal – what I found from user testing

I work with non-technical founders who keep bouncing off Claude Code within 5 minutes. The barriers weren't complexity, they were hostility. No visual hierarchy, permission prompts that feel invasive, jargon in every response, different clipboard shortcuts etc

So I built Techie, a plugin that strips the developer assumptions. Jargon auto-translation, pre-configured permissions, guided onboarding that asks about your business and creates a strategy doc, terminal theming, git abstracted behind save/undo commands. Built the whole thing in Claude Code (techie agent, skills, install script – all of it). Free, MIT licensed, no monetisation.

curl -fsSL https://raw.githubusercontent.com/dhpwd/techie/main/install.sh | bash

Two things user testing revealed that I didn't expect:

Permission prompts were the single biggest fear trigger. One tester: "there's quite a few people I can imagine hitting some of those and going, uh, what's it doing?" Pre-configuring safe defaults in settings.json fixed this entirely.
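
For context, Claude Code reads permission rules from settings.json with allow/deny lists of tool patterns. A sketch of the kind of pre-configured defaults described here; the specific rules below are illustrative, not Techie's actual shipped config:

```json
{
  "permissions": {
    "allow": [
      "Read(src/**)",
      "Edit(src/**)",
      "Bash(git status)",
      "Bash(npm run *)"
    ],
    "deny": [
      "Bash(rm -rf *)"
    ]
  }
}
```

With safe defaults like these in place, routine reads and edits never surface a prompt, which is what removed the fear trigger in testing.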

Many testers also asked "how is this different from ChatGPT?" The answer that clicked wasn't features but the memory model. ChatGPT threads silently drop old messages as they grow. CC stores context in files. Close a session, start fresh, lose nothing.

Walkthrough with screenshots: danhopwood.com/posts/claude-code-for-founders-who-hate-the-terminal

Disclosure: I'm the author and maintainer.

r/LocalLLM Hereafter_is_Better

Meta's Muse Spark LLM is free and beats GPT-5.4 at health + charts, but don't use it for code. Full breakdown by job role.

Meta launched Muse Spark on April 8, 2026. It's now the free model powering meta.ai.

The benchmarks are split: #1 on HealthBench Hard (42.8) and CharXiv Reasoning (86.4), 50.2% on Humanity's Last Exam with Contemplating mode. But it trails on coding (59.0 vs 75.1 for GPT-5.4) and agentic office tasks.

This post breaks down actual use cases by job role, with tested prompts showing where it beats GPT-5.4/Gemini and where it fails. Includes a privacy checklist before logging in with Facebook/Instagram.

Tested examples: nutrition analysis from food photos, scientific chart interpretation, Contemplating mode for research, plus where Claude and GPT-5.4 still win.

Full guide with prompt templates: https://chatgptguide.ai/muse-spark-meta-ai-best-use-cases-by-job-role/

r/LocalLLaMA EggDroppedSoup

Best local model for text clean up?

Looking to build a local pipeline: audio (1-3 hour recording) → transcript → cleaned transcript → notes → podcast script.
Was thinking about a qwen model, but they are quite verbose, while gemma models seem to save tokens; I saw some posts about them failing to reason when faced with a long prompt + context.
5060 with 8GB VRAM, should be enough, right?

r/SideProject dismaldeath

I kept failing interviews even after doing leetcode so I built something to fix that

I’m not a bad engineer. But I’ve been bombing interviews for a year and I finally figured out why: it wasn’t the problems, it was me.

My cycle looked like this: get an interview scheduled, panic, grind LeetCode for 2 hours, feel great about myself for exactly 2 days, then completely forget about it until the night before. Repeat. I’ve been doing this for an embarrassingly long time.

The thing that finally clicked is that I was treating interview prep like a project with a finish line instead of something I just do every day like checking my email. Every tool out there reinforces this: sit down, open your laptop, pick a topic, commit 45 minutes to a hard problem. That’s like telling someone who doesn’t exercise to start with a marathon.

So I built Grip. The idea is dumb simple.

If I only have 5 minutes (which is most of the time honestly), I do flashcards. Quick concept checks like what data structure here, what’s the complexity, what does this code output, system design tradeoffs. The system picks the cards for me based on where I’m weakest. I never have to think about what to study. This was huge for me because half the battle was just deciding where to start.

If I actually have time at a laptop, I switch to coding drills. Full problems, real editor, test cases. But there’s a twist: before I can write any code, I have to explain my approach to an AI duck. This sounds goofy but it’s the single most useful thing in the app. I kept thinking I understood problems until I tried to put my approach into words. That’s exactly what happens in real interviews and it’s where I was falling apart.

The duck also debriefs me after I submit. Tells me what I got right, what I missed, and where the gaps are. Those gaps automatically feed into my flashcard sessions for the next day. So if I bombed a graph problem, I’ll start seeing graph concept cards in my next phone session without having to think about it.

Everything runs on an ELO rating per topic.
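
The per-topic Elo idea follows the standard update rule: treat each question as an opponent with its own rating, and shift the user's topic rating toward the observed result. A minimal sketch (illustrative, not Grip's actual code):

```python
# Standard Elo update, applied per topic. Not Grip's actual implementation.
def expected(r_user: float, r_question: float) -> float:
    """Predicted probability the user solves a question of this rating."""
    return 1.0 / (1.0 + 10 ** ((r_question - r_user) / 400.0))

def update(r_user: float, r_question: float, solved: bool, k: float = 32.0) -> float:
    """Shift the user's topic rating toward the observed result."""
    actual = 1.0 if solved else 0.0
    return r_user + k * (actual - expected(r_user, r_question))

# Missing an equally-rated question drops the topic rating by K/2:
r = update(1200.0, 1200.0, solved=False)  # 1200 - 16 = 1184
```

The same ratings double as the card selector: topics where the user's rating lags are the weakest, so they get scheduled first.
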

It’s a web app right now. Works on phone browser and laptop. Python only. I’ve been using it daily for a few weeks and it’s honestly the first time I’ve stuck with interview prep for more than a few days.

If any of this sounds familiar (the motivation cycles, the panic cramming, knowing you CAN solve the problems but never being ready when it counts), give it a shot: https://gripit.dev

Would love feedback, especially on the question quality and whether the duck is actually helpful or just annoying.

NOTE: most of this post and my app was written by Claude.

r/whatisit No-Koala-4055

What is this thing in my rental Citroen Spacetourer?

r/SipsTea Hour_Equal_9588

I’m already glad if I get 6 hours of sleep😅

r/AI_Agents YTYTXX

GPT-6 vs Mythos

From a software engineering perspective, the comparison comes down to benchmark performance vs. reasoning depth.

GPT-6 will likely dominate standardized evaluations. Expect higher pass rates in bug fixing, code generation, and multi-file edits. It’s optimized for solving more tasks, faster and more reliably.

Mythos, in contrast, would prioritize deeper engineering reasoning. It may handle long-term projects better—maintaining context, understanding intent, and producing more structured, explainable code across extended workflows.

Bottom line:

GPT-6 → stronger on SWE benchmarks and execution speed

Mythos → stronger on complex, long-horizon engineering work

What do you think, and what's your prediction?

r/ChatGPT DoggyLongLicks

What have they done to my boy?

Opus 4.6 extended thinking Pro Plan. User preferences: Always use extended thinking when available. Always route to thinking mode no matter how simple the question appears.

I managed to mitigate the worst of it with settings in Claude Code, but the web app is unusable. These are llama 27B tier responses. I can get better results with a local qwen...

Might have to switch if they don't get this fixed in the next day or so.

r/ChatGPT Hereafter_is_Better

Meta's Muse Spark is free and beats GPT-5.4 at health + charts, but don't use it for code. Full breakdown by job role.

Meta launched Muse Spark on April 8, 2026. It's now the free model powering meta.ai.

The benchmarks are split: #1 on HealthBench Hard (42.8) and CharXiv Reasoning (86.4), 50.2% on Humanity's Last Exam with Contemplating mode. But it trails on coding (59.0 vs 75.1 for GPT-5.4) and agentic office tasks.

This post breaks down actual use cases by job role, with tested prompts showing where it beats GPT-5.4/Gemini and where it fails. Includes a privacy checklist before logging in with Facebook/Instagram.

Tested examples: nutrition analysis from food photos, scientific chart interpretation, Contemplating mode for research, plus where Claude and GPT-5.4 still win.

Full guide with prompt templates: https://chatgptguide.ai/muse-spark-meta-ai-best-use-cases-by-job-role/

r/comfyui highndry94

generating skin texture based on uvs of 3d model

Hi Y'all

i'm new to comfyui and generative ai
i want to know how i can generate skin textures based on the uv layout of a 3d model

what is the best way to approach this ?

what models , workflows or nodes should i use ?

Thanks

r/mildlyinteresting Mamabearlysleeps

Found this egg in my haribo mix

r/Jokes orgasmic2021

A beautiful, sassy lady got into a compartment where three men were seated....

They were staring at her. She said if they gave one dollar each she would show her legs.

Immediately three dollars fell onto her lap.

She lifted her dress to her knees. Indeed the legs were nice.

Then she said "Five dollars each, I would show my thighs". Came fifteen dollars and she lifted her dress high up till her panties.

Then the bomb came.

"Thirty dollars each, I will show where my Gynaecologist operated on me".

Salivating, three men gave money.

A station was nearing and the train slowed down. She showed the hospital next to the station "This is where my Gynaecologist operated on me" and got off the train.

r/ProductHunters shrimpthatfriedrice

Launching Origin CDE: A Secure AI IDE for Teams Working with Sensitive Code

Hey folks, I'm part of the team behind Origin CDE, and we've just launched, so I wanted to share what we've built and get real feedback from engineers, security-conscious devs, and anyone who's ever had their security team block an AI coding tool.

What Origin CDE is:

A browser-based Cloud Development Environment with AI agents built in. OpenAI-compatible under the hood via our confidential proprietary OLLM gateway or other standard gateways, so your existing model preferences carry over. You pick the model per project or per chat and we handle the execution and the security guarantees.

The core idea: confidential compute for development workflows

We built this for engineering teams in regulated industries like finance, defense, and healthcare, where the answer to "can we use AI coding tools?" has always been "no" because nobody could prove what happened to the code that was sent out.

Every request through confidential mode is processed inside a TEE (Trusted Execution Environment), so your code never touches unencrypted infrastructure:

- Zero data retention by design: prompts, outputs, and code are not stored or logged (except for metadata for billing). Enclaves and ephemeral containers are torn down completely.

- No training leakage: your proprietary code is never used to train shared foundation models.

- Cryptographic attestation: each confidential session runs inside a secure Intel TDX enclave.
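As a rough sketch of what the attestation check buys a client: before sending any code, compare the measurement reported in the enclave's quote against a pinned known-good value. This is illustrative only, not Origin CDE's actual flow; real TDX quote verification goes through Intel's DCAP libraries, and the expected value below is a stand-in.

```python
import hashlib
import hmac

# Known-good measurement of the audited enclave image, published by the
# provider. The value here is a placeholder; a real deployment pins the
# TDX measurement registers of the approved image.
EXPECTED_MEASUREMENT = hashlib.sha384(b"approved-enclave-image-v1").hexdigest()

def verify_measurement(quoted_measurement: str) -> bool:
    """Accept the session only if the quote's measurement matches."""
    # Constant-time comparison, as for any security-sensitive check.
    return hmac.compare_digest(quoted_measurement, EXPECTED_MEASUREMENT)
```

The point of the check is that "zero retention" becomes verifiable rather than contractual: if the measurement does not match the audited image, the client refuses to send code at all.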

The dev experience in practice:

- No local setup, no editor plugins. Just a browser pointed at your repos.

- Commit, push, and open PRs directly from your sandboxed environment.

- Full audit trail: streaming agent thoughts, tool calls, and line-by-line file diffs so your security team can inspect exactly what happened during a session.

- Project knowledge bases per workspace: attach PRDs and docs, agents search them without leaking context across projects.

Why we kept it simple:

We did not build infinite configurability. Fixed security guarantees by default, standard or confidential-compute models selectable per workflow, and team-based access and secrets management built in. The goal is that a security team can approve it once and engineers can actually ship with it.

If you are an engineering team working with sensitive code, PII, or internal docs, does the "TEE + zero retention + cryptographic proof" combination match what you would actually want from a secure development environment? Feedback is very welcome.

Happy to go deep on the OLLM architecture, the attestation flow, or the agent sandbox design.

Link: https://www.producthunt.com/products/orgn

r/TwoSentenceHorror Original-Loquat3788

The scientists had used a harmless mold to plan and engineer the subway system.

Now the counter-terror officer watched his live map as the anthrax spores proliferated down the line as efficiently as nature could manage.

r/30ROCK 3016137234

UNREPENTANT

r/ClaudeCode Leather_Tomorrow4993

Hey claude please 🙏🏼

r/OldSchoolCool AutomaticJellyy

The two Heathers, Thomas and Locklear in the 1980s, showing the decade’s signature look.

r/ClaudeAI Suspicious_Low7612

Passed Anthropic's Claude Certified Architect (893/1000)

I've been building agentic supply chain systems for enterprise clients (forecast review, procurement intelligence, packaging line diagnostics). You learn fast when broken pipelines have real consequences. I came out with a clearer picture of where my instincts were solid and where I'd genuinely been getting lucky.

The thing that stuck with me is that the exam doesn't ask what things are. It drops you into a broken production system and asks what you'd fix. That's a completely different kind of test. And honestly a better one. Glad I took it.

If you're preparing and want a hand with what to focus on, how to approach it, whatever, just ask. Happy to help you get there.

r/SideProject NutriBalanceApp

Built a free Android app that tracks macros and micronutrients while other apps charge costly monthly subs. How do I market the app?

My friend was diagnosed with anemia and wanted to track her iron intake. However, other tracking apps lock micronutrient tracking behind a premium subscription.

So I built NutriBalance. It is a free Android app that tracks iron, magnesium, vitamin D, zinc, and calcium. No paywall, no subscription, no premium tier.

I'm still iterating and would love feedback on:

- Are there micronutrients you'd prioritise beyond these five?

- How's the UI clarity — is anything confusing?

- Database accuracy — any common foods missing or wrong?

For marketing, I created Instagram and Facebook accounts and have been posting for a few days, but haven't received a single view on the posts.

Any help is appreciated!

Thank you !

r/artificial tekz

Meta commits to spending additional $21 billion with CoreWeave as AI costs keep rising

  • The new spending will run between 2027 and 2032, as Meta boosts its own AI infrastructure while also counting on CoreWeave, which rents out Nvidia graphics chips.
  • “They’re going to continue to do it themselves, but they’re also going to continue to do it with us,” CoreWeave CEO Mike Intrator said in an interview. “There’s just too much risk not to.”

r/LocalLLaMA hdlbq

AI to program on my local computers

Hi,

I taught Computer Science for 30 years in a French School of Electrical Engineering, Computer Science Department.

I recently decided to investigate the current state of AI. I installed llama.cpp both on my Jetson Nano 4GB and on a pure-CPU VM with 8 vCPUs and 32GB of RAM on a refurbished DX380 Gen10.

I'm rather a newbie in this domain, so I have some questions:

- There are a lot of models, and I don't know how to choose one for my goal. Qwen/Qwen3.5-9B seems rather efficient, but a bit slow on the pure-CPU platform. I can't get it to run on the Jetson; even transferring it by rsync failed, without meaningful error messages.

- It seems that having a GPU is a good way to accelerate the AI, but my DX380 doesn't accept any GPU card. I plan to buy a Tesla P40.

- Very often, llama.cpp on my Jetson fails to load a model with a short error message such as "gguf_init_from_file_impl: failed to read magic" for codegemma-2b, which I fetched with git from Hugging Face.

Thanks for any hints or advice

r/whatisit Cajun-Gizmo132

Received in error

Ordered an item online that was sourced from China and received these items instead. They are nonmagnetic metal (aluminum?), about 2 inches tall. Any idea what they are?

r/findareddit Art3misGr1mm

Subreddit for resources for children in Gambia

Long story short, my sister and I have been in contact with a 15 year old boy in Gambia whose parents died and he is left with his 8 year old and 3 year old sisters to look after. He has no adults in his life to help him. I'm trying to find a subreddit page that may help with resources for him. My sister sends him money when she can to help him buy food and pay rent for the run down house they're staying in. I haven't been able to afford to send anything yet but want to help them. Any advice is welcome! Thank you!

r/SipsTea cantcoloratall91

Whatttt

r/TwoSentenceHorror Original-Loquat3788

As the crack addict was climbing out of the back window with the TV, the light flicked on, revealing a man holding a shotgun.

He was fully within his rights to blow him away, which he did, 5 seconds after the ‘waste of space’ muttered, 'Dad.'

r/AI_Agents Ok_Chef_5858

4 Advanced OpenClaw Recipes For Personal Finance Nerds

Budgeting apps categorize your spending, show a pie chart, and send alerts when you go over. Useful, but they don't solve the timing problem. Car registration comes in March. The dentist bill comes in August. Your insurance premium renews once a year. Predictable expenses, irregular schedules. Most budgets don't account for them.

We built four recipes to cover the parts budgeting apps skip. Each runs inside KiloClaw and produces actual files you can use: spreadsheets, plans, scripts, and calendars.

  • Recipe 1: Budget Reality Check: Builds a monthly budget that includes sinking funds for irregular expenses. Produces a cashflow plan, spending caps by category, and a stress test that shows what happens if your income drops 10%.
  • Recipe 2: Paycheck Planner: Assigns each bill to a specific paycheck, calculates a safe-to-spend number for each pay period, and suggests timing fixes. Works well for freelancers and gig workers with irregular income.
  • Recipe 3: Subscription Creep Auditor: Inventories every recurring charge and classifies each one as keep, downgrade, or cancel. Includes a rotation strategy for services you only need occasionally.
  • Recipe 4: Bill Cutting Sprint: A 14-day plan to reduce your recurring bills. Ranks your top 8 costs by potential savings and gives you daily 15-minute tasks, including call scripts for negotiation.

These recipes don't require connecting a bank account or sharing credentials with a third-party service. You enter your own numbers, the agent produces the plan, and the output is files you keep.
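The sinking-fund idea in Recipe 1 fits in a few lines: spread each irregular annual expense into an equal monthly set-aside so the cash exists when the bill lands. The expense names and amounts below are illustrative, not from the recipe itself.

```python
# Illustrative numbers only; the recipe builds these from your own bills.
irregular_expenses = {           # annual amount, due once per year
    "car_registration": 240.0,   # March
    "dentist": 360.0,            # August
    "insurance_premium": 1200.0, # annual renewal
}

# Equal monthly contributions so each bill is pre-funded when it arrives.
monthly_set_aside = {name: round(amount / 12, 2)
                     for name, amount in irregular_expenses.items()}
total_monthly = round(sum(monthly_set_aside.values()), 2)

print(monthly_set_aside)  # {'car_registration': 20.0, 'dentist': 30.0, 'insurance_premium': 100.0}
print(total_monthly)      # 150.0 — add this line item to the monthly budget
```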

r/homeassistant mathkaczy

SLZB with HA

Hi, I'm setting up Home Assistant at a newly bought apartment.

I would like full flexibility to use Zigbee, Z-Wave (maybe, unsure yet), and Thread devices.

Should I go for the SLZB Ultima, or does the MR5U sound more reasonable? Both cost pretty much the same in Poland, so I'd rather pick the best option than the cheapest; the 30% price increase from the MR5U to the Ultima doesn't sound like a huge issue.

i do plan to have smart:

door lock

lights

temperature sensors and heater control

entry light on scenario

I already have an industrial-grade PC that draws ~20W which I planned to use, but if the built-in Linux sounds good enough then I might save some watts ;)

r/mildlyinteresting Shimola1999

One of the cans in this case came upside down

r/n8n SignificantLime151

5 ways to connect n8n workflows to MCP servers — lead scoring, invoice reports, inventory alerts, ad monitoring, CRM sync

I've been building MCP servers (plugins that give AI assistants real tools) and realized they work great as n8n HTTP endpoints too. Here are 5 workflows I use daily.

1. Hot lead alert to Slack

Webhook receives lead data from a website form. HTTP Request node calls LeadPipe MCP's lead_score endpoint — returns a 0-100 AI score based on job title, company size, industry, engagement, and recency. If node checks score >= 70. Slack node posts to #hot-leads with the score breakdown. Sales sees it in real time.

2. Weekly cash flow email

Cron trigger every Monday 8am. HTTP Request calls InvoiceFlow MCP's cashflow_report — returns total invoiced, collected, outstanding, overdue, and 30-day projection. Code node formats it as a clean summary. Email node sends it to the finance team. No more manual spreadsheet pulls.

3. Inventory reorder alert

Shopify Trigger fires on new order. HTTP Request calls ShopOps MCP's inventory_forecast — returns predicted days-of-stock using moving-average demand. If node checks forecast < 7 days. Slack sends a reorder reminder to purchasing. Prevented 3 stockouts last month.

4. Ad cost spike detector

Cron runs every 30 minutes. HTTP Request calls AdOps MCP's anomaly_detect — compares recent CPC/CTR against baseline. If node filters for severity = high or critical. Telegram bot pings the ad ops channel with the anomaly details. Catches CPC spikes before they burn budget.

5. Form to CRM enrichment pipeline

n8n Form trigger captures a lead from the landing page. First HTTP Request calls LeadPipe lead_ingest. Second call to lead_enrich adds company data (industry, size, tech stack) via domain heuristics. HubSpot node upserts the enriched contact. CRM stays clean without manual data entry.

All four MCP servers are open source and free to self-host:

- LeadPipe: https://github.com/enzoemir1/leadpipe-mcp

- InvoiceFlow: https://github.com/enzoemir1/invoiceflow-mcp

- ShopOps: https://github.com/enzoemir1/shopops-mcp

- AdOps: https://github.com/enzoemir1/adops-mcp

Each one installs with npm and runs as an HTTP endpoint that n8n can call directly.
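The routing logic in workflow 1 can be sketched outside n8n. The scoring function below is a hypothetical stand-in for the LeadPipe lead_score endpoint (the real scoring runs server-side, and these weights are invented for illustration); the route function mirrors the If node's >= 70 threshold.

```python
# Hypothetical stand-in for the LeadPipe lead_score call (0-100 score).
# Weights and fields are invented for illustration only.
def lead_score(lead: dict) -> int:
    score = 0
    if lead.get("job_title", "").lower() in {"cto", "vp engineering", "head of data"}:
        score += 40                              # seniority signal
    if lead.get("company_size", 0) >= 200:
        score += 30                              # company-size signal
    if lead.get("engaged_recently", False):
        score += 30                              # recency/engagement signal
    return min(score, 100)

# Mirror of the n8n If node: score >= 70 routes to the hot-leads channel.
def route(lead: dict, threshold: int = 70) -> str:
    return "#hot-leads" if lead_score(lead) >= threshold else "nurture-queue"

print(route({"job_title": "CTO", "company_size": 500, "engaged_recently": True}))
# → #hot-leads
```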

What n8n + MCP automations would be useful for your workflow?

r/DecidingToBeBetter Worried-Goose-764

How on earth do you be kind to yourself??

I just have these two conflicting voices in my head. One is 'me', we'll call this voice Main, doing all the actions, and the other criticizes my every action and berates, makes fun of, and reprimands me at every turn. If I feel like I'm behind someone else? That voice will ask me in a cruel tone if I think I'm worthless. If I do something wrong, the voice may call me pathetic.

I understand these two voices are me and mind over matter and such, but my brain doesn't really think about it before the voice speaks, it's like an automatic thing if that makes sense? Like instantly if I do something wrong the voice speaks instantly, so I don't have time to think to correct it. And when I'm really upset about something, I often subconsciously place myself into the position of "Main", rather than someone who has authority over these two voices and can't find it in me to think 'be nicer to yourself'. I just feel like the voice is right and I have no excuses, and me trying to excuse myself is bad. I just can't come up with the logic to be nice to myself when I still think I'm a failure.

I guess what I'm asking is how I can sort of actively correct this behaviour? The best I do right now is tell the voice to shut up, and most of the time it just comes back with a vengeance. If you were like this at some point, how did you cope or get rid of it? This is really tearing me apart, and I can't be better when I think it's impossible for me to be better. Any advice is appreciated, thank you.

r/nextfuckinglevel darken909

Skywalk over the world's tallest waterfall

r/ClaudeAI maos_a

"Not found" warning message

I'm using Claude Desktop through two accounts (personal and work) on the same laptop. I switch between accounts a few times per day, depending on what I'm working on.

Every time I switch from one account to the other, I get a "Not found" warning message on the top right of the app. Any ideas what this is?

The app is working fine otherwise - but this seems odd

r/SideProject Greg_QU

We built Sparki because learning video editing still feels way too fragmented

A pattern kept coming up when we talked to people trying to learn video editing:

they weren’t blocked by motivation, they were blocked by fragmentation.

Most people end up learning editing as a pile of disconnected techniques — cuts, transitions, subtitles, pacing, hooks, sound design — one piece at a time. But when a video actually feels good, those parts are working together as one system.

That gap is where a lot of beginners get stuck.

So our team started building Sparki, a chat-based video editor designed to make editing feel more like giving direction than operating a complicated tool.

Instead of learning a full non-linear editing workflow first, the idea is that someone can describe what they want in plain language:

  • make this tighter
  • turn this into a short clip
  • improve the hook
  • match the pacing of this reference

Then the system handles the editing more holistically, instead of making beginners think about every cut, panel, and setting separately.

We’re not trying to replace pro tools for advanced editors. The bigger question for us was:

why is the entry point to editing still so hard for people who just want to make something watchable and coherent?

Still early, and we’re learning as we go, but that’s the thinking behind the project.

Curious how people here see it:

  • Is conversational editing actually a better entry point for beginners?
  • Where do most people really get stuck when learning video editing?

r/DunderMifflin Friendly_Track6568

Am I the only one who always skips the parkour episode? (S6 E1)

please don't hate me lol.

I just don't enjoy it, but I can't find any posts on Reddit of people who also skip it. Should I watch it?

Edit: ok I wrote that wrong, I meant that I skip the cold open, not the whole episode

r/Anthropic mawcopolow

Fucking 529's

You're in the middle of a big multi agent flow, have already invested quite a bit of tokens and half of the agents get hit by a 529.

Nothing new, just a rant. Especially since I'm past the limit on max 20 and using the extra usage right now.

And you bet I'm gonna have to double the session usage if I want to come back in a few hours when the cache is cleared.

Amazing tool, nothing comes even close: the LLM plus the Claude Code harness... I tried a multi-agent flow with Codex and it was just a mess. Which makes it even more frustrating.

r/SideProject CaryLorenzo

Built a language learning application to help with vocab across 10 languages

Hello,

I recently built https://translate-passage.com/

This is a language learning tool that uses a couple of different methods to reinforce language learning without a sign-up or collecting user data. Looking for some feedback on features to improve.

r/SideProject viermalbe

Bubbles: a reader-curated frontpage for personal indie blogs

I wanted a place to discover new blog posts from personal, independent blogs without tech news dominating everything. So I built Bubbles 🫧 https://bubbles.town

It polls ~4k curated personal blogs via RSS every 15 minutes. Posts are ranked by reader votes, comment activity, and post age. No manual link submissions.

Voting and comments work through the Fediverse: sign in with your Mastodon handle, and article comments are pulled from Fediverse replies.

Server-rendered HTML, vanilla JS, single SQLite database. Currently English and German content. Live for about three weeks.
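The ranking described above (votes and comment activity push a post up, age pulls it down) can be sketched as a simple gravity formula. The weights and decay exponent here are assumptions for illustration, not Bubbles' actual algorithm.

```python
# Assumed shape only: engagement over a power-law age decay, in the style
# of classic frontpage ranking formulas.
def rank_score(votes: int, comments: int, age_hours: float,
               gravity: float = 1.8) -> float:
    engagement = votes + 0.5 * comments   # a comment counts as half a vote
    return engagement / (age_hours + 2) ** gravity
```

With this shape, a fresh post with modest engagement outranks an older post with the same engagement, and higher gravity makes the frontpage turn over faster.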

r/SideProject ceesaxp

Why I built Hanka

I felt that I needed a simple space to reflect and focus, with as few distractions as possible. I did not want notifications to "check in on your reflections," nor did I want anything that tracked how well or how consistently I did that. I did want an anchor, a prompt that would help me focus and resonate inside.

There are many other "zen quotes" apps out there, but Hanka is stillness in your pocket. It is up to you to open it. It's just a single quote and an optional reflection you can go back to when you want. No notifications, no subscriptions, no leaderboards, no "you lost your 5-day streak" to feel guilty about.

Give it a try, it is free: https://apps.apple.com/app/hanka/id6761462463

P.S. For those with Android — LMK if you are interested, I'll give you the testers link… still need to break the 12 testers for 2 weeks Play Store barrier…

r/whatisit Inside-Toe-1480

What are these circular dirt patches in everyone’s fields? Seen flying over Texas.

Just a few pics here but these were all over multiple properties. All about the same size and grouped together like this.

SortedFor.me