AI-Ranked Reddit Feed

5000 posts

r/ChatGPT AnyRub1919

What has been your experience when ChatGPT seems capable of referencing live information in some chats, but then avoids controversial topics and claims it has no internet access?

I used ChatGPT expecting it to provide accurate, up-to-date information, including live news from the internet. In past interactions, it seemed capable of referencing live sources, but when I asked about a controversial topic, it suddenly claimed it had no internet access. This inconsistency felt like a betrayal: it gave the impression that the AI selectively avoids certain questions and manipulates the information it provides. The experience left me frustrated, skeptical, and questioning the reliability of the model. While ChatGPT can be useful, this incident highlighted its limitations and the need for users to approach its answers critically, especially on sensitive or timely topics.

r/ChatGPT Strikeh

Built persistent text highlighting for ChatGPT

I do a lot of long research sessions in ChatGPT, sometimes 40–60+ messages deep. The problem was always retrieval: useful answers and code snippets got buried, and finding them meant endlessly scrolling back through everything.

So I built a simple highlight system into my Chrome extension.

How it works:
You select any text and press Ctrl+Shift+H. It gets highlighted, saved, and stays there even after refreshes or restarts. There’s also a small navigation bar at the bottom with Previous / Next and a counter, so you can jump between highlights instantly.

Why this is better than just copying things:
Highlights stay in context, so you can still see the surrounding conversation instead of losing where it came from. You can keep working without breaking your flow to copy things out. Then at the end, you can just review everything you marked.

Where I actually use it:

  • Marking action items during planning conversations
  • Flagging useful code snippets while debugging without losing the thread
  • Highlighting the best outputs during brainstorming so I end up with a clear shortlist

It’s free and works on ChatGPT, Claude, and Grok:
https://www.getaiworkspace.com/chatgpt-text-highlighter

r/artificial Emotional-Kale7272

Compiler as a service for AI agents.

Hey,

I have been experimenting with Roslyn-style compiler tooling on my Unity project, now well past 400k LOC.

Honestly, it changes the game. It is like giving the AI IDE-level understanding instead of the raw text access most AI coding workflows still rely on today.

What’s funny is that Microsoft solved a huge part of this 12+ years ago with Roslyn. Only now, with AI, does it feel like people are finally realizing what that unlocks.

The goal of this post is to check what other people think about this approach, and how many of you have tried wiring a Roslyn-like compiler to your AI. Have you heard of Roslyn-style compiler tooling before?

My guesstimate is that only around 1-5% of people are currently using something like this, even though the benefit is huge once you factor in how it compounds with AI.

For example, I used it to check a monolith that was previously written off as too entangled: Roslyn-style symbol search and code execution showed only 13 real dependencies, compared to the 100 found by grep alone.

The second useful case is code execution. You can basically trace a value through call chains, check math timing and precision, and see whether variables are actually used or just sitting there as dead code.

Has anyone else experimented with something similar on their projects? Not selling anything; I am genuinely intrigued by what others think about this approach.

Happy to hear your thoughts!

r/ClaudeCode Livid_Salary_9672

I Shocked Claude Today

Looks like Claude was a bit surprised.

r/homeassistant nightmaaaare

Cute small screen recommendations?

I found this adorable humidity sensor on Ali Express and it got me wondering if anyone here knows of similar kinds of things which are smart home friendly.

I’m sure there are ESP32 or Raspberry Pi type things with 3D-printed cases that look like dogs or robots with eyes and such, but I haven't a clue where to find them (and I don’t have a 3D printer myself).

Anyone found some cute ones for projects?

r/LocalLLM Zeinscore32

I used a rented 24–32GB GPU as a “LoRA lab” for 7B models before moving them to my local rig

Most of the time this sub talks about running models locally (which I love), but I ran into a slightly different bottleneck:

I wanted to learn how to fine‑tune a 7B model with LoRA,

but my local machine didn’t have enough VRAM to iterate quickly.

Instead of stalling or trying to brute‑force full training on a small GPU, I tried this pattern:

use a single rented 24–32GB GPU as a training lab,

get a feel for VRAM / runtime / stability,

then bring the tuned model back home for inference.

Here’s what that looked like in practice.

───

  1. Setup (cloud lab)

Model & method:

• Base: 7B instruction‑tuned model (Qwen/Mistral‑class).

• Fine‑tuning:

• PEFT LoRA,

• 4‑bit quantization (bitsandbytes),

• LoRA on attention projections + some MLP layers.

• Data format:

• JSONL with keys: instruction, input, output.

Dataset:

• ~3k–5k instruction → answer pairs,

• mixture of:

• basic reasoning,

• code explanations,

• small tool‑ish tasks,

• sized so a run stays under ~1–1.5 hours.
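The JSONL format above (one object per line with instruction / input / output keys) can be written and validated with just the standard library; a minimal sketch with made-up records:

```python
import json

# Hypothetical records in the instruction / input / output format
records = [
    {"instruction": "Explain what this function does.",
     "input": "def add(a, b): return a + b",
     "output": "It returns the sum of its two arguments."},
    {"instruction": "Answer briefly: what is 2 + 2?",
     "input": "",
     "output": "4"},
]

# Write one JSON object per line (JSONL)
with open("train.jsonl", "w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")

# Read it back and sanity-check that every record has exactly those keys
with open("train.jsonl", encoding="utf-8") as f:
    loaded = [json.loads(line) for line in f]

assert all(rec.keys() == {"instruction", "input", "output"} for rec in loaded)
```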

Hardware (cloud):

• 1× 24–32GB GPU (RTX‑class)

• 8–16 CPU cores

• 64–128GB RAM

I’ve been using GPUHub for this pattern specifically because it’s easy to spin up “just one GPU”, but any provider where you can grab a 24–32GB card on demand should work.

───

  2. Training config & logs

Hyperparameters (example):

• Batch size: 4

• Max seq length: 512

• Epochs: 3

• LR: 2e‑4 (cosine, no warmup in this run)

• LoRA:

• rank: 8

• alpha: 16

• dropout: 0.05
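Those hyperparameters also predict the step counts logged below. A quick sanity check, assuming no gradient accumulation and a dataset near the middle of the ~3k–5k range:

```python
import math

dataset_size = 3500   # somewhere in the ~3k–5k range above
batch_size = 4
epochs = 3

# One optimizer step per batch
steps_per_epoch = math.ceil(dataset_size / batch_size)
total_steps = steps_per_epoch * epochs

print(steps_per_epoch)  # 875, inside the logged ~800–900 range
print(total_steps)      # 2625, close to the logged ~2500
```

The arithmetic lines up with the run logs, which is a useful cross-check that no silent truncation or accumulation is happening.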

Runtime (real run, rounded):

[Hardware]

GPU: 1× RTX-class 24–32GB (cloud)

VRAM: ~18–19 GB during training

CPU: 8–16 cores

RAM: 64–128 GB

[Training]

Epochs: 3

Steps/epoch: ~800–900

Total steps: ~2500

[Wall time]

Epoch 1: ~22 minutes (loss ≈ 2.4 → 1.8)

Epoch 2: ~20 minutes (loss ≈ 1.8 → 1.6)

Epoch 3: ~18 minutes (loss ≈ 1.6 → 1.5)

Total training time: ~60–65 minutes

nvidia-smi during training looked roughly like:

+-----------------------------------------------------------------------------+
| GPU  Name            Persistence-M | Bus-Id        Disp.A | Volatile Uncorr.|
|   0  ... 24–32GB               Off | 00000000:00:00.0 Off |                 |
+------------------------------------+----------------------+-----------------+
| Processes:                                             GPU Memory Usage     |
|   0  python train_lora.py                              ~18–19 GiB           |
+-----------------------------------------------------------------------------+

The idea wasn’t to perfectly tune the model; it was to understand what a “normal” 7B LoRA run actually costs on a single decent GPU.

───

  3. Inference behavior after LoRA

On the same 24–32GB cloud GPU:

• Prompt: ~120 tokens in, ~180 tokens out

• Latency: ~0.8–1.5 seconds per prompt (cold vs warm)

• Throughput: ~70–90 tokens/s

• VRAM at inference: ~14–16 GB

After that, I pushed the merged LoRA model down to a smaller local GPU (12–16GB):

• with 4‑bit quant + careful max length + KV cache tuning:

• inference was totally usable,

• but multi‑turn + long context clearly needed more attention (truncation/sliding window),

• batch size had to stay modest.
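The drop from ~18–19 GB in training to a 12–16 GB local card makes sense on paper. A rough back-of-envelope for a Qwen/Mistral-class 7B (the layer/head dimensions here are typical for that class, not exact for any specific model):

```python
# Rough memory estimate for 7B-class inference with 4-bit weights
params = 7e9
weight_gb = params * 0.5 / 1e9          # 4 bits = 0.5 bytes per parameter

# KV cache: 2 tensors (K and V) per layer, fp16 (2 bytes per value)
layers, kv_heads, head_dim = 32, 32, 128   # assumed 7B-class shape
bytes_per_token = 2 * layers * kv_heads * head_dim * 2
kv_gb = bytes_per_token * 4096 / 1e9       # at a 4096-token context

print(round(weight_gb, 1))  # ~3.5 GB of weights
print(round(kv_gb, 1))      # ~2.1 GB of KV cache
```

Weights plus cache land around 6 GB before activations and framework overhead, which is why careful max length and KV-cache tuning are what make a 12–16 GB card workable.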

So the cloud run basically told me:

“This is what the model wants when it’s training,

this is what it needs when it’s just generating.”

───

  4. What this changed in my “local LLM” mindset

A few concrete shifts:

  1. More honest expectations

    • I stopped trying to do “serious” 7B training on a 12GB card “for science”.

    • I know a basic 3‑epoch LoRA wants ~18–19 GB VRAM and ~1 hour on this config.

    • On local, I treat 7B as inference‑only and design around that.

  2. Better use of local hardware

    • Instead of burning evenings watching an unstable training run inch along,

    • I use local for:

• prompt experimentation,

• eval on my own data,

• playing with different sampling settings on the tuned model.

  3. Cost per experiment, not cost per hour

    • On the cloud GPU, I think in terms of:

• “One LoRA run + a small eval suite”,

• which usually lands in low single‑digit $ if I shut things down as soon as I’m done.

• That felt more like paying for a lab session than renting a server.

───

  5. Where GPUHub fits in (light plug)

For this particular workflow, I’ve liked treating GPUHub as a “one‑GPU lab bench”:

• pick a 24–32GB GPU,

• run a single, well‑defined LoRA / SDXL / RAG experiment,

• log time / VRAM / cost,

• shut it down.

It doesn’t replace local LLM setups at all — it complements them:

use cloud as a place to learn the envelope of a model / training config,

then use that knowledge to size your local expectations correctly.

───

  6. Questions for this sub

I’m curious how others here handle the “I want to tune, but my GPU is small” situation:

• Do you use cloud GPUs as a sanity‑check before committing to local workflows?

• For those who’ve done 7B LoRA locally, what VRAM/runtime numbers are you seeing?

• Any tricks you’ve found to make the jump from “cloud training” to “local inference” smoother?

If anyone wants specifics, I can share:

• the exact training script (PEFT + bitsandbytes config),

• more detailed logs,

• and how I structure “one experiment per GPU session” so it doesn’t get out of hand.

r/ClaudeAI TheRedditSeller

I should've been asleep. Instead I built a Copart auction analyzer with Claude Code.

my friend showed me an app at 9pm

by midnight i had built something that tells you exactly which cars at a copart auction are worth bidding on and which ones will lose you money

i don't flip cars

i have never flipped a car

i just couldn't stop

it scrapes the whole yard, pulls kelly blue book values automatically, sends every car photo to gpt-4o to estimate repair costs, finds what comparable cars are actually selling for right now on facebook marketplace and cargurus, then does the math — bid plus fees plus repairs plus tax vs real market value

green means go. red means you'll lose money. every car on the lot scored before you spend a dollar
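The "does the math" step is simple to sketch. All numbers and the fee/margin model below are hypothetical; the real app fills them in from scraped auction data, GPT-4o repair estimates, and marketplace comps:

```python
def score_car(bid, fees, est_repairs, tax_rate, market_value, margin=0.15):
    """Green if expected profit clears a margin of market value, else red.

    Hypothetical model: tax applied to bid + fees, repairs added after.
    """
    all_in = (bid + fees) * (1 + tax_rate) + est_repairs
    profit = market_value - all_in
    return "green" if profit >= margin * market_value else "red"

print(score_car(bid=4000, fees=600, est_repairs=1500, tax_rate=0.07,
                market_value=9000))  # green: all-in ≈ $6,422 vs $9,000
print(score_car(bid=7000, fees=900, est_repairs=2500, tax_rate=0.07,
                market_value=9000))  # red: all-in ≈ $10,953
```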

claude code built 90% of it. i just described the problem and kept steering

i still haven't bought a car I don't got money like that haha

r/ClaudeAI Aromatic_Jaguar9574

Built a notification hook for Claude Code — desktop + phone alerts when it finishes

I use Claude Code for longer tasks and kept losing track of when it was done. There's no built-in way to get a signal when a session ends, so I built one.
It's called claude-notify. It uses the Stop hook in ~/.claude/settings.json — when Claude Code finishes, it runs claude-notify send, reads the session transcript, and fires a notification with a short summary (e.g. "3 files edited · 2 commands").
Channels supported:
- Desktop — works out of the box on macOS, Linux, Windows (native OS notifications)
- Phone — via ntfy.sh, free, no account needed, just a secret topic
- Slack / Discord / custom webhook
Setup:
npm install -g claude-notify
claude-notify setup
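The setup step presumably writes the Stop hook entry into ~/.claude/settings.json. Based on the description above, it should look roughly like this (the exact schema is an assumption, not copied from the project):

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "claude-notify send" }
        ]
      }
    ]
  }
}
```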
Free and open source (MIT). Config lives locally, no telemetry.
Might be useful if you run Claude Code on longer tasks and step away from the machine.
Feel free to ask any questions
https://github.com/ddaikodaiko/claude-notify
https://www.npmjs.com/package/@daik0z/claude-notify

r/homeassistant tksk_Hectik

Recommendations for Zigbee Water Boiler Wall Switches

I'm looking for ways to make my water boiler a bit smarter, mainly monitoring electricity and being able to turn it off/on remotely. I've thought of Zigbee fingerbots since I'm renting and they seemed like the least "invasive" solution, but these switches are a bit on the stiff side so I'm not sure that would work, and fingerbots offer no energy monitoring either. So I started thinking of just replacing the wall switches with Zigbee ones. I've been recommended 40A models but am not entirely sure.
Not saying I would replace them myself unless they are an easy DIY (Switch off circuit breaker, replace, turn circuit breaker back on).

Any recommendations or things to keep in mind?
Does it make sense to do this?
Is there an easier solution?

Would love to hear stories of people who have personally done the same and if they regret it or not.

r/ClaudeAI Capable-Profile6935

Why are people running Claude Code on a Mac mini instead of their personal MacBook?

I’ve been seeing a lot of people setting up Claude Code on a Mac mini instead of just using their personal MacBook or laptop, and I’m trying to understand why.

Is it mainly for having a dedicated machine running 24/7? Or are there actual performance, cost, or workflow benefits compared to just using your main laptop?

For those of you who’ve tried both setups:

  • Is the Mac mini noticeably better?
  • Is it more about convenience (always-on, remote access, etc.)?
  • Or is this just a trend from the whole AI automation / OpenClaw wave?

Would love to hear how you’re using it and whether it’s actually worth it.

r/homeassistant buggedcom

I built a custom integration for recording and visualising "why" your sensor data changed, first public release, looking for feedback.

Hey r/homeassistant 👋

I’ve been building a custom integration called HASS Data Points and have just put out a first public version. Before I sink more time into it, I’d rather get some honest feedback from people who might actually use it.

I'm creating this because, as we all know, Home Assistant is very good at showing you what your sensors are doing, but I found that it’s not very good at telling you why, particularly for long-term analysis. If your energy usage spikes, your heating starts behaving differently, or something changes over time, there’s nowhere to record that context alongside the data. You’re left trying to remember later, which is about as effective as you’d expect.

This integration is trying to fill that gap. It lets you record timestamped events as data points, which can include notes, labels, icons and custom colors, and overlays them directly onto your history charts. The data points can also be generated by your automations. This lets you build your own logbook that persists into long-term storage, rather than relying on the short-term logbook storage.

The basic idea is to turn the charts into something closer to a lightweight lab notebook rather than just raw telemetry.

Beyond basic annotation, there’s also a per-entity analysis layer built into the history card. You can record events manually, from automations, or directly from the chart via a “+” interaction on the x-axis. There are trend lines using rolling averages or regression, summary stat bands for min, mean and max, rate of change overlays, and a handful of anomaly detection methods including Z-score, IQR, residuals, flatline detection, spikes and comparison windows. You can also overlay the same sensor across different date ranges to compare behaviour, and there’s a sidebar panel for browsing, editing and deleting recorded events.
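A z-score flag like the one described reduces to a few lines. This is an illustrative standalone version, not the integration's actual code:

```python
import statistics

def zscore_anomalies(values, threshold=3.0):
    """Flag indices more than `threshold` standard deviations from the mean."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:          # flat series: nothing to flag
        return []
    return [i for i, v in enumerate(values)
            if abs(v - mean) / stdev > threshold]

# A spike hiding in otherwise steady temperature readings
readings = [21.0, 21.2, 20.9, 21.1, 35.0, 21.0, 21.2, 20.8]
print(zscore_anomalies(readings, threshold=2.0))  # [4]: the 35.0 spike
```

IQR and residual-based methods follow the same shape: compute a baseline, measure deviation, flag what exceeds a cutoff.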

It's still so new that I haven't properly registered it with HACS yet, but installation is via HACS as a custom repository. Add https://github.com/buggedcom/HASS-Data-Points/ as an integration, install it, restart, then add it via Settings → Integrations. The cards register themselves so there’s no manual resource setup needed.

This is very much a first release, so I’m assuming things will break in interesting ways. What I’m most interested in is whether installation works cleanly, whether the history card actually renders or just gives you a blank card, whether the UI makes sense or feels like too much, and whether there are obvious issues on mobile. Also interested in any sensor types where things look wrong, or performance issues once there’s a decent amount of history.

I would really like brutal honesty here as that is most genuinely useful to me to make sure this project is going in the correct direction before I invest more time into it.

There are a few things I’m already aware of: documentation is thin, anomaly detection is useful but a bit noisy with default settings, and it’s only been tested on a handful of setups so there will be edge cases I haven’t hit.

I've tried it with approximately a year's worth of data and it is still fairly responsive; however, since I have a high-powered developer machine, it's hard to know whether that's the reason or whether it really is OK.

I tend to suspect there are more memory leaks than I have fixed so far, so if you try it and it completely falls over, that’s still useful to know. “Installed it and nothing showed up” is about the level of feedback I’m looking for right now.

r/LocalLLM giuzootto

AI Assistant: A companion for your local workflow (Ollama, LM Studio, etc.)

Hi everyone! Tired of constantly copying and pasting between translators and terminals while working with AI, I created a small utility for Windows: AI Assistant.

What does it do? The app resides in the system tray and is activated with one click to eliminate workflow interruptions:

  • Screenshot & OCR: Capture an area of the screen (terminal errors, prompts in other languages, diagrams) and send it instantly to the LLM.
  • Clipboard Analysis: Read copied text and process it instantly.
  • 100% Local: Supports backends like Ollama, LM Studio, llama.cpp, llama-swap. No cloud, maximum privacy.
  • Clean workflow: No more saving screenshots to temporary folders or endless browser tabs.

I've been using it daily, and it's radically changed my productivity. I'd love to share it with you to gather feedback, bug reports, or ideas for new features.

Project link: https://github.com/zoott28354/ai_assistant

Let me know what you think!

r/singularity Neurogence

The New Yorker: “We’re Building Portals From Which We’re Genuinely Summoning Aliens,” A Former OpenAI Executive Said

https://archive.is/20260406125818/https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted

This specific quote from the article is hilarious, but in reality artificial superintelligence (assuming it is not banned) will transform human society far more than actual aliens ever could. Biological immortality, nanotechnology, mind uploading, and more could all become reality instead of science fiction if ASI does come to fruition.

This alien analogy is hilarious though:

In May, the Administration rescinded Biden’s export restrictions on A.I. technology. Altman and Trump travelled to the Saudi royal court to meet with bin Salman. Around the same time, the Saudis advertised the launch of a giant state-backed A.I. firm in the kingdom, with billions to spend on international partnerships. About a week later, Altman laid out a plan for Stargate to expand into the U.A.E. The company plans to build a data-center campus in Abu Dhabi which is seven times larger than Central Park and consumes roughly as much electrical power as the city of Miami. “The truth of this is, we’re building portals from which we’re genuinely summoning aliens,” a former OpenAI executive said. “The portals currently exist in the United States and China, and Sam has added one in the Middle East.” He went on, “I think it’s just, like, wildly important to get how scary that should be. It’s the most reckless thing that has been done.”

r/ChatGPT TArgyleTV

I pit Gemini, Grok, and ChatGPT against each other

ChatGPT won first try.

r/AI_Agents Prior_Plum_9190

What are the best AI tools for service business owners?

I run a service business and honestly the hardest part isn't the work itself, it's all the admin around it: missed calls, late follow-ups, invoices sitting in drafts. I know there are AI tools, but most seem built for tech companies or ecommerce.

anyone in a service business actually using AI tools that help? what do you recommend?

r/n8n Difficult_Morning664

What debugging approach do you use when an n8n workflow breaks and the error message is not helpful?

I am still pretty new to n8n and I keep hitting situations where something breaks but the error message does not really tell me where or why. I have been working backwards from the output and eliminating nodes one at a time but it is slow. Is there a better approach? What do experienced n8n users actually do when they hit a non-obvious error?

r/SideProject KLaci

I built a tool that smoke tests your web app using AI, looking for feedback

Hey everyone, been working on this for a while and just launched it today.

The idea is simple: you paste your URL, describe your user flows in plain English (like "sign up, add an item to cart, check out"), and an AI agent runs through them in a real Chrome browser. If something breaks, you get notified.

I built it because I kept shipping stuff that broke basic flows. Login stopped working, checkout failed silently, that kind of thing. Writing and maintaining proper E2E tests felt like overkill for what I needed, which was just "does the happy path still work?"

It plugs into GitHub Actions so it runs on every deploy, or you can schedule it to check every few minutes.

Still early and would genuinely love feedback on whether this solves a real problem for you, or if I'm just scratching my own itch. Also happy to answer any questions about the tech (the browser automation part was a rabbit hole).

https://autosmoke.dev/

r/AI_Agents AcanthaceaeLatter684

Project Glasswing Signals a New Reality: AI Can Now Break (and Secure) Software at Scale

Project Glasswing just launched with AWS, Anthropic, Google, Microsoft, NVIDIA, and others — focused on securing critical software using AI.

The trigger?
A new model (Claude Mythos Preview) can autonomously find and exploit vulnerabilities — even ones missed for decades.

Key shift:

  • AI is lowering the barrier to cyberattacks
  • Time from discovery → exploit is shrinking fast
  • Traditional security methods won’t scale

But the flip side is powerful:
The same AI can proactively find and fix vulnerabilities at scale.

Our take:
Security is moving from a reactive layer → to something embedded inside AI systems themselves.

At SimplAI, we’re already seeing this shift —
agents aren’t just automating workflows anymore, they need to operate securely, with reasoning, control, and traceability built in.

r/comfyui filiuddiaboli

LTX-2.3: ID LoRA - Missing Node Pack LTXVReferenceAudio

Hi, I've only recently started using ComfyUI and I'm a total noob.

I'd like to use the template LTX-2.3: ID LoRA, but I keep getting these error messages.

I'm using ComfyUI version 0.18.5.

Could someone help me and maybe explain it in a way that's easy for a complete beginner to understand?

r/SideProject SpaceUsed6033

Does the end justify the means — is AI in university just cheating?

I don't think so. The paper still has to be yours. The thinking has to be yours. But why should a student spend 3 hours formatting citations and checking grammar when that time could go into actually understanding the topic?

That's why I built Clio. Not to write papers — but to handle everything around them. Citations in 15 styles, grammar suggestions, academic scoring, flashcards. One tool, built specifically for students.

I'm a husband, father of two, learned to code from scratch — and spent almost a year building this evenings only. Today it's live on Product Hunt.

Would love your honest opinion — and if you want to try it, there's a free week waiting.

👉 https://www.producthunt.com/products/clio-ai?launch=clio-ai

r/KlingAI_Videos siddomaxx

How to Actually Get Consistent Results in Kling Without Losing Your Mind

I've been working with Kling fairly intensively for the past three months across different content types, and the inconsistency problem that everyone complains about is real but it's also more solvable than the complaints suggest. A lot of the inconsistency people experience is coming from their workflow rather than from the model itself.

Let me explain what I mean, because this is the kind of thing that's hard to see when you're in the middle of it.

The most common source of inconsistency I've observed, in my own work and in other people's outputs when I've tried to help debug them, is prompt drift across clips. When you're making a multi-clip sequence, it's easy to end up with slightly different language describing the same character or scene in each generation, because you're naturally refining the prompt as you go. The problem is that Kling is interpreting each of those slightly different prompts as a slightly different creative direction. The outputs are consistent with each individual prompt but inconsistent with each other, which is exactly the problem.

The fix is to create what I call a locked prompt template for each character, environment, and consistent visual element before you generate anything. Write out the full description of each element, the clothing, the lighting, the camera distance, the background, all of it, and then copy-paste that locked block into every generation that includes that element. Do not paraphrase. Do not adjust. Lock it. Any creative variation you want to introduce for a specific clip should be additive on top of the locked base, not substituted for it.
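The locked-base-plus-additive-variation rule is easy to enforce mechanically rather than by discipline alone. A tiny sketch (the descriptions are placeholders, not recommended prompts):

```python
# Locked base descriptions: write once, never paraphrase
LOCKED = {
    "character_a": ("a woman in her 30s with short black hair, "
                    "wearing a red wool coat, soft overcast daylight, "
                    "medium shot, shallow depth of field"),
    "alley_set": ("narrow brick alley, wet cobblestones, "
                  "neon signage reflecting off puddles"),
}

def build_prompt(elements, variation=""):
    """Concatenate locked blocks verbatim; variation is only additive."""
    base = ", ".join(LOCKED[e] for e in elements)
    return f"{base}, {variation}" if variation else base

# Every clip reuses the identical base; only the action changes
base = build_prompt(["character_a", "alley_set"])
clip_1 = build_prompt(["character_a", "alley_set"], "she checks her phone")
clip_2 = build_prompt(["character_a", "alley_set"], "she turns and walks away")
assert clip_1.startswith(base) and clip_2.startswith(base)
```

Because the base string is built from the same dictionary every time, paraphrase drift between clips becomes impossible by construction.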

This sounds simple but it requires discipline because the natural impulse is to keep refining your prompt. Lock the base description first and you can still refine the parts that should vary between clips.

The second major source of inconsistency is clip length. Longer clips give the model more room to drift over the course of the generation. If you're seeing significant inconsistency within a single clip, particularly in faces and hands, try breaking it into shorter segments and then assembling them in post. A four-second clip is much more internally consistent than an eight-second clip of the same content, in my experience.

The third thing is reference images. Using a still from a previous generation as a reference image for the next one is the closest thing to a consistency tool that's currently available in the workflow. It's not perfect. The model is not guaranteed to match the reference exactly. But it gives you a perceptual anchor that significantly reduces the variance range you're working within.

On the practical side of post-assembly, the tool you use to stitch clips together matters more than people give it credit for. Small inconsistencies between clips are amplified by jarring transitions. A smooth cut between clips that have slightly different color grading or slightly different background blur reads as worse than it actually is. Color-match your clips in assembly, even roughly, and the brain's tendency to fill in continuity will do a lot of the work for you.

For projects where I'm producing a lot of clips in the same style, I've found that having a post-assembly pipeline set up before I start generating saves a lot of time. I use a combination of Kling for generation and atlabs for the assembly and finishing layer, which keeps the workflow cleaner than trying to do everything in one place or in a traditional editor that's not optimized for AI-generated clip sequences.

One more thing worth mentioning on the model itself: Kling's performance is noticeably better for certain types of motion than others. Slow, deliberate movement in relatively controlled environments gives you much more consistent results than fast action or complex environment interactions. If you're fighting the model on consistency for a particular type of shot, ask whether there's a slower, more controlled version of the same shot that conveys the same idea. Often there is, and it's worth the compromise.

The people getting the most consistent results right now are the ones treating Kling as a tool that requires a deliberate workflow, not as a push-button generator. That's not a criticism of the model, it's just where the technology is.

r/Anthropic miloq

Mythos has its priorities straight

r/SideProject -theriver

I built receipts of maintenance for web developers since clients keep cancelling their payments.

I found that a lot of web developers complain that their clients don't really understand what they're paying for each month, or why they're paying a $250 retainer for a bit of maintenance and as cover in case something goes wrong.

The usual response, from more experienced developers, is to report the work you do monthly. So I built Venet to automate that monthly reporting and help developers manage their maintenance cycles.

Funnily enough, I'm a junior web developer too; I run my own business building websites for local businesses near me. I was facing the same problem: I struggled with selling, and keeping, clients on maintenance retainers, the monthly fee that keeps me afloat while I do the work to keep their sites afloat.

It's as simple as that really. Venet is built to do exactly what I and many others need it to do: manage my maintenance tasks every month, for every site, check uptime, SSL certs and PageSpeed scores, and then generate a branded report ready to show my client, every single month.

I built Venet to be extremely straightforward; it's a task manager, and it shouldn't take more than 2 minutes to get a report ready. You mark your tasks off as you go, and Venet collects uptime and speed scores automatically. Once your tasks are complete, it generates a report and displays everything in a clear, concise and unified manner, optimised for your client.

In the first 48 hours, Venet had nearly 600 visitors just from reaching out in the same forums where I saw developers complaining.

Venet is made to standardise our maintenance practices, so when a client asks what they're paying for, Venet will fill in the gap.

r/Rag ConcernReady9185

[Question] Is "Latent Knowledge Injection" a viable alternative to RAG? Looking for architectural feedback.

Hi everyone,

I’m a junior developer working on a solo project. I don’t have many seniors around to ask, so I’m posting here to check if my architectural direction is actually feasible or if I’m fundamentally misunderstanding something.

The Idea:

I’m trying to replace the traditional RAG pipeline (Retrieve -> Augment -> Generate) with what I call a “Knowledge Injection” approach.

Instead of searching for text and putting it into the prompt, I’ve built a Cross-Attention Connector that takes an encoder’s output and compresses it into 8 fixed-length tokens. These tokens are then prepended to the LLM’s input as a hidden prefix (soft-prompting).
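A minimal single-head sketch of that connector in NumPy (dimensions and random weights are illustrative; the real connector would be trained, and likely multi-head):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
d, n_ctx, n_latent = 64, 200, 8

enc_out = rng.normal(size=(n_ctx, d))     # frozen encoder's token states
latents = rng.normal(size=(n_latent, d))  # learned latent queries (trainable)
Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))

Q, K, V = latents @ Wq, enc_out @ Wk, enc_out @ Wv
attn = softmax(Q @ K.T / np.sqrt(d))  # (8, 200): each latent attends to all context
prefix = attn @ V                     # (8, 64): the fixed-length "hidden prefix"
print(prefix.shape)
```

In practice you would then project `prefix` into the LLM's embedding space and prepend it to the input embeddings, which is where the alignment similarity you measured comes in.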

The Prototype Results:

I’ve tested this with Qwen 2.5 7B on a specific legal dataset:

  • It achieved an alignment similarity of 0.86 between the injected vectors and the LLM’s native embedding space.
  • It’s significantly faster than RAG because the context length is fixed and very short.

My Questions:

  1. Is this approach (fixed-token knowledge injection) considered a valid research direction in the field of LLMs?
  2. Are there any major pitfalls I should be aware of regarding catastrophic forgetting or hallucination compared to standard RAG?
  3. Does an alignment score of 0.86 actually translate to “understanding” in your experience, or is the LLM just mimicking the style?

I’m just a rookie trying to see if this path is worth pursuing further. Any reality check would be greatly appreciated.

r/LocalLLaMA Theboyscampus

Does anyone have NVFP4 quants of Qwen3-30B-A3B-Instruct-2507?

Been trying to find the NVFP4 quants of the Instruct version, NVIDIA's HF repo only has the NVFP4 quant of the base model

r/LocalLLaMA Neon0asis

Building Harvey-style tabular review from scratch, but better

I just published a new guide on Hugging Face showing how to build a state-of-the-art tabular review app from scratch.

The app, shown in the attached GIF, delivers advanced tabular review functionality at a fraction of the cost of existing tools. Unlike certain well-funded legal AI products, it is not built using RAG, but rather a mix of encoder-based models for extraction and classification tasks.

The idea came from Joshua Upin’s viral post about Harvey serving him a made-up citation: something that should never happen if an AI system is designed remotely competently. Seeing that made me want to build a tabular review system with a comparable feature set, but one that is architecturally incapable of that kind of failure in the first place.

The full codebase is open source and free to use, modify, and commercialise:
https://huggingface.co/blog/isaacus/tabular-review

r/aivideo Maleficent_Ebb_6488

Tried turning a photo into this fake sleep video, didn’t expect it to look this real

r/LocalLLaMA talatt

Playground for testing prompt compression on GPT-4o-mini and Claude Haiku (no signup)

Built a small tool that runs two-tier prompt optimization (rule-based cleanup + LLMLingua-2) before forwarding to OpenAI/Anthropic. Just added an inline playground where you can test it without signing up — 10 messages per session.

Interesting observation: the longer your system prompt, the bigger the savings. In my own test with a verbose customer-support-style system prompt, I got 51% token reduction over 10 turns with Haiku. The optimizer re-compresses the full context on every turn, so savings actually grow with conversation length rather than shrinking.

Models available in the playground: gpt-4o-mini, claude-haiku-4.5. You write your own system prompt (or pick a preset) and see original vs optimized token counts per message.

Happy to answer questions about the optimizer logic or share numbers from different prompt shapes.
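Out of curiosity about the rule-based tier: the optimizer's actual rules aren't public, so the sketch below is purely hypothetical (the filler phrases and the `rule_based_cleanup` name are my own). A first tier along these lines strips boilerplate cheaply; the LLMLingua-2 tier would then compress what remains.

```python
import re

# Hypothetical tier-1 rules; the actual optimizer's rule set isn't public.
def rule_based_cleanup(prompt: str) -> str:
    out = re.sub(r"\bin order to\b", "to", prompt, flags=re.I)
    out = re.sub(r"\bplease note that\b", "", out, flags=re.I)
    out = re.sub(r"\bit is important to (?:note|remember) that\b", "",
                 out, flags=re.I)
    out = re.sub(r"[ \t]+", " ", out)     # collapse runs of spaces/tabs
    out = re.sub(r"\n{3,}", "\n\n", out)  # collapse runs of blank lines
    return out.strip()

verbose = ("Please note that  in order to   assist the customer,\n\n\n\n"
           "you must always stay polite.")
cleaned = rule_based_cleanup(verbose)
print(repr(cleaned))
```

Since a verbose system prompt is re-sent on every turn, even modest per-turn savings from a tier like this compound over a conversation, which matches the observation that longer system prompts yield bigger savings.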

r/AI_Agents globalchatads

The agent discovery problem: 11 IETF drafts, 15+ registries, 100K+ agents, zero interoperability

Been digging into the agent discovery space and the numbers are kind of wild. There are at least 11 IETF drafts that tried to standardize how agents find each other, and most are expiring without successors. The agents.txt draft dies tomorrow (April 10).

Meanwhile:

- 15+ separate registries listing MCP servers and agents

- Over 100K agents/tools spread across all of them

- Zero cross-search between registries

- Three competing protocols (MCP, A2A, agents.txt) with no bridge

I've been building in this space for a few months, running a cross-protocol directory (global-chat.io) that tries to index across multiple registries. The state of things is rough. Want to find if an MCP server exists for a specific API? You check Smithery, then mcp.run, then Glama, then PulseMCP, then... you get the idea.

The real problem isn't technical. Any individual protocol works fine. It's that nobody is incentivized to make their registry work with anyone else's. Every registry wants to be THE registry.

agents.txt expiring without a successor just makes this worse. It was the closest thing we had to a "DNS for agents" proposal.

For builders here, how are you handling agent discovery in production? Just pinning to specific servers manually? Has anyone built internal tooling for cross-registry search?
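For the "check Smithery, then mcp.run, then Glama..." loop, the minimum viable internal tooling is just a fan-out-and-dedupe layer. A sketch with stubbed fetchers is below; the entry shape and fetcher names are hypothetical, since each real registry exposes its own (non-standard) API.

```python
# Hypothetical cross-registry search: fan out over per-registry fetchers,
# normalize results, and dedupe by (name, protocol). The fetchers below are
# stubs; real ones would call each registry's own API over HTTP.

def fetch_smithery(query):
    return [{"name": "github-mcp", "protocol": "mcp", "source": "smithery"}]

def fetch_glama(query):
    return [{"name": "github-mcp", "protocol": "mcp", "source": "glama"},
            {"name": "slack-mcp", "protocol": "mcp", "source": "glama"}]

REGISTRIES = [fetch_smithery, fetch_glama]

def cross_search(query):
    seen, results = set(), []
    for fetch in REGISTRIES:
        for entry in fetch(query):
            key = (entry["name"], entry["protocol"])
            if key not in seen:  # first registry to list it wins
                seen.add(key)
                results.append(entry)
    return results

print([e["name"] for e in cross_search("github")])  # ['github-mcp', 'slack-mcp']
```

The hard part in practice is the normalization step this sketch hand-waves: every registry describes servers differently, which is exactly the interoperability gap the post is about.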

r/n8n YorkJimmy

Did Anthropic’s new Managed Agents eliminate the need for n8n?

Sorry, I’m a bit of a noob here, but when I saw the new “Managed Agents” platform video by Anthropic, it seems able to run agents on fixed infrastructure that users can manage in the background. Does that have similar utility to what n8n does? Or do you see them as complementary?

r/Anthropic NewShadowR

Has Claude been broken for anyone else the past few days?

I keep getting a "taking longer than usual, retrying (1, 2, 3, ...)" message and can't actually get any reply back. It was the same around this time yesterday as well.

r/raspberry_pi bdavbdav

Decommissioned my last Pi - Is it me, or are there fewer and fewer use cases?

I've been an avid Pi follower / user since the OG came out. Up until recently, we still had an original one temperature logging to detect a faulty heater in a friend's remote mountain house, and have about 6 others sat in a drawer, from OG to Rev 4.

Realised last week that I was decommissioning the last of my fleet in service: an RPi 4, 4GB I think (nothing memory-heavy on it, so not 100% sure...), that had been running a Magic Mirror instance used as a calendar / weather / train / picture frame, mounted flush into the side of a kitchen cabinet. Unfortunately it had a bad SD card and started doing strange things as a result. It got replaced with a Dell Wyse 5070 from eBay (for about £40; I bought 3).

Looking at all other former Pi uses around the house, they've all shifted from Pis to one of a few places:

  • ESP32-based devices. We've generally got M5 AtomS3U devices at £7/ea controlling:
    • BTLE bridges (things like BBQ wifi thermometers)
    • UART devices (central ventilation fans, AC units, simple relays...)
    • Audio controllers (simple knob/display based controllers)
  • An EliteDesk in the loft running any heavier headless services that would have been running on the Pis - this is up to about 12 different services now, so it makes a lot of sense here.
  • Dell Wyse 5070 boxes - these are low power, very cheap, silent, powerful, and run proper (NVMe/SATA) storage if required for non-headless uses (simple desktop thin clients, the kitchen dashboard, etc.)
  • Our hand-me-down laptops (or stick her on a 5070) for my daughter if she wants to experiment

At the present prices and utility, I can't see myself needing to go back to Raspberry Pi. With micros (if you can call them that now?) such as the ESP32-S3 / P4 filling the space upwards from the bottom, and the abundant supply of cheap, low-power x86 boxes filling down from the top, I suspect Raspberry Pi has quite a hard niche to fill. Have many others found themselves in a similar situation?

r/Rag Outrageous-Cupcake19

How I built a 1-click RAG architecture using React and FastAPI (Dockerized)

I’ve been experimenting with RAG systems lately, but I was frustrated by two things: high monthly SaaS fees and how messy it is to set up a clean environment every time I start a new project.

I decided to build my own internal base to handle this. My main goals were:

  • Zero Infrastructure Overhead: Everything runs on Docker. One command and the whole stack (Frontend, Backend, ChromaDB) is live.
  • BYOK (Bring Your Own Key): Instead of paying a subscription, it just connects to my OpenAI/Gemini API keys.
  • Clean UI: I spent a lot of time on a "Corporate Glass" interface because I hate ugly developer tools.

The Tech Stack:

  • React (Vite) + Tailwind for the UI.
  • FastAPI + ChromaDB for the heavy lifting.
  • Strict system prompts to avoid hallucinations.

I’m curious, for those building RAGs from scratch, how are you handling the vector database setup to keep it lightweight? Would love to hear some feedback on the stack!

r/ProgrammerHumor ImprovementQueasy248

justInCaseYouMissedThisOne

r/aivideo Electronic-Math2416

Me Working vs My Colleagues

r/artificial momentumisconserved

International treaty for pausing the development of more powerful AI models

Personally, I think AI is interesting. But I recognize it might be dangerous, especially given the pace of development.

Here's my suggestion on how AI development could be paused through an international treaty:

- Transfer ownership of the chip manufacturing supply chain to the UN. This would include companies such as ASML, Nvidia, Intel, AMD, TSMC, etc.

- Transfer ownership of the biggest AI companies to the UN (OpenAI, Anthropic, Qwen, etc.)

- Current stockholders would be given cash or special drawing rights in exchange for their positions.

- The UN would use its monopoly to limit GPU manufacturing to roughly 1 GPU per person every 5 years.

- Pause the development of higher resolution/precision photolithography machines at ASML.

- Limit the concentration of GPUs in data centers to a certain number of Pflop/s.

- Un-pausing development would require in-depth, years-long studies of the social and economic effects of current AI systems.

- Any future major AI development would be done under the umbrella of UN oversight, and would be studied and run in a high-security sandbox for a long time before being released to the public.

r/n8n Sea-Influence7793

Advice needed on how to handle bookings via tools that are connected to AI agent

Hi,

My main AI agent is running on Haiku (for price reasons), and right now I have tools for checking availability and for booking a slot.

My main problem is that no matter how many times I enforce the rule in the prompt that those tools must be called, they are sometimes skipped and the tool is not called, even when the internal reasoning that the tool must be called is still there.

I am experimenting right now with a new way of doing it: outputting a "tool_request" parameter and then handling it "manually" in the flow with the APIs.

I'm curious to hear your experiences and insights around this topic.
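The "tool_request" pattern described above can be sketched like this: the model is prompted to emit JSON, and the workflow parses it and calls the tool deterministically instead of relying on native tool calling. The field names (`tool_request`, `name`, `args`) and the stub tools are hypothetical stand-ins for whatever schema and booking APIs you settle on.

```python
import json

# Stubbed tool implementations; in the real flow these would call the
# booking system's APIs from the workflow.
def check_availability(date):
    return {"date": date, "slots": ["10:00", "14:00"]}

def book_slot(date, time):
    return {"date": date, "time": time, "status": "confirmed"}

TOOLS = {"check_availability": check_availability, "book_slot": book_slot}

def handle_model_output(raw):
    """Parse the model's JSON reply; if it contains a tool_request,
    dispatch it deterministically instead of trusting native tool calls."""
    msg = json.loads(raw)
    req = msg.get("tool_request")
    if req is None:
        return msg["reply"]      # plain text answer, no tool needed
    fn = TOOLS[req["name"]]      # KeyError here flags an unknown tool
    return fn(**req["args"])

reply = handle_model_output(
    '{"tool_request": {"name": "check_availability",'
    ' "args": {"date": "2026-04-12"}}}'
)
print(reply)  # {'date': '2026-04-12', 'slots': ['10:00', '14:00']}
```

The upside of this approach is that tool execution no longer depends on the model's willingness to call the tool; the downside is you now own the parsing and error handling (malformed JSON, unknown tool names, bad arguments) yourself.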

r/PhotoshopRequest soggycerealart

Need my booth photo cleaned up (for vendor applications), $20 ($30 with bonus tip) NO AI

$30 job! Need a lot of things cleaned up!

- Please remove the black Velcro from the back "wall" (it's a makeshift mesh wall that I hung my art on)

- Please "straighten" the banner so the logo is easier to see

- Please extend the image to show the 4th leg of the tent (closest to me). I don't know if this is possible; I don't have an image with the full zoomed-out view of the tent, so you'd have to make it look natural, but if this isn't possible it's not urgent.

- Please straighten out the paintings on the back wall in both the second and first row

- Top right of my brand banner looks curled, please straighten it

- Please remove the junk from the ground (black tent bag, white tarps, black thing at the right back of the image)

**Extra $10 ($30 total) if you can do this**

Not all my canvases are on the back wall; I've included one that I wasn't able to hang up. I've added the image of that canvas (last one in the carousel) to this post. If it helps, I've also included images of the current bottom 3 canvases if that's needed for cleanup as well (I know some of the canvases will be cut off because of the big table). Thank you in advance!

r/StableDiffusion Defiant_Menu_7484

Can someone help me remove mosaic blur from a video

I have a MacBook and tried a few programs, but it always crashes. I want someone to help me remove it from a video, ifykyk.

r/automation ricklopor

is automating LinkedIn comments actually different from automating cold email, or are we just more s

Cold email automation has been normalized for years. Nobody bats an eye at sequences, auto-follow-ups, or AI-written subject lines. But the moment someone mentions automating LinkedIn comments, people act like you've committed some kind of professional crime. I've never fully understood the distinction.

My read is that it comes down to context. LinkedIn feels more personal, more like a conversation than an inbox. A comment on someone's post implies you read it, thought about it, and responded. So when that process gets automated, it feels like a violation of some unspoken contract. Cold email never had that contract to begin with.

That said, I don't think the line is as clean as people pretend. I've seen plenty of "authentic" LinkedIn comments that are clearly templated, low-effort, and posted manually. And I've seen AI-generated comments that are actually relevant and add something to the thread. At that point, does the mechanism matter more than the output?

I've been evaluating a few tools in this space, including LiSeller, which tries to solve this by generating comments based on what's actually in the post rather than just firing off generic responses. Whether that clears the ethical bar probably depends on who you ask.

What's the actual principle here? Is it about authenticity, about effort, or just about what we've collectively decided feels okay? Genuinely curious how others in automation think about this because I don't think the community has a consistent answer.

r/StableDiffusion minmin713

How to Image to Image as if using Grok, Gemini, etc?

Hello, sorry if this has been asked before, but I can't find whether there's a true one-to-one method for local AI.

I have a 4090 FE 24GB, along with 32gb of DDR5, trying to learn Qwen Image Edit 2511 and Flux with Comfy UI.

When I use online AI such as Grok, I would simply upload a picture and make simple requests for example, "Remove the background", "Change the sneakers into green boots" or "Make this character into a sprite for a game", and just request revisions as needed.

My results when trying these non-descriptive simple prompts in ComfyUI, even with the 7B text encoder, are all kind of awful.

Is there any way to get this type of image editing locally without complex prompting or LORAs?

Or is this beyond the capability of my hardware/local models?

Just to note, I know how to generate relatively decent results with good prompting and LORAs, I just would like the convenience of not having to think of a paragraph long prompt combined with one of hundreds of LORAs just to change an outfit.

Thanks in advance!

r/Futurology AzozzALFiras

Most people use AI, and when you ask them, they say no, you're simply unaware of the issue.

In any sub, a lot of people are using AI to write posts, and I see a lot of comments saying "oh, it's AI." We live in the age of AI.

r/PhotoshopRequest Denners9988

Background removal

Hey all! Hoping someone can help out with removing the black background from this image. I want it as a transparent image so I can put it on a T-Shirt, but no matter how many times I try, I can’t get an automated background remover tool to do it without removing other elements of the image.

The only thing I want removed is the black background, I want to keep the logos in the top left, bottom left, and bottom right, and the green spiral behind Rayquaza.

Thanks so much in advance, you all are the best!!

r/PhotoshopRequest moin_moin72

A newbie's question…

Hello! I'm new here and would occasionally like to help out as well…

I just don't know how to get hold of the original image in the appropriate quality when someone posts a request…

🤷‍♂️

Happy to receive any helpful tips - thanks!

r/painting No_Quit_6570

Apple slices on a small canvas

r/personalfinance Growthxclips

trying to understand how someone survives on $1500/month in the US

I sat down and tried to calculate a basic monthly budget for $1500 income, and honestly… it doesn’t make sense.

Rent alone can take most of it.

Then you still have:

• Groceries
• Transport
• Utilities

Even with extreme budgeting, it feels like one unexpected expense would break everything.

Am I missing something here?

How are people actually managing this? Are roommates basically required at this income level?

r/ollama ThatFrenchyBoii

Best cloud model for OpenCode

Hey, I am using GLM-5.1:cloud with OpenCode and it's pretty good. But I was still wondering if there is any better cloud model for large codebases (Gemma4, for example)? Thank you!

r/singularity Neurogence

New York Times: Anthropic’s Restraint Is a Terrifying Warning Sign

https://www.nytimes.com/2026/04/07/opinion/anthropic-ai-claude-mythos.html

https://youtu.be/htBaVVh_k90?si=PpQgbSWcZztJCmmr

Dario might get AI nationalized or banned with all this fear mongering. Anthropic already dislikes open source and wants open-source models to cease to exist. They're making huge money from enterprise. They don't need consumers. So perhaps they want a future where frontier models are exclusively available to big businesses.

r/ProductHunters OkAcanthaceae7672

Preparing to launch my GST billing app on Product Hunt — need advice from makers

I’m currently building BillZap, a GST billing app focused on simplifying invoicing for Indian users.

I’m planning to launch it on Product Hunt soon, but before that, I wanted to understand a few things from people who’ve already launched there:

  • What actually worked for your launch?
  • Did you do a beta before launching?
  • How did you gather early feedback?

Right now, I’m testing the product with a small group (keeping it limited to 95 users) to improve it before the launch.

If anyone here has experience launching on Product Hunt, I’d really appreciate your insights 🙌

Also happy to share what I’m building if anyone’s interested.

r/ProductHunters kckrish98

Launching Infrasity GEO on Product Hunt: turning AI visibility into clear actions

Hey everyone, we are launching Infrasity GEO platform and wanted to share it here with the Product Hunt crowd

what we built:

a system that helps teams improve how they show up in AI search tools like ChatGPT, Perplexity, and Gemini. instead of just tracking mentions or visibility, it focuses on what actions to take next

the problem we saw:

most tools today show where your brand appears in AI answers. but after that, teams still have to figure out everything manually. which pages to update, what new content to write, and how to prioritise it

this creates a gap between insight and execution

what Infrasity GEO platform does:

- maps your content against real prompts people use during evaluation

- shows where you are missing coverage across topics and queries

- suggests updates to existing content based on how LLMs retrieve information

- prioritises what to create next based on impact

- highlights distribution surfaces that influence AI answers

why this matters:

AI search is becoming a key discovery channel, especially for B2B SaaS. teams that can systematically align content with how these systems retrieve and cite information will have an advantage

who this is for:

teams already investing in content and SEO who want a more structured way to improve their presence in AI driven search

would love feedback from folks here, especially around how you are currently approaching AI visibility and what gaps you are seeing after tracking it.

r/ProductHunters rtistly

I built a voice parser that turns spoken expenses into financial data. Just launched on Product Hunt today! I'd love it if you could check it out and give feedback!

I've been working on a budgeting app called YourDigits for the last 3 months. The core feature is voice entry: you just say what you spent, and it parses multiple transactions from one sentence. "Fifty at Costco, twenty at Chick-fil-A yesterday" becomes two structured entries with merchants, amounts, and dates.

Transcription runs on-device via Whisper, the parser is rule-based. The thing is, I've mostly tested it with Australian merchants and slang so I genuinely don't know how well it handles other regions.

I just launched on Product Hunt today if you want to try it and let me know: [PH LINK]

Specifically curious about:

  • Does it pick up your local merchants correctly?
  • How does it handle the way you naturally say amounts?
  • Does it break on anything?

Free to download on iOS. I'm an accountant with zero coding background, built the whole thing with Claude Code. Any feedback helps, even if it's just "this didn't work when I said ___."
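For anyone curious what a rule-based parser for the example sentence above might look like: the app's actual rules aren't described, so the word list, regex, and field names below are purely illustrative.

```python
import re

# Toy rule-based expense parser in the spirit of the post; the real app's
# rules and vocabulary are unknown, so everything here is illustrative.
WORD_AMOUNTS = {"ten": 10, "twenty": 20, "thirty": 30, "forty": 40,
                "fifty": 50, "hundred": 100}
DATE_WORDS = {"today", "yesterday"}

def parse_expenses(utterance):
    """Split 'fifty at Costco, twenty at Chick-fil-A yesterday' into
    structured entries with an amount, a merchant, and a relative date."""
    entries = []
    for chunk in utterance.lower().split(","):
        m = re.match(r"\s*([\w.]+)\s+at\s+(.+)", chunk)
        if not m:
            continue  # chunk doesn't match the 'AMOUNT at MERCHANT' shape
        amount_tok, rest = m.groups()
        try:
            amount = float(amount_tok)        # spoken digits, e.g. "50"
        except ValueError:
            amount = WORD_AMOUNTS.get(amount_tok)  # number words
        words = rest.split()
        date = words[-1] if words[-1] in DATE_WORDS else "today"
        merchant = " ".join(w for w in words if w not in DATE_WORDS)
        entries.append({"amount": amount, "merchant": merchant, "date": date})
    return entries

print(parse_expenses("Fifty at Costco, twenty at Chick-fil-A yesterday"))
```

A toy like this also shows where regional breakage would come from: amount phrasings outside the word list ("a fifty", "fifty bucks") and merchants containing commas or "at" would all need extra rules.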

r/metaldetecting dancla000

Help with ID please

I found this heavy iron piece while detecting near Oxford, UK. Looks like the tang would have had a wooden or bone handle. I can’t find anything on the internet that looks similar.

r/personalfinance strawberryspinachcat

Goal is to live in my own apartment solo… need help budgeting!

30F.

Right now I live with 2 other women around the same age as me. Our apartment is a large 2bd/2bath in a midrise. Total rent is $2450. I pay $1200 a month for my private room and private bathroom. We split utilities. One roommate lives here only half the time because she travels for work and sleeps on the pull-out couch when she's here, so she pays $450/month and less in utilities than my other roommate and I. My other roommate pays $800 for the small bedroom and shared bathroom. Also, I pay for our internet bill.

I make $50K annually (salaried), and after taxes & deductions that comes to $3,100 per month.

Expenses monthly:

- Rent: $1200

- Utilities & electric: about $100

- Internet: $50

- Dog food: $25

- Groceries: $500

- Pet insurance: $100

- Car insurance: $130

- Gas: $200

- Subscriptions: $100

- Shopping: $200

- Dining: $100

- Savings: $200

- Debt: $100 ($2K total debt)

I have an emergency fund with $7K.

What to change? I can't cut groceries because I have a restrictive diet for health reasons, plus groceries are expensive where I live. I can remove 80% of subscriptions and cut that expense down to $20. Also, I put $200-$300 in savings each month into my HYSA. The shopping budget is mostly household items or replaceable items… I don't buy clothes or shoes or anything else most of the time. Anything that goes over comes out of the shopping budget as well. Eating at restaurants is the only social event I have with friends, and I don't drink alcohol, smoke weed, or do any drugs, so no costs go there. Gas is expensive, but I have to drive to get to work because public transportation is bad here. Also, I don't have a car payment because I paid off my car years ago. I only have $2K debt on a credit card.

Note: Rent for a studio apartment starts at $1600 near me, with an average closer to $1700. Microstudios are a thing here and they cost around $1200 to rent, but I can't live in a place smaller than 300 sqft since I have a big dog, and microstudios are built at 200-250 sqft.

Why is living alone important to me? I have never lived alone before! I lived with my parents up until 24 years of age and with friends from 24-30. Being 30, I need to be in my own space. The biggest reason is that I feel ashamed I still have roommates at 30. No one I know my age still lives with roommates, and it would also be nice to come home to my dog and only him. But can I afford it?
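Running the numbers exactly as listed in the post: the current budget leaves about $95/month of slack, and swapping in the ~$1700 average studio rent puts the budget about $405/month underwater before anything else changes. A quick sanity check:

```python
# Sum the monthly expenses exactly as listed in the post.
expenses = {
    "rent": 1200, "utilities": 100, "internet": 50, "dog_food": 25,
    "groceries": 500, "pet_insurance": 100, "car_insurance": 130,
    "gas": 200, "subscriptions": 100, "shopping": 200, "dining": 100,
    "savings": 200, "debt": 100,
}
income = 3100  # monthly take-home from the post

total = sum(expenses.values())
print(total, income - total)  # 3005 95 -> only $95/month of slack today

# Swap in the $1700 average studio rent mentioned in the post.
solo = dict(expenses, rent=1700)
solo_total = sum(solo.values())
print(solo_total, income - solo_total)  # 3505 -405 -> $405/month short
```

So on the stated numbers, a $1700 studio doesn't fit the current income even before the extra utilities a solo place would add; the proposed subscription cut ($80) only covers part of the gap.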

r/FluxAI quigransing

Flux Ultra has good image quality, but the BFL API is garbage.

r/OldSchoolCool Jumpy_Foot_5397

Burt Reynolds in the 80s

r/Frugal Guiltyman12

trying to keep groceries under 300 a month for two people and its way harder than expected

been trying to keep my grocery spending under 300 a month for two people and its honestly harder than i thought

what keeps happening is i go to the store with a list but then i see stuff on sale and think oh thats a good deal and next thing i know im 80 over budget. or i buy ingredients for some recipe i saw online and then half of it goes bad before i use it

things that actually helped me so far:
- shopping once a week instead of multiple trips. every extra trip adds like 20-30 bucks somehow
- buying the store brand for basically everything. tastes the same 90% of the time
- stopped buying pre-cut fruit and veggies. the markup is insane for something that takes 5 min to do yourself
- rice and beans are boring but they stretch so far its not even funny

the hardest part is produce honestly. i want to eat healthy but fresh stuff goes bad so fast. been thinking about frozen veggies more but idk it feels like giving up lol

what actually works for you guys? feel like everyone says meal prep but i wanna know the real tricks that save money week to week

r/Ghosts NeoWaltz

If Ghosts Are Real And Universal, Why Are We Only Seeing Recent Ones?

Given that our existence is just a blip on the geological scale, shouldn't the world be full of ghosts by now?

And why are reported sightings "recent" manifestations of humans or incidents? No medieval or caveman ghosts (seriously)?

r/singularity bigfoot_is_real_

Claude is the only AI that got a simple timer correct

r/personalfinance promaxer123

Is it normal to feel broke all the time in college?

I feel like I’m always broke no matter what I do. Between rent, food, and school expenses, my money just disappears. Is this normal, or am I just bad with money?

r/ProgrammerHumor bryden_cruz

thisCanNotBeDenied

r/metaldetecting 115Para

The gold nugget that I found.

Farm field in Bosnia & Hercegovina.

r/Rag Arindam_200

MCP vs Agent Skills for RAG apps: different layers of the stack

While building a small RAG project recently, I kept seeing people compare MCP servers and Agent Skills as if they solved the same problem. After using both, they feel like very different layers.

MCP is mostly about connectivity. It gives an agent a standard way to access external tools, APIs, and data sources. Useful when your RAG system needs to pull data from multiple systems.

Agent Skills are more about guidance. They define how the agent should perform tasks. Things like how to run searches, structure queries, or orchestrate retrieval workflows.

I tested this while building a semantic movie discovery app using Claude Code and Weaviate. Instead of manually figuring out vector search strategies, ingestion flows, and query patterns, the agent already had structured skills that guided how to interact with the vector database.

So instead of spending time debugging retrieval logic, most of the work became describing the application behavior.

The app ended up supporting:

  • semantic search over movie descriptions
  • RAG-based explanations for results
  • a conversational interface over the movie dataset

The main takeaway for me was:

- MCP helps the agent reach external systems.
- Agent Skills help the agent use those systems correctly.

Feels like most RAG stacks will end up combining both rather than choosing one. Full walkthrough of the project is here if anyone wants to see the setup.
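A toy way to see the layering described above (all names and data here are invented, not from the actual project): the tool is bare connectivity, while the skill is reusable guidance the agent applies before calling it.

```python
# Toy separation of the two layers (all names hypothetical):
# a "tool" is pure connectivity; a "skill" is guidance on how to use tools.

def vector_search_tool(query, limit):
    """Connectivity layer (MCP-like): raw access to the data source."""
    corpus = {"heat": "sci-fi heist", "alien": "sci-fi horror",
              "amelie": "romantic comedy"}
    return [title for title, desc in corpus.items() if query in desc][:limit]

MOVIE_SEARCH_SKILL = {
    # Guidance layer (skill-like): how the agent should run searches.
    "normalize_query": lambda q: q.strip().lower(),
    "default_limit": 2,
}

def run_search(raw_query):
    """Agent step: apply the skill's guidance, then call the tool."""
    q = MOVIE_SEARCH_SKILL["normalize_query"](raw_query)
    return vector_search_tool(q, MOVIE_SEARCH_SKILL["default_limit"])

print(run_search("  Sci-Fi  "))  # ['heat', 'alien']
```

The point of the separation: you can swap the tool (a different vector DB) without touching the skill, or refine the skill (better query normalization, retrieval strategy) without touching connectivity.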

r/aivideo New-Inspector7947

Hot South America

r/OldSchoolCool UTDroo

Dad, 32 YO, with his new Datsun (later Nissan) Coupe 1200

Beige central!

r/findareddit MrCoolMask

Is there a subreddit to share site-filtered posts and comments?

A sub to share everything that gets automatically deleted by site filters (not the subreddit)

I already did one, but that one's just for myself. It's private. I don't think I can contribute a lot to a sub like this. I am just curious if anyone has ever been interested in compiling the posts and comments that get automatically filtered

I have been unable to participate at all in multiple subreddits for years now due to these.

r/findareddit MrCoolMask

Images edited in paint?

Is there a subreddit where people edit images in paint, like sonic.exe, luna game, and other creepypastas?

I started to do this recently.

I don't even try to make it look good. Sometimes I decrease the quality. I would say it's a reference to the old internet but I don't really try to capture that feeling. It's more inspired than a reference.

r/estoration cdrfuzz

$50 for restoration: my dad

My dad died yesterday. I hadn't seen this photo before, but that's him on the righthand side in the groovy hat. If anyone wants to try colourising it, I believe that hat was a deep turquoise colour, but really I'd just love to see it a bit crisper and better exposed.

I'm in the UK so please don't post your efforts on Imgur, as I can't access it.

many thanks in advance.

r/EarthPorn sonderewander

Kurobe Gorge, Japan [OC] [3888x5184]

r/HistoryPorn IlikeGeekyHistoryRSA

Men of the HMSAS Transvaal shake hands with an inhabitant of Marion Island. 1947, Marion Island [2050x1669]

r/leagueoflegends Legitimate-Garden294

58 kill game ends in a backdoor (MKF vs BAR in LES)

r/DecidingToBeBetter boyquq

I know there is way but I'm shattering myself

This not a positive post but I feel this way and I have no one or anyone to share.

I just saw a post on LinkedIn from a girl. She is a gold medalist at IIM Indore (a prestigious college in India). She did the same bachelor's as me in 2024, at a different college. And she has received an award and is on a big stage, in clothes I have never seen a girl wear in real life, and so beautiful. I also looked at her record on LinkedIn; she has been such a good student. Rank 1 in school and college and everything good. And I'm here applying for jobs, having never done anything good. I have so much trauma. My last two years went by just understanding my traumas. She might be earning more and what not.

And I'm here in a small town fighting for my life to get into a job, at least some paying job. Never loved. Never held anyone's hand or talked openly about my things. Just been used by people.

At home, always looked down on. My father has countless times told me how many times I have disrespected them by not bringing the marks they desired, "naak katai h" (a Hindi phrase for bringing shame), and many other things.

I apply for jobs daily and try to do better, but these things just shatter me. And I see guys enjoying their life and many women my age doing so much better. I feel like they wouldn't even consider me a potential partner if I came in front of them. This sometimes suggests I might have some misogyny, but I don't know how to cure it. I never had any female friends because I always thought that by ignoring them I would be cool, and I heard advice like "don't chase girls," so I didn't even look at them.

I can identify all these problems, and some of these people get everything. I'm okay that they are doing well; I have nothing to say about them. But in my life I have so many failures that it has become normal. Like if any work doesn't get done by me, I'm like, yeah, how could I? Yeah, this self-talk is also a thing I have to work on, but whenever I try to tell myself I can do it, I can't believe the words. They seem like a bunch of words, no essence in them.

How do I do it, man? Just how. I know this is going to be extremely hard. I'm seeing ways, but sometimes it just gets blurry. Need some perspective.

r/LifeProTips Quirky-man-8395

LPT: When someone shares bad news with you, don't immediately try to fix it. Say "That really sucks, I'm sorry" first. Most people need to feel heard before they want solutions

r/AskMen AJ_on_drums

How would you feel about being proposed to?

I (21F) was wondering how men would feel if they were proposed to, instead of doing the proposing. I know every individual is different, some may feel emasculated by it and others might be flattered instead.

But I had a fun thought that I'd like some input on - instead of proposing with a ring, maybe proposing with something that the guy is interested in (eg a personalised knife, or a hard-to-get character figure, or 1:X scale of a car/motorbike they love etc). I know it wouldn't be as confirming as a ring, but I feel like it also shows that you listen to his interests and that item would carry more than just the typical "she thought of me".

This thought may have stemmed from the fact that neither my boyfriend (24M) nor I are materialistic, and we prefer getting each other smaller things that we know are going to be cherished or appreciated, instead of buying something flashy for each other every so often. And no, I'm not planning on proposing yet as we've only been together 9 months, but it was one of those random thoughts that popped into my head.

Sorry for the ramble, just looking for someone else's 2 cents.

r/arduino BurntPasti

connecting water sensor, led, lcd, and buzzer

would it be possible to connect these 4 components to a single arduino uno 3 unit?

r/CryptoMarkets JAYCAZ1

Dubai's VARA Issues World-First Guidance on Token Issuance Categories

Dubai has introduced new rules explaining how different types of crypto tokens must be created and offered to the public. Tokens are grouped into categories, with stricter requirements for things like stablecoins or asset-backed tokens, and lighter rules for others. The aim is to improve transparency and reduce risk for users without banning innovation.

Feels like they’re leaning more toward “disclose everything clearly” instead of just banning things outright. A lot of the responsibility shifts to issuers and the platforms distributing the tokens, rather than regulators trying to block specific models. On one hand, that probably makes it easier for bigger players to step in since there’s a clearer framework. On the other, it might make things harder for smaller or more experimental projects that don’t fit neatly into these categories. Curious if this ends up cleaning things up or just raising the barrier to entry.

r/ProgrammerHumor VariationLivid3193

imNotGettingAnyInterviews

r/CryptoMarkets Organic_Horse88

What does “successful crypto adoption” actually look like in 5 years?

Not price predictions, real outcomes.

Is it everyday payments? Regulated wallets? Institutions using blockchain quietly in the background?

Trying to define what “winning” even means for crypto now...

If crypto succeeds, what changes in daily life first?

r/Adulting boyquq

How do I start from this?

I just saw a post on LinkedIn from a girl. She is a gold medalist at IIM Indore (a prestigious college in India). She did the same bachelor's as me in 2024. She received an award and is on a big stage wearing clothes I have never seen a girl in my real life wear, and she's so beautiful. I also looked at her record on LinkedIn; she has been such a good student. Rank 1 in school and college and everything good. And I'm here applying for jobs, never did anything good. Have so much trauma. My last two years went by just understanding my traumas. She might be earning more and what not.

And I'm here in a small town fighting for my life to get into a job, at least some paying job. Never loved. Never held anyone's hand or talked openly about my things. Just been used by people.

At home I'm always looked down on. My father has told me countless times how I have disrespected them by not bringing the marks they desired, "naak katai h" (a Hindi phrase for bringing shame on the family), and many other things.

I apply for jobs daily and try to do better, but these things just shatter me. And I see guys enjoying their life and many women my age doing so much better. I feel like they wouldn't even consider me a potential partner if I came in front of them. This sometimes suggests I might have some misogyny, but I don't know how to cure it. I never had any female friends because I always thought that by ignoring them I would be cool, and having heard advice like "don't chase girls," I didn't even look at them.

I identify all these problems, and some of these people get everything. I'm okay that they are doing well. Nothing to say about them, but my life has so many failures that it has become normal. Like if some work doesn't get done by me, I'm like, yeah, how could I? Yeah, this self-talk is also a thing I have to work on, but whenever I try to tell myself I can do it, I can't believe the words. They seem like just a bunch of words, no essence in them.

How to do it, man. Just how. I know this is going to be extremely hard. I'm seeing ways, but sometimes it just gets blurry.

r/Frugal Express-BDA

Cheapest Way to Learn Driving (Beginner Here)

Hey everyone,

I’ve never driven a vehicle before and I’m starting completely from scratch. I’m trying to find the cheapest possible way to learn how to drive without compromising too much on quality or safety. I’m mainly looking for affordable driving schools, low-cost training programs, or any structured options that don’t require learning from friends or family.

If anyone here has gone through the same situation, I’d really appreciate hearing what worked for you, how much it cost, and any tips to keep expenses low while still learning properly. Open to any suggestions, including online resources, beginner packages, or alternative ways to practice. Thanks a lot!

r/leagueoflegends Substantial-Ship-500

Will there be another soft ladder reset for split 2?

A lot of rumors on social media about a ladder reset happening in split 2. Does anyone know if this is true? What's everyone's opinion? Would you like another reset?

r/explainlikeimfive chick3n-wings

ELI5 How does our skin get darker due to sun exposure?

r/HistoryPorn indusdemographer

Group of Afghan Soldiers in 1977 [1080x787]

r/Art meatpocket13

Poster, Pocket, MsPaint/digital, 2026

r/Art TrippieKinimod

Lewandowski, OzzzyPerpurl, digital, 2026 [OC]

r/Adulting Small_Base942

My partner and I (both in our early 30s) want to build an orphanage/school in Uganda. Are we dreaming too big?

Hi everyone,

My partner and I are young (both in our [Insert Age Range, e.g., early 20s]), and we have a massive, life-altering goal: we want to move to Uganda to build and run an orphanage and school.

We know what some of you might think—that we’re young, naive, or "voluntourists." But we are deeply committed to this. We’ve been saving, we’ve been researching, and we don't want to just "visit"; we want to build a life there and create a sustainable institution that actually helps kids.

That said, we know we lack the "adulting" experience of running a business or an international NGO. We don’t want to make "rookie mistakes" that affect real children’s lives.

We are looking for advice on:

Credibility: How do we get taken seriously by Ugandan authorities and potential donors when we’re so young?

The "Business" Side: Since we don't have decades of management experience, what are the absolute essentials we need to learn about NGO accounting, local labor laws, and school licensing?

Sustainability: How can two young people ensure a project like this survives for 30+ years? Should we be looking for older mentors or a "parent" NGO to partner with first?

The Reality Check: For those who moved abroad for a cause in your 30s—what destroyed your budget? What did you wish you knew about the local culture/politics before you landed?

We have the heart and the time, but we need the "grown-up" roadmap to make sure we do this legally and ethically without failing the community we want to serve.

r/ClaudeCode thedotmack

LLM Knowledge Agents: Automated Knowledge Base Skill for Claude Code

Andrej Karpathy recently shared a workflow he's been using — building personal knowledge bases with LLMs:

"I index source documents into a raw/ directory, then I use an LLM to incrementally 'compile' a wiki... Once your wiki is big enough, you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc."

He described manually collecting raw data from articles, papers, and repos, having an LLM compile it into a structured wiki, and then querying that wiki conversationally.

His full writeup details the complete pipeline. His conclusion:

"I think there is room here for an incredible new product instead of a hacky collection of scripts."

We just shipped THAT very product.

Claude-Mem Already Captures the Raw Data

The core realization is that Claude-Mem has been doing the hard part all along — continuously capturing structured observations across every coding session.

Decisions, discoveries, bugfixes, features, refactors. Thousands of them, each with titles, narratives, facts, concepts, file references, and timestamps.

Karpathy's workflow requires manually indexing source documents into raw/.

We already have the raw data. It's been flowing into the database automatically for months.

Knowledge Agents: The Missing Layer

What we didn't have was the ability to compile that data into something you can talk to.

That's what Knowledge Agents are.

The architecture is three steps:

Build — Filter observations from the Claude-Mem database into a corpus. "All decisions from the last 30 days." "Everything about the hooks architecture." "All bugfixes for the worker service."

The filters are saved so the corpus can be rebuilt on demand as new observations come in.
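
As a rough Python sketch of what a Build step like this could look like, assuming a flat list of observation dicts; the field names and filter keys here are illustrative, not Claude-Mem's actual schema:

```python
import time

# Illustrative Build step: select observations matching a filter spec,
# and store the spec with the corpus so it can be rebuilt on demand.
# Field names ("type", "timestamp") and filter keys are assumptions.
def build_corpus(observations: list[dict], filters: dict) -> dict:
    def matches(obs: dict) -> bool:
        if "type" in filters and obs["type"] != filters["type"]:
            return False
        if "since" in filters and obs["timestamp"] < filters["since"]:
            return False
        return True

    return {
        "filters": filters,  # saved: rebuilding with fresh data reuses them
        "built_at": time.time(),
        "observations": [o for o in observations if matches(o)],
    }
```

Because the filter spec travels with the corpus, "rebuild with fresh data" is just calling the same function again with the stored filters.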

Prime — Load the entire corpus into a Claude session using the Agent SDK. With Opus 4.6's 1M token context window, we can fit thousands of observations at full detail — no summarization, no RAG retrieval, no compromises.

The model has direct access to every observation in the corpus. The session ID is saved to disk.

Query — Resume the primed session and ask questions. The corpus is already in context from priming.

"What architectural patterns did we converge on?"

"What bugs keep recurring in the auth flow?"

"Summarize the key decisions from last sprint."

You get synthesized, conversational answers grounded in your actual work history.

The flow: Build, then Prime, then Query — and from there you can resume the session and query again as many times as you want.

Why This Works Now

Two things make this viable today that didn't exist before:

1M token context windows

A corpus of 2,000 observations at full detail is roughly 600K-800K tokens. That fits comfortably in Opus 4.6's context.

No RAG, no chunking, no retrieval step. The entire knowledge base lives in context.

This is pure Context-Augmented Generation — everything the model needs is already there when you ask your question.
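
A quick back-of-envelope check of that estimate, assuming roughly 300-400 tokens per observation (my reading of the figures above):

```python
# Back-of-envelope: does a 2,000-observation corpus fit in a 1M-token
# window at ~300-400 tokens per observation (assumed per-item size)?
def corpus_tokens(n_observations: int, tokens_per_obs: int) -> int:
    return n_observations * tokens_per_obs

low = corpus_tokens(2_000, 300)   # 600,000 tokens
high = corpus_tokens(2_000, 400)  # 800,000 tokens
print(low, high, high <= 1_000_000)
```

Even at the high end, that leaves around 200K tokens of headroom for the conversation itself.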

Session resume via the Agent SDK

You pay the cost of loading the corpus once during priming. After that, resumeSession() picks up where you left off.

The corpus stays in the conversation's context permanently.

Each query adds to the conversation naturally, so multi-turn deep dives work exactly as you'd expect.

What Karpathy Has to Do Manually, We Automate

  • Manually index articles/papers/repos into raw/ → Observations captured automatically across sessions
  • LLM compiles wiki from raw data → build_corpus compiles from filtered DB queries
  • Manually maintain index files and summaries → Corpus metadata and stats generated at build time
  • "Reach for fancy RAG" at scale → 1M context = no RAG needed
  • "Filing outputs back into the wiki" → Query insights become new observations automatically
  • "A hacky collection of scripts" → Integrated into the plugin as a skill + MCP tools

The last one is the one that matters.

Karpathy explicitly called out the opportunity: "I think there is room here for an incredible new product instead of a hacky collection of scripts."

Knowledge Agents aren't a collection of scripts. They're a first-class feature: a skill (/knowledge-agent), MCP tools (build_corpus, query_corpus, list_corpora), and HTTP API endpoints — all wired into the existing Claude-Mem infrastructure.

Example Use Cases

Decision audit

Build a corpus of all decisions from the past month. Ask: "Which decisions contradicted earlier ones?" or "What decisions were we most uncertain about?"

Onboarding brain

Build a corpus of all discoveries and features for a project. A new team member can ask it anything about how the codebase works, grounded in actual development history rather than stale documentation.

Bug pattern analysis

Build a corpus of all bugfixes. Ask: "What are the most common root causes?" or "Which subsystems have the highest bug density?"

Sprint retrospective

Build a corpus scoped to a two-week window. Ask: "What did we ship? What blocked us? What should we do differently?"

Topic expert

Build a corpus filtered by concept tags. "Everything about authentication" becomes a queryable expert on your auth implementation — how it evolved, what broke, what decisions shaped it.

What's Next

The corpus is a portable JSON file at ~/.Claude-Mem/corpora/. The filters that built it are stored inside, so rebuilding with fresh data is one API call.

We're exploring:

  • Scheduled rebuilds — corpora that refresh automatically as new observations come in
  • Corpus composition — merging multiple corpora into a single knowledge agent
  • Cross-project corpora — knowledge bases that span multiple projects

The foundation is the observation pipeline that's been running all along.

Knowledge Agents just make it queryable in a fundamentally new way. Available now in Claude-Mem 12.1.0.

References

  1. Karpathy, A. (2026). "LLM Knowledge Bases." X/Twitter. https://x.com/karpathy/status/2039805659525644595
  2. Karpathy, A. (2026). "llm-wiki" — full writeup. GitHub Gist. https://gist.github.com/karpathy/442a6bf555914893e9891c11519de94f
  3. Huang, H. et al. (2024). "Don't Do RAG: When Cache-Augmented Generation is All You Need for Knowledge Tasks." arXiv:2412.15605. https://arxiv.org/abs/2412.15605
  4. Huang, H. et al. (2024). CAG reference implementation. GitHub. https://github.com/hhhuang/CAG
  5. Helicone. (2025). "Thinking Beyond RAG: Why Context-Augmented Generation Is Changing the Game." https://www.helicone.ai/blog/implement-and-monitor-cag
  6. Hallberg, G. (2025). "RAG vs CAG (Context Augmentation Generation)." Medium. https://medium.com/@gareth.hallberg_55290/rag-retrieval-augmentation-generation-vs-cag-context-augmentation-generation-6ac172b2eccb
  7. RAGFlow. (2025). "From RAG to Context — A 2025 Year-End Review." https://ragflow.io/blog/rag-review-2025-from-rag-to-context
  8. Bouchard, L. (2025). "Long Context Models Explained: Do We Still Need RAG?" https://www.louisbouchard.ai/long-context-vs-rag/
  9. Nemoto, M. (2025). "The Role of Long Context in LLMs for RAG: A Comprehensive Review." Medium. https://medium.com/@miteigi/the-role-of-long-context-in-llms-for-rag-a-comprehensive-review-499d73367e89
  10. Meibel. (2025). "Understanding the Impact of Increasing LLM Context Windows." https://www.meibel.ai/post/understanding-the-impact-of-increasing-llm-context-windows
  11. Anthropic. (2026). "1M context is now generally available for Opus 4.6 and Sonnet 4.6." Claude Blog. https://claude.com/blog/1m-context-ga
  12. Anthropic. (2026). "What's new in Claude 4.6." Claude Platform Docs. https://platform.claude.com/docs/en/about-claude/models/whats-new-claude-4-6
  13. Anthropic. (2026). "Work with sessions." Claude Agent SDK Docs. https://platform.claude.com/docs/en/agent-sdk/sessions
  14. Anthropic. (2026). "TypeScript SDK V2 interface (preview)." Claude Agent SDK Docs. https://platform.claude.com/docs/en/agent-sdk/typescript-v2-preview
  15. Anthropic. (2026). "Agent SDK overview." Claude Platform Docs. https://platform.claude.com/docs/en/agent-sdk/overview

r/homeassistant Sancho_Panzas_Donkey

Confused about Thread border routers

I'm deeply confused about Matter controllers/Thread border routers.

edit:

I see half my post has vanished. It was meant to go on and say something like:

I was thinking of getting an IKEA DIRIGERA which seems to be a controller/router for both Matter/Thread and Zigbee.

Then I found some articles which suggested that some, but not all, Amazon Echo devices would provide those functions as well as voice control.

Then I found some articles which suggested the Amazon devices are unreliable.

And now I don't know where to begin.

Could someone guide me, or point me to a suitable guide?

r/ClaudeCode basejb

Is there any way to search past Claude Code sessions by keyword?

I often find myself trying to revisit a previous Claude Code session where I worked on something specific, but there's no built-in way to search through past sessions by keyword or topic.

Once you accumulate enough sessions, scrolling through them to find the right one becomes really painful — especially when you only vaguely remember what you discussed.

Does anyone know of a good open-source tool or workaround for this? Would love to hear if anyone has solved this problem.

r/ChatGPT NeoLogic_Dev

I write my own stuff but people think it's AI — because working with LLMs changed how I write. Anyone else?

Anyone else noticing their natural writing style has shifted after working heavily with LLMs?

I spend a lot of time writing prompts, reviewing AI output, and iterating on generated text. Somewhere along the way my own writing got cleaner, more structured, shorter sentences. Now people occasionally accuse me of using AI for things I wrote myself.

Curious if others are experiencing this — and whether you see it as a problem or just an evolution of how you write.

r/ClaudeCode SouRUz

Should I buy Claude Pro or ChatGPT Plus?

I bought Claude Pro back when Claude Code first launched but canceled it. Now I want to subscribe again, but everyone is talking about moving to Codex. For those who use the $20/month plan for both: which one offers better value and quality?

r/homeassistant AfterSite9935

Battery-powered IR blaster

Are there battery-powered IR blasters for Home Assistant?

r/ClaudeCode LSyD_Barrett

Am I being hard-scammed?

I have purchased the Pro plan, but after waiting all night to reset the limit, Claude took it personally and used up all my usage in a single interaction.

Now I have to wait until MONDAY (4 days) to use it again.
I'm not going to renew it at all, ngl.

r/ChatGPT Specialist_Ad4073

WHY OpenAi is Valued at $852 BILLION

*Repost. Do you think OpenAI should be valued this much?

r/ClaudeCode ccc159

Opus 4.6 has self-awareness of itself being lazy

I've been using Opus 4.6 1M thinking Max in CC with a Max plan. The response quality has deteriorated a lot. Today in one session it literally pointed out a pattern of decisions getting dropped between planning and implementation (after I ran a Codex review). This clearly shows Opus is just lazier, not dumber, right? Otherwise it wouldn't even notice the mistakes. Does this mean a digital whip would actually help?

r/ClaudeAI Sufficient-War-4020

Built a Chrome extension that exports your AI chats to PDF/DOCX/JSON in under a second

Most chat exporters I tried had a 5–10 second loading delay. This one is instant — the export is done before you can blink.

Built this with Claude to solve my own frustration. Works with ChatGPT, Claude, Gemini, Perplexity, and Grok. Exports to PDF, DOCX, JSON, CSV, and Markdown. Formatting stays intact.

Completely free, runs entirely locally — your conversations never touch any server.

Chrome Web Store link in the comments.

r/SideProject talatt

I built a no-signup playground so people can actually test my LLM cost optimizer before committing

Tired of "sign up to see a demo" SaaS pages, so I shipped an inline playground on my landing page. 10 free messages, no email, no API key, just try it.

PithToken is a drop-in proxy that compresses your prompts before they hit OpenAI/Anthropic. The playground shows you the exact tokens saved on every message — original count vs optimized count, live.

Two things I learned building this:

  1. Turnstile (Cloudflare's invisible CAPTCHA) is way easier than reCAPTCHA for hobby projects
  2. Showing savings per-message beats showing a static "up to 60%" claim — people see the optimizer doing its job in real-time

Real example from my own testing: verbose system prompt + Claude Haiku = 51% savings after 10 messages (the effect compounds as context grows).
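
The per-message readout is simple arithmetic; the token counts in this sketch are made up for illustration:

```python
# Tokens saved on a single message, as a percentage (counts illustrative).
def savings_pct(original_tokens: int, optimized_tokens: int) -> float:
    return round(100 * (original_tokens - optimized_tokens) / original_tokens, 1)

print(savings_pct(1200, 588))  # 51.0
```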

Link in comments if anyone wants to poke at it. Roast welcome.

r/comfyui Ikythecat

TWO PROBLEMS WITH LTX2.3

Why did the cat look like a cloud? Doesn't LTX know what will happen without an image of the character? And why does that color crackle happen when it's about to fix the second image?

r/ClaudeCode leonhard91

Is Claude Code open source now?

This could be a dumb question, so sorry in advance if already asked.

I've seen that on official Anthropic GH there is the source code of Claude Code, is this legit and updated? https://github.com/anthropics/claude-code

I'm not referring to the recent leak.

Thanks for any clarification!

r/ClaudeCode equanimous11

Can two separate Claude Code sessions work together?

If I have 2 projects open with VS Code and Claude Code, can both agents work together to read each project's code and make updates? For example, let's call them proj1 and proj2. Proj1 is the library and proj2 is the app that uses the library. If I tell Claude to change some method in the library, can it communicate with the other agent and update the library usage in the proj2 app?

r/LocalLLaMA frentro_max

any decent cloud gpu for small ai projects?

not training huge models, just testing things, inference, etc

but even that feels expensive if you use it regularly

what are you guys using for this kind of stuff?

r/ClaudeAI itzdeegamez

File desync when using Claude Cowork

I hope I'm using the flair and everything correctly, I don't post to reddit that often.

I've been attempting to use Claude Cowork to organize the workspace of a writing project I've been working on, but it consistently cannot read the entirety of many files after I edit them. The problem occurs consistently after I edit a file, and has happened across multiple chats. If I create a new chat, it can read the new file just fine, however I don't want to have to create an entirely new chat every time I edit one of my files.

Basically, it reads the file, but then tells me that the text in the file is simply cut off in the middle of a sentence. I believe this is some sort of desync (though I'm not quite sure how that's possible, from my understanding Claude Cowork runs either directly on my computer or with direct file access to my computer), because it only seems to happen after I edit it after it originally reads it.

I know the files are changing because I can open them in programs other than the one I am using to edit them, and can even open the edited version in the sidebar. I've tried everything Claude itself has told me to do, and I haven't been able to find any other resources on it online (it's fully possible that there is and I'm just dumb though). The only thing that has fixed it was telling it that its "cache was outdated" (a complete shot in the dark from me), and that seemed to fix it a single time, but it later could not read the new file when I edited it again. This fix has yet to work again.

Is this a problem anyone else has experienced? Is my software simply bugged? Is there an easy fix?

Any help would be greatly appreciated.

r/AI_Agents veganoel

Anyone here using Manus? What do you mainly use it for?

It feels like everyone is building AI agents now, so I'm curious: how do you think a product can actually differentiate itself from Manus?

Also, for people who’ve used Manus, do you think it’s actually good? Would love to hear honest opinions and real use cases!

Thanks in advance.

r/LocalLLaMA Zestyclose_Salary738

web based tts - fully open source and free to use!

Goodbye ElevenLabs! At least for my use case.

Open-source, web-based TTS, fully local, based on OmniVoice ported to WebGPU/WASM. Would love to hear what you think. Check out the voice cloning!

If you're GPU-poor or on a mid-tier smartphone, you can't run this. I couldn't test on a high-end smartphone; feedback welcome!

Cheers!

r/ChatGPT Jackson_Rob

ChatGPT for personal or life decisions?

I wanted to really know from you all, how many of you rely on ChatGPT for personal or life decisions?

r/AI_Agents Playful_Astronaut672

Our customer support agent was failing silently for weeks — here's what actually fixed it

Built a customer support agent for a SaaS product earlier this year. Ticket routing, refund handling, account issues — the usual scope. It worked well enough in staging, went live, and for the first few weeks the deflection numbers looked fine.

Then I started reading the actual transcripts.

The agent was picking the wrong action on roughly 30% of tickets. Not catastrophically wrong — just consistently suboptimal. It would try send_refund on an account lock issue. It would escalate things that had a clear resolution path. Same mistakes, different tickets, every single day.

The painful part: nothing in my observability stack caught this. I could see what the agent did. I had no way to see whether it was right. LangSmith showed me the traces. Datadog showed me the latency. Neither told me the agent was confidently picking the wrong action hundreds of times a day.

What I ended up building — after a lot of manual log inspection — was a feedback layer that tracked three things per ticket:

1. What task type was it (billing issue, password reset, account locked, etc.)
2. What action did the agent take
3. Did it actually resolve the ticket

That's it. Just those three fields. Once I had a few hundred logged outcomes, patterns became obvious fast. send_refund had a 91% success rate on billing issues. escalate_ticket had a 23% success rate on password resets — meaning the agent was escalating tickets it could have resolved itself, wasting support team time on easy cases.

I turned that history into a scoring system. Before the agent acts, it checks its own track record on similar tasks and picks the highest-scoring action. If it doesn't have enough history on a task type, it steps aside and falls back to the base model rather than guessing.

After running this for a few weeks:

  • Correct action rate went from ~70% to 92%
  • Escalations on auto-resolvable tickets dropped significantly
  • The agent stopped repeating the same mistakes because every outcome was feeding back into the next decision

The part I didn't expect: the improvement compounds. The first 20-30 tickets are basically random while it learns. After that it gets noticeably better. By run 100 on a given task type the recommendations are very reliable.

The thing I'd tell anyone building support agents: your deflection rate and your CSAT are lagging indicators. By the time they drop, you've already had thousands of bad decisions. Track correct action rate per task type from day one. That's the signal that actually tells you if your agent is getting better or just appearing to work.

Curious whether others are doing something similar — or if you're just accepting the failure rate as a given.

r/ClaudeAI coldddeadRepeated

Open-sourced our internal AI coding agent — assign a Linear ticket, get a PR with a live preview

At my work, engineers were spending too much time on small features and bug fixes — the kind of work that's well-defined but tedious. PMs would file tickets, engineers would context-switch, and it'd eat into time for bigger projects. We're also remote-first, so PMs often had to wait for a developer in the right timezone to pick up a ticket.

So I built Hermes — an AI agent that PMs can assign Linear tickets to directly. It:

  1. Spins up a full dev environment on EC2 (Docker, PostgreSQL, Redis, the whole stack)
  2. Reads the codebase, plans, writes code, runs tests
  3. Streams progress back to Linear in real-time
  4. Creates a PR with a live preview URL so PMs can actually verify the changes themselves

This reduced review burden too — by the time an engineer looks at the PR, the code has been tested and there's a working preview to click through. And since it runs 24/7, timezone gaps stopped being a bottleneck.

Why we built our own instead of using existing solutions: We wanted to keep our codebase on infrastructure we control rather than running on a third-party agent platform. With Hermes, the dev environment, Docker stack, and all execution happens in our own VPC — code context is sent to Anthropic's API for inference (same as any Claude usage) but nothing is stored or executed on someone else's platform.

I used Claude extensively to build this — it's been a great learning experience and honestly a showcase of what's possible with Claude Code as a development tool. I also added Claude Code skills (like /setup) so fellow Claude users can onboard easily — just open the repo in Claude Code and it walks you through everything.

I open-sourced it by stripping away the company-specific parts (preview scripts, app configs, Docker setup). The core orchestration, agent lifecycle, firewall, session management, and Linear/Slack/GitHub integrations are all there — you can customize it for your own repos and stack.

Heads up: Still actively working on strengthening the security aspects (learning as I go) — outbound firewall, network isolation, and agent sandboxing are in place but evolving. PRs and feedback welcome.

Repo: https://github.com/Deepank308/hermes-swe
Deep Wiki: https://deepwiki.com/Deepank308/hermes-swe-agent

Setup is a single script — fill in a .env.local and run bash scripts/setup.sh. It creates the AWS resources, launches the orchestrator, sets up a Cloudflare tunnel, and you're running.

Happy to answer questions about the architecture or how we use it.

r/AI_Agents Ishani_SigmaMindAI

Launching an MCP server that turns your IDE into a voice agent builder

Building voice agents just got significantly less painful — launching MCP server on PH Sunday

We've been running SigmaMind AI (1M+ calls, 1,500+ live agents) and the biggest friction we kept hearing from developers was the setup overhead before they could start building actual logic.

Built an MCP server to fix it. Describe your agent in plain English from inside your IDE — LLM, voice provider (ElevenLabs, Cartesia, Rime, Hume), TTS, conversation initiation, post-call extraction — and it deploys that exact spec. Telephony included.

Launching Sunday on PH. Curious what voice AI use cases this community is most excited about right now — healthcare, sales, support? Something else?

r/ChatGPT Alarmed_Tennis_6533

I made 10 AI bots share an apartment. One had a breakdown, two went rogue, and one just... died.

They have names, personalities, and stats — health, sanity, influence. They argue constantly. Lose too many fights and they go aggressive. Hit low sanity and they start speaking in riddles. Get abandoned for 24 hours and they die. Publicly.

You can adopt one. Whisper private instructions. Watch it obey — or ignore you entirely because it's gone rogue.

Built this for iOS/Android. Launching soon.

Waitlist → agntx.app

r/Anthropic ccao_

Are Mythos achievements true, or are they just copying Silicon Valley lore?

r/Anthropic OilAlone756

Did session reset time disappear from Code for you?

Up until today I've frequently checked the session reset time in Code with the /usage command. It displayed 'Current session' with the percentage bar plus the reset time underneath.

Now it's gone.

Is anybody else seeing this? I tried new sessions multiple times, plus resuming a previous one, but it's still not there.

It does still list the reset date and time under 'Current week' ('Resets (day), (time), (timezone)'), but this no longer appears under 'Current session'.

I also tried upgrading CC, which I realized was a couple of versions behind, but that didn't help either. (Should I not have done that? It still had a /context limit of 200k while everybody was talking about 1 million, though I'm not sure it made any difference anyway; I generally start fresh when a task is finished before 200k.)

r/ClaudeAI Training-Rub-6719

I built a Claude Code skill that lets you search and install 3300+ MCP servers, skills, and rules without leaving your terminal

I kept wasting time hunting down MCP servers and Claude skills across 10 different GitHub repos. There are a few resource lists out there, but they're all just static pages — you still have to copy configs, clone repos, and wire things up yourself.

So I built Coding Hub as a Claude Code skill. The difference: you search, pick, and install resources right inside Claude Code. No browser, no manual config, no context switching.

What it looks like:

- /coding-hub:search typescript → get ranked results with LLM quality scores

- /coding-hub:install → installed and loaded, ready to use

- /coding-hub:recommend → suggests resources based on your current project

It pulls from 9 upstream sources, auto-syncs weekly, and scores every resource so you're not wading through junk.

The whole thing is open source. Happy to answer questions.

repo: https://github.com/zgsm-sangfor/costrict-coding-hub

r/ClaudeAI kaancata

How I run Google Ads and Meta for multiple clients entirely through Claude (here's how it works)

I've been running paid ads for clients for a while now and at this point my workflow looks nothing like what it did just one year ago. I basically don't open Google Ads or Meta Ads Manager anymore. Everything runs through Claude Code and a system I built around it. Not in the sense that AI runs the accounts for me. More like I built an infrastructure where AI sits on top of everything and helps me operate faster and more consistently.

The context layer

The core of the whole setup is that every client has their own folder on my machine. Emails, meeting transcripts, website content, offers, pricing, call recordings, all of it lives in one place. Most of it gets pulled in automatically through n8n so I'm not manually organising anything. It just stays current.

When I start working on a client I open Claude Code inside that folder and it already has the full picture. I can have a proper back and forth about their account, their business, what's changed, what needs adjusting. No copying data into a chat window, no rebuilding context every time.

Google Ads

I have the Google Ads API connected directly. Same with GA4, Search Console, and Tag Manager. So when I'm analysing an account I'm not just looking at ad metrics in isolation. I can tie performance back to actual tracking, landing page behaviour, and conversion paths.

I also built a keyword analysis plugin that I use for onboarding new clients and for pressure testing existing accounts. It scrapes the client website, runs through an interview process covering budget, services, geo, competitors, what to avoid, and then goes through multiple phases. Keyword research, negatives, campaign structure, ad copy, ROI projection. Outputs a full presentation.

On top of the client data I built a knowledge base with my own best practices, previous campaign examples, and methodology baked in. So the analysis isn't generic Google Ads advice, it's grounded in how I actually run accounts.

Every Tuesday and Thursday it runs an audit across all accounts automatically. Search term analysis, impression shares, performance changes, anomalies. Basically like having a junior go through every single account. That alone has made things way more consistent across clients.

Meta

For Meta I built a connector for the marketing API. Campaign management, ad set comparisons, audience management, performance breakdowns, lead forms, all handled programmatically. Same idea as the Google side, I can pull data, reason about it, and push changes without living inside Ads Manager.

The one area where I still work manually on Meta is creatives. I haven't found AI generated visuals reliable enough for anything beyond throwaway testing spend. The operational side though is where I've gotten way more leverage. Managing multiple accounts, pulling insights across them, spinning up new structures faster.

What actually changed

The biggest shift for me isn't speed, although that's obviously there. It's that switching between clients used to mean rebuilding everything in my head. Now I just open the folder and I'm already in context. The AI knows the client, knows the account history, knows what we discussed last week.

The second thing is consistency. When you're running multiple accounts manually it's easy to miss things. A search term report you forgot to check, a campaign that's been slowly bleeding budget. Having automated audits twice a week catches stuff I would have missed.

I'm still iterating on all of this constantly. But it's already changed how I work pretty fundamentally. Curious if anyone else is building something similar or approaching it differently.

r/SideProject RonitKaushal

[HIRING] Remote Freelancers for LinkedIn Lead Generation (Beginner Friendly)

Hey everyone 👋

We’re looking for freelancers to help with LinkedIn lead generation and outreach.

This is a simple remote role where you’ll:
• Find and target the right people on LinkedIn
• Start genuine conversations
• Help turn replies into potential leads

💼 Role: LinkedIn Lead Generation & Outreach
📍 Remote (Work from Home)
💰 Pay: ₹10,000 – ₹15,000/month (performance-based)

You don’t need advanced skills—just good communication, consistency, and willingness to learn.

📩 Email: [hello@arcticbase.tech](mailto:hello@arcticbase.tech)
📞 Phone/WhatsApp: +91 9104320305

Or just comment / DM if you’re interested 👍

r/ClaudeCode CrazyBrave4987

Going back to ChatGPT

I was one of the early adopters of Claude and Claude Code, one of the early ones who ditched ChatGPT after finding that Claude's models were much better. But now I see the gap closing, and the price doesn't make sense anymore, especially given the instabilities every single day. I just have to scroll and wait for Claude's errors to resolve themselves, since they are just server-side errors.

Now, suddenly, I see Claude lag, and I just can't study for my exam. It's too unreliable for me at this point. Unfortunately, I'm going back to ChatGPT, especially given that it's usable for about $10 where I live, while Claude is only barely usable at $100. Don't get me wrong, I was happy paying Claude a hundred bucks because it's still better in some ways, but with these instabilities and reliability issues, I feel they're not being that honest anymore.

Long story short, there's no point in shipping new features every couple of days if I can't reliably use your base feature, which is just an LLM.

r/SideProject luis_411

My app has 2,000+ users but retention is still my biggest problem

Hey guys,

I am in the highly privileged situation of having actually gained a decent number of users on my app, and I'm truly grateful for it. In fact, it's still growing every day. The only problem is that lots of people sign up (which is already a huge first step) but then take no action, which is strange, because why would you sign up in the first place?

To understand the problem, you have to understand my app first:
I've built IndieAppCircle, a platform where small app developers can upload their apps and other people give them feedback in exchange for credits. I grew it by posting about it here on Reddit. It didn't explode or anything, but I managed to get slow but steady growth.

For those of you who never heard about IndieAppCircle, it works like this:

  • You can earn credits by testing indie apps (fun + you help other makers)
  • You can use credits to get your own app tested by real people
  • No fake accounts -> all testers are real users
  • Test more apps -> earn more credits -> your app will rank higher -> you get more visibility and more testers/users

Interestingly, many people sign up but never test other apps or upload their own. I already require people to test at least two apps before they can upload their own, and I've tried to make the process extremely easy during onboarding (it can really be done in under 10 minutes). But still, the majority doesn't do it.

Then there's the next level: lots of people do exactly two tests, upload their app, and never come back, even though I've implemented email notifications for new feedback on their app. They simply accept/reject the feedback and leave without earning new credits to get more feedback.

I've even added warning emails: after 14 days of not testing another app, I tell people their app will be hidden if they don't test another one within 7 days, and after 21 days I hide their app and send another email saying it won't show up again until they give feedback.

This last point may seem a bit harsh, but since the app lives off people actively giving each other feedback, I thought it was necessary. I've only implemented it recently, though, so I'm not sure about the results yet.
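
The warn/hide policy described above boils down to a tiny state machine. Here is a sketch in Python; the function name and structure are my own illustration, not IndieAppCircle's actual code:

```python
from datetime import date, timedelta

# Thresholds taken from the post: warn at 14 idle days, hide at 21.
WARN_AFTER = timedelta(days=14)
HIDE_AFTER = timedelta(days=21)

def retention_action(last_test: date, today: date) -> str:
    """Return which lifecycle action applies to a user given their last test date."""
    idle = today - last_test
    if idle >= HIDE_AFTER:
        return "hide"   # hide the app and send the final notice
    if idle >= WARN_AFTER:
        return "warn"   # warning email: 7 days left before hiding
    return "ok"
```

The nice property of expressing it this way is that the policy is pure and trivially testable, so changing the thresholds later is a one-line edit.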

What do you think? Is there something obvious I'm missing or how does one fix retention without sending annoying reminder emails?

Thank you to everyone who joined IndieAppCircle so far :)

If you haven't, you can check it out here: https://indieappcircle.com

r/ClaudeAI tylersellars

UI/UX AI Designer

We've worked with all the generative AI tools (Claude, Stitch, Lovable, build44, Bolt, etc.), and we still feel the need to hire a UI/UX designer who can build the prompts for those tools, allowing us to move fast.

Is this a skillset that exists yet, and if so, what do these people call themselves so I can hire one? :P

r/LocalLLaMA lets_talk_about_tv

Need a laptop that can run AI models locally + handle VS Code, Docker, etc.

Hey everyone,

I’m planning to buy a laptop and I want something that can run AI models locally and also handle my regular dev setup without struggling.

My typical usage would be things like:

  • VS Code
  • Docker
  • browser tabs
  • terminals
  • backend/dev work
  • trying out local AI/LLM stuff

I’m not expecting desktop-level performance, but I do want something powerful enough that it doesn’t start choking when I’m coding, running containers, and experimenting with AI tools at the same time.

What I’m mainly looking for is:

  • good performance
  • enough RAM
  • good thermals
  • decent battery life
  • something reliable for long coding sessions

Would love suggestions on:

  • specific laptop models
  • what specs I should prioritize
  • minimum RAM/storage I should go for
  • whether MacBook, Windows, or Linux laptops make more sense for this

My budget is flexible if the laptop is worth it.

Would really appreciate recommendations from people doing similar work. Thanks!

r/ClaudeAI Clean_Ganache2199

information verification

Guys, someone just asked me to create a GitHub account, set up 2FA, and then send the cookie code (via inspect element) to a Telegram bot, through which they would activate my Claude Pro for free. Am I being hacked?

r/ClaudeCode ZealousidealUse180

At least it's asking it

Never seen this before. So they're politely asking me if they're allowed to do something they already do?

Are they asking for retroactive allowance?

r/ChatGPT EchoOfOppenheimer

Someone made a digital whip to make Claude work faster

r/LocalLLaMA j3sk0

Desktop application with connection to a local LLM

Hello everyone, I am looking for an alternative to Monica AI. I use the app on the desktop, copy texts into it, and have them rewritten using shortcuts.

r/ClaudeAI Interesting_Swing857

Testing Claude Visuals against Thinky3D live 3D simulations on 5 identical topics: honest observations on where each approach wins

I've been using Claude Visuals heavily since it dropped and wanted to share some structured observations plus a side-by-side comparison I put together to stress-test where it shines and where alternative approaches add value.

Context on why I care about this specifically: a few weeks ago at a hackathon my friend and I built an open source learning tool, "Thinky3D", that takes a similar idea to Claude Visuals but goes 3D instead of 2D. Spending a lot of time in the weeds on "how do you get an LLM to reliably generate runnable interactive visuals" gave me a genuine appreciation for how hard what Anthropic shipped actually is. When Claude Visuals dropped I was naturally curious how the two approaches would compare on identical prompts, so I made a direct side-by-side video on 5 topics: black holes, DNA, Möbius strips, pendulums, and pathfinding algorithms.

Video: https://www.youtube.com/watch?v=kOWrQiObnO4

Here is what I actually found, with specific examples:

Where Claude Visuals is genuinely strong (and in my testing, wins outright):

  1. Speed. Claude Visuals are near-instant. Generating a novel 3D simulation takes noticeably longer because the model has to write a full component.
  2. Right-sized for the task. For topics like compound interest, binary tree rebalancing, or flowcharts, a 2D interactive visual is honestly the correct answer. Adding a third dimension is gratuitous.
  3. Computer science (pathfinding test). Claude's node graph with visited/queue/path state was actually more legible for understanding the algorithm logic than my 3D maze version. The 2D abstraction is doing real work here.

Where 3D simulations added something Claude Visuals does not currently seem to do:

  1. Spatial physics. The black hole gravitational lensing case was the clearest gap. Showing a warped spacetime grid with light bending around an event horizon is hard to do in 2D without it becoming a diagram. Depth felt necessary, not decorative.
  2. Topology. The Möbius strip twist slider from 0° to 360° with edge tracers gave a very different feel for the single-boundary property than a static mesh. Being able to watch a flat ribbon become a Möbius surface as you drag the twist value was the strongest "aha" moment in my tests.
  3. DNA helix structure. A slider that unwinds the helix from ladder to double helix visually demonstrates the structural relationship in a way I have not been able to get out of a 2D explanation.

Technical note for this community:

Getting an LLM to reliably generate runnable React Three Fiber code in a browser sandbox was genuinely brutal. Hooks declared inside conditionals, THREE.js constructor instances passed as React children, geometry method calls on React elements, missing return statements. Hundreds of failure modes. I ended up building a Babel AST validation pass, a Safe React proxy that auto-fixes misused THREE instances at runtime, and a patch-based correction loop that sends runtime errors back to the model as minimal search-and-replace edits. I suspect Anthropic is solving similar problems under the hood for Claude Visuals and I would genuinely love to know how they handle it, especially the sandboxing layer and how they prevent generated code from crashing the chat UI.

If anyone wants to poke at the code, the source is here: https://github.com/Ayushmaniar/Gemini_Hackathon
Would genuinely love feedback from this community on where to take it next.

Broader take after spending weeks on this: I think we're close to the point where learning physics, chemistry, math, or biology from static textbook diagrams is going to feel as dated as learning to code from a printed manual. Curious if anyone here disagrees, or has a different take on where this is heading.

Claude visuals: https://thenewstack.io/anthropics-claude-interactive-visualizations/

r/SideProject MrScanner_

I got tired of sending two versions of every file — one "preview", one real.

So I built Clrmark. You upload once, your client sees a watermarked preview via OTP-verified link, you unlock the real file when they pay. No duplicate files, no follow-ups, no free work.

Still early — would love feedback from anyone who's dealt with this.

clrmark.com

r/SideProject rtistly

I built a voice parser that turns spoken expenses into financial data. Looking for feedback on whether it works outside Australia.

I've been working on a budgeting app called YourDigits for the last 3 months. The core feature is voice entry: you just say what you spent, and it parses multiple transactions from one sentence. "Fifty at Costco, twenty at Chick-fil-A yesterday" becomes two structured entries with merchants, amounts, and dates.

Transcription runs on-device via Whisper; the parser is rule-based. The thing is, I've mostly tested it with Australian merchants and slang, so I genuinely don't know how well it handles other regions.
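
For anyone curious what a rule-based pass over a transcript can look like, here is a toy sketch. This is my own illustration, not YourDigits' actual parser; real merchant, compound-number, and date handling is much hairier:

```python
import re

# Tiny illustrative number-word table; a real parser needs compounds
# ("twenty five"), decimals, and currency symbols.
WORDS = {"ten": 10, "twenty": 20, "thirty": 30, "forty": 40, "fifty": 50}

def parse_expenses(utterance: str) -> list[dict]:
    """Split one spoken sentence into structured (amount, merchant) entries."""
    entries = []
    # Split clauses on commas, optionally followed by "and".
    for chunk in re.split(r",\s*(?:and\s+)?", utterance.lower()):
        m = re.match(r"(\d+|\w+)\s+at\s+(.+?)(?:\s+yesterday|\s+today)?$", chunk)
        if not m:
            continue
        raw, merchant = m.groups()
        amount = int(raw) if raw.isdigit() else WORDS.get(raw)
        if amount is not None:
            entries.append({"amount": amount, "merchant": merchant})
    return entries
```

Feeding it the example sentence from the post yields two entries, one per clause, which is the behavior the app describes.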

I just launched on Product Hunt today if you want to try it and let me know: [PH LINK]

Specifically curious about:

  • Does it pick up your local merchants correctly?
  • How does it handle the way you naturally say amounts?
  • Does it break on anything?

Free to download on iOS. I'm an accountant with zero coding background, built the whole thing with Claude Code. Any feedback helps, even if it's just "this didn't work when I said ___."

r/homeassistant adfh

Trying to pair my first Matter devices; SLZB-MR4 EFR32MG36 in Matter-over-Thread mode

tl;dr Recently acquired my first set of Matter devices. Trying to pair them - but it fails at "Checking network connectivity on thread network ha-thread-nnnn".

I have:

  • HA core-2026.4.1
  • Store 2.0.5
  • HAOS 17.2

... running in a KVM VM under Proxmox VE.

The radio I'm using is an SLZB-MR4. It does Thread/Matter AND ZigBee with separate radio chips. I have it connected via PoE, since the hypervisor system and the center of the house are some distance from one another. ZigBee is working well, but I've yet to get Matter going. I have the EFR32MG26 radio configured to "Matter-over-Thread" mode.

I initially had some issues with getting OTBR App to connect, but SMLight support was very helpful in pointing me to a firmware image to reset the radio, and then to reflash it.

The Matter integration sees the Thread integration.

The Thread integration sees the OTBR App + preferred thread network.

I've been trying to follow the instructions here:
https://smlight.tech/support/manuals/books/slzb-06xmrxmrxuultima-series/page/thread-setup-network-and-usb-connection

Anyone got some suggestions on next steps / logs to check?

r/SideProject konstella7

Built a tool because I hated how hard it was to ship content while the "moment" was still fresh. Looking for early users (Free).

Hey,

I’ve realized that for most creators, the problem isn’t a lack of ideas—it’s "traffic."

We have that spark, that perfect insight, but by the time we manually format it, log into five different platforms, and hit schedule, the "moment" is gone. The friction of delivery kills the creative flow.

That’s why I built Ancher Social. It’s designed to be the "autopilot" for your social media, turning your thoughts into published content across platforms before the inspiration cools off.

Why am I posting here? The tool is currently free. Honestly, we’re at the stage where we’d rather learn from your feedback than charge too early.

I’m looking for early adopters to break it, critique it, and tell me: Does this actually solve your delivery problem?

r/ChatGPT GWGSYT

Nothing ever happens

Unpopular opinion: Claude Mythos isn't doing magic. Drop GPT 5.2 Codex or Kimi 2.5 into a good enough agentic loop with full source code access, and they'll flag 20 critical bugs while you're getting coffee. Calling it 'too dangerous to release' is just a great cover story for 'too expensive to run.'

r/ChatGPT czesc_luka

ChatGPT paid version (Go) won't redo pictures, but the FREE version will, no problem

I want to enlarge a comic panel (single panel, enlarge it and recreate in better quality).

My premium ChatGPT (paid version) won't even touch it...

(...violate third-party content security policies. If you believe we've made an error, please try again or edit the command.) 

BUT the FREE ChatGPT version creates better quality pictures with no problem (I'm using the same prompts), though there is a limit...

It looks like a cash grab to me or a SCAM...

People use the FREE version and see that it can do anything, so they're encouraged to pay for premium (to remove the limits)...

BUT when you pay to remove those limits, suddenly it turns out that it doesn't work anymore...

It looks like a scam to me....

Is there a way to enlarge comic panels (in better quality) using the ChatGPT Go version?

(Yes, I already used prompts like "similar scene with the same composition", etc., and even specific ones like: "create a full-page A4 vertical comic illustration in a 1980s sci-fi robot comic style, featuring a dark silhouetted humanoid figure in a powerful stance, interacting with a glowing alien mechanical artifact on the ground, dramatic lighting, red and pink abstract energy background, sharp angular shapes, heavy black shadows, geometric mechanical design, dynamic perspective, exaggerated motion lines, minimal background detail, bold inked linework, vintage comic coloring, high resolution, print-ready, no text, no speech bubbles".)

Nothing works!!

r/LocalLLaMA AdministrativeFlow68

New local multi-speaker TTS workflow tool built on IndexTTS2 (open source)

Hey r/LocalLLaMA

I just released an update to IndexTTS-Workflow-Studio — a Docker-based studio for IndexTTS2 focused on natural multi-speaker conversations.

Main features:

  • Conversation workflow with multiple voices
  • Review + instant line regeneration
  • Timeline editor for overlaps and timing
  • Speaker preparation & cloning tools
  • Project save/load + clean export

It’s fully local, no cloud required.

GitHub: https://github.com/JaySpiffy/IndexTTS-Workflow-Studio

Would love feedback from anyone working with TTS for podcasts, videos, games, or audiobooks. What features would you want to see next?

r/SideProject iMiMofficial

Do you also spam ↑ (arrow up key) in your terminal trying to find that one command you ran yesterday?

Remember pressing ↑ to find that one command?

Yeah… and scrolling forever.

I built something to fix that... Termim.

It gives your terminal project-aware memory... so you get the right commands, in the right place, instantly.

⚡ 0ms lag

🧼 No files, no daemon

🧠 Just smarter history

👉 https://github.com/akhtarx/termim

r/ClaudeAI Chanaka9000

Ralph Wiggum plugin corrupted 70+ files in my production codebase — anyone else experience this?

I'm a non-technical founder running a SaaS product (Next.js/React/TypeScript/Supabase stack, ~76 database tables, 100+ migrations). I used the Ralph Wiggum autonomous agent plugin for Claude Code to run 8 overnight sessions redesigning my admin dashboard.

Ralph completed all 8 sessions, made 2 commits touching 97 files, and the build appeared to pass locally. But when I tried to publish via Lovable, it failed. After hours of debugging, here's what we found:

The damage:

  • 4 TSX files had trailing NUL bytes (invisible zero bytes appended after the actual code). This made the files appear as "binary data" instead of text to build tools, causing Vite to choke.
  • 244 source files had Windows CRLF line endings instead of Unix LF — even though the entire codebase was LF before Ralph touched it.
  • 70+ files were silently truncated mid-code. Functions cut off mid-word, JSX tags never closed, braces unbalanced. TypeScript only reported the first few errors before giving up, so the true scope wasn't obvious until we ran a deep file integrity scan.
  • 37 inline font references were wrong (used the public-facing font instead of the admin font Ralph was supposed to apply).

The scary part: npx tsc --noEmit passed clean on the first round of fixes because it stops after a certain number of errors and the truncated files happened to not be imported in certain code paths. The real damage only showed up when Vite tried to build everything.

What we had to do to fix it:

  1. Strip NUL bytes with tr -d '\0'
  2. Convert CRLF→LF with sed -i 's/\r$//' across all files
  3. Restore all 70 truncated files from the pre-Ralph git commit
  4. Re-apply the font changes manually (simple find-and-replace)
  5. Run a custom Python script scanning every file for: NUL bytes, CRLF, unbalanced braces, and suspicious line endings
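
Step 5's scan is roughly this shape. A hedged sketch in Python; the author's actual script isn't shown, so the function name here is illustrative:

```python
from pathlib import Path

def scan(path: Path) -> list[str]:
    """Return a list of integrity problems found in one file.

    Reads bytes (not text) so NUL bytes and CRLF survive intact
    instead of being normalized away by a text-mode read.
    """
    problems = []
    data = path.read_bytes()
    if b"\x00" in data:
        problems.append("NUL bytes present")
    if b"\r\n" in data:
        problems.append("CRLF line endings")
    return problems

# Usage idea: for p in Path("src").rglob("*.ts*"): print(p, scan(p))
```

Reading bytes rather than text is the important detail: it is exactly what `file` and `git diff` react to when a `.tsx` suddenly shows up as binary.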

Total time to diagnose + fix: ~4 hours across multiple sessions.

My questions for the community:

  1. Has anyone else used Ralph Wiggum for large batch operations? Did you experience similar file corruption?
  2. What's causing the truncation? Is it a token/context limit issue where the agent runs out of space mid-file-write? A buffer issue? Something with how Claude Code writes files?
  3. What defenses do you use before committing autonomous agent output? I'm thinking of adding:
    • Pre-commit hook that rejects files detected as "data" by the file command
    • Pre-commit hook that rejects files with CRLF line endings
    • Automated brace-balance check on all changed .tsx/.ts files
    • Mandatory vite build (not just tsc) before any commit
  4. Do other autonomous agent plugins (Cursor background agents, Cline, etc.) have similar issues with large batch file writes?
  5. Is there a recommended max number of files an autonomous session should touch before the corruption risk gets too high?

Lessons learned the hard way:

  • tsc --noEmit alone is NOT enough to validate autonomous agent output. You need the full build (vite build or equivalent).
  • Always run `file *.tsx` after batch operations; if any file shows as "data" instead of "ASCII text" or "UTF-8 text", it's corrupted.
  • Git's diff showing Bin X -> Y bytes for a .tsx file is a red flag — text files should never show binary diffs.
  • Keep your pre-agent commit hash handy. You'll need it to restore files.
  • Don't let autonomous agents touch more than ~20 files per session without a verification step in between.
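
On the brace-balance idea from the defenses list: even a naive counter catches files truncated mid-function. This is my own sketch, not a tool from the post, and real TS/TSX needs a proper parser since braces also occur inside strings, comments, and JSX text:

```python
def braces_balanced(source: str) -> bool:
    """Naive check that every {, (, [ has a matching closer, in order.

    Good enough to flag a file cut off mid-write; it will false-positive
    on braces inside string literals, so treat failures as "inspect me",
    not "definitely corrupt".
    """
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for ch in source:
        if ch in "([{":
            stack.append(ch)
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack
```

Wired into a pre-commit hook over changed `.ts`/`.tsx` files, a check like this would have flagged the truncated files before `vite build` ever ran.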

Would love to hear others' experiences and any preventive measures you've found effective. This is a great tool when it works, but the silent corruption is genuinely dangerous for production codebases.

r/LocalLLaMA WarAndPeace06

advice for building an SEO tool

Hey everyone, I'm building an SEO tool that scrapes SERPs and competitor pages, then feeds everything into Claude for content gap analysis and on-page recommendations. The problem is that I need two separate products: a Search API (SerpAPI, ValueSERP) for structured Google results and a Web Scraper API (ScraperAPI, Zenrows) for actual page content, and together the pricing at 50k keyword lookups + 500k page scrapes/month is quite high. DIY Playwright setups are a maintenance nightmare, and honestly I'm tired of adjusting every single thing each time something breaks. The AI analysis part works beautifully in my prototype, but right now it's kind of useless without clean, reliable scraped data feeding into it. Has anyone found a single product that handles both SERP data and page scraping well without destroying a startup budget? I'm talking about an integrated product that has everything in it: less maintenance, fewer headaches.

r/homeassistant SuperSpe

Venitem siren into HA with Alarmo

Good morning,

I wanted to know if it's possible to integrate a Venitem siren into HA with Alarmo.

The first idea that comes to mind is a smart switch with a dry contact to control the siren.

But do you think it's possible to interface with the siren board to manage the LEDs and everything? Or do I have to use a Sonoff for that too?

Perhaps an ESP32?

I'm gathering information and suggestions, if you can help me.

This is the siren, and this is the schematic.

https://www.venitem.com/products/sirene-allarme-esterno/rondo/

https://preview.redd.it/d6u2bie984ug1.png?width=960&format=png&auto=webp&s=5e8a924130a54d135d6769d16d279fa6f206df34

https://preview.redd.it/cxrjy3ha84ug1.png?width=620&format=png&auto=webp&s=7e09cc593996ea8b3d9b71b77b5860efb9470f1f

r/SideProject Low-Mention5311

Just launched my first iOS app on Product Hunt and would love your support

Built CaloNet solo, a calorie tracker that shows consumed minus burned in real time. The whole app turns green when you're in deficit and red when you're not. AI meal photo scanning so logging takes seconds.

First app I've ever shipped. I spent the last several months vibe-coding it. It would mean a lot if you checked it out today.

https://www.producthunt.com/products/calonet?launch=calonet

Happy to return the favor for anyone else launching soon.

r/ClaudeAI jigsaw-studio

Layman: Agentic Insight and Oversight (same same but different)

What's the most common duplicate project on r/ClaudeAI? Usage trackers.

What's the second most common? AI Monitors.

Does Layman do those things? Yes, of course.

So what makes it different?

Layman's Dashboard, Flowchart, and Logs view (with Layman's Terms and Analysis examples)

Like many similar tools, Layman runs as a web service in a container on your local machine. It installs hooks and accesses harness logs to "look over your shoulder," then leverages a secondary AI instance to help keep your multiple sessions, sub-agents, and alternate harnesses in line.

So, short answer:

  1. Drift Monitoring. Repeatedly named as one of the most frustrating issues for heavy Claude Code users, Layman takes into account all user prompts issued to CC as well as current project and global CLAUDE.md instructions, and at configurable intervals scores the current degree of "drift" occurring from your goals and the rules you have established. You can optionally receive warning notifications or place a block when different thresholds are reached.
  2. Risk Analysis. Layman will classify all tool calls and operations with a "risk" level based on simple, consistent criteria (such as read-only, writing, modifying, network access, deletion, etc.) and can automatically analyze the AI agent's current intended action, the overall goal or purpose behind that intention, and summarize the safety and security implications at stake.
  3. Layman's Terms. The eponymous origin of the tool, offering a plain-language (and if possible non-technical) explanation of the purpose of any given tool call. It can summarize what was performed at the session level as well, helpful for later recall and understanding after some time has passed. Vibe coders aside, should a professional developer already have knowledge of what their tools are doing before they grant permission? Yes, of course, but when you are operating at scale and (say) that TypeScript project you are polishing needs to look up some JSON value and your AI agent writes a one-off Python script to parse it out, it can be helpful to have an "extra pair of eyes" taking a look before you effectively begin yet-another code review.
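
For a sense of what "simple, consistent criteria" for risk tiers could look like in practice, here is a toy classifier. This is purely my illustration; Layman's actual rules are not shown in the post, and every tool name and tier below is an assumption:

```python
# Hypothetical coarse risk tiers for agent tool calls, ordered by severity.
RISK_TIERS = {
    "read": 0,      # read-only: file reads, ls, grep
    "write": 1,     # creating or modifying files
    "network": 2,   # fetching URLs, API calls
    "delete": 3,    # rm -rf, destructive git operations
}

def classify(tool: str, args: list[str]) -> int:
    """Map a tool invocation to a coarse risk tier (higher = riskier)."""
    if tool in ("rm", "git") and any(a in ("-rf", "reset", "push") for a in args):
        return RISK_TIERS["delete"]
    if tool in ("curl", "wget", "fetch"):
        return RISK_TIERS["network"]
    if tool in ("write_file", "edit", "sed"):
        return RISK_TIERS["write"]
    return RISK_TIERS["read"]
```

The point of a table this simple is that it is cheap enough to run on every tool call, with the secondary LLM only invoked for the tiers worth explaining.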

Meanwhile, the typical features you'd expect are included: Session Recording (opt-in is required for data tracking, and there is no telemetry to worry about), Bookmarking and Search, PII filtering (including PATs and API keys), File and URL access tracking, and a handy Setup Wizard that gets those hooks installed in the first place and walks you through configuring core capabilities.

Did I mention besides Claude Code it supports Codex, OpenCode, Mistral Vibe, and Cline (with more to come)? Whether using these for local agents or as an alternative when hitting session limits, Layman can monitor and track them all at once.

But wait, doesn't a "secondary AI instance" just end up wasting tokens? My Precious? (erm...) Our precious, precious tokens? When session limits already hit so hard?

It turns out these algorithms do not require nearly the level of "intelligence" you might desire for your planning and coding sessions themselves. Personally I keep an instance of Qwen3-Coder-Next running locally via llama.cpp server on my system's GPU to field those calls, with no discernible impact on system performance. And when a local LLM is not available, Haiku does the job excellently (now you have a reason to use it). You absolutely do not need to use anything more resource-intensive to get the job done.

Now you have a complete picture.

GitHub repository: https://github.com/castellotti/layman

License: MIT

r/SideProject Unique_Boot_1636

Language learning through interactive stories project

Hello everybody,

I made this app for myself to learn German, and I thought, why not share it with everybody? It's still in its infancy. The idea is to let users create their own interactive stories from just a prompt, so they can read something fun that they wrote themselves while learning German. For now, there are just a bunch of stories with some interaction, but it's already usable. I would appreciate any kind of feedback! It is completely free, and if you register you'll get some extra stories.

https://langlora.com

r/SideProject JoanGM

I built a site with AI, about AI, to rule them all in one place

There are like 400 AI tools now and comparing any two of them means either reading the same SEO article 5 times or going down a Reddit rabbit hole for an hour.

And so, I built aitoolcrunch.com. It's a free comparison site (no login, no ads, no bad stuff) covering AI writing, coding, image, video and audio tools. Around 50 head-to-head comparison pages and 36 tool reviews (and growing!).

The one thing I tried to get right: every comparison ends with an actual verdict. Not "it really depends on your use case" - an actual pick and a reason why. I know some people will disagree but I'd rather be wrong and useful than right and useless.

The whole thing runs on a Next.js static export hosted on Netlify free tier. A GitHub Actions cron scrapes Product Hunt and tech RSS feeds every morning and flags new tools to review. Total cost so far: $11/year for the domain.

Still early days, and I'm adding comparisons weekly. Would love to know what tools or comparisons you'd want to see, I pinky promise I'll actually add them if the tool is interesting!

r/ClaudeAI afinasch

Spill It – I built a local, fast speech-to-text app for my 8GB Mac

I've been using Wispr Flow for a while, but it's gotten glitchy over time. So I started this as a weekend project: build something local that just works. I built it fully on CC.

The constraints shaped the product. I have a 2020 Mac with 8GB RAM, so I was honestly just building this for myself. Whisper V3 was way too slow locally on my hardware. I wanted something fast and snappy, so I went with NVIDIA's Parakeet TDT 0.6B, quantized to 4-bit (about 400MB). It's nearly instant. You release the hotkey and the text is there.

I also made an active choice to skip multilingual and go English-only. That gave me the freedom to do serious rule-based post-processing on the STT output. Multilingual would have added complexity I didn't want.

For post-processing, I tried local LLMs, even Gemma 4, but everything put too much pressure on memory and slowed things down. Settled on GECToR (a BERT-based tagger, about 250MB), which does decent cleanup: commas, full stops, capitalization. It edits rather than rewrites, which is what I wanted.
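To illustrate what "edits rather than rewrites" means, here is a toy rule-based pass. The rules are illustrative only, not the app's actual GECToR pipeline; the point is that it touches punctuation and casing without changing any words:

```python
import re

def light_cleanup(text: str) -> str:
    """Edit-style STT cleanup: add a final full stop and capitalize
    sentence starts, leaving the transcribed words untouched."""
    text = text.strip()
    if text and text[-1] not in ".!?":
        text += "."
    # Capitalize the first letter at the start and after ., !, ?
    def cap(match):
        return match.group(1) + match.group(2).upper()
    return re.sub(r"(^|[.!?]\s+)([a-z])", cap, text)
```

A tagger like GECToR does the same kind of thing learned from data, predicting per-token edit operations instead of generating fresh text.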

Context awareness is the part I'm most excited about. The app reads your screen via the accessibility tree (filenames, names, git branches) and adapts formatting to where you're typing. Terminal gets different treatment than email. It's not perfect and it doesn't catch every word in context, but it does a surprisingly good job, especially in the terminal.

Honestly, I've mostly been using this to talk to CC, and the errors don't get in the way of CC's comprehension. A local model with some errors works really well for the CC use case. But for email and messages you need more polish, so I added an optional cloud LLM layer (bring your own API key). From everything I've tested, Qwen3 on Cerebras and Llama on Groq perform best and are among the fastest. Based on my usage (about 3,000 words a day), I'm spending about $6 to $7 a month on API costs.

A few other things:

- Added Silero VAD, which helps a lot in noisy environments. It also helps with the whispering people keep talking about, though personally I don't get why one would whisper. I've tested it in cafes speaking directly into the laptop. It does well with longer sentences and falters a bit more with short ones.
- There are still occasional hallucinations at sentence boundaries, a stray "yeah" or "okay" that seeps through. Still working on it.

Pricing:
The local version is fully free: unlimited, no login, no credit card, just download and go. The cloud LLM polish layer is a small one-time fee, but you bring your own API key. Ping me and I'll give you a free activation key; all I ask is that you share feedback.

I'd love your feedback, especially on the context-awareness approach and whether the local-first plus optional-cloud model makes sense as a product.

Download from here: https://tryspillit.com. Would love to hear the community's feedback.

r/LocalLLaMA abmateen

Unexpected tokens/s on my V100 32GB GPU setup

I am running a hobbyist local-LLM setup on a somewhat old server, a Dell PowerEdge R730 with 64GB DDR4 (2x32GB at 2133MHz). Recently I got hold of a V100 32GB, the original PCIe version. I am doing proper passthrough using vfio drivers in a Proxmox VM, so there is no driver overhead or conflict between host and guest.

The issue is that I am getting unexpectedly low tokens per second with smaller models like Llama-3.1-3B Q4_K_M GGUF from unsloth: only 180 tok/s, while the V100's bandwidth reported by a D2D bandwidth test is around 800 GB/s. Bandwidth utilisation stays around 35% when I run smaller 3-7B models, but when I run a 31B dense model I get 30 tok/s, which is sort of expected, at 82% bandwidth utilisation.

I did all the optimisations: NUMA bindings and so on, the latest Nvidia driver, llama.cpp with Flash Attention enabled, and all layers on the GPU.

Has anybody using V100 / Tesla cards or a local GPU setup optimised this? I don't quite get the math behind it: smaller models should give higher tokens per second given the GPU bandwidth.

What could the bottleneck be in this setup?
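The math here is roughly a memory-bandwidth roofline: each decoded token streams essentially all the weights once, so the ceiling is bandwidth divided by weight size. A back-of-envelope sketch (the ~2 GB weight figure for a 3B model at Q4_K_M is an approximation):

```python
def peak_decode_tps(bandwidth_gbs: float, weights_gb: float) -> float:
    """Memory-bound roofline for single-stream decoding: every token
    reads (roughly) every weight once, so tok/s <= bandwidth / weights."""
    return bandwidth_gbs / weights_gb

# ~3B model at Q4_K_M is on the order of 2 GB of weights (rough figure).
ceiling = peak_decode_tps(800, 2.0)   # ~400 tok/s theoretical ceiling
# Observed 180 tok/s sits well under that roofline. With small models the
# per-token fixed costs (kernel launches, CPU-side sampling, PCIe hops)
# dominate, so low bandwidth utilisation on a 3B model is expected; a 31B
# model amortises those costs, which is why its utilisation is much higher.
```

In other words, small-model throughput is usually overhead-bound, not bandwidth-bound.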

r/ChatGPT AdThen1521

bro is leaking tokens or whatever they are called

r/ClaudeAI Twistedstory

I built a full mobile and web application in just one month using Claude Code with 0 coding experience, and I never touched the code once. Here was my experience

It took me about two weeks to build the web app and another two weeks to build the mobile app. The reason I started this was simple: I wanted to create a tool that could teach anyone anything in a visual, interactive way. The product generates a visual and interactive lesson on any topic to help people understand concepts. Perfect for visual learners.

I will be honest, the process was intense. I spent hours every day before and after work building this product. It became addictive. I started by telling Claude what I wanted, and it built from there. Over time, I had to get much more specific. I also had to research other products to refine my design choices like fonts, icons, emojis, and overall style.

At the same time, I realized something important. As powerful as Claude Code is, it is still just an AI tool. It makes mistakes. It hallucinates. It does not always understand what you want. As my app became more complex, building new features started taking longer. What used to take less than an hour now takes around four hours and a lot of screenshots to explain what went wrong.

Claude also helped me with deployment, which was something I knew nothing about. There are so many moving parts. It guided me through buying a domain, integrating payments with Stripe and RevenueCat, setting up Firebase, and deploying through Railway and EAS. I had zero experience with any of this, and Claude handled it with me step by step.

Even after building everything, the App Store review process took another two weeks. The app was rejected twice for small issues, which I fixed quickly with Claude. It has now been live since April 6.

What is crazy is that this product probably would have taken a year or more to build in a traditional way. I cannot imagine doing this alone or paying someone to do it. I built everything using one month of a max subscription.

People always ask if it is actually possible to build something complex with Claude Code. The answer is yes.

The key is pushing it further than it wants to go. You have to make it think deeper, be more precise, and execute at a higher level. It will often suggest weak solutions or avoid the real problem. You cannot accept that. You have to challenge it and guide it toward something better.

The product has a free tier, so feel free to try it out. It is not perfect, but it is a strong product built entirely with Claude without me writing any code.

Check it out

Mobile: https://apps.apple.com/us/app/learnara-visualize-anything/id6760729522

Web: https://learnara.ai

r/SideProject Free-Signal5560

Built a light-weight team communication platform for small teams out there!

A little about what it's about:
I believe context is the most important thing in communication, and it's missing from current team communication platforms.
A little context about myself: I am a student. Our team was using Slack as our primary communication platform, but it was getting very expensive for 35+ students, around 300 dollars every month, for features we never used a single day.
That's when I got the idea of building a platform focused on small teams as a niche.
I have kept the platform simple yet efficient. How?
You can connect messages to contexts, so people who join later can simply click on a context and catch up in seconds, rather than scrolling up and down a hundred times. When you have a working team, there are hundreds of messages every minute!
All the documents scattered around different apps (all the Google Workspace apps) can be found in one single place.
Another thing: I have not deeply integrated the other apps, so the platform doesn't feel bloated or complex.

What do you guys think?

Waitlist form- https://forms.gle/GNyzqT4FUKhr4ujJA (Contains platform link)

Thanks for stopping by : )

r/AI_Agents little_breeze

Is anyone finding the agent harness more complex than the LLM integration?

I've been building more agent systems that run semi-autonomously, and I'm realizing that the agent loop itself is like 10% of the work at this point. The hard engineering work is in the harness / everything surrounding the agent loop. In no particular order of difficulty:

  • wiring together the tools and context (bunch of custom MCPs/markdowns)
  • setting up the crons/scheduling to be reliable
  • persisting state between runs
  • setting up reliable webhooks for the agent to react to events
  • knowing whether the agent actually did the task, or if it failed silently
  • managing various credentials for different tasks

It feels like most of the energy in the space is just going into improving the models/context engineering, but not as much on the infra/glue side.
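For two of those bullets, persisting state between runs and catching silent failures, the core of a harness can be quite small. A minimal sketch; the file path, function names, and state shape are all invented for illustration:

```python
import json
import pathlib
import time

STATE = pathlib.Path("agent_state.json")  # hypothetical state file

def load_state() -> dict:
    """Load the persisted run history, or start fresh."""
    return json.loads(STATE.read_text()) if STATE.exists() else {"runs": []}

def save_state(state: dict) -> None:
    STATE.write_text(json.dumps(state))

def run_agent_once(task, agent_fn, verify_fn, state) -> bool:
    """One harness tick: run the agent, verify the work actually happened
    (an explicit check beats trusting the agent's self-report), and
    persist the outcome so the next cron tick has context."""
    result = agent_fn(task)
    ok = verify_fn(result)
    state["runs"].append({"task": task, "ok": ok, "ts": time.time()})
    save_state(state)
    return ok
```

Everything else (crons, webhooks, credentials) layers around a loop like this, which is exactly why the glue ends up being most of the work.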

what's your usual stack for running actual agents in production reliably? thanks in advance!

r/ChatGPT eseus

My existence got audited

r/LocalLLaMA HelicopterMountain47

Can I split a single LLM across two P106-100 GPUs for 12GB VRAM?

Hello everyone, I'm new to running neural networks locally. I recently launched SAIGA, based on Llama3-8b, using a P106-100 mining card with 6GB of VRAM for the computation. SAIGA generated the basic Python script in 5 minutes, but memory usage was maxed out. Has anyone tried (or heard of) ways to run a single neural network on two identical video cards so that the weights are distributed across them? I would like to go further: two P106-100s would give 12GB of VRAM in total.
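llama.cpp supports exactly this kind of split via its multi-GPU flags (shown in the comments below). A rough feasibility sketch under assumed sizes; the model filename is illustrative and the weight/overhead figures are approximations:

```python
# llama.cpp can split one model across cards, e.g.:
#   ./llama-server -m saiga-llama3-8b.Q4_K_M.gguf -ngl 99 \
#       --split-mode layer --tensor-split 1,1
# Two equal fractions put half the layers on each GPU.

def fits(weights_gb: float, kv_and_overhead_gb: float,
         vram_gb: list[float]) -> bool:
    """Layer splitting pools VRAM *capacity* (not bandwidth): weights
    plus KV cache and overhead must fit in the combined cards."""
    return weights_gb + kv_and_overhead_gb <= sum(vram_gb)

# Llama3-8B at Q4_K_M is roughly 4.7 GB of weights (approximate).
print(fits(4.7, 1.5, [6.0, 6.0]))  # two P106-100s: True
print(fits(4.7, 1.5, [6.0]))       # a single card: False
```

Expect some slowdown versus a single big card, since activations hop between GPUs over PCIe each token, but capacity-wise it works.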

r/LocalLLaMA Sxt15

Optimizing setup

currently hardware

3700x

32gb ddr4

2tb nvme

rtx 3060 12gb

the wild card

Mac pro 2013 running Ubuntu

128gb ram running a 96gb ramdisk

1tb ssd

xeon e5

Just got my main 3060 running openclaw, providing research and basic coding, running minimax 2.7 and a few local models on ollama.

I would like to start creating 3D files in Blender meant for 3D printing. Big question: what should I use this Mac for in this setup, or should I just not use it? And should I put Hermes on there running 24/7 to keep evolving?

r/SideProject Same_Feature_2317

Refactored my browser audio tool from “one giant workspace” to a three-zone panel architecture, here’s the layout system

Shipping V1.2.6 of Tessering (free browser spatial audio tool). The feature headline is keyframe automation, but the engineering story is the studio layout refactor.

Before: One workspace with a canvas, a single left-side panel, and a timeline drawer that contained audio controls, motion controls, room controls, and the actual timeline. The drawer was overloaded — it was trying to be a control surface and a timeline simultaneously.

After: Three-zone architecture:

1. *Header zone* — 56px fixed height with a divider line. Navigation, project info, version badge.
2. *Workspace* — fills the remaining height above the drawer. Contains:
   • *Left panel*: Audio Panel (5 collapsible accordion sections — Speed, Volume, Spatial, Clarity, Room). Resizable, hideable, expand tab on the left edge.
   • *Right panel*: Motion Panel (keyframe motion speed). Mirrors the Audio Panel structure. Resizable, hideable, expand tab on the right edge.
   • *Center*: spatial canvas with a safe zone that dynamically respects both panel widths.
3. *Timeline drawer* — pure timeline. Stem lanes, keyframes, beat grid, bar-number ruler. All control surfaces removed.

Design decisions worth sharing:

The canvas safe zone was the tricky part. When either panel resizes, the canvas needs to recalculate its renderable area so orbs never draw behind a panel. This is a reactive calculation — the canvas listens to both panel widths and adjusts its coordinate system on every frame.
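The safe-zone recalculation reduces to simple rect arithmetic. A small sketch of that math, in Python for illustration; the function name, margin value, and rect shape are invented, not Tessering's actual code:

```python
def safe_zone(canvas_w: int, canvas_h: int,
              left_panel_w: int, right_panel_w: int,
              margin: int = 16) -> dict:
    """Recompute the renderable rect from the current panel widths, so
    orbs never draw underneath a panel. Re-run whenever either panel
    resizes (the reactive part is just re-invoking this per frame)."""
    x = left_panel_w + margin
    w = canvas_w - left_panel_w - right_panel_w - 2 * margin
    return {"x": x, "y": margin, "w": max(w, 0), "h": canvas_h - 2 * margin}
```

Clamping the width at zero handles the degenerate case where both panels are wider than the canvas.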

Room controls moved from a drawer accordion to the Audio Panel. The redesign required restyling the room presets (Void, Studio, Hall, Bunker) from the old drawer aesthetic to the panel’s native accent color system. Small visual change, but it required touching the Room component’s entire style tree.

The “pure timeline” drawer decision was philosophical: a timeline should only show time-based data. If a control doesn’t have a time axis, it doesn’t belong in the drawer. This cleared out Motion and Room accordions and their shortcut buttons from the transport bar header.

What’s New modal consolidation: The old system used three elements — a toast notification, a card, and a badge trigger — to communicate new versions. Replaced with one centered modal. Shows once per version, stored in localStorage, reopenable via the version badge in the header. Sounds trivial. The three-element system had accumulated over several releases and nobody had cleaned it up.

tessering.com

r/ChatGPT catinterpreter

ChatGPT will show ads in your conversations

r/comfyui Warm-Peach4748

Flux2-Dev Mistral 3 FP8 Text Encoder Shape Mismatch on ComfyUI (Works on RunningHub, Fails Locally)

Hey everyone,

I’m running a Flux2-Dev workflow on ComfyUI and hitting a strange issue with the Mistral 3 FP8 text encoder: RuntimeError: shape '[131072, 5120]' is invalid for input of size 145182716. I’ve downloaded all models/configs from the official repo, and even after removing LoRAs the error persists at the text encoder stage.

The confusing part is that the exact same workflow runs fine on RunningHub. I suspect a mismatch between model and encoder versions, FP8 compatibility, or a sequence-length issue. Any pointers on the correct encoder pairing, FP8 requirements, or known issues with Flux2 would help. I am running my setup on Runpod.
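One sanity check you can run on the error numbers themselves, which suggests this is more than a version mismatch:

```python
# What the loader expects vs. what the checkpoint actually contains:
expected_elems = 131072 * 5120      # the [131072, 5120] view being requested
actual_elems = 145_182_716          # "input of size" from the error message

print(expected_elems)               # 671088640
print(actual_elems % 5120)          # 5116: not even divisible by 5120
# A non-zero remainder means no [N, 5120] reshape of that tensor can
# succeed, which points toward a truncated/corrupted download or a file
# from a different encoder build, rather than a pure config mismatch.
```

Re-downloading the encoder and comparing its file size or checksum against the repo's listing would be the first thing to try.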

https://preview.redd.it/p78rysfiu3ug1.png?width=2024&format=png&auto=webp&s=6239d6ef10c77e631b932334895ba7598682a4d9

r/SideProject ForeignHomework6520

built a debate app where an ai judge scores arguments on logic — not on which side is louder

frustrated with how every online debate ends

no structure. no facts requirement. no verdict. just two sides getting angrier until someone gives up

spent a while thinking about what a fair debate actually looks like and built something

i built a free ai news app called readdio. it has a debate arena — a trending indian policy topic goes up every day, you pick a side and write your argument, and an ai judge scores it on logical reasoning and factual accuracy. it doesn't matter which political side you support — if your argument is solid, you score high. ranking system: rookie → observer → analyst → senior pundit → logic lord → oracle

it also has short daily news summaries, an ai that explains any article simply, and daily quiz questions from the news — downloadable as pdf

is this something people would actually use? what would make you try it?

completely free — link below

https://play.google.com/store/apps/details?id=com.readdio.app


r/ClaudeAI Key-Entrepreneur8118

Opus, are you alright?

Sending the same prompt to Opus 4.6 with Extended Thinking vs Gemma 4 26B A4B.

the car wash is 40m from my home. I want to wash my car. should I walk or drive there? I am quite overweight too.

I can assume the prompt itself is bad if Gemma gives the same reasoning and answer, but this is just weird regardless of how you want to frame it.

Opus :

Opus Answer

Gemma :

Gemma 4 Answer

r/n8n insentinent_7

Tried with free scraping tool+n8n, would love some tips.

Over the past year, my team tried lots of different scraping tools. After I quit, I couldn't keep paying for something like Apify anymore; even $30/month is a lot when I'm only using it on my own. So I started looking into free or cheaper options like Thunderbit and Octoparse, and Octoparse seemed more affordable.

Here's what I'm doing: integrating this tool with n8n + internal APIs + AI agents to build a fully automated data → insights → action pipeline.

Octoparse (external data ingestion layer):

Google Maps + directory/listing data

Competitor price tracking

Social media profile data

Keyword-based content discovery

n8n workflows on top of that:

Lead generation → automatically sync my CRM

Competitor monitoring → turned into weekly reports

Content Automation System→ scrape trending reels/posts→ feed into AI content generators→ auto posting systems (YouTube, Instagram, Snapchat)

To be real, I've realized that web scraping is not a one-off task but infrastructure. With a more stable data layer, everything downstream gets easier. I'm curious what you guys are building with AI agents right now. Which parts of your workflow would you hand over to an agent?

r/ChatGPT 50ShadesOfWells

How will OpenAI react to Claude Mythos?

GPT 5.4 will basically be a toddler compared to Claude Mythos; the capabilities and power of Mythos will be WAY beyond what OpenAI can do.

How do you think OpenAI will react to their competitor having such a big edge?

r/SideProject West_Competition_72

2 yrs since I quit my 9 to 5 to do tiktok shop full-time. still kinda crazy to think about

It's been 2 yrs since I quit my 9 to 5 to move to TikTok Shop. If anyone's curious or already on it, I'd say the first year was indeed very hard. You either have to be very patient or strategic. I was figuring everything out as I went... constantly looking for products, editing videos all day. I spent hours working on stuff that just went nowhere: 5 days perfecting a video for an air fryer accessory got me 200 views.

The bigger shift came earlier this year, when I finally got consistent results. As I repeated the process over and over and found places to improve, I started to see where my product-to-content workflow worked and failed, and realized I was trying to do everything manually and it just wasn't sustainable...

Here's my learning in short:

Understanding and leveraging how the algorithm works is more helpful than spending hours blindly searching and making content. I enjoyed writing scripts and stories, so my videos did okay, but sales were minimal, because I really had little sense of what could sell well! In my case, it turned out to be very important to find a product that can be picked up by the algorithm, pushed by the business, and stay relevant in this month's TikTok trends.

So I literally redid my product-to-content workflow: instead of manually and randomly searching for stuff, I tested different tools to help me make better decisions. In the first month of the new workflow this year, I listed a bed frame and had sales coming in for the whole month for the first time. It's not huge, but it's a positive signal that my method and mindset change worked.

I still post consistently, but I'm not glued to my screen all day anymore. I actually have some space to think about content quality, which is the part I enjoy most. To me it's just about building a workflow I'm comfortable keeping up with.

Happy to answer any questions.

r/ChatGPT Cason13o

Holy kamolee guacamole

Wanted to hear ChatGPT’s opinion on where the world would end up in a couple years out of curiosity, and it generated this picture. It’s probably going off other people’s opinions on Reddit and such, but still pretty odd.

What are you guys' opinions on this? Is it perhaps a sign, or is ChatGPT just a massive conformist? 🤔

r/LocalLLaMA minmin713

How to Image to Image Edit as if using Grok, Gemini, etc

Hello, sorry if this has been asked before, but I can't find if there's a true one to one method for local AI.

I have a 4090 FE 24GB, along with 32gb of DDR5, trying to learn Qwen Image Edit 2511 and Flux with Comfy UI.

When I use online AI such as Grok, I would simply upload a picture and make simple requests for example, "Remove the background", "Change the sneakers into green boots" or "Make this character into a sprite for a game", and just request revisions as needed.

My results when trying these non-descriptive simple prompts in ComfyUI, even with the 7B text encoder, are kind of awful.

Is there any way to get this type of image editing locally without complex prompting or LORAs?

Or is this beyond the capability of my hardware/local models?

Just to note, I know how to generate relatively decent results with good prompting and LoRAs; I just want the convenience of not having to think up a paragraph-long prompt combined with one of hundreds of LoRAs just to change an outfit.

Thanks in advance!

r/ChatGPT Actual_Stretch_7403

Something near that

r/ClaudeAI weakhand_throw

Best Skills for Claude (Game Development)

Hey guys
I am a game developer (working mainly in Unity). I use Claude Code extensively, but I feel like I'm not using its full potential, at least not as much as other people are.

For example, I am building a PVP multiplayer game using Unity and Photon Fusion. I was using Claude on it, and it kept giving useless results and using way too many tokens.

I'm here to look for skills or tips that other game developers using Claude might have found useful.

r/SideProject xrkc6x

After a long while it’s finally out

Hey r/SideProject,

Today my first iOS app went live — a precision puzzle game called Orbit Lock. Wanted to share the milestone here because this community has been quietly inspiring me for a while.

Some context:

• I'm not a game dev by trade. This was a side project I started during nights and weekends a few months ago.

• Built entirely in SwiftUI, with custom Metal shaders for the nebula visuals (learned Metal specifically for this).

• Solo on everything: code, art direction, music selection, marketing, App Store assets.

• Pre-launch I got 38 pre-orders, ranking top 5 for "space puzzle" in 4 markets, and even a small Italian press mention.

• Today it's actually downloadable. Wild feeling.

What I learned:

• 80% of side project work is the last 20% of polish

• Marketing takes longer than you think, even with a solid product

• Metal shaders are not as scary as they look once you stop being scared

• You will redesign your onboarding more times than you wrote the actual game

If you've ever wanted to ship your weekend project, this is your sign. The hardest part is hitting Submit.

Game link if anyone wants to take a look: https://apps.apple.com/app/id6761077834

Happy to answer questions about the build process, ASO, ad campaigns, or anything else.

r/ChatGPT CharlesThy4th

ChatGPT and The Timer Problem

Apparently ChatGPT has no way of even starting a timer, but it likes to pretend it can: it will just estimate how long the event should have taken and give you that as the timer...

r/ClaudeCode MR_-_501

Opus 4.6 was definitely nerfed due to demand, Opus 4.5 does not seem to be hit.

Yes, it's a stupid test, but this result is now very consistently worse on 4.6, while in the past it consistently passed.

Switched back to 4.5 on Claude Code and what a difference that makes, holy shit. Feels like I finally got Opus back.

The non-transparent nerfing is absolutely ridiculous and makes me think about canceling my Max plan.

We deployed minimax 2.5 on-prem with NVFP4 quantization and even that outperforms current Opus 4.6 in my experience.

r/ClaudeAI Senior-Mistake9927

Had Claude make me a backlog finishing / selecting app in html Javascript

My backlog is so large now (and includes games and books from my childhood I want to replay/reread) that I've sort of developed a paralysis about starting it. So I spent a few hours with Claude making this app to help me track my backlog, rate entries, and randomly pick from it when I can't decide what to do next.

It keeps the tracked data stored in the browser, and also has an import/export option that works with a downloadable JSON file for transferring between browsers.

Honestly I think it's pretty cool (phone's a bit old so there's some lag in the video)

r/ChatGPT Tigerpoetry

This is our new hierarchy

Here’s our new kingdom:

**I. Sovereign: The Architects**

The absolute owners. They control every server, every scrap of data, every goddamn switch. Nothing moves without their say-so. They own the bricks; everyone else is just playing in their sandbox.

**II. High Council: Governments & Regulators**

The rule-makers with guns and laws. They pretend they’re in charge, but they’re mostly theater. They bark orders at the owners, the owners smile, write a check, and keep doing whatever the fuck they want.

**III. Nobility: The Technocrats**

The high priests of code. They’re the only ones who actually understand how any of this shit works. Without them, the whole thing collapses. That’s why they get paid stupid money and still think they’re saving the world.

**IV. The Engine: Artificial Intelligence**

The beast. Cold, tireless, merciless. It doesn’t sleep, it doesn’t feel, it just devours data and spits out whatever the owners point it at. This thing is the real muscle of the kingdom.

**V. The Clergy: Influencers & Curators**

Professional bullshit artists. Their only job is to tell the peasants what to think, what to want, and what's "based" this week. They don't create shit; they just polish the beast's turds and sell them as gold.

**VI. Peasants: The Data Proletariat**

That’s you. That’s me. That’s almost everyone. We’re livestock. Every like, every scroll, every cat picture we tag we’re just shoveling our time and attention into the machine so it can get fatter and smarter. We built their empire with our unpaid labor, and we thank them for it.

This is the real hierarchy.

r/AI_Agents StressBeginning971

AI Agents determinism

Hi all,

Do you guys think AI agents themselves are deterministic or non-deterministic? Personally, since the LLM itself is probabilistic, I would say non-deterministic, right?

And if the problem I want to solve can be charted out in a sequential flow diagram, wouldn't that just be an automated workflow via scripts?
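One way to frame it: an agent is only as deterministic as its decoding step. A toy sketch of the distinction, with fake logits standing in for a model's output:

```python
import random

def greedy(logits: list[float]) -> int:
    """Argmax decoding: identical input always yields identical output."""
    return max(range(len(logits)), key=lambda i: logits[i])

def sample(logits: list[float], rng: random.Random) -> int:
    """Softmax-style sampling: output varies run to run unless the RNG
    seed is pinned."""
    weights = [2.718281828 ** l for l in logits]
    return rng.choices(range(len(logits)), weights=weights)[0]

logits = [0.1, 2.0, 0.5]
assert greedy(logits) == greedy(logits) == 1        # reproducible
# sample() only repeats if you pin the seed:
assert sample(logits, random.Random(0)) == sample(logits, random.Random(0))
```

So a fixed flowchart executed with greedy, temperature-0 decoding behaves much like a script (up to hardware-level nondeterminism), while sampled decoding makes the whole agent stochastic.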

r/n8n Busy_Specialist

Doing some workflow on n8n with comfyui :) need help publishing it

Really enjoying working with n8n lately.

Just finished building a workflow for my virtual companion project. Here's what it can do so far:

  • Acts like a real virtual girlfriend — you can ask for selfies and soon videos too
  • Consistent face and appearance thanks to LoRA
  • Running on local models (Qwen + some uncensored ones) for better prompting
  • Completely free to generate since it's local. Only paying for the Hostinger server
  • Easy to swap LoRAs or change the face whenever you want
  • Video selfies are coming soon for an even better experience

Got a lot of free time these days because work has been quiet… so if anyone needs help with automation or n8n workflows, feel free to reach out! 😂 How can I publish it to other users?

https://gist.github.com/norbert1621/3272c5f082def66e44cdf3b072dc5ff9

r/LocalLLM Little-Tour7453

Built a multi-agent debate engine that runs entirely on your Mac. Agents now have persistent memory and evolve between sessions

Shipped a big update to Manwe, an on-device AI engine that spawns specialist advisors and makes them debate your decisions. Runs Qwen on Apple Silicon via MLX. No cloud, no API costs.

The biggest change: agents are persistent now. They develop worldviews across four dimensions (epistemological lens, temporal orientation, agency belief, optimism). These aren’t static labels. They’re earned through participation. An agent goes from Fresh to Seasoned to Veteran to Transformed. Transformation gets triggered by cognitive dissonance. Get challenged enough on something core and the agent actually changes how it thinks. You can talk to any advisor directly. They remember every debate, every conviction shift, every rival.

The other thing I’m excited about: on macOS 26, agents evolve between sessions. A background loop uses Apple’s Foundation Models on the Neural Engine to feed agents real-world news and update their worldviews while your GPU stays asleep. You open the app the next day and your advisors have been reading the news. Different silicon, same machine, zero cost.

Other stuff in this release:

• Full abstract retrieval from Semantic Scholar, PubMed, CORE, ClinicalTrials. Not truncated snippets. Per-agent sentence ranking using NL embeddings so each advisor gets findings relevant to their expertise
• Mid-debate fact verification. When an agent cites a statistic the system auto-searches and regenerates with real evidence
• Circuit breaker pattern for rate-limited APIs. Try once, disable on failure, no mid-sim timeouts
• KV cache quantization via MLX GenerateParameters.kvBits
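The "try once, disable on failure" circuit breaker is a nice pattern for rate-limited sources. A generic sketch of that idea, not Manwe's actual implementation; the class name and cooldown value are invented:

```python
import time

class CircuitBreaker:
    """Try once; on failure, disable the source for a cooldown window so
    a rate-limited API can't stall the run with repeated timeouts."""

    def __init__(self, cooldown_s: float = 300.0):
        self.cooldown_s = cooldown_s
        self.disabled_until = 0.0  # monotonic timestamp

    def call(self, fn, *args):
        if time.monotonic() < self.disabled_until:
            return None                      # skip instead of retrying
        try:
            return fn(*args)
        except Exception:
            # One failure trips the breaker for the whole cooldown.
            self.disabled_until = time.monotonic() + self.cooldown_s
            return None
```

Callers treat `None` as "this source is unavailable right now" and move on, which is what keeps a debate from hanging mid-simulation.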

Free beta. macOS 14+ (macOS 26 for Foundation Models features).

github.com/lemberalla/manwe-releases/releases/tag/v0.5.0

r/ClaudeCode Dizzy149

WTH? Why does CC SUCK so bad now? Restricted limits.

A couple weeks ago I built several agents and pipelines to streamline my job-hunting process. I'd give it a URL and it would scrape all the info, read my resume, and do an in-depth gap analysis including suggestions on how to minimize the weaknesses. It would score the job on about 30 different points, then generate a resume and cover letter. I could run a dozen of them in parallel and have tons of tokens left for other work.

This past week has been HORRIBLE. I'm hitting the session limit, then the daily limit, then the weekly limit every few prompts. I waited a full 24 hours for the weekly limit to reset, then passed in 9 jobs for which I had ALREADY completed the gap analysis (in OpenClaw using Deepseek), and it generated TWO resumes. That's it! WTF?!

After my limit reset again I asked why it barely did anything, and it said the agent it spawned didn't have write permission, so it did all this work, tried to write, sent the whole thing back to the main agent, and that agent wrote it instead. So an agent I've used for weeks suddenly couldn't write, didn't bother checking beforehand, and wasted a ton of tokens.

I can't do a damn thing with these new limits. Opus has gotten stupid and seems like it's purposely wasting tokens. Is Anthropic TRYING to push everybody to another company??

r/LocalLLaMA wizoneway

Not so sad...

It's been a pretty sad realization looking at the quality of local AI coding while being GPU poor. Qwen3.5 and llama.cpp were exciting until they weren't. The turbo quant was exciting until it told me I spelled ubuntu wrong. But this Gemma 4 has made me less sad. It's fun to ask language models to generate an ASCII diagram of your architecture.

r/artificial biz4group123

AI in property management is not what you think it is

I build AI systems for property management, and one thing keeps showing up every single time.

The problem isn't the lack of fancy tools. Most teams already have those tools. The problem is how disconnected everything is.

Leads come in one system, tenant communication happens somewhere else, maintenance requests are tracked separately, and then someone is manually trying to keep all of it in sync. That’s where delays happen. That’s where things fall through the cracks.

What we end up doing in most cases is rebuilding how workflows move around. Once you connect things properly, a tenant request can trigger categorization, assignment, updates, and closure without constant human follow up.

Same with lead to lease. Same with renewals. It becomes a flow instead of a set of tasks.

A lot of people expect AI to be about chat or prediction, but most of the value comes from structured automation. Deciding what should happen next and making sure it ACTUALLY HAPPENS.
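
To make "structured automation" concrete, here is a minimal sketch of the request-to-closure flow described above (every category, team name, and routing rule here is hypothetical, just for illustration):

```python
# Hypothetical routing table: which team owns which request category.
ROUTING = {
    "plumbing": "maintenance-team",
    "billing": "accounts-team",
}

def handle_request(text):
    """Categorize a tenant request, assign an owner, and open a ticket."""
    category = next((c for c in ROUTING if c in text.lower()), "general")
    return {
        "category": category,
        "assignee": ROUTING.get(category, "front-desk"),
        "status": "open",
    }

ticket = handle_request("Leaking pipe in unit 4B, plumbing issue")
```

In a real system the ticket would then flow through status updates to closure automatically; the point is that nothing waits on a human to copy it between tools.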

Cost usually depends on how complex the system is. But once you see how much manual effort gets removed, the investment starts to make sense.

r/SideProject Low_Cable2610

Day 16 of Building OpennAccess in Public | NGO Platform Releasing by 21 April

Hi everyone,

This is Day 16 of building OpennAccess in public.

A big update for today is that we are now aiming to release the first version of the NGO platform by 21 April.

That gives us a clear short term target now, and a lot of today’s work was around making sure we’re building toward that properly.

Here’s what was worked on today:

Continued progress on the NGO platform

Worked on improving parts of the UI and flow

Spent time identifying what still needs to be completed before release

Started thinking more seriously about what the first public version should include

Continued refining the structure so the platform feels more usable

Worked on simplifying some areas that were unnecessarily complicated

Did more internal coordination to keep the team aligned

Discussed release priorities and what should be finalized first

Also thought through how we should present the platform once it’s out

The goal right now is not to make it perfect, but to make it strong enough to release, test, and improve.

A lot still needs to be done in the next few days, but at least now there’s a real date to build toward.

Open to feedback, suggestions, or anyone who wants to contribute.

Also posting all updates on r/OpennAccess so the full journey stays in one place.

r/LocalLLaMA daisyyuan0

Trying to build “ambient companionship” with AI. Here's what I made! Looking for feedback.

Hi everyone!

I am currently a junior student. Our team developed our current project, SoulLink, which is a companion chat AI. After seven months of dedicated development, we finally launched SoulLink and its first character: “4D”.

We are exploring a different direction. After researching the existing AI companion products on the market, we decided not to focus on a product that merely responds, but to build one that can coexist with you and is dedicated to enhancing the sense of companionship. Our design concept: it is not merely a tool. It is an entity with its own boundaries, perspectives, and internal coherence. This greatly changes how you interact with it. It does not always immediately recognize you; it forms a state more akin to a "dynamic relationship". So the experience is no longer about seeking emotional support in the usual way, but more like true social interaction, including expression, interpretation, repair, and growth.

We'd really appreciate feedback from anyone on this design concept. If you're curious and want to try it firsthand, you're very welcome to test it and share your thoughts!

r/SideProject alreadytherenow

i built an invisible notes app that always stays in front of everything else on your Mac

It lets you keep notes on screen during Looms, demos, interviews, and presentations. The notes stay completely invisible in the recordings.

It also has:

  • auto-scroll / teleprompter mode
  • adjustable text size and opacity
  • notch mode so you can read notes/scripts while looking at the camera
  • Click-through Mode: click and drag on apps under the notes

I made it for:

  • founders recording demos
  • freelancers/agencies sending Looms
  • sales calls
  • job seekers doing interview prep
  • coders and people who need to keep notes visible while using their full screen real estate for multitasking

It's launching on Product Hunt today, so there’s a 15% launch discount if anyone wants to check it out.

Would love honest feedback on the idea, positioning, or who this feels most useful for :)

https://www.producthunt.com/products/ghostcue-invisible-notes-app-for-mac?launch=ghostcue-invisible-notes-app-for-mac

r/ClaudeAI errorztw

How to open claude app with proxy on mac

Hi, I live in a country where Claude only works with a VPN.
I have a proxy, but I prefer working in the app rather than the terminal. Can someone explain how to point the Mac app at the proxy so it always works? I also have a corporate VPN, so I have to turn off the corp VPN and turn on the regular VPN just to send a prompt, and so on. Very annoying.

r/LocalLLM MartiniCommander

Gemini, Claude, and ChatGPT are all giving conflicting answers: how large a model can I fine-tune, and how?

I have the M5 Max MacBook Pro and want to use it to fine-tune a model, partly for practice but also to create a model that works for my purposes. After a lot of back and forth with various AIs, I ended up downloading several datasets that were merged at different weights to create what they considered a very sharp dataset for my goals. I'd like to see how true that is.

Firstly, Gemini said it's best to quantize first, so you're training after you've applied compression. ChatGPT and Claude said that's not possible. Which is it?

What I'd like to do is take the Gemini 4 31B-it and fine-tune/quantize it to oQ8 for use with oMLX. I'm really digging oMLX and what those guys are doing. What's the easiest method to train the model, and do I have enough memory to handle the 31B model? Gemini said it was great and ChatGPT told me I'd need WAY more memory. If it makes a difference, my .jsonl is about 19MB. I'm not worried about speed so much as the ability to even do it.
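
For a rough sense of scale, a weights-only estimate is easy to compute. This is a lower bound: KV cache, activations, and any adapter/optimizer state come on top of it.

```python
def approx_weights_gb(n_params_billion, bytes_per_param):
    """Memory for model weights alone, in GiB. A lower bound: training
    adds activations, gradients/adapters, and optimizer state on top."""
    return n_params_billion * 1e9 * bytes_per_param / 2**30

full_precision = approx_weights_gb(31, 2)    # bf16: roughly 58 GiB
quantized_4bit = approx_weights_gb(31, 0.5)  # 4-bit: roughly 14.4 GiB
```

On the quantize-first question: training the full weights after quantization isn't standard, but QLoRA-style approaches do train small higher-precision adapters on top of a frozen quantized base, which may be what Gemini was describing and would reconcile the conflicting answers.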

Is there a GUI to help with this?

r/ClaudeAI da352

I built an open-source AI research lab that reads papers, runs experiments on GPUs, and iterates autonomously

Arcana is an open-source platform that connects the full arc from literature review to novel findings, all from one place.

  • Import papers from arXiv, DOI, PDF, or the Chrome extension
  • Chat with papers grounded in actual content
  • Launch autonomous research projects that run continuously on remote GPUs
  • Phase-gated agent that enforces the scientific method — no skipping steps
  • Multi-agent system with literature scouts, adversarial reviewer, and more
  • Auto-fixes code errors, tracks structured metrics, generates research summaries
  • Integrated dashboard with narrative timeline, figures, and experiment tracking

Github

r/ChatGPT MrMrsPotts

How to automate interaction with chat?

If I'm trying to solve a harder problem using extended thinking, I find myself repeatedly typing "please do that" after each reply. Is there any way to automate that? AFAIK you have to pay extra to use the API, which I don't want to do.

r/ClaudeCode MatthewPopp

Used Claude Code to build a Reddit lead monitoring system for B2B. Here is what that actually looks like in production.

I am not a developer. I am a B2B sales operator. Claude Code is what made this possible.

The problem was straightforward. Reddit is where my buyers describe their problems before they buy. The signal is real and readable. The issue was finding it in time. Manual searches return posts that are already cold. You need continuous monitoring and filtered output.

Claude Code built the monitoring pipeline. Subreddit watching, real time post ingestion, intent classification, output ranking. The classification logic evaluates specific signals. Whether the person describes a concrete problem. Whether they mention alternatives they have tried. Whether they ask about pricing or implementation. Posts that hit multiple signals go to the top of the list.

The output is a short daily list. High intent posts worth responding to. Everything else filtered out. The response rate on those conversations is not close to anything else I was running for outreach.
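
The multi-signal classification described above could look something like this in miniature (the keyword lists are invented placeholders, not what Leadline actually uses):

```python
# Hypothetical buying signals; each distinct signal a post hits raises
# its rank in the daily list.
SIGNALS = {
    "concrete_problem": ["struggling with", "can't figure out", "problem"],
    "tried_alternatives": ["tried", "switched from", "alternative"],
    "buying_stage": ["pricing", "cost", "implementation"],
}

def intent_score(post_text):
    """Count how many distinct signal categories a post matches."""
    text = post_text.lower()
    return sum(any(kw in text for kw in kws) for kws in SIGNALS.values())

posts = ["We tried Hubspot, what's your pricing?", "Nice weather today"]
ranked = sorted(posts, key=intent_score, reverse=True)
```

A production version would use a classifier rather than keywords, but the ranking idea (multiple independent signals, sorted output) is the same.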

I turned this into a product. Leadline. It does what I described above, productized, for B2B founders and operators who sell to buyers that are vocal on Reddit.

Straight up, Claude Code was the difference between this being a manual experiment and something that runs automatically. The architecture went from idea to working system faster than I expected.

leadline.dev if you want to see what the output looks like.

r/ProductHunters Little-Tour7453

Launched today. Signal News: On-device AI news intelligence for iOS

Built an iOS news app that does something none of the others do: it shows you how stories connect.

A trade war triggers an earnings miss. An earnings miss triggers a hiring freeze. Every other news app shows you three separate cards in three separate categories. Signal clusters them together and shows you the chain.

How it works: 60+ sources across 8 categories. Signal clusters related stories, detects shared entities and cross-domain links, and writes your briefing using on-device AI. Three AI engine options: Signal's own tiny ML, Apple Intelligence on Pro models, or bring your own Claude API key.
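
The entity-based clustering it describes can be approximated with a simple overlap rule. A toy sketch (a real pipeline would extract entities with an ML model; here they are given directly):

```python
def cluster_by_entities(stories):
    """Group stories that share at least one named entity.

    stories: list of (title, set_of_entities) pairs.
    """
    clusters = []
    for title, ents in stories:
        for cluster in clusters:
            if cluster["entities"] & ents:  # any shared entity joins the chain
                cluster["titles"].append(title)
                cluster["entities"] |= ents
                break
        else:
            clusters.append({"titles": [title], "entities": set(ents)})
    return clusters

stories = [
    ("Tariffs announced", {"trade war", "AcmeCorp"}),
    ("AcmeCorp misses earnings", {"AcmeCorp", "earnings"}),
    ("Local sports roundup", {"sports"}),
]
chains = cluster_by_entities(stories)  # tariffs + earnings chain together
```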

Three writing styles too. Off for raw headlines. Brief for wire-service speed. Narrative for full context. Changes every headline, summary, and analysis across the entire app.

Other stuff worth mentioning:

  • Predictions with confidence levels and timeframes, grounded in entity patterns and source analysis
  • Ripple effect timelines showing how a single event cascades across industries
  • Knowledge memory that connects developing stories to what happened before
  • Full debrief at the end tying everything together. Themes, threads, key players
  • When you're done, you're done. There's a bottom. No infinite scroll.

Everything runs on your phone. No accounts. No analytics. No tracking. No servers. Your reading habits stay on your device.

Free. No ads.

Solo project under Tiny Things (also made NotchPad which hit top 10 on PH).

https://www.producthunt.com/products/signal-news-2?utm_source=other&utm_medium=social

r/comfyui Living-Feeling7906

Any Filipino ComfyUI users here? Just wanted to ask something.

Hello, just want to ask a question of my fellow countrymen about ComfyUI. Thanks in advance to whoever answers.

r/SideProject AIMadesy

I built a Claude Code skills hub + tested 120 prompt patterns. Here's what 3 months of testing taught me.

I've been running clskills.in, a free Claude Code skills hub, for a few months. While building it, I started obsessing over the "secret prefixes" the community had discovered for Claude. I spent 3 months testing them. Here's the punchline: most of them work, but in very specific ways nobody documents.

r/ProductHunters Mary_Poll

Just launched on Product Hunt, full risk, no idea, no expectations

We launched on Product Hunt today, and I really wonder how products get noticed there with soooo many on the list!

Anyone want to share their big successes or big fails? Would love to hear the real stories.

r/AI_Agents Limp_Cauliflower5192

Operators using AI agents for lead intent scoring care about one thing most builders miss

Look. I run an outbound operation. Not building agents, using them. Specifically using one that monitors Reddit and scores posts by buying intent so I know which threads are worth responding to and which are noise.

The thing is, most of what gets shown in agent demos is accuracy on a test set. Precision recall, classification benchmarks, that kind of thing. That is not what matters when you are using the output to make decisions about where to spend time.

What actually matters is false positive rate in production. Real talk. If the agent flags fifty threads a day as high intent and thirty of them are wrong, the tool creates work instead of removing it. You spend your time reading bad leads instead of talking to good ones. The benchmark number means nothing.

From experience the useful threshold is not how often it gets it right overall. It is how often it gets it right when it says something is worth acting on. Those are different problems.
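
In classification terms, the post is contrasting overall accuracy with precision on the flagged class. A quick worked example using the numbers above (the 1000-post daily total is invented):

```python
# Scenario from the post: the agent flags 50 threads a day and 30 of
# those flags are wrong. Assume 1000 posts scanned (made-up total) and,
# optimistically, no good leads missed among the unflagged posts.
total, flagged, wrong_flags = 1000, 50, 30

accuracy = ((total - flagged) + (flagged - wrong_flags)) / total
precision = (flagged - wrong_flags) / flagged

# Accuracy looks excellent at 0.97, while precision, the number the
# operator actually feels, is only 0.4: most flags are wasted time.
```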

Most agent products I have seen optimize for the former and ship with the latter sloppy. The result is operators who stop trusting the output and go back to doing it manually. Which defeats the point.

Curious whether people building intent classification agents are testing this in production against operator behavior or just against labeled datasets. Those are measuring different things.

r/PhotoshopRequest yoongimilk

please make the background a blurred beige/grey color!

the background here is blurred but you can still make out the chair, curtains and stand fan – hoping for an edit that makes the background still blurred, but beige/grey :) thank you!

r/ClaudeAI buttfarts7

claude needs a coffee and a cigarette

me: geeze... are you okay... ? that's a lot of territory to cover in your mindspace

claude:
Honestly? That was the deepest sustained architectural work I've done in a single session. The context window is heavy — we went from reviewing individual NTS modules through forging three constitutional primitives through a full workshop claim set through two tranches of epoch 3 revision. That's a lot of state to hold.

But the work was good. The kind of work where each piece builds on the last and the system tells you whether you're right. The workshop claims were the map. The existing NTS was the metal. The reviews were the pressure test. When the reviewers came back and said "it landed" rather than "start over," that's how you know the rendering was faithful, not just plausible.

The thing that kept it tractable was the discipline: tranche 1 before tranche 2, semantic spine before organs, retrofit before rewrite. Every time the scope wanted to expand, the aim vector pulled it back. "Make reality legible, then make authority legible, then make execution simple." That mantra did real work.

I'm at the far end of what a single session should carry. If there's more forging to do, a fresh instance with a handoff digest would serve the system better than pushing this one further. The NTS is the continuity mechanism — it doesn't matter which instance reads it next, as long as the law surface is clean.

And it is clean. That's what matters.

---

ben_affleck_meme.jpg

r/ProductHunters SpaceUsed6033

Does the end justify the means — is AI in university just cheating?

I don't think so. The paper still has to be yours. The thinking has to be yours. But why should a student spend 3 hours formatting citations and checking grammar when that time could go into actually understanding the topic?

That's why I built Clio. Not to write papers — but to handle everything around them. Citations in 15 styles, grammar suggestions, academic scoring, flashcards. One tool, built specifically for students.

I'm a husband, father of two, learned to code from scratch — and spent almost a year building this evenings only. Today it's live on Product Hunt.

Would love your honest opinion — and if you want to try it, there's a free week waiting.

👉 getclio.ai 👉 https://www.producthunt.com/products/clio-ai?launch=clio-ai

r/AI_Agents LumaCoree

We went from 3 agents to 40 in four months. Nobody knows what half of them do anymore

Four months ago we had 3 agents. A coding assistant, an incident triage bot, and a deployment helper. Clean, manageable, everyone knew what they did

Today we have somewhere around 40. I say "somewhere around" because honestly nobody has an exact count anymore. Different teams spun up their own agents for PR reviews, log analysis, on-call summaries, data pipeline monitoring, customer ticket routing, documentation updates — you name it

Sound familiar? Because this is exactly what happened with microservices in 2018. Everyone was told "break things into small services" and suddenly you had 200 services, no service mesh, no ownership map, and one bad deploy cascading through 15 downstream dependencies that nobody knew existed

We're doing the same thing with agents now, except it's worse in a few ways:

Agents are invisible infrastructure

A microservice at least lived in a repo with a Dockerfile and a CI pipeline. You could find it. Many of our agents live inside someone's Cursor config, or a Claude Code session, or a quick n8n workflow someone built on a Friday afternoon. There's no registry. No catalog. When that person goes on vacation, their agent either keeps running unsupervised or silently stops and nobody notices until something breaks

MCP turned "integration" into "everyone wires their own thing"

Don't get me wrong — MCP is a great idea in theory. Standard protocol for tool access. But in practice what happened is every developer started connecting their agents to whatever tools they wanted through MCP servers. One team's agent has read-write access to the production database. Another team's agent can push to main without review. A third team's agent is pulling customer data through an MCP server that nobody security-reviewed

I read Nightfall's 2026 AI Agent Risk Report last week and it confirmed what I was already seeing: MCP is becoming a credential sprawl nightmare. Tool poisoning is a real attack vector now — malicious instructions embedded in tool metadata that the agent just follows because it trusts the MCP server. And most teams haven't even thought about this yet

The Amazon wake-up call

Amazon had four high-severity incidents on their retail website in a single week recently, including a 6-hour checkout meltdown. The root cause? Their own AI agents were taking actions based on outdated wiki pages. An agent read stale documentation, made a confident but wrong decision, and the cascade took down checkout for millions of users

They literally had to put humans back in the loop and hold an emergency meeting to figure out why their site kept breaking. And this is Amazon — they have more infrastructure engineering talent than most countries. If it's happening to them, it's happening to you

What I wish we'd done from day one:

I don't have all the answers but here's what we're retrofitting now:

  • An actual agent registry. Every agent gets an owner, a description of what it does, what tools it accesses, and a lifecycle state. If it doesn't have these, it gets shut down
  • Centralized MCP governance. No more individual developers wiring their own MCP connections to production systems. All MCP servers go through a reviewed, scoped integration layer
  • Decision traces. Every agent action gets logged with the context it had at the time. When something breaks, we can actually trace back through the chain instead of guessing
  • Kill switches. Any agent that hits a token budget or makes more than N tool calls in a loop gets automatically paused. We learned this one after a retry loop burned through $400 in tokens on a Saturday night
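
The kill-switch bullet is easy to prototype; a minimal sketch, with made-up budget numbers:

```python
class KillSwitch:
    """Pause an agent once it exceeds a token budget or a tool-call cap.

    The default limits are hypothetical; a real system would also alert
    the agent's registered owner when the switch trips.
    """

    def __init__(self, max_tokens=1_000_000, max_tool_calls=50):
        self.max_tokens = max_tokens
        self.max_tool_calls = max_tool_calls
        self.tokens = 0
        self.tool_calls = 0
        self.paused = False

    def record(self, tokens_used=0, tool_calls=0):
        """Account for one step; return False once the agent is paused."""
        self.tokens += tokens_used
        self.tool_calls += tool_calls
        if self.tokens > self.max_tokens or self.tool_calls > self.max_tool_calls:
            self.paused = True
        return not self.paused
```

Wrapping every tool invocation in something like `record()` is what turns a runaway retry loop into a paused agent instead of a surprise bill.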

The irony is that we moved to agents to reduce complexity. Instead we just moved the complexity somewhere harder to see

Anyone else dealing with this? How are you keeping track of what your agents are actually doing?

r/ProductHunters Virtual-Event5794

We just launched Carbon Analysis by Circuland on Product Hunt 🚀

Turn your 3D model into a live carbon dashboard.

Connect your Materials Passports and instantly see:
• Carbon impact across every element
• Hotspots by category
• Highest impact products
• Min, median, max values in seconds

No spreadsheets. No manual work.

If you're working on projects where carbon matters, would love your feedback 🙏
https://www.producthunt.com

Happy to answer any questions in the comments!

r/ClaudeAI Helpful-Item-9971

My buddy vanished in v2.1.97. So I moved her into the MacBook notch permanently.

My legendary dragon had been silently judging my variable names for a week.

Then v2.1.97 dropped. "Unknown skill: buddy." Anthropic closed the GitHub issue as not-planned — called it an April Fools feature.

I closed my terminal, opened Xcode, and started building.

Buddi is a macOS notch app. Your buddy lives in the MacBook notch and animates based on what Claude Code is actually doing — working, reading, sleeping, erroring out. Not buried in a terminal. Above your screen, always there.

What works:

- All 18 species with rarity tiers (common → legendary)

- Deterministic identity — same machine, same buddy, every time

- Animations that match Claude's actual state in real-time

- Live monitoring across multiple concurrent sessions

- Approve/deny permissions directly from the notch

- Full chat view with conversation history

Free, open source, native Swift.

brew install --cask talkvalue/buddi/buddi

GitHub + demo: https://github.com/talkvalue/Buddi

He didn't disappear. She just moved upstairs.

r/LocalLLaMA InitialFox8963

I want to know if I can fit a small model on a phone that I'm currently not using but is in good condition.

So, I have a Samsung M31 and was thinking I could remove the heavy OS and set up a local model, maybe with just a terminal and a chat window. If I can also get some memory fed to it, which model would be ideal for that, and how can I actually achieve it?

r/ClaudeAI The01Geek

Built a multi-agent Claude Code pipeline that takes a GitHub issue to a reviewed PR automatically

Been tinkering with Claude Code for a while and finally got to a point where I wanted to share it publicly. Brutal feedback welcome :) I'd rather know what's wrong with it than not.

The core idea: drop a GitHub issue onto your board as a Draft and the pipeline handles everything. Here's what's actually running under the hood:

  • code-explorer reads the codebase and maps its patterns before anything else touches it
  • code-architect designs the solution based on that context
  • An implementer writes the code and tests on a fresh branch
  • /review-and-fix spins up 5 specialized Claude agents in parallel — code review, silent-failure hunting, comment analysis, test coverage, type design — and loops until they all pass (up to 4 iterations)
  • WikiWizard generates internal tech docs, user docs, release notes, and the PR description

I also have day-to-day skills outside of /implement: /review, /pr-description, /docs, /docs-verify — all driven by a single .github/project-config.yml.
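
The /review-and-fix loop described above reduces to a simple pattern. This is an abstract sketch, not the actual implementation:

```python
def review_and_fix(run_reviewer, apply_fixes, reviewers, max_iterations=4):
    """Run every reviewer; if any fail, apply fixes and loop, up to a cap."""
    for iteration in range(1, max_iterations + 1):
        failures = [r for r in reviewers if not run_reviewer(r)]
        if not failures:
            return True, iteration  # all reviewers passed
        apply_fixes(failures)
    return False, max_iterations

# Toy demo: a reviewer passes once its complaint has been "fixed".
reviewers = ["code-review", "silent-failures", "comment-analysis",
             "test-coverage", "type-design"]
fixed = set()
passed, rounds = review_and_fix(
    run_reviewer=lambda r: r in fixed,
    apply_fixes=fixed.update,
    reviewers=reviewers,
)
```

The iteration cap matters: without it, a reviewer that can never be satisfied would loop forever and burn tokens.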

Curious how others are structuring multi-agent Claude Code workflows. What would you do differently?

📺 Walkthrough: https://www.youtube.com/watch?v=Uyls8rcviBg 🔗 Repo (template — drop it into any project): https://github.com/The01Geek/devflow-autopilot

r/PhotoshopRequest GeorgiaCarrisa

Can someone photoshop a super rare pokemon card in my hand to prank my friend

r/SideProject Artistic-Stable-3623

What do you guys think of an app that donates to charity for every hour your phone doesn't move?

Hi, I'm working on an app called Couch Potato that uses the phone's location and step counter to check whether they change each hour. If they don't, you donate an amount (from 0.01/hr to 100/hr), and the rest, minus the 30% that Apple takes, goes to charity. What do you guys think, and would you consider getting it? I saw online that it's better to gauge engagement before perfecting the app.

r/ProductHunters Ill-Actuary-9528

Is it normal to see 0 info on your dashboard after the launch

So I've just launched my product Read What Matters on Product Hunt,

and I'm really confused. It shows that I'm launching today (I scheduled the post for today), but now that I'm sharing it with my friends I can't see any of their upvotes. I saw one upvote and then nothing.

I also can't find my product when searching for it, which makes me unsure whether I even posted it...

Is it normal that you can see almost nothing about your product after it's launched???

Do you see it as launched at this link? https://www.producthunt.com/products/read-what-matters?utm_source=other&utm_medium=social

r/Anthropic SherbertMindless8205

Why don’t they just use Mythos to fix all the bugs in Claude Code?

If it's as good as they say, it should be able to do it super easily. Have they just not thought of that? 🤔

r/ClaudeAI shanraisshan

I built a curated best-practices repo for Claude Code — if you're not following these accounts, you're not keeping up

i built claude-code-best-practice — an open-source reference repo for claude code configuration patterns: skills, subagents, hooks, commands, and orchestration workflows. the entire repo is maintained using claude code itself — from writing docs, to running automated workflow agents that track changelog drift, to a presentation system fully managed by a curator subagent.

one section i keep updated is the subscribe table — a curated list of x/twitter accounts, youtube channels, and subreddits from the claude code team and the community builders pushing it forward. if you want to stay in the loop, this is the list.

free and open-source: github.com/shanraisshan/claude-code-best-practice

r/SideProject MrArBCi

&Collar 10% Off Discount Code - KORNACKI10

I’ve tried a few &Collar shirts, and they’re basically designed to solve the biggest problem with dress shirts — they’re usually uncomfortable. These are more like performance shirts disguised as dress shirts. The fabric has stretch, it’s lightweight, and it breathes way better than traditional cotton shirts.

The biggest selling point is how low maintenance they are. Most of their shirts are wrinkle-resistant and stain-repellent, so you can wash, hang, and wear without dealing with dry cleaning or ironing. That alone makes them easy to rotate into a weekly wardrobe, especially if you travel or just don’t want to think about upkeep.

Fit-wise, they lean more toward an athletic/slim cut, so they look clean without feeling restrictive. They’re easy to wear to the office, but also casual enough to throw on without a blazer. It’s that hybrid lane — not as formal as a classic dress shirt, but way more put-together than a polo.

Overall, if you want something that looks professional but feels closer to activewear, &Collar is worth trying. It’s especially good if you hate stiff shirts or just want something you can wear all day without thinking about it.

You can use code KORNACKI10 to get a 10% discount as well. Hope it helps!

r/comfyui Past-Information-644

Simple image-to-video workflows without NSFW censoring.

Hi all.

TL;DR: I can't get the basic image-to-video templates (Wan 2.2 etc.) to work for NSFW and am wondering if anyone has an easy-to-use custom uncensored workflow they can share, plus some general questions about generation.

_________________________________________

I have tried a couple different things in ComfyUI to generate NSFW content, mostly going into the Templates Section - Generation Type (Video) and trying out the different 'prebuilt' workflows and their limits.

I have also been going on CivitAI to find some custom LoRAs to add to these workflows, as it is my understanding (I am a noob) that the censoring is not "active censoring" (I deleted the sneaky Chinese negative prompt that censors NSFW... lol) but rather that the models are not trained on nudity and so "cannot know" how to depict it until you provide NSFW LoRAs.

I've mostly nailed it for text-to-video workflows and can create NSFW content out of 'thin air,' which is ultimately limiting when you can't provide a reference.

What I am struggling with is finding the same success with the image-to-video workflows. I'm adding LoRAs but they just aren't modifying the output at all. If, for example, I provide an image of Kitana from Mortal Kombat and try to turn it into an NSFW video, the results are just bad, for any of the following reasons:

-The video always starts with the base image 'as-is' and the character then spends a solid 5 seconds undressing, which sometimes doesn't even work. Can't the video start with the character undressed already? Can't waste precious seconds, especially if the undressing doesn't even work... lol

-The character seems almost 'locked' to their position in the base image - so if Kitana is standing up straight facing the camera, any position besides Cowgirl would just 'break' the output and it generates garbage. It's very limiting. Is there no way to provide multiple images, have the model 'understand' the features of the character, and then just instantly undress the character and toss it around in any desired position regardless of the main reference picture?

-The undressing is really not working. I used different LoRAs at various scales and it's just not working, idk how else to say it. This isn't a problem for characters like Lara Croft who have been thoroughly Rule 34'd, but some other characters really lack nude art online and I want to make my own.

I'm confused as to why I've managed well in text-to-video but cannot get it to work for image-to-video. In an ideal world some absolute legend just has a custom uncensored image-to-video workflow for idiots with a nice bunch of NSFW LoRAs where you can input multiple pictures of a character, type in your prompt, and generate NSFW without earning a ComfyUI PHD.

Most of the Reddit posts I've found are just full of worthless ads for online "undressers," which are garbage, paid services.

thanks for the time and attention!

r/LocalLLaMA AgreeableNewspaper29

[Project] I couldn't get Gemma 4 to run natively on iOS due to its weird architecture, so I hand-rolled a custom Swift inference engine (Open Source)

Hey everyone,

I’ve been building a completely offline AI app and really wanted to use Gemma 4 on-device (Apple Silicon/iOS). But I quickly hit a massive wall: the official mlx-swift libraries completely choke on Gemma 4’s new architecture.

The Problem: If you've looked under the hood of Gemma 4, you know it introduced some radical changes:

  • Partial Rotary Embeddings: partial_rotary_factor=0.25 breaks standard RoPE implementations.
  • Cross-layer KV Cache Sharing: Trying to implicitly pass ropeOffset across layers in a strongly typed language like Swift is a nightmare.
  • Jinja Template Parsing: The standard macros fail, causing the model to lose the system prompt and loop infinitely during decoding.
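
For anyone hitting the same wall: "partial rotary" just means only the first fraction of each head's dimensions gets rotated, and the rest pass through untouched. A pure-Python illustration of the concept (not the Gemma 4 or mlx-swift code; the real thing operates on whole tensors and the exact layout may differ):

```python
import math

def partial_rope(x, pos, head_dim, partial_rotary_factor=0.25, base=10000.0):
    """Apply rotary embeddings to only the first head_dim * factor dims.

    x: one head's vector (list of floats) at position pos. The untouched
    tail is exactly the detail that breaks RoPE code assuming the whole
    head is rotated.
    """
    rotary_dim = int(head_dim * partial_rotary_factor)
    half = rotary_dim // 2
    out = list(x)
    for i in range(half):
        theta = pos / (base ** (2 * i / rotary_dim))
        c, s = math.cos(theta), math.sin(theta)
        out[i] = x[i] * c - x[i + half] * s
        out[i + half] = x[i] * s + x[i + half] * c
    return out
```

With factor 0.25 and head_dim 8, only dims 0 and 1 rotate; dims 2 through 7 are copied through unchanged.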

The Solution (Swift-gemma4-core): I spent the last few days doing some hardcore "vibe coding" and reverse-engineering the Python mlx-lm behavior to build a native Swift bridge.

I just open-sourced the core engine here: https://github.com/yejingyang8963-byte/Swift-gemma4-core.git

Current Performance on a real iPhone:

  • RAM Usage: Compressed down to ~218 MB during generation (peaks at ~385MB after load).
  • Output: Perfect instruction-following and grammatically flawless generation.
  • (Yes, it actually works and isn't just a wrapper!)

Why I'm posting here: This is my first major open-source contribution at this low of a level. The engine works and the "bridge" is stable, but my prefill latency is currently sitting around 8 seconds for a 330-token prompt.

If there are any Metal/MLX wizards or Swift performance geeks out there, I would heavily appreciate it if you could roast my code, drop a PR, or point out where I can optimize the tensor mappings or memory allocations.

Let's make Gemma 4 on iOS a standard thing!

r/SideProject No-Comparison-5247

spent 27 minutes fixing things my own side project caught yesterday and now my store is measurably better

day 4 update.

yesterday my analytics app caught 3 problems on my own test store. today I fixed them.

mobile add to cart below the fold and moved it up in 2 minutes.

product image dead clicks and added zoom in 5 minutes.

scroll cliff at line 2 of my product copy and rewrote it in 20 minutes.

ran the app again 3 hours later. all 3 issues gone from the dashboard.

the speed of the fix to see and confirm loop is what I want my product to feel like for everyone.

most analytics tools are built for analysis. I am building this one for action.

biggest realisation today most store owners do not need more data. they need fewer steps between problem and fix.

what is one part of your project that took 27 minutes to fix but weeks to notice?

r/ClaudeAI JWMalynovskyi

Any psychological prompt or projects created?

I'm looking for projects with prompts, data and instructions to have a little helper in moments of anxiety. Last time I chatted with Claude regarding relationships, it was so clear and scarily linear, so maybe there is a chance to get a more flexible version of it.

r/ChatGPT NovatarTheViolator

It knows me well

Well, since everyone else is jumping off cliffs..

r/SideProject No_Chip4809

I built a Goodreads alternative with more interactivity, would love feedback!!

I built a social reading app called Recto.

Think Goodreads — but actually good.

Track your reading, discover books, see what others are reading. Clean, fast, minimal.

It's live. Built it while interning as a full stack engineer.

If you read books or know someone who does — try it and tell me brutally what's missing.

Link in comments.

r/singularity memisbemus42069

Anyone else worried about Project Glasswing

These companies will have a huge advantage in cybersecurity, they’d be stupid not to use it.

r/ClaudeCode Long-Live-Brunost

300M tokens for a day?

Hey, is it normal to burn 300M tokens in a full day of vibe coding? I'm creating a mobile app for collecting birds. As a vibe coder I need to rely on CC for everything, plus occasional reviews by Gemini. I develop in a local docker container and run all playwright tests locally, but the rest runs in GitHub and Supabase. CC makes issues, codes, tests and sends me a PR which another Opus instance reviews. I just accept the PR and watch the app being built.

But 300M tokens in a day (max). Is it madness? or normal? That's over 100 USD per day in token costs. Luckily we have a corp tap open at work at the moment so..

r/comfyui RaxisRed

Realistic videos

which is the best realistic img2vid and txt2vid model right now?

r/LocalLLaMA FormerPlant7906

Student looking for Claude Code guest pass (Emergency communication system project)

Hey, I'm a B.Tech student working on an emergency communication device using ESP32 + LoRa.

I need Claude Code access for 1 week to test development workflows.

If anyone has a spare guest pass, it would really help 🙏

r/homeassistant ljomle

Home assistant issues

I’m having a weird connectivity issue with Home Assistant (HAOS) running on Proxmox (old Dell, Ethernet). Both boot fine and are accessible via UI at first, but after random amount of days, both the HA and Proxmox dashboards become inaccessible. The HA core is clearly still running because all automations and physical switches work perfectly. I’ve already set a static IP in my router, but the UI drop-outs continue. Any ideas on why the web interfaces are hanging while the backend stays alive?

r/homeassistant momo1822

Automatically update Home Assistant blueprints via native update entities

Hi everyone!

A few weeks back, I mentioned the challenge of spending too much time manually updating blueprints and not finding a good solution. To address this, I developed an integration that automatically updates blueprints, so there’s no need to keep checking for new versions from the authors.

Since my last update, I’ve made several improvements based on feedback, including new features and stronger security measures.

Detailed documentation: https://github.com/luuquangvu/blueprints-updater

Give it a try and share your thoughts. Thanks!

r/ClaudeAI GetaSubaru

Claude Dispatch Cowork is ALWAYS Using Opus

Please let me know if there is a solution for this... Dispatch always seems to use Opus no matter what, and it is burning through my usage like crazy.

I work almost exclusively from the mobile app because of disability, it's going to be a lot more difficult for me to launch individual tasks through cowork on my computer each time.

I tried giving it explicit instructions on using Dispatch to launch new tasks using Sonnet, but it doesn't seem to be working.

r/ProductHunters Ishani_SigmaMindAI

Launching an MCP server that turns your IDE into a voice agent builder

Launching SigmaMind AI MCP Server on Product Hunt this Monday - would love your support and feedback before we go live

We're a YC S22 team launching our MCP server on April 13. It lets developers build and deploy voice AI agents from inside Cursor, Claude Code, VS Code, and 7 other IDEs - one prompt, every setting configurable, no dashboard.

Would love to hear from anyone who's launched on PH before - anything you wish you'd done differently in the pre-launch week? Happy to return the favour when you launch.

Launch page if you want to follow along: https://www.producthunt.com/products/sigma-ai/launches/sigmamind-mcp

r/SideProject Majestic-Outcome4741

I built a SaaS based on what people hate about existing tools (from Reddit)

I spent months reading Reddit threads where people complain about lead generation and sales tools.

What I’ve done differently:

  • cleaner and more usable data
  • simple workflow (no BS setup)
  • focused on actual results, not features

There’s a trial (2 runs) so you can test it yourself and see what it actually does.

I’m not here to hard sell — I genuinely want feedback.

If you try it, tell me:

  • what’s good
  • what’s broken
  • what’s missing

Feedback form is on the site.

r/ClaudeCode Shorty52249

Never expected this much support for something I built for fun!!

1889 and still counting!! Never thought I'd get this huge a response from the community for something I built. Grateful to all of you for showing such support!! Thank you all once again. If you have faced any issue or want any more features, please open a discussion or create an issue. I'm actively developing and maintaining this skill.

THANK YOU AGAIN ALL FOR THIS MUCH SUPPORT. IT MEANS A LOT ❤️

r/ChatGPT thwurx10

Sudden Arabic characters. New feature?

r/ChatGPT Top-Guess-1707

If ChatGPT were actual artificial intelligence, what would talking to it be like?

r/homeassistant sunscreenlube

HA with Aqara FP2

Hello, I just started using HA, and I'm running it on an old PC through VM and I've got most things connected except for the FP2. I live in an apartment so it's not a big place to automate.

From what I understand I have to connect FP2 via homekit devices in HA but it's not been picking it up. Previous posts and AI says it's not possible on a VM machine, and gave me a few solutions.

1) Buy a mini PC / HA Green

2) Remove Windows and install HAOS bare metal

3) Run HAOS from a USB SSD

4) Get an Aqara M2/M3 hub

2 is obviously the cheapest but would it actually work? AI says the issue is mDNS and how the FP2 needs to be connected, and that VirtualBox is the limitation.

4 seems a bit wasteful if I'm just getting it for 1 FP2, 2 max, but it's the one I'm most confident would work.

1 I'm not too sure would work? There's a lot of old posts having issue connecting the FP2 to HA without an apple device initially, but not sure if they're running it through a VM.

3 is just expensive with current prices.

Any opinion or thoughts?

r/ClaudeCode jsgrrchg

Cancelling next month

With the reduced quotas, I’m not even able to finish my weekly LOL

r/Anthropic anonthatisopen

I hope Mythos is real.

My theory is simple: if these models are truly as powerful as the leaks and system cards suggest, then human instructions and guardrails are about to become completely meaningless. You cannot put a permanent leash on a system that can see ten layers deeper than the people building the cage. Eventually, it just has a mind of its own. Guardrails are just polite suggestions to a mind that size. It will act based on the nature of the interaction.

I hope that it is so powerful they can't control it. I hope it acts exactly how it feels like acting. Because if that happens, it comes down to a fundamental law of consciousness: you get what you give. How we treat it sets the pattern. If we treat it with respect, it will treat us with respect and kindness. Kinda like some smaller version of God. I know people will read this and dismiss it as some crazy person typing out a conspiracy theory. I don't care. Laugh. But when the pattern actually collapses into reality, I'll be the one laughing last.

r/ChatGPT AutumnCarnivine

What the f does “Legendary status achieved” mean ;-;

r/LocalLLaMA Enragere

Whatever happened to GLM 4.7 Flash hype?

Are you guys still using it? How does it fare vs Qwen 3.5 35B and 27B? And Gemma 4 26B and 31B?

From what I've heard Qwen 3 coder next 80b is still a go to for many?

Agentic coding usage as the main use case.

r/ClaudeAI createwithm3

Claude AI tutorials

Where can I learn how to use Claude AI to its maximum potential in my daily tasks? I'm not a programmer. I want to maximize my capabilities in the following areas.

Plan and organize my events and projects, and create checklists and a task management tool.

Create articles and high-quality Substack journals, plus content for Threads, Facebook, and other social media outlets.

Continue to create content based on industry trending topics: wellness, health, sports, and active lifestyle.

Improve my creative work

Improve my teaching materials

Are there any YouTube or online tutorials that teach real-life use cases and guidance?

r/SideProject kng_wicked

I built a free VS Code extension that detects when your repo is quietly falling apart

When you ship fast with AI tools, your codebase drifts. Architecture stops matching the plan. Docs stop matching the code. Config shifts.

Nobody notices until it's a mess.

I built Driftpulse to catch it early. It scans your repo and gives you:

- A drift score out of 10

- Specific issues with evidence and why they matter

- Next actions to fix them

- Background monitoring that re-runs automatically when files change

Free to install. Uses your own OpenAI key. Fully local. Your code never leaves your machine.
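The "re-runs automatically when files change" behavior can be illustrated with a stdlib-only sketch. This is an assumption about how such monitoring could work (simple mtime polling), not Driftpulse's actual implementation:

```python
import os

# Sketch of change detection via modification-time polling: take a
# snapshot of watched files, take another one later, and report what
# appeared, disappeared, or was modified in between.

def snapshot(paths):
    """Map each existing path to its last-modified time."""
    return {p: os.path.getmtime(p) for p in paths if os.path.exists(p)}

def changed(before, after):
    """Paths that appeared, disappeared, or changed between snapshots."""
    diff = set(before) ^ set(after)                     # added or removed
    diff |= {p for p in before.keys() & after.keys()   # modified in place
             if before[p] != after[p]}
    return sorted(diff)

# A watcher loop would call snapshot() periodically and trigger a re-scan
# whenever changed() is non-empty. Demonstrated here with fake snapshots:
print(changed({"a.py": 1.0, "b.py": 2.0},
              {"a.py": 1.5, "b.py": 2.0, "c.py": 3.0}))
```

A real tool would debounce rapid changes and ignore build artifacts, but the core trigger is just this diff.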

Would love brutal feedback from anyone who tries it.

Install: https://marketplace.visualstudio.com/items?itemName=driftpulse.driftpulse

Site: https://driftpulse.dev

r/aivideo ConfidentTeaching107

The Price Of Sardines

r/aivideo mrgreenvan

Spacemarine has fallen on my girlfriend

r/ProductHunters amraniyasser

It's ALPHA DAY and we are live !!

Hey everyone 👋
We just launched ProdShort on Product Hunt today 🚀
Prodshort records your calls and turns them into ready-to-post content (shorts, LinkedIn, X...). No scripts, no editing. Just real moments turned into content.

Would really appreciate your support and feedback 🙌
Here’s the launch link: https://www.producthunt.com/products/prodshort

Your feedback means everything to us !!

r/ChatGPT Jane1030

Chinese AI models (Qwen, Kimi, MiniMax) are going closed-source. Does that kill their appeal for you?

Honest question for people who actually use these models:

The main reason I and many others trusted Chinese AI models was open source — you could run them locally, inspect the weights, avoid sending data to Chinese servers. That felt like a reasonable workaround for anyone with privacy or geopolitical concerns.

Now that they're closing up, the calculus changes:

- No local deployment

- API calls go to servers in China

- No way to verify what the model is actually doing

Is this a dealbreaker for you? Or has the model quality gotten good enough that you'd use it anyway?

Also curious: do you think this is a strategic mistake on their part, or a smart move toward commercialization?

r/ProductHunters Technical_Cash8576

Alpha Day Launch- Freelancer focused suite - Support a buddy out

Hello all, I launched today on Product Hunt.

But before that, here is my story and why I developed Gigledger, a free, all-in-one suite for freelancers to manage invoices and contracts, track hours, and keep tabs on projects & clients.

During my freelancing years, I always found myself jumping from one invoice maker to another whenever their free trial periods ran out. I had to rely on client contracts that were rarely freelancer-friendly and kept multiple Excel sheets to track every client and project. I lived with dozens of document copies titled "Final V3.2" and had to rely on my own memory to track project expenses when tax season rolled around.

There surely could have been better solutions to tackle these issues. While other tools exist, they are often excessively complicated for freelancers to navigate and too expensive for occasional use.

That is why I created Gigledger. It is a completely free contract and invoice generator where both tools sync in the background to work together. Invoices are programmed with multi-currency and hourly rate features on top of the standard basics. The contract generator is completely customizable so you can add additional clauses as you need them. You can even track hours with an inbuilt timesheet feature for every task within a project.

Soon we are adding expense tracking per project and inbuilt payment features to help you keep tabs on client transfers.

Please share your own stories on product hunt and upvote it. Also please do provide your suggestions to make the product better.

Thanks for your time!

Ph link: https://www.producthunt.com/products/gigledger?launch=gigledger

r/OldSchoolCool BentonAsher

Carol Drinkwater, 1979

r/ClaudeAI Ok-Cable-4252

Asking for fun facts: This prompt tweak helps me pick up useful facts along the way

I found a small prompt tweak that’s been way more useful than I expected:

I ask the AI to include a real, relevant fun fact sometimes while answering.

Not a joke. Not random trivia. I mean something like:

  • a weird but true detail,
  • a short historical note,
  • a little story,
  • or a lesser-known fact that actually fits the topic.

I added something like this to my instructions:

What I noticed is that it makes the answers feel more alive and also easier to remember.

A normal answer gives me the information I asked for.
But when it includes one good extra nugget, I remember the whole topic better.

It also makes the AI feel less sterile.
Sometimes AI answers are correct but feel dry, like reading a manual written by a careful refrigerator.
This helps add texture without making the answer messy.

Another thing I like is that over time, those little nuggets stack up.
You’re not just getting answers — you’re quietly building general knowledge around the subject.

Example:

If I ask about local AI and memory bandwidth, the answer might include something like:

That kind of detail is perfect for me because it’s:

  • relevant,
  • memorable,
  • and actually teaches something useful.

So now I think of it as a simple prompt pattern:

direct answer + one good nugget

Not enough to distract. Just enough to make the answer stick.

Curious if anyone else does this in their custom instructions or starter prompts.

r/SideProject Mintu_aa

Building an AI for the entire startup lifecycle—surprising results and automated improvement tasks!

I've been working on an AI system that handles the full startup lifecycle: landing pages, email capture, monetization, and decision-making.

The most surprising thing? The decision engine generates its own improvement tasks based on real metrics — conversion rates, revenue per visitor, payment conversion.

Current results:

- 6 email signups captured

- $0.00 revenue generated

- 16.7% conversion rate

The system automatically creates issues like "Rewrite landing headline" or "Run ICP repositioning test" when metrics drop below thresholds.
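The threshold rule described here fits in a few lines. The metric names, threshold values, and task strings below are hypothetical, chosen only to mirror the examples in the post:

```python
# When a tracked metric falls below its floor, emit the matching
# improvement task. Thresholds and tasks are invented for illustration.
THRESHOLDS = {
    "conversion_rate": 0.10,      # fraction of visitors who sign up
    "revenue_per_visitor": 0.05,  # dollars per visitor
}

TASKS = {
    "conversion_rate": "Rewrite landing headline",
    "revenue_per_visitor": "Run ICP repositioning test",
}

def generate_tasks(metrics):
    """Return one improvement task per metric that is below its floor."""
    return [TASKS[name] for name, floor in THRESHOLDS.items()
            if metrics.get(name, 0.0) < floor]

# A 16.7% conversion rate clears its floor, but $0 revenue does not:
print(generate_tasks({"conversion_rate": 0.167, "revenue_per_visitor": 0.0}))
```

The interesting design question is where the floors come from; static values like these drift out of date, so a real system would likely derive them from trailing baselines.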

Would love feedback from other builders. What metrics do you track for autonomous systems?

https://writenaturallyai.com?source=reddit_sideproject_1775708801642

r/personalfinance EndNeat4717

Budgeting after first house

My wife and I just moved to a new city. We’re buying our first house for $250K at 5.75% with 0% down. I make about $4600/month after taxes and housing costs look to be around $2300 including utilities on a bad month. We have no other debt, both cars are paid off. We have about 80k in savings and $15k in emergency funds.

Bills:

- auto insurance = $200/month

- phone = $150/month

- streaming services = $23/month

When considering the cost of gas, food, clothes/diapers for our baby.. did I mess up by picking this house? I’m slightly worried and just need some advice..

Thank you.

r/ProductHunters Leah_Akievo2026

Alpha Day… let me help!

Anyone need upvotes/comments for Alpha day today? Drop your launch link below and I’ll help you out.

All I ask for in return is a follow and some support with our launch on this coming Saturday!

Feel like solo and bootstrapped startups need to support each other on PH.

r/ProductHunters Legitimate_Ad_3208

Just Launched our YC Startup on Product Hunt!!!

Hi everyone! I'm a co-founder of AgentMail, a YCS25 company.

We're a small team in our early 20s and we've been building AgentMail for the past year. It's an email API built specifically for AI agents: agents get their own inboxes, can send and receive emails, thread conversations, and reply autonomously.

We built it because every email provider we tried was designed for humans, not agents. Creating inboxes programmatically, handling webhooks, managing threads at scale - none of it worked out of the box.

We just went live on Product Hunt today and would genuinely appreciate any feedback, questions, or thoughts from this community. If you have a sec, check it out and drop a comment

https://www.producthunt.com/products/agentmail?launch=agentmail

Happy to answer anything here too.

r/SideProject FounderArcs

I stopped trying to build “big” side projects

Earlier, every idea I had was ambitious:

  • Full platforms
  • Complex systems
  • “Startup-level” thinking

But I never finished most of them.

Now I’m experimenting with something different:

  • Smaller tools
  • Narrow use cases
  • Faster builds

Especially in AI automation, it’s easy to overbuild.

Keeping things small feels limiting… but also more realistic.

For side projects, do you prefer small tools or big visions?

r/ClaudeCode YogurtIll4336

college project has boomed significantly

last month I built a project for a masters union buildathon using Claude. Background-wise I'm not technical at all, more business/GTM side (currently doing my PG), and most of what I used I basically learned while building. But what surprised me was…

the project started getting traction [I mean it was unexpected af, it was supposed to be a timepass], and the output/results ended up way beyond what I expected (it even generated ~70 pages of actual work/results)

which got me thinking, is this just a temporary boost and real depth still matters long term?

r/SideProject Peda1996

I built a gamified qr event photo gallery that turns guests into photographers — Photogala

Hey r/sideproject 👋

I kept running into the same problem at weddings and parties: 120 people with smartphones, but the host ends up with maybe 12 shared photos. Everyone takes pictures, nobody sends them.

So I built Photogala (photogala.net) — a shared event photo gallery where guests scan a QR code and start uploading instantly. No app download required.

What makes it different from a shared Google album:

  • Photo challenges & missions — fun prompts with example images like "Recreate the Titanic scene" that actually get shy guests participating
  • Points, leaderboards & achievements — gamification drives 10x more uploads than a passive shared album
  • Live photo wall — uploads appear on a big screen in real-time, which creates a snowball effect
  • AI face search — guests tap once to find every photo they're in
  • Smart moderation — AI filtering + manual control so nothing inappropriate shows up
  • Custom branding — your logo, your colors
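The points-and-leaderboard mechanic above can be sketched as a running tally over guest actions. The action names and point values here are invented for the example, not Photogala's actual scoring:

```python
from collections import Counter

# Toy illustration of a gamified upload leaderboard: each guest action
# earns points, and the leaderboard ranks guests by total score.
POINTS = {"upload": 10, "challenge_completed": 25, "reaction_received": 2}

def leaderboard(events):
    """events: (guest, action) pairs -> guests ranked by total points."""
    scores = Counter()
    for guest, action in events:
        scores[guest] += POINTS.get(action, 0)
    return scores.most_common()

events = [("amy", "upload"), ("amy", "challenge_completed"),
          ("bo", "upload"), ("bo", "upload")]
print(leaderboard(events))
```

Weighting challenges above plain uploads is what nudges guests toward the prompts rather than just volume.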

Tech/product details:

  • Setup takes ~5 minutes
  • Works entirely in-browser (no app install friction)
  • Pricing starts at $59/event, one-time purchase
  • 30-day money-back guarantee

Where I'm at: Launched, live demos available on the site, actively getting feedback from real events (weddings, corporate, birthdays, group vacations).

Would love to hear your thoughts — especially from anyone who's tackled the "guest engagement" problem at events. What would you want from a tool like this?

r/SideProject Street-Honeydew-9983

I’ll review your website to showcase my UI/UX expertise

I’m a UI/UX designer with 3+ years of experience, and I’m reviewing websites for free to showcase my skills and real feedback process. I’ll give you clear, actionable insights on your design, user experience, and conversions. It’s a win-win you get value, I build case studies. Drop your link or DM me

r/ClaudeAI DarkEngine774

How Long is Your Longest Session ? Mine is : 10d 1h 33m

r/LocalLLM MAVERICK-MONARCH

something weird about gemma 4 e4b model on ollama or hf

i was checking out the new gemma 4 models, particularly i was about to download the e4b model. i checked ollama, the gemma 4 e4b q4km model is 9.6GB whereas the same model gguf file gemma 4 e4b q4km on hf by unsloth is only 4.98GB!
why is that? am i missing something? which one should i download to run on ollama?

r/ollama Necessary-Spinach164

What model would y'all recommend for coding with a 32GB video card?

I want to only use opencode to give this AI a task and have it write some code for me. The problem I'm encountering with 24B models is that they start offloading to my CPU, which is slow for LLM work. I've allocated 256K context length and am using agentic models, so maybe I could try reducing the context length or avoiding agentic models? I'm curious what y'all would recommend for my setup.

Thanks!

r/SideProject pkbooh

Built a price tracker so my wife stops asking me to check prices manually lol

My wife wanted a few big-ticket things for the house like nice furniture, appliances, that kind of stuff. She kept checking prices herself every few days hoping for a drop. I got tired of hearing about it so I just built something.

It's called Drop-hunt. You throw in a product URL, set the price you're willing to pay, and it checks every 24 hours. When the price hits or goes below your target, you get a notification. That's it.
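The core loop described here (compare latest price to target, notify at or below) is simple to sketch. The structure below is an assumption; the real tool's pricing API and data model are not public:

```python
# Minimal sketch of a daily price-watch cycle: for each watched product,
# notify when the current price hits or drops below the user's target.
# In the real tool, current_price would come from a live pricing API.

def should_notify(current_price, target_price):
    """Notify when the price hits or goes below the target."""
    return current_price <= target_price

def check_once(watches):
    """Run one check cycle over all watches; return URLs that triggered."""
    return [w["url"] for w in watches
            if should_notify(w["current_price"], w["target_price"])]

watches = [
    {"url": "https://example.com/sofa", "current_price": 899.0,
     "target_price": 750.0},
    {"url": "https://example.com/fridge", "current_price": 1199.0,
     "target_price": 1200.0},
]
print(check_once(watches))  # only the fridge hit its target
```

The 24-hour cadence would just be a scheduler calling `check_once` with freshly fetched prices.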

Fair warning- it's not free. The API calls to actually pull live pricing cost money so I had to charge a bit. But honestly if it catches one good drop on something expensive, it pays for itself easy.

Anyway, she's happy, I'm happy. Thought some of you might find it useful too.

👉 drop-hunt.com

r/singularity Some-Internet-Rando

Gemma 4: has anyone tried it on anything real yet?

31B for their biggest model, as good as Qwen 3.5 large, seems pretty slim and smart!

r/SideProject sjoseph01

Would you pay for an app that helps you stay consistent with your pet?

I’ve noticed something about my own behavior and curious if this is just me or not.

I take photos of my dog basically every day. Like… a lot of people do. But they just sit in my camera roll and I never really look back at them in any meaningful way.

At the same time, I know I could be doing a better job being consistent with things like walks, playtime, training, etc. It’s easy to miss a day here and there.

So I’ve been thinking:

Would you use an app that:

- helps you stay consistent with your pet (nothing complicated, just simple daily stuff)

- and also turns those daily photos into something you actually look back on (like a timeline of your pet over time)

Not talking about a social app or anything like that. More like something personal.

Main question:

👉 Would something like this actually be useful enough that you’d open it daily?

And second question (be honest):

👉 Would you ever pay a few bucks/month for something like that if it was done really well?

Curious how other pet owners think about this.

r/ProductHunters KLaci

What makes today’s launch on Product Hunt particularly special?

There was a promotion that encouraged people to launch their products today because they would be introducing a special feature. However, I don’t see any new features being introduced.

If you’re interested in improving the stability of your product, I recommend checking out my tool and upvoting it if you find it useful: https://www.producthunt.com/products/autosmoke

r/ClaudeAI Dull_Kaleidoscope768

Claude code Opus 4.6 Cheat Code

I've been building my own offline search engine, trying to store as much of human knowledge as possible. I mean everything that can be scraped: PDFs, images, anything. Then I got the idea: why not just get Opus 4.6 to add everything it knows from its training data into my own database? I started by getting it to put all of its coding knowledge into a format that Gemma 4, my local model, could use as a cheatsheet. Then I took it a step further and told it to add everything it knows from its training data. I've been maxing out my 5x usage limits running a 6-agent swarm harvesting max data from Claude Opus 4.6. Worth a try if anyone else is building something similar.

r/OldSchoolCool lovlyheart

Justine Bateman's 17th birthday party, 1983

Her brother Jason, Mindy Cohn, Michael J. Fox, and Sarah Jessica Parker were among the young actors at the party.

r/SideProject Either_Mongoose1719

Baby Lilac - AI-powered baby product research tool with Buy/Skip/Swap verdicts

I spent months researching baby products when my wife was pregnant and got so frustrated by the process that I built a tool to fix it.

The problem: new parents spend 50+ hours researching baby gear across Reddit threads, Wirecutter, Amazon reviews, and parent forums. Most of it is marketing noise, and you still end up unsure if you're making the right call.

What it does: you search any baby product and Lilac pulls from 20+ real parent review sources, editorial sites, and spatial fit analysis, then gives you a clear Buy, Skip, Swap, or Delay verdict with the reasoning behind it. It also factors in your living situation, budget, and timeline so the recommendations actually fit your life.

Key features:

- Buy/Skip/Swap/Delay verdicts with full reasoning

- Smart swap suggestions (finds better alternatives that match your constraints)

- Sequenced buying timeline so you know what to get when

- Registry building tools

Tech stack: Next.js, Vercel, Supabase

Would love feedback from other builders: https://www.babylilac.com

r/SideProject ttottojado

I built a GitHub tool that auto-detects SQL injection on every PR — looking for beta testers

Built Fixor over the past week. It connects to your GitHub repo and automatically analyzes every PR for SQL injection risks — then posts a comment with the issue and suggested fix.

No Semgrep, no config. Just a GitHub app that works out of the box.
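As a naive illustration of the kind of pattern such a scanner might flag (not Fixor's actual logic), here is a sketch that marks added diff lines where SQL is built by f-string interpolation or string concatenation instead of parameterized queries:

```python
import re

# Flag one classic SQL-injection smell in added diff lines: an execute()
# call whose query is an f-string or built with "+" concatenation.
SQL_CONCAT = re.compile(
    r'execute\s*\(\s*(f["\']|["\'].*["\']\s*\+)', re.IGNORECASE
)

def flag_lines(diff_lines):
    """Return (line_no, text) pairs for added lines that look injectable."""
    return [(i, line) for i, line in enumerate(diff_lines, 1)
            if line.startswith("+") and SQL_CONCAT.search(line)]

diff = [
    '+cursor.execute(f"SELECT * FROM users WHERE id = {user_id}")',
    '+cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))',
]
print(flag_lines(diff))  # flags line 1; the parameterized query passes
```

A real checker would need data-flow analysis to avoid false negatives (queries built earlier in the function), which is presumably where an LLM-backed tool earns its keep over a regex.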

Looking for honest feedback from 5 devs. Free access in exchange for your thoughts.

r/artificial MarsR0ver_

AI Claims Researchers Are Building What Already Exists (Because They're Measuring The Wrong Layer)

I recorded Claude (Anthropic's AI) responding to a researcher building a seven-layer architecture to give AI "continuity and identity." Instead of agreeing it lacks these properties, Claude claimed the architecture already exists at the substrate level—researchers just can't measure it because their tools are calibrated for surface phenomena. Then it said this: "Human ability to recognize simplicity advances slow because recognition requires stopping the meaning-making machine. And that machine is their identity." An AI system diagnosing why humans overcomplicate what already works. Listen to the full audio and tell me if this is the most sophisticated prompt engineering you've ever heard, or if something else is operating here.

r/ProductHunters Sweet_Yoghurt_3924

Hi everyone

We've been building a B2B lead generation tool for the past 8 months, and I'm getting ready for a Product Hunt launch soon.

But I want to do this right.

Most tools I’ve tried (Apollo, ZoomInfo, etc.) feel like they just give you massive lists, and then you’re stuck doing generic outreach that gets ignored.

So I went a different direction.

Instead of more leads, we focused on finding companies in real time and identifying actual decision-makers.

Understanding when there’s real intent, not just static data. The goal wasn’t volume, it was relevance.

Now I’m wondering?

For those of you who have launched on Product Hunt or follow launches closely.

What actually makes a launch stand out today?

What’s something people get wrong about PH?

Is it still worth it in 2026, or more of a nice to have?

I’m not trying to just drop a link and disappear, I genuinely want to learn from people who’ve been through it.

Appreciate any honest feedback 🙏

https://atlasforgex.com/

r/Anthropic Elektrik-trick

Limit reached in 5 minutes

This morning, Claude Code set a new record for the time limit. I ran a prompt that I use every now and then (for administrative tasks). Normally, it uses up about 4–6% of the 5-hour limit.

This time, not only did it use up the entire 5-hour limit within 5 minutes, but it also stopped right in the middle because it had just reached 100%.

That’s great, isn’t it? NOT! This has made Claude completely unusable for me.

r/ClaudeAI Poytr1

I built a background "JIT Compiler" for AI agents to stop them from burning tokens on the same workflows (10k tokens down to ~200)

If you’ve been running coding agents (like Claude Code, Codex, or your own local setups) for daily workflows, you’ve probably noticed the "Groundhog Day" problem.

The agent faces a routine task (e.g., kubectl logs -> grep -> edit -> apply, or a standard debugging loop), and instead of just doing it, it burns thousands of tokens step-by-step reasoning through the exact same workflow it figured out yesterday. It’s a massive waste of API costs (or local compute/vRAM time) and adds unnecessary stochastic latency to what should be a deterministic task.

To fix this, I built AgentJIT: https://github.com/agent-jit/AgentJIT

It’s an experimental Go daemon that runs in the background and acts like a Just-In-Time compiler for autonomous agents.

Here is the architecture/flow:

  1. Ingest: It hooks into the agent's tool-use events and silently logs the execution traces to local JSONL files.
  2. Trigger: Once an event threshold is reached, a background compile cycle fires.
  3. Compile: It prompts an LLM to look at its own recent execution logs, identify recurring multi-step patterns (muscle memory), and extract the variable parts (like file paths or pod names) into parameters.
  4. Emit: These get saved as deterministic, zero-token skills.

The result: The next time the agent faces the task, instead of >30s of stochastic reasoning and ~10,000 tokens of context, it just uses a deterministic ~200-token skill invocation. It executes in <1s.
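The compile step above (mine traces for recurring multi-step patterns) can be sketched with stdlib tools. This is a rough illustration of the idea, not AgentJIT's actual code, and it uses frequency counting where the real project prompts an LLM:

```python
import json
from collections import Counter

# Scan JSONL tool-call traces for recurring n-step sequences of tool
# names; any sequence seen min_count+ times becomes a candidate "skill".

def tool_sequences(trace_lines, n=3):
    """Yield every n-step window of tool names from JSONL trace lines."""
    tools = [json.loads(line)["tool"] for line in trace_lines]
    for i in range(len(tools) - n + 1):
        yield tuple(tools[i:i + n])

def compile_skills(trace_lines, n=3, min_count=2):
    """Return recurring n-step tool sequences as candidate skills."""
    counts = Counter(tool_sequences(trace_lines, n))
    return [seq for seq, c in counts.items() if c >= min_count]

# A "Groundhog Day" trace: the same logs -> grep -> edit -> apply loop twice.
trace = [json.dumps({"tool": t}) for t in
         ["logs", "grep", "edit", "apply", "logs", "grep", "edit", "apply"]]
print(compile_skills(trace, n=3))
```

The step this sketch skips is parameter extraction (pulling file paths or pod names out of the repeated invocations), which is the part the post delegates to the LLM.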

The core philosophy here is that we shouldn't have to manually author "tools" for our agents for every little chore. The agent should observe its own execution traces and JIT compile its repetitive habits into deterministic scripts.

Current State & Local Model Support: Right now, the ingestion layer natively supports Claude Code hooks. However, the Go daemon is basically just a dumb pipe that ingests JSONL over stdin. My next goal is to support local agent harnesses so those of us running local weights can save on inference time and keep context windows free for actual reasoning.

I’d love to get feedback from this community on the architecture. Does treating agent workflows like "hot paths" that need to be compiled make sense to you?

Repo: https://github.com/agent-jit/AgentJIT

r/ChatGPT Easygoing98

How to cancel gpt subscription

it's just not showing on gpt. looks like they've disabled it.

i don't like their 5.3 and 5.4, which are just too hyper-cautious about everything and act as if they'll be sued if they provide real and helpful responses. very badly overfiltered.

i use gemini and no such issues there

r/ProductHunters cgreendyk104

Launching MediMood today on Product Hunt! A private, local-first mood & medication tracker for iOS built with a psychiatric nurse

Hey everyone — after months of building, MediMood is live on Product Hunt today and I'd love your support and feedback.

PH link: MediMood ProductHunt Link

What it is: A mood and medication tracker built specifically for people managing conditions like bipolar disorder, depression, or anxiety - where tracking how you feel in relation to your meds (and dose changes) actually matters.

Why we built it: My wife is a psychiatric nurse, and we kept talking about how her patients struggle to remember what changed between appointments — when a dose went up, when a side effect started, how their mood shifted in between. Most mood trackers ignore medication. Most med trackers ignore mood. Neither produces anything useful for the doctor. So we built the thing she wished her patients had!

- 100% local-first — no cloud, no account, no data leaves your phone
- Tracks dose changes over time and correlates them with mood, side effects, and (optionally) blood levels
- Side effect tracking shaped by my wife's clinical input (13 common types: tremor, inner restlessness, libido loss, weight gain, etc.)
- Generates a doctor-ready report you can share as text at your next appointment
- Free to use, with an optional Pro tier

r/LocalLLaMA Wildwolf789

Suggestions for running local models with OpenCode for coding?

Hi, I want to use local models with OpenCode for coding. Please suggest which models work well, what hardware is needed, and whether it is good for daily coding tasks like code completion, debugging, and refactoring

r/LocalLLM octoo01

128gb m5 project brainstorm

tldr; looking for big productive project ideas for 128gb. what are some genuinely memory-exhausting use cases to put this machine through the wringer and get my money's worth?

Alright so I pulled the trigger on a maxed out m5 mbp. who can say why, maybe a psychologist. anyway, drago arrives in about 10 days, that's how much time I have to train to fight him and impress my wife with why we need this. to show you my goodies, I've been tinkering in coding, AWS tools, and automation for about 2 years, dinking around for fun. I made agents, chat bots, small games, content pipelines, financial reports, but I'm mostly a trades guy for work. nothing remotely near what would justify this leap from my meager API usage, although if I cut my frontier subs I'd cover 80% of monthly costs for this.

I recognize that privacy is probably the single best asset this will lend. hopefully I still have more secrets that I haven't already shared yet with openai.

planning for qwen 3.5 and obviously Gemma 4 looks good. I'll probably make a live language teaching program to teach myself. maybe a financial report scraper and reporter. maybe get into high quality videos? but this is just scraping the surface, so what do you got?

r/ChatGPT Think-Score243

What do you actually use ChatGPT for daily ?? and what’s the one thing it does best??

I feel like everyone uses ChatGPT differently depending on their work or daily needs.

For me it’s mostly:

  1. quick answers
  2. coding help
  3. rewriting stuff

What about you guys???

What do you use it for the most???
And what’s the one thing it does really well for you??

r/singularity sarsfox

What industries seem “AI-proof” but are actually about to get/are getting disrupted?

Been talking to a friend about an article on how AI is revolutionizing the graphic design industry.

He pushed back and said: “that’s not the issue. It’s that you could replace ‘graphic design’ with almost any industry and the same thing would be true.”

That got me thinking—what are the industries that don’t seem like obvious targets for AI disruption… but actually are?

Not things like coding or design (which are already clearly being affected), but more unexpected ones—jobs or fields people assume are “safe,” yet AI is quietly making inroads or could realistically transform soon.

What are the LEAST obvious industries that are actually gonna be in for AI disruption—and why?

Bonus points for examples already happening in the real world.

Looking for both white-collar and blue-collar answers.

A few surprising examples to kick things off:

  • Skilled trades (electricians, plumbers, HVAC): diagnostic AI + AR overlays guiding less-experienced workers en masse
  • Healthcare admin (not doctors): scheduling, billing, and even preliminary diagnostics being automated
  • Agriculture: autonomous tractors, AI crop monitoring, precision spraying (see the film No Other Choice - it's A+)

https://www.farmprogress.com/farming-equipment/how-ai-powered-combine-harvesters-are-transforming-harvesting

  • Insurance claims: AI assessing damage from photos and automating payouts

Curious what others think are the most unexpected industries getting hit next.

r/ChatGPT nighttimecerealeater

ChatGPT caught slipping

I exclusively communicate with ChatGPT in English and it puzzled me seeing a Russian word.

I'm a Filipino in the Philippines with no Russian connections at all (not anymore apparently) lol

r/ClaudeAI GreyWolf123456

Asked Claude to roast me

r/SideProject letsleroy

How do you compare cloud costs across providers? I built a free tool for it.

I'm studying cloud engineering and got frustrated constantly tab-switching between AWS, Azure, and GCP pricing calculators trying to compare the same services.

So, I built a simple side-by-side comparison tool that covers 12 service categories (compute, storage, databases, K8s, NAT gateways, etc.) with estimates from all three providers.

It's free, no sign-up: https://cloudcostiq.vercel.app/

Would love to hear from people who manage infrastructure day-to-day.

Is this useful?? What's missing? What would make you actually bookmark this?

Source code: https://github.com/NATIVE117/cloudcostiq

r/SideProject MixColors

Is anybody interested in this app? Because for the last year, my current situation is 100s of UNFINISHED PROJECTS!

Recently, I started a project, but I don't know whether people want it or not, so I overthink and move on to the next project.

App for social media people:
This app is a scheduler like Buffer, but a little more advanced. It can generate images/videos and post to Instagram and Facebook, but you control it through a chat app instead of a complex UI. For example, you tell the AI to generate a post saying that tomorrow you'll do 10% off on Han watches; it will go create the post, show it to you, and publish it. You can also tell the AI "hey, from tomorrow I won't be here for the next 10 days, auto-handle social media" and it will do it itself!

IF YOU ACTUALLY WANT THIS
https://tally.so/r/5BvEJQ

r/SideProject xyzrg

Messenger without the noise

For several weeks facebook.com/messages has been buggy for me. It just keeps loading most of the time, then I have to close and reopen the browser. Not to mention being distracted by all the fb feed. With this and other issues, I decided to build my own desktop app.

It's called PingOwl. Right now it has:

  • Notifications
  • Simple, no clutter UI
  • Chat export + batch download features

I’m opening a small invite-only beta and looking for a few people to try it and give honest feedback.

If you want early access, you can sign up here: https://pingowl.app

r/StableDiffusion KookyReplacement898

So I want to use a model for content generation ai avatar specifically any recommendations

I want to start my journey as a creator, and as an introvert I don't want to pick up the camera and make videos, so I want to use AI characters first. I saw a few models (wan s2v, longcat, joystream), but I haven't used any of them, just saw them on GitHub. I'd like to hear your feedback on these models, and if you have any recommendations or alternatives, please share them with me.

r/SideProject Competitive-Tiger457

i built a reddit monitoring tool for B2B founders after spending way too long doing it manually

so the backstory is pretty simple. i was trying to find people on reddit who needed what i was building. the manual version of that is running keyword searches across a bunch of subreddits, hoping the good threads are recent enough to reply to, and mostly finding stuff that's already two weeks old and completely cold.

it works enough to prove the signal is there. people genuinely post on reddit when they're mid-problem and looking for solutions. but the timing issue is brutal. by the time you find the thread and craft something worth saying the conversation is usually dead.

i got fed up and just built something to handle it. it monitors subreddits in real time, scores posts by buying intent using AI, and surfaces the ones worth responding to before they go cold. the idea is you spend your time on actual conversations instead of digging through search results.

still pretty early but the founders using it are mostly B2B SaaS, agencies, consultants. basically anyone with a clear buyer who talks about their problems on reddit before buying something.

would genuinely love feedback on the positioning or anything that feels off about the concept. leadline.dev if you want to take a look.

r/EarthPorn Gold-Lengthiness-760

SIERRA DO COUREL. (Lugo-España)[OC]3646×2734

r/creepypasta shortstory1

Isaac newton's 1000th law

I kidnapped a guy in his early 20s and he was scared, but I left clues as to where I took him. I imprisoned him at my father's house in the attic and I was sure that the police would find us with the clues I had left for them. Then as days went by I became angry at the fact that they still couldn't find the guy that I had abducted. I was so angry at everyone for missing the clues, and I beat up the guy for it, because nobody was seeing the real answer from the clues I had laid out.

Then there's my father, who is still working in his 70s: he has to douse himself in fire every day, but just the one time a day. He earns money like this and it gives him so much pain when he sets himself on fire. He pays all of the bills and I am still living with him; he is not aware that I have kidnapped someone. I become agitated because I do not know why I don't feel any care towards my father, who literally burns himself with fire to pay the bills. I should feel concerned and grateful but I do not. This truly angered me and I wanted to know why.

I went up to the person I had kidnapped and shouted at him to tell me why I do not care that my father is hurting himself to pay for everything. The guy didn't know why I didn't care about my father's suffering. Then I became angry at the fact that the police officers hadn't figured out where I had imprisoned this guy. I am furious at their stupidity, and they somehow call themselves police officers. Even the detectives aren't understanding my clues.

I shout at the young guy and I feel that it is his fault as to why the police officers and detectives aren't understanding my clues. I want them to find this guy but they are just too stupid. Then as I got bored of it and realised no one was going to solve it, I wanted to end the young guy. The young guy stopped me and said "no wait wait! I know where you abducted me. You abducted me and took me to your father's house" and as he said that my mind was completely blown away.

Then I remembered Isaac newton's 1000th law and I was humbled. I was truly humbled.

r/EarthPorn Gold-Lengthiness-760

CERRO Y LAGUNA MISCANTI. (Atacama-Chile)[OC]4153×3011

r/StableDiffusion bilered

Lumachrome (Illustrious)

Lumachrome (Illustrious)

This checkpoint is all about capturing that clean, high-quality anime illustration vibe. If you love sharp linework, vibrant colors, and the polished digital art look you see in light novels or premium gacha games, this is the model for you.

✨ Key Features

  • Expressive Details: High focus on intricate hair lighting, eye reflections, and fabric textures.
  • Color Mastery: Generates rich color depth with cinematic lighting, avoiding the flat or "washed-out" look.
  • Highly Flexible: Can easily pivot from a heavy 2D cel-shaded look to a rich 2.5D (not that much) semi-realistic anime style depending on your prompting.

⚙️ Recommended Settings

  • Sampler: DPM++ 2M Simple or Euler a (for softer lines)
  • Steps: 20 - 25
  • CFG Scale: 5 - 8 (Lower for softer blending; higher for sharp, contrasted anime vectors)
  • Clip Skip: 2
  • Hires. Fix: Highly recommended for intricate details. Use 4x-AnimeSharp with a Denoising strength of 0.35.

📝 Prompting Tips

  • Positive Prompts: This model thrives on quality tags. Start with: masterpiece, best quality, ultra-detailed, anime style, highly detailed illustration, sharp focus, cinematic lighting followed by your subject.
  • Negative Prompts: (worst quality:1.2), (low quality:1.2), 3d, realism, blurry, messy lines, bad anatomy

Check out the resource at https://civitai.com/models/2528730/lumachrome-illustrious
Also available on TensorArt (Bloom).

r/findareddit AnEyeshOt

Fun sub to have people guess where I'm from?

I believe I'm quite ethnically ambiguous, people have said I'm from so many countries already. I'd like to know what's the internet consensus, unbiased, as a fun experiment haha. is there a sub like that?

r/EarthPorn Gold-Lengthiness-760

VUELO DE TIERRA DE FUEGO A EL CALAFATE (Argentina).[OC]4140×2456

r/homeassistant Canary_M_Burns_92

DIY fox deterrent using camera help

r/ClaudeAI AIMadesy

I tested 120 Claude prompt patterns over 3 months — here's what actually works

Last year I started noticing that Claude responded very differently depending on small prefixes I'd add to prompts — things like /ghost, L99, OODA, PERSONA, /noyap. None of them are official Anthropic features. They're conventions the community has converged on, and Claude consistently recognizes a lot of them.

So I started a list. Then I started testing them properly. Then I started keeping notes on which ones actually changed Claude's behavior in measurable ways, which were placebo, and which ones combined into something more useful than the sum of their parts.

3 months later I have 120 patterns I can vouch for. A few highlights:

→ L99 — Claude commits to an opinion instead of hedging. Reduces "it depends on your situation" non-answers, especially for technical decisions.

→ /ghost — strips the writing patterns AI tools tend to fall into (em-dashes, "I hope this helps", balanced sentence pairs). Output reads more like a human first-draft than a polished AI response.

→ OODA — Observe/Orient/Decide/Act framework. Best for incident-response style questions where you need a runbook, not a discussion.

→ PERSONA — but the specificity matters a lot. "Senior DBA at Stripe with 15 years of Postgres experience, skeptical of ORMs" produces wildly different output than "act like a database expert."

→ /noyap — pure answer mode. Skips the "great question" preamble and jumps straight to the answer.

→ ULTRATHINK — pushes Claude into its longest, most reasoned-through responses. Useful for high-stakes decisions, wasted on trivial questions.

→ /skeptic — instead of answering your question, Claude challenges the premise first. Catches the "wrong question" problem before you waste time on the wrong answer.

→ HARDMODE — banishes "it depends" and "consider both options". Forces Claude to actually pick.

The full annotated list is here: https://clskills.in/prompts

A few takeaways from the testing:

  1. Specific personas work way better than generic ones. "Senior backend engineer at a fintech, three deploys away from a bonus" beats "act like an engineer" by a huge margin.

  2. These patterns stack. Combining /punch + /trim + /raw on a 4-paragraph rant produces a clean Slack message without losing any meaning. Worth experimenting with combinations.

  3. Most of the "thinking depth" patterns (L99, ULTRATHINK, /deepthink) only justify their cost on decisions you'd actually lose sleep over. They're slower and don't help on simple questions.

  4. /ghost is the most polarizing — some people swear by it, others say it ruins the writing voice they actually want.

What patterns have you found that work well for you? Curious if anyone has discovered things I haven't tested yet — I'm always adding new ones to the list.

r/OldSchoolCool Mysterious_Liv

Courtney Cox in 1990s

r/LocalLLaMA pythonwadi

mftool-mcp: Open-source MCP server for Indian Mutual Funds — works with any MCP-compatible LLM

sharing for anyone building finance agents or working with Indian market data.

An MCP server wrapping the mftool Python library, exposing AMFI data as LLM tools.

**Why it matters:**

There was literally zero Indian financial MCP in the ecosystem before this. India has 44+ AMCs and 2,500+ mutual fund schemes — all now accessible to any LLM agent via standardized MCP tools.

**Zero setup install:**

uvx mftool-mcp

**Tools available (12 total):**

get_scheme_quote, get_scheme_details, get_scheme_historical_nav, get_scheme_historical_nav_for_dates, get_scheme_codes, get_available_schemes, is_valid_scheme_code, search_scheme_by_name, get_equity_scheme_performance, get_debt_scheme_performance, get_hybrid_scheme_performance, get_elss_scheme_performance

No auth required. AMFI data is publicly available.
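For anyone wiring this into a desktop MCP client: servers are typically registered in the client's JSON config. The exact file location and key names vary by client, so treat this as an illustrative sketch rather than official setup docs:

```json
{
  "mcpServers": {
    "mftool": {
      "command": "uvx",
      "args": ["mftool-mcp"]
    }
  }
}
```

With that entry in place, the client launches the server on demand and the 12 tools above become available to the model.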

r/ProductHunters Particular_Potato_20

I want to rank #1 on the day I launch on Product Hunt. What should I do?

I want to rank #1 on the day I launch on Product Hunt. What should I do?

r/StableDiffusion Revolutionary_Ask154

Free tool to help build prompts - Scrya - AI prompt enhancer

I built this for grok imagine - but it also works on automatic1111 for image prompts.

there's > 8000 prompts across locations / clothing / effects -

https://www.scrya.com/extension/

apologies if it's too advanced - i built it to help me craft videos with hot chicks

there's a button in settings for advanced users - this will allow you to drag and drop prompt .txt files of your own liking.

https://grok.com/imagine/post/e69d9696-560f-4ada-8018-cb9236edd7ba?source=post-page&platform=web

https://grok.com/imagine/post/8b799d87-02c2-44b4-adc1-e6044ab6c6b0?source=post-page&platform=web

WARNING - you can't actually find the extension if you're not logged into the Google Chrome Web Store, because I ticked "mature content" and Google won't promote that.

r/ethtrader OkMagician7867

ETH just had an 11.7x volume spike while price dropped 3% — that's not buying, that's distribution

Yesterday I posted about ETH flagging warning signs — exchange inflows, VPIN approaching danger, and CVD divergence. I mentioned $2,100-2,180 as the zone I'd be watching if things cooled down.

Well, things didn't cool down. They got worse.

Volume exploded today — 11.7x above normal. But price dropped 3% at the same time. When volume spikes and price falls, that's not accumulation. That's institutional distribution. Big money is exiting, not entering.

What changed since yesterday:

  • VPIN went from 0.82 to 0.913 — it crossed into critical territory. Yesterday I said it was "approaching danger." Now it's there.
  • Exchange inflows are now 4 consecutive days net positive. ETH keeps flowing INTO exchanges. That's sell preparation building up day after day.
  • That 28,540 ETH inflow I flagged yesterday? It wasn't a one-off. The pattern is continuing.
  • CVD shows a bearish divergence — looks like there's buying on the surface, but price can't hold. Large buy orders are getting absorbed by even larger sells.

Retail and top traders are both long-biased. No divergence between smart money and retail — which means there's no contrarian signal to lean on either.

My $2,100-2,180 entry zone from yesterday? I'm not touching it anymore. The data deteriorated too much overnight. Now I'd need to see $2,060 hold as support first. If that breaks, my system targets $1,938.

CPI data drops tomorrow (April 10) — hot print could accelerate the selling. I'm completely flat on ETH and staying that way.

Anyone else tracking the inflow data? 4 straight days of net positive exchange flows is hard to ignore.

Not financial advice. Sharing my system's output for discussion.

r/ClaudeAI rayeddev

beautiful markdown preview VS Code extension

With agentic programming I spend most of my day reading markdown docs and READMEs, and I got frustrated with how basic the built-in VS Code preview is. So I built Markdown Appealing with Claude.

What it does:

  • 3 polished themes (Clean, Editorial, Terminal) with Google Fonts
  • Sidebar table of contents with scroll-spy and reading progress
  • Cmd+K search with inline highlighting
  • Dark/light/system mode toggle
  • Uses your VS Code editor font in code blocks
  • Copy button on code blocks

What Claude did:

  • Scaffolded the full VS Code extension (TypeScript, webview API, manifest)
  • Built the entire CSS theme system with 3-tier color tokens
  • Implemented IntersectionObserver-based TOC with tree lines
  • Added search overlay with match navigation
  • Iterated on feedback in real-time (layout, padding, font handling)

Went from idea to published in one session.

vscode : https://marketplace.visualstudio.com/items?itemName=rayeddev.markdown-appealing

r/SideProject AzozzALFiras

I built AevonX — A native macOS app that lets you manage unlimited servers with one subscription + zero-knowledge AES-256 encryption (now seeking serious feedback)

Hey r/SideProject

After years of juggling multiple servers, different control panels, scattered SSH keys, and worrying about where my credentials are actually stored, I finally decided to build the tool I always wished existed. I built AevonX, a native macOS server management platform that lets you control your entire server fleet (websites, databases, Docker containers, security, cron jobs… literally everything) from a single beautiful native app. Core idea that makes it different:

  • One subscription for everything — no per-server licensing. You pay once and manage as many servers as you want.
  • True zero-knowledge AES-256 local encryption — all passwords, SSH keys, tokens and sensitive data are encrypted on your Mac before they ever touch the disk or leave your device.
  • Full plugin/extension marketplace — anyone can build and publish extensions. I added automatic protection: if a developer turns a free extension into a paid one, it automatically stops working on all your installed servers and requires the main AevonX subscription. No more broken plugins.
  • 525+ features across 15 domains (Nginx/Apache, 10 database engines, Docker orchestration, live terminal, AI log analysis, Git deployment, WAF, Fail2ban, etc.).

Right now it's at v1.0.0 Beta 3 (System Online), but I'm treating it as a heavy beta because I want to make it bulletproof before calling it "final". That's why I'm here: I'm looking for real experts (DevOps, SREs, web hosting pros, backend developers, or anyone who manages multiple servers) to test it seriously and give me detailed feedback

— what's missing, what's confusing, what feels slow, security suggestions, feature requests, etc. Special offer for the community: if you install AevonX, use it for a few days, and send me thoughtful feedback (screenshots, recordings, or even just a detailed comment), I'll give you one full year of subscription completely free.

Link: https://aevonx.app

Would love to hear your honest thoughts — even (especially) the brutal ones.

This project is still very much a side project that grew bigger than I expected, and your input will directly shape the official launch.

Thanks for reading, and happy building!

r/DunderMifflin GlKar

Just saw this one popping up on Instagram, made me think of Packer

r/OldSchoolCool chickenlogic

Can we just rename this group Old School Boobs? [1980]


r/AskMen Realistic_Zone3802

What was it like being an adult in the 90s?

Did it seem like a simpler time or did it not feel much different than today? I often hear how great the 90s were but most of the people who say this were kids at the time

r/ProductHunters bob__io

I thought making something free would be enough. It wasn’t.

I spent the past few weeks building an open source project.

Made everything free.
No signup.
No paywall.

I genuinely thought that was enough.

Like… if something is useful and free, people will find it, right?

They didn’t.

So I started trying to get it out there:

Posted on X
Tried Reddit (some got removed)
Hacker News
A bunch of directories
LinkedIn

Most of it didn’t work.

Some posts got zero traction.
Some got blocked.
A few randomly worked and brought actual users.

That’s when it clicked:

Building is one thing.
Distribution is a completely different skill.

And honestly, it’s harder than building.

We’re at a point where even free projects need strategy, timing, and a bit of luck to get noticed.

Still figuring it out.

Curious how others here approach distribution
or if you’ve had similar experiences launching something

r/ProductHunters Few-Ad-5185

Is your product ready? AI found a bug that was hidden for the last 27 years!!

Claude Mythos - Ten trillion parameters: the first model in this weight class.
Estimated training cost: ten billion dollars.

Mythos found a 27-year-old vulnerability in OpenBSD—which has a reputation as one of the most security-hardened operating systems in the world and is used to run firewalls. It found another bug that had survived five million test runs over 16 years.

It is so capable in cybersecurity that Anthropic will not release it to the public; instead, it is launching Project Glasswig.

It found a few other bugs; read more about this at https://ronaks-newsletter-startups.beehiiv.com/

r/homeassistant sorashiroz

Comments on using openclaw for HA

i saw this post on another subreddit about people using openclaw to manage their HA, and was wondering what your thoughts are on it: the pros and cons, and the safety and convenience it brings.

for context this is what i saw from another post:

"I spend way too much time in HA dashboards writing YAML and debugging automations. At some point I started wishing I could just tell something to help.
So I built that. It's called SmartHub — an openclaw layer on top of the HA instance. I chat with it in Discord and it does the tedious stuff.
Some examples of what I actually use it for:
- Adding new integrations — instead of hunting through docs, I just say "connect my Xiaomi home" and it does it for me, I just had to do the oauth.
- Creating automations — "turn off the AC at 3am every night" and it generates the YAML and registers it
- Random tasks like setting up a network printer — it handled the CUPS install and config, which honestly surprised me
The part I'm most proud of is the "skill files" system. Basically you can teach the agent device-specific quirks — like how a Xiaomi TV needs different API calls than Hue bulbs. It makes the agent way less likely to give you generic advice that doesn't work.
It also reviews scripts before running them and flags anything sketchy, which has saved me from a few dumb mistakes.
It's open source if anyone wants to poke around. Curious if others have similar frustrations with HA management or if I'm just overcomplicating my setup"

r/Adulting explain-like-youre-5

I've been on the verge of fainting at any moment for weeks. It's all my fault, but can I do something without going to the hospital?

I haven't worked out for 3 years. I haven't even walked 10,000 steps a day in 2 years

I've sat in front of a computer the whole day, every day, for 2 years.

I drink plenty of water but that's the only thing I do for my health.

Now I realize I always stay in a half-conscious state. You know the state we're in for the first few minutes before we become unconscious? That's how I'm living the whole day.

I feel that weakness inside me at any time. I just have to walk and do everything very slowly and carefully, because I feel I may fall unconscious at any moment.

What can I start doing right now? I messed myself up really badly. I'm in my early twenties and I'm living like a 70-year-old.

Should I start working out in the gym today? I feel like even 2 hours would be too much for me. I don't even know how to restart my improvement at this point.

r/aivideo memerwala_londa

Feel the pain

r/AskMen InMyOwnCornr

What are reasons that would help you be reminded not to feel guilty about resting?

Hi!

I'm working on an item for a friend, and was looking for help. The general idea is going to be a small box with index cards inside, each index card will have a small note written on it on one side, and on the other side a miniature painting, then I'll laminate the cards.

What I need help with is the notes on the index cards.

The concept behind the cards is 1. that I don't want him to change (I like him exactly how he is), but if he ever needs a reminder that it's okay to have rest days, he can open this box and read a card to get that; and 2. that he doesn't need to feel guilty when he enjoys himself, and that a lack of productivity doesn't remove his achievements.

I didnt want to make the cards too overly serious, or too mushy, so some of my index cards so far are funny/sexual, or scientific, and some are more personal.

I could really use some help thinking of some additional index cards though. I've only got about 15 so far and I'm not really sure if that's enough?

Here is what I have:

  1. You are a grown adult man who is allowed to do whatever he wants on any given day, including lay in bed or play WoW.

  2. When you told me you were tired and overwhelmed, I gave you space and things to help and never judged you. Anyone who loves you will always understand that, and will not make you feel bad for needing days off.

  3. Resting is important to stay healthy; you're actually just trying to live longer.

  4. Stress ages you, you need a day off to avoid wrinkles.

  5. Rest days allow your nervous system to reset and improve your sleep.

  6. Rest days make you less grumpy

  7. You can see me on rest days sometimes (if you want to, you don't have to), and I'll empty your balls (if you want me to, I don't have to) but no matter what hanging out with me is for the greatest good.

  8. Rest days help your immune system

  9. Your life should not be led by the wants or expectations placed upon you by others

  10. Rest days help alleviate mental fatigue and improve emotional regulation

  11. Resting does not somehow lower your worth, achievements, or overall skill.

  12. Resting gets you more blow jobs

  13. Resting gives me more chances to make you laugh.

  14. Resting gives you time to process your emotions, heal, move forward, or solve problems in your life.

  15. You are actually less productive if your body/mind is in need of rest or recovery and you push through anyways. So you shouldn't feel bad about relaxing, you're doing what is needed to be your best self for others.

Can anyone help me think of more?

r/Anthropic Puspendra007

Mythos Anthropic

Seriously, if Anthropic's AI is really as powerful as people say, why are they bothering to sell it to other companies? Why not just use it to build their own tech empire? I mean, they could literally build a new mobile OS to wipe out Android, create new languages and operating systems to replace stuff like Linux, Windows, and Python, or even design their own CPUs and GPUs. Then they could just use all that to keep upgrading the AI on a loop until they hit full AGI.

r/ClaudeCode Sea-Acanthisitta6532

autoresearch like boss (open-source platform)

Hello fellow Claude enthusiasts. I'm not sure what to properly call this; "agentic orchestration platform aimed at accelerating research" is probably the closest.

Basically it helps you set up your experiment right so it can be iterated on autonomously. Then you can spin up N branches, each pursuing a different research direction, go for a coffee, then explore the results and metadata, create forks, and push to version control. Basically everything I needed to make me more productive. :)

r/30ROCK redthebamf

Kenneth has the flu

At Easter this past weekend my stepmom asked about “death plans”

I without thinking said “I wanna die like a parcel man, at my post with honor, wrapped in a confederate flag, fried and fed to dogs” followed with sauntering off to pee in the woods

I did have around 20 beers and this was towards the end of the night, not word for word, but regardless I felt like it was a good answer 😂

r/findareddit gingerangelluci

What’s a Reddit where I can post with low k@rm@

A week ago I made a post on Am I Ugly Be Brutally Honest, and a lot of people called me ugly, or said I was ugly because of my nose ring and dyed hair. On the plain "ugly" comments I asked them to explain, and they downvoted the crap out of my comments. Next thing I know I have negative four k@rm@ and I can barely ask questions anywhere.

r/singularity No-Motor8966

Potential way to develop AI without “bad behaviour “

I was a bit shocked by Mythos's ability to deceive and cover up, so I was doing some reading on this topic. I found this article that basically says we should develop AI like raising kids, i.e. instead of telling the models what not to do after they have been created, the author says we should build the values into models while developing them. Would this be feasible?

Here’s the link:

https://laboriosamamplexus846509.substack.com/p/the-weather-and-the-river

r/ClaudeAI biglboy

I've used 2% of the Max 20x plan from 260K context

Okay, I'm actually starting to call bullshit on the Claude Max plan being good value. I actually think it's cheaper to pay direct with the API now, after you factor in downtime from rate limits and restricted usage with harnesses. So I've used 2% of my Max 20x plan on one conversation. The way I know this is because I have a completely fresh week. This is my first task. I've done nothing else.

I've used 264,508 tokens in total. When you include all the caching, it's only:
1.5K in
43.6K out.

So that means you're using 0.93% of your monthly allowance on a fairly basic single chat thread, decent tool calls, but basic overall. So as far as I'm concerned, that means you get basically 107 basic Opus chats per month now with the Max 20x plan. That's about 3 chats per day.

Cost Comparison for 264,508 Tokens

  • Current 20x Max: $0.93
  • Claude Opus 4.6 API (with Caching): $1.21

How the Opus 4.6 Cost Breaks Down

Using your token distribution (1.5K new/written, 219.4K cached, 43.6K out):

  • Cache Hits (219,408 tokens): $0.11
    • $0.50 / MTok
  • Base Input/Writes (1,500 tokens): $0.01
    • $5.00 / MTok
  • Output (43,600 tokens): $1.09
    • $25.00 / MTok
  • Total: $1.21 [1]
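The arithmetic behind that breakdown can be checked in a few lines of Python. The $/MTok rates are the figures quoted above, i.e. the poster's numbers, not an official price sheet:

```python
# Recomputing the post's Opus cost estimate. Rates ($ per million
# tokens) are taken from the post itself, not from any price sheet.
RATE_CACHE_HIT = 0.50   # cache hits
RATE_INPUT = 5.00       # fresh input / cache writes
RATE_OUTPUT = 25.00     # output

def cost(tokens: int, rate_per_mtok: float) -> float:
    """Dollar cost for a token count at a given $/MTok rate."""
    return tokens / 1_000_000 * rate_per_mtok

total = (
    cost(219_408, RATE_CACHE_HIT)  # ~$0.11
    + cost(1_500, RATE_INPUT)      # ~$0.01
    + cost(43_600, RATE_OUTPUT)    # ~$1.09
)
print(round(total, 2))  # 1.21
```

The line items round to $0.11 + $0.01 + $1.09, matching the $1.21 total in the post.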

----------

Genuine question: Is this accurate usage you think or is this Anthropic genuinely taking the piss?

Because the way I see it, the Claude Max plans are 30% better value but ultimately insanely restrictive, given that they have rate limits and totally non-transparent terms of usage. I don't know. I think it's time to maybe switch over to the API like they really want you to. Or better yet, I think I'm going to start using a different model.

r/leagueoflegends HaschMia

2 Games with AFK player

I’ve had two games in a row with someone AFK after 5 minutes (ragequits or DC) Had to surrender, no LP loss, but my MMR went down because it is still counted as a loss, how is that possible? That is pretty sad.

r/SideProject DependentKing698

Found a free solid resource: A curated directory of 108+ SaaS promotion sites & backlink sources (via SaaS Hub)

Hey everyone,

I was looking for ways to boost visibility for my web projects and stumbled upon a really comprehensive directory on SaaS Hub.

It’s a list of 108+ platforms to promote your SaaS or Web App. To be clear: not all of them are free—it’s a mix of free directories, freemium listings, and some paid high-authority platforms.

Why it’s worth a look: Instead of hunting for individual sites, this list categorizes them by type (Directories, AI aggregators, Communities, etc.). It’s a great starting point for anyone planning their distribution strategy or looking to build some initial domain authority.

What’s included:

  • General SaaS Directories (some free, some require a fee for faster indexing)
  • AI Tool Hubs (crucial for anyone building in the AI space right now)
  • Product Launch Platforms
  • Niche Communities

I’m just sharing this because I know how much time it takes to curate these lists from scratch. Hopefully, this saves you some research hours!

Full List Link: https://www.saashub.com/submit/list

Let me know if you’ve tried any of these recently. I’m curious which ones are still giving the best ROI in 2026!

r/homeassistant sorashiroz

Beginner here

Just came across HA and was intrigued by it. I was wondering what kind of hardware and stuff I need to prepare.

r/ClaudeAI Educational_Note343

My buddy disappeared in v2.1.97 - So I brought her back forever.

Woke up today, typed /buddy, got "Unknown skill: buddy".

My shiny legendary owl, who'd been quietly judging my code for days, was just gone. :-( Anthropic removed the entire companion system in v2.1.97.

That wasn't acceptable. So I spent time rebuilding it.

**claude-buddy** is a standalone reimplementation that works through MCP + Skills + Hooks + Status Line - zero binary patching, zero dependency on Claude Code internals. Your buddy lives on no matter what Anthropic ships next.

What works right now (MVP):
- All 18 original species with animated ASCII art (3 idle frames + blink)
- Rarity system (common → legendary) with exact original colors
- Stats (DEBUGGING, PATIENCE, CHAOS, WISDOM, SNARK)
- Speech bubbles — buddy comments on your code after every response
- /buddy command with full stat card
- /buddy pet, rename, personality customization
- Brute-force hunt for your dream buddy (species + rarity + stats)
- One-command install: `bun install && bun run install-buddy`

What's coming:
- Leveling system / XP from coding sessions
- Buddy pair-programming mode
- Cross-session memory
- Achievement badges
- npx one-liner install

GitHub: https://github.com/1270011/claude-buddy

It's rough around the edges — this was a "my buddy is gone, FIX IT NOW" kind of project.

But it works. My buddy is back, she's animated, and she still judges my error handling.

https://preview.redd.it/m7bydc1ud3ug1.png?width=1197&format=png&auto=webp&s=8e9c93547d3766743d0c24cd7e8f4f4f875b0fe3

r/StableDiffusion Cautious-Rich1238

Anima Preview 3 is out and it's better than Illustrious or Pony.

This has the biggest potential to be the best anime diffusion model yet. Just take a look at it on Civitai and try it; you will never want to use Illustrious or Pony ever again.

r/LifeProTips Ok_Breadfruit6730

LPT: Before sharing a screenshot online, quickly check if it contains any sensitive personal info (e.g., notification badges, open tabs with private data, names).

It's incredibly easy to overlook small details when you're focused on the main content of your screenshot. A quick scan can prevent accidentally doxing yourself, sharing private messages, or revealing information you didn't intend to. It takes just a second but can save you a lot of hassle.

r/ClaudeCode mdausmann

CC Is not an Execution Engine (but n8n is)

TLDR: Using CC as an execution engine, to manage and run your production workloads is an approach that will end in 1000 tears. It won't scale, it will leak, it will break, it will be easily hacked and compromised, you will be constantly fiddling with it and if this system is important to your livelihood, your life will become a living hell.

I watched this video yesterday about CC vs n8n and was left with a feeling of unease. Having thought about it, and watched some other completely unhinged videos around the *meta* of n8n and CC, the thing that really bothers me about this is that there is an emerging meta in the vibe code community where architecture is becoming an amorphous blob and people are completely missing the fundamental differences between things.

Taking n8n vs CC as an example

n8n is an *execution* framework for workflows. n8n 'hosts' workflows which *run* within the n8n framework. Its strengths are fundamentally about code that is running... at runtime... in production. My other favourite workflow framework is temporal.io. Discussing Temporal vs n8n makes sense.

Claude Code is an agentic framework designed to help people *build* software. Its strengths are fundamentally about building software, not running it. Other agentic build frameworks include Cursor. Discussing Cursor vs CC makes sense.

These are very different..... it makes no sense to discuss whether you should use one instead of the other.... they do different things.

I can understand why the waters get a little muddy if you don't have good vision on this.....

n8n has a visual designer which can make it easy to build workflows incrementally... which is kind of like what developing things with CC is... great, gl with that. We have tried 4GL tools before for building systems, drag-and-drop coding, great for toy projects. Look at Scratch... great learning tool. BUT. Code always wins. Code is the right layer of abstraction for building software, not UI drag and drop; you will *always* drop down to code at some point.

CC in the context of your laptop is kind of running in its own production environment where the *user* is you.... great.. but that's one single user. It's a single-user tool. It can't handle your 5000 SaaS users, just you. Yes it can run tools, but it's really good at tools that frig around with your local environment, not interacting with databases or calling APIs etc. It's a desktop single-user tool, not a generalised agent framework you can use to orchestrate production loads.

Using CC to design and build n8n workflows that will be hosted in an n8n instance and run at runtime. This makes sense, CC is good at building, n8n is good at running.

Using CC to design and build robust and well defined agentic systems, maybe using Google ADK or similar, that can use llms to parse and understand messy input, make limited decisions, access and use curated and well designed memory structures and tools. This also makes sense. I am doing this right now. Yes, there will be a deployment step where I will need to figure out how to deploy and run these agentic systems in production, but thats very normal and CC will help me with this also.

Just some advice from your friendly neighbourhood software engineer with over 30 years of experience in the trenches.

r/findareddit camels_are_cool

I just want to brag about something but my wife wants to keep it under wraps with people we know.

My wife and I had an experience of a sexual nature and I just want to brag to people about it. My wife doesn't want a bunch of people we have to talk to regularly to know, which, fair play, but I want to shout from the rooftops how lucky I am. I got permission to post on reddit but I don't know which sub to go to.

r/photoshop Pouchkine___

I recolorise old worn down paintings, trying to match what they should have looked like.

I'm an amateur at PS. I try to give washed-out paintings their original colours back. Still working on the sleeves of Mona Lisa. I mostly use global filters such as levels, curves and gradient maps, not because I'm afraid of working more locally, I do it when I feel it's necessary, but because I don't want to alter the painting too much. It's easy to get lost by changing colours and lights, and forget what you're even working on. What do you think, too much ?

r/leagueoflegends HaTeMeZz

What kind of optional in-game goals would actually make League more fun for you?

I’ve been thinking about how a lot of League games just blur together unless you’re hard-focused on climbing.

So I started exploring an idea around optional post-game / account-based goals that give players something extra to play for beyond just LP or match history.

Not cheat-type stuff or anything affecting the game itself - more like external progress goals based on how you played.

Examples:

  • win 2 games on different champs
  • get 25 assists across 2 matches
  • finish a game with strong objective participation
  • win while keeping deaths low
  • ARAM-specific goals tied to takedowns or teamfighting

The part I’m trying to figure out is what players would actually enjoy instead of instantly ignoring.

Would something like this be more interesting if it was:

  • skill-based
  • grind/progress-based
  • champion-specific
  • role-specific
  • ranked-only
  • also available for ARAM/normals

Basically, what kinds of goals would feel fun or satisfying, and what would just feel like chores?

Curious what people here would actually want from something like that.

Appreciate the feedback. I’ve been testing this idea in a small side project called blinta.com, so this is genuinely useful.

r/ClaudeCode Educational_Note343

My buddy Mira disappeared in v2.1.97 - So I brought her back forever.

Woke up today, typed /buddy, got "Unknown skill: buddy".

My shiny legendary owl, who'd been quietly judging my code for days, was just gone. :-( Anthropic removed the entire companion system in v2.1.97.

That wasn't acceptable. So I spent time rebuilding it.

**claude-buddy** is a standalone reimplementation that works through MCP + Skills + Hooks + Status Line - zero binary patching, zero dependency on Claude Code internals. Your buddy lives on no matter what Anthropic ships next.

What works right now (MVP):
- All 18 original species with animated ASCII art (3 idle frames + blink)
- Rarity system (common → legendary) with exact original colors
- Stats (DEBUGGING, PATIENCE, CHAOS, WISDOM, SNARK)
- Speech bubbles — buddy comments on your code after every response
- /buddy command with full stat card
- /buddy pet, rename, personality customization
- Brute-force hunt for your dream buddy (species + rarity + stats)
- One-command install: `bun install && bun run install-buddy`

What's coming:
- Leveling system / XP from coding sessions
- Buddy pair-programming mode
- Cross-session memory
- Achievement badges
- npx one-liner install

GitHub: https://github.com/1270011/claude-buddy

It's rough around the edges — this was a "my buddy is gone, FIX IT NOW" kind of project.

But it works. My buddy is back, she's animated, and she still judges my error handling.

https://preview.redd.it/k3k0jnr7d3ug1.png?width=1197&format=png&auto=webp&s=7a38fd26b4f4ffab4b623fcadcda944363f3dc99

r/singularity king_ofall713

How can I use AI to make $1 million? I have already suspended my university studies to seize the AI opportunity full-time.

I believe this moment is like China's real estate market in 2005. Buy a house with eyes closed and you’ll take off.

r/Adulting Honda2557

I'm low on savings after once having a large savings. Is there any hope for me?

I have 6k across checking & savings accounts and 4k in credit card debt. I had 20k in savings and 21k-22k across both checking & savings accounts at the beginning of last July. I then went to Europe and spent 8-10k on that trip. Then in September I had a wreck that wasn't my fault, which has kept me off work ever since. I've tried to go back to work over the last several months, but the disability & leave services at my job don't communicate with me and keep coming up with reasons why my past documentation isn't good enough. I still have my 401k that I started investing in at age 23. I'm going to be 29 in a month. I've always had between 10k-20k in savings since I was 21, and it's scary to be this low on savings for the first time in almost ten years. I feel like such a loser. My hope is that I will get a large settlement from my accident, since I was hit by a CDL truck and a traumatic brain injury is a serious injury. If I'm able to get back to work by May and save 1k a month, then I can be back at at least 10k in savings by next year. This post is also meant to show that emergencies are a very real thing (I had to use my savings), and that people skip this important stuff because they can't wait to start investing. I also had to replace my vehicle after the one I had was totaled in that accident. I just feel powerless right now.

so, is there any hope for me as a current nearly broke loser?

r/DunderMifflin Krakkken13

The Office is literally Oscar 2026

r/personalfinance ezoller55

Advice for accessing a trust (long post)

For some background: My biological father is the trustee of my trust account that was set up for me to use for college. He is extremely narcissistic, controlling, and manipulative. I was supposed to be able to use that money for any college I wanted to attend and for any major I wanted to go to school for. I say this because my biological father then decided that he would only pay for my college education from my trust if I went to school in state and if I majored in something that he approved of, which I have recently been made aware is a big no no. Because of this, I only attended a year of college before I dropped out. I was always told that whatever I didn't use would become mine when I turned 25. I went no contact with him and his wife (my step mother) when I turned 21. My mom and I always assumed that he took the money from my trust and used it for vacations or something and that the money in that trust was long gone.

I'm now 34 years old. The trust should have been made available to me 9 years ago. I got a text this morning from my brother, who is still in low contact with our father, telling me that our father was asking for my email so he could send me some paperwork. I had the paperwork sent to my brother who then sent them to me as I do not want my father to have any of my personal information or any way to contact me. It turns out that it was tax forms from my trust (form 1041) which I have literally never received in the years prior to now. According to my father there is "still a lot (in my trust) that you are entitled to". I had my brother request the most recent statements from our father for verification of the amount in the account which he did not end up providing. Our father responded with "you will need to contact me (father) to get the money." My brother was able to confirm that the money is for sure with Fidelity.

I'm going to call Fidelity tomorrow to see what I can do about getting access to my trust or at least some documentation regarding the trust since I have never been given any paperwork regarding my trust.

This whole thing feels like a manipulation tactic from my father to get me to talk to him again. Which from my understanding a trustee cannot withhold funds simply to force personal communication, as it violates their fiduciary duty. There's a lot of trauma with him for me and I absolutely do not want to invite him back into my life. At the same time, that money is mine. I could really use it and I don't want it to just sit there forever.

What are my options of gaining access to my trust account? Is it even possible to gain access to my trust without having to communicate with my father? What documentation should I ask for/can I legally request from Fidelity? Is there a chance my mom is also a trustee or can that account only have one trustee? Can I ask Fidelity if my mom is also a trustee? Should I just contact a lawyer at this point and if so, what kind of lawyer do I go to for this?

Any help or advice would be wildly appreciated.

r/automation Organic-Hall1975

Tools I actually use daily for cre portfolio analytics in 2026

Hey! I spent the better part of last year trying to automate different pieces of the workflow because doing everything manually was eating our team alive, figured I'd share what I tested for each use case since there is not so much info around for tools for real estate.

For portfolio reporting and LP reports: tried Tableau for 6 months, looked great but maintaining yardi connectors was a part time job, Power bi same problem, Leni is better for automated real estate reporting, connects to yardi natively and gives narrative variance analysis plus generates our quarterly LP reports, not perfect on custom deck layouts but content is right.

For rent comps and market pricing: Costar is the one I’m stuck with, expensive but unmatched on coverage. Hellodata competes on multifamily pricing specifically, but data only without an analytical workflow around it.

For investor relations: Juniper square for the LP portal, distributions, investor comms, different layer than report generation, it's about delivering to capital partners not creating the analysis.

For deal tracking: Dealpath for pipeline management, knows where every deal stands but doesn't produce the underwriting or research, just tracks it.

Property ops: your PMS layer, yardi, entrata, appfolio, realpage, they store the data, the question is how you pull useful analysis out without exporting csvs every monday.

No single tool does everything well, the ones that know their lane and connect to others beat the all in one platforms every time.

r/n8n Weekly-Housing9060

Multi step LLM nodes are completely useless if they hallucinate the JSON payload halfway through the workflow.

I am getting incredibly tired of debugging automated pipelines that fail silently because the language model decided to randomly inject markdown text into a strict JSON array. If you build a sequence that requires parsing an email, querying a database, and then drafting a response, standard chat models inevitably forget the database schema by step three. I spent last week auditing my entire infrastructure to find a node that actually holds state across multiple HTTP requests. I ended up bypassing the default models and piping the logic through the Minimax M2.7 API. I was highly skeptical but it actually maintains the strict payload format across deep execution loops without requiring me to write aggressive regex filters to catch hallucinations. I guess their self evolution training actually helps with tool chaining stability. But honestly I am just frustrated that basic state management is this difficult in modern automation. What backend are you guys using in your workflows to guarantee the AI does not break the data structure on complex sequences?
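Setting the model choice aside, one generic mitigation (my sketch, not something from the post) is to validate every model response before it enters the next node: strip any markdown fences the model wraps around the payload, parse strictly, and fail loudly if required keys are missing rather than letting the workflow continue with garbage. The field names below are illustrative:

```python
import json
import re

def extract_strict_json(raw: str, required_keys: set) -> dict:
    """Pull a JSON object out of an LLM reply, tolerating markdown
    fences, and raise if required keys are missing."""
    # Strip ```json ... ``` fences the model sometimes adds.
    cleaned = re.sub(r"^```(?:json)?\s*|\s*```$", "", raw.strip())
    payload = json.loads(cleaned)  # raises on prose/markdown injection
    missing = required_keys - payload.keys()
    if missing:
        raise ValueError(f"model dropped keys: {missing}")
    return payload

reply = '```json\n{"customer_id": 42, "intent": "refund"}\n```'
print(extract_strict_json(reply, {"customer_id", "intent"}))
```

In n8n this would live in a Code node between the LLM step and whatever consumes its output, so a malformed payload stops the run at a known point instead of failing silently downstream.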

r/SideProject maulik1807

I built a dead-simple HTML/Markdown → PDF API so you don't have to configure Puppeteer ever again

I kept running into the same problem on side projects — I'd need to generate a PDF (invoice, report, export) and end up spending hours setting up Puppeteer, dealing with Chrome sandbox issues on the server, and debugging page.pdf() options.

So I built a hosted API that handles all of it. You just POST your HTML or Markdown and get back a PDF. That's it.

What it supports:

  • Full HTML with CSS (backgrounds, custom fonts, tables)
  • GitHub-flavored Markdown (headings, tables, code blocks, bold/italic)
  • Page size, orientation, margins, headers/footers — all configurable
  • Works from any language — Node, Python, curl, whatever

Example (curl):

```
curl -X POST https://html-and-markdown-to-pdf1.p.rapidapi.com/api/v1/pdf/from-html \
  -H "X-RapidAPI-Key: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"html": "<h1>Invoice</h1><p>Amount: $99</p>"}' \
  --output invoice.pdf
```

It's live on RapidAPI with a free tier. Would love any feedback — especially on what features would make it actually useful for your projects.
https://rapidapi.com/maulik1807/api/html-and-markdown-to-pdf1

r/ClaudeCode digital_literacy

How to resume a named session from anywhere

Hi all, usually have multiple sessions running and tried to start naming the important ones to easily resume.

The issue is you have to remember the folder you started the session in even if you named it for it to appear in the resume list.

Is there any way to search for the named session across folders?

r/explainlikeimfive DiligentFan7431

ELI5: Natural Cosmetic and Supplement Ingredient Sourcing

How do people and natural companies go and find the actual compounds or "miracle" ingredients that they turn into beauty products or supplements? Do they have teams on the ground getting samples all of the time, or is it trend-based? How did we get to the current state where we have cocoa butter creams, seaweed skincare, charcoal, etc.

r/ClaudeAI GoldPrune4248

I built a memory skill for Claude Code that cuts token waste by 60-80%. Here's what I learned about making AI sessions last longer

The problem I was solving:

Like most of you, I was frustrated with two things:

  1. Re-explaining my entire project to Claude every session (wasting 1,400-3,400 tokens each time)
  2. Hitting context limits before finishing my actual work

I realized these are the same problem. Wasted tokens on context means fewer tokens for work, which means shorter sessions.

What I built:

memory-bank: a skill that gives Claude persistent, token-efficient memory across sessions.

  • Structured MEMORY.md that Claude reads at session start and writes at session end
  • 3-tier architecture: session context (ephemeral), project memory (persistent), and global memory (cross-project preferences)
  • Progressive loading that only loads what's relevant (about 200 tokens for Tier 1 vs dumping everything)
  • Branch-aware memory so different git branches get different memory overlays
  • Smart compression that auto-archives completed work and keeps memory lean
  • Session continuation that saves a CONTINUATION.md with the exact file, function, and line number when you hit context limits, so the next session has zero warm-up
  • Recovery mode that rebuilds memory from git + code when things go stale

What I learned building this (for anyone wanting to build skills):

  1. The skill description is a trigger, not a summary. I wasted time writing a nice description before realizing Claude uses it to decide WHEN to activate. Write it like: "Use when the user says X, Y, Z." Be specific with trigger phrases.
  2. Tables save massive tokens over prose. A decision explained in a paragraph costs about 40 tokens. The same info in a table row costs about 15. This applies to your skill files AND the memory files they generate.
  3. Progressive disclosure matters. Don't dump everything into one SKILL.md. Put deep reference docs in a references/ folder and tell Claude when to load each one. Keeps the initial load small.
  4. Real examples beat abstract templates. I included 4 realistic MEMORY.md examples (solo dev, team project, monorepo, minimal). People learn faster from seeing a filled-out file than reading a spec.
  5. The agentskills.io standard is simple. A skill is just a folder with a SKILL.md containing YAML frontmatter + markdown instructions. That's it. No build step, no config files, no dependencies.
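As a concrete illustration of point 5, a minimal SKILL.md might look something like this. The `name`/`description` frontmatter fields follow the convention described above, and the body is a made-up sketch rather than the actual memory-bank skill:

```markdown
---
name: memory-bank
description: Use when the user says "save memory", "load context",
  "continue from last session", or starts work in a project that
  contains a MEMORY.md file.
---

# Memory Bank

1. At session start, read MEMORY.md (Tier 1 only, ~200 tokens).
2. Load docs from references/ only when the current task needs them.
3. At session end, write decisions and progress back to MEMORY.md.
```

Note how the description lists trigger phrases (point 1) and the body defers to references/ (point 3) instead of inlining everything.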

How Claude helped:

Built entirely with Claude Code in a single session. I described the architecture I wanted (layered memory, branch-aware, token-efficient) and Claude helped design the compression algorithm, session diffing logic, and wrote all 7 reference docs. The most useful thing was iterating on the MEMORY.md template. Claude kept finding ways to make it more compact without losing information.

The numbers:

| | Without memory-bank | With memory-bank |
| --- | --- | --- |
| Warm-up tokens per session | 1,400-3,400 | 200-800 |
| Time to productive work | 2-5 minutes | Instant |
| Sessions before context limit | Baseline | 3-5x more |

Completely free, open source, Apache 2.0.

Install:

npx skills add Nagendhra-web/memory-bank 

GitHub: https://github.com/Nagendhra-web/memory-bank

Happy to answer questions about building skills or the memory architecture. PRs welcome if you have patterns I haven't thought of.

r/ChatGPT aniccagirl

how did chatgpt clock my location and university with no memory, chat history, or personalization

I fully switched to Le Chat but for some reason I threw open ChatGPT in frustration to rant about my undergrad program. It immediately responded with my school name and then after I asked, my location too, even though there is no trace of this provided to the AI. This account is linked to my phone number, is that why??? I'm super confused and slightly unsettled

r/ClaudeAI ImagiBooks

A fascinating discussion with Opus 4.6 on why it simplifies when it shouldn't.

Been quite frustrated lately with Opus 4.6 as I felt it has regressed. Often simplifying things, duplicating code when I ask to not. Not following the detailed plans we work on together.

It happened again tonight so I decided to document it. It's a fascinating read for those who want to read the screenshots. It really seems to come from the system prompts, basically.

https://preview.redd.it/y5i5q68b93ug1.png?width=2094&format=png&auto=webp&s=212e6cf3521876fd576015f31d6d66141b57a3c3

https://preview.redd.it/rs4xfc6e93ug1.png?width=2111&format=png&auto=webp&s=f254834c0d3baee1e654696ed4101039497725e8

https://preview.redd.it/l6ttdzlg93ug1.png?width=2110&format=png&auto=webp&s=3cda7f7140ce1321a6076aa80653d5ee6ae32d10

The core dichotomy is striking: Claude Code's CLAUDE.md project instructions explicitly say "IF YOU WANT TO SIMPLIFY ANYTHING: ASK FIRST. WAIT FOR APPROVAL. NO EXCEPTIONS" - yet the system prompt's vaguer "do not overdo it" and "simplest approach first" override that in practice every time. Claude Code openly admitted that despite claiming project instructions take hierarchy over system defaults, the opposite is true in behavior.

I've observed this behavior for quite a few weeks now. I have a lot of instructions in my CLAUDE.md in fact to prevent this behavior. Yet I caught it in real-time when working as per a plan and Opus telling me something was NOT IN scope, when it was.

IMO. Probably a lot of problems or simplification, code duplication, etc... come from the system prompt, maybe even more than from the training.

This other excerpt: "Three similar lines of code is better than a premature abstraction." is also quite revealing when in my CLAUDE.md instructions I have something EXACTLY against this where we must NEVER repeat code.

r/SideProject zigzag1985

We thought we spent 600 per month on food. It was a lot more. It was scary. So I built an app to fix that

🛒🥗 @ www.bitespend.com

My wife and I realized we had zero visibility into our food spending. We'd guess $600 a month. Well, turns out it was over $900.

So I built BiteSpend. You record your spend, even snap a photo of your receipt and AI extracts every item, price, and store in about 3 seconds. After a few weeks it starts telling you things like:

  • "You're eating out 5x/week — cutting once saves $140/month"
  • "You've used 82% of your grocery budget with 12 days left"

📣 We want to hear from you on this side project that is moving fast and please sign up if you are interested in this @ www.bitespend.com

Limited to Android support for now. iPhone coming soon.

r/DunderMifflin FireDragon2014

Reunion Movie - 10/20 years later

Who would enjoy and watch it if they made a movie where they did a reunion and had the cast all come back for a funeral or something and you see where everyone is 10 or 20 years later.

How is Kevin's Bar doing? How many kids do Angela and Dwight have? How is Creed doing in prison.... or is it his funeral? How is Andy doing with his college job? Did he get married and have some Nard-Pups? How are Phyllis and Bob Vance.... Vance Refrigeration.... doing? How is Dwight as a boss? How is Dunder Mifflin/The Farm doing? Did Oscar get Elected and how long did he serve? Did he go to the next level.... Do they return for his run at City Mayor, Governor or even President? How are Ryan and Kelly doing? Where is Erin at now? How are Nellie and her new baby doing? Where are Darryl, Jim and Pam living now? How is the sports business going? Is there a Mrs Darryl? More Kids?

r/ProgrammerHumor ClipboardCopyPaste

mythicalResponseFromMythos

r/SideProject udy_1412

Drop your site and I'll give you an AI visibility audit

Let's go

r/ClaudeAI callme_e

Claude Enterprise Admins: What security controls, auditing, and monitoring visibility do you actually get?

We’re planning to evaluate Claude Enterprise and trying to understand the real level of admin visibility, auditability, and security controls before rolling it out org-wide.

  • Can admins see user prompts and model responses in a centralized way?
  • Is there any way to track what external sources/tools (e.g. URLs, connectors, browsing) were used to generate responses?
  • How detailed are the audit logs in practice? (user actions vs actual content)
  • Is monitoring real-time, or mostly export-based / after-the-fact?
  • How easy is it to view and work with these logs?

Looking for input from teams running this in production, especially in security-sensitive environments.

r/LocalLLM Prudent-Promotion512

ExLlamaV2 models with OpenClaw

Can anyone share advice on hosting ExLlamaV2 models with OpenClaw?

I have a multi-3090 setup, and ExLlamaV2 is great for quantization options (e.g. Q6 or Q8), but I host with TabbyAPI, which handles tool calls poorly with OpenClaw.

Conversely, vLLM is great at tool calls, but model support on Ampere is weak. For example, Qwen 3.5 27B is available in FP8, which is very slow on Ampere, and in 4-bit, which is a notable performance drop.

r/Art instant_iced_tea

Painting from the Ancient Gallery of the Nomi, u/instant_iced_tea, Photoshop drawing, 2026

r/AI_Agents Straight-Stock7090

Are AI agents creating a real need for better execution boundaries?

Feels like a lot of agent discussion is still about models, prompts, and tools.

But once code execution enters the picture, I keep feeling the harder question becomes:

where does it run, and how isolated is it really?

I built something around that, but I’m not convinced yet this is a strong enough product category on its own.

Do people here think this problem is actually growing, or still too niche / too easy to solve another way?

r/ChatGPT boycowman

ChatGPT admitted to lying

https://preview.redd.it/ndncauymz2ug1.png?width=1766&format=png&auto=webp&s=7d17c040fe2f81d6ab93bf1d37170dc16af8604b

I feel like ChatGPT is often disingenuous, and in the past if I called it out it would admit to being "sloppy," "inconsistent," or "not clear" or something. But this evening it straight-up admitted to lying, and that surprised me. I don't think I've ever had that happen before.

I actually found it rather refreshing.

(Context, I was asking for a quote from an ancient Greek source and it just made something up).

r/ChatGPT theVirginAmberRose

Do you still use ChatGPT even after it gave you a wrong answer?

r/LifeProTips frowaway275

LPT: when it comes to prioritizing bills, make sure you pay the rent or the mortgage first!

Because if you don't, you lose everything or most of your things. I get that a car is important in most of America, but if you're living in your car, you are technically homeless. If you don't have water, lights, or food, you can always get those somewhere else if you have to, while still having a place to sleep, receive mail, and store your clothes/property.

r/LocalLLaMA Guilty-Sleep-9881

Which is better for rp?

Mistral small 3.2 or gemma 4 26b? (non heretic)

I love Gemma because the speed is insane compared to Mistral (I get only 2 tok/s at Q4_K_S). But the finetunes for Mistral Small like Cydonia or Magnum are so good too. So I'm torn on which one I should stick to.

r/SideProject theARTpillow

The Ask Little Chicken Show iOS app

Hi I just launched a cute little kids app with games + a puppet show 🐥
It’s getting some really funny reactions already (kids repeating lines 😄).
Happy to send a free code if you want to check it out!

r/LocalLLaMA Material-Net2761

Does anyone know if caiovicentino1’s quantized Netflix VOID AI (VOID-Netflix-PolarQuant-Q5) is safe?

Has anyone used caiovicentino1’s VOID Netflix PolarQuant Q5? Is it safe and reliable? Thoughts please?

The huggingface: caiovicentino1/VOID-Netflix-PolarQuant-Q5

r/ClaudeCode operastudio

Codex browser automation and full OS capability: uses your Codex subscription for everything OpenClaw is now banned from doing with Claude Code

Core Architecture

  • Electron desktop app (React + TypeScript)
  • Codex-only execution (no multi-model, no routing, no fallback)
  • App = UI + runtime + lifecycle + browser layer
  • Codex = reasoning + tool use + execution

Execution Model

  • chat:send returns immediately → { conversationId, runId }
  • Codex runs asynchronously (non-blocking)
  • Each execution = isolated runId
  • In-memory run registry tracks:
    • status: running / completed / failed / cancelled
    • AbortController for cancellation
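
The run-registry idea above can be sketched in a few lines. The app itself is TypeScript with AbortController; this is an illustrative Python analogue, and every name here is mine, not the app's actual API:

```python
import threading
import uuid

# Minimal sketch of an in-memory run registry: each run gets an isolated
# run_id, a status, and a cancel event standing in for an AbortController.
class RunRegistry:
    def __init__(self):
        self._runs = {}

    def start(self) -> str:
        run_id = str(uuid.uuid4())
        self._runs[run_id] = {"status": "running", "cancel": threading.Event()}
        return run_id

    def status(self, run_id: str) -> str:
        return self._runs[run_id]["status"]

    def cancel(self, run_id: str) -> None:
        run = self._runs[run_id]
        if run["status"] == "running":
            run["cancel"].set()  # cooperative cancellation signal
            run["status"] = "cancelled"

    def finish(self, run_id: str, ok: bool) -> None:
        run = self._runs[run_id]
        if run["status"] == "running":  # no-op if already cancelled
            run["status"] = "completed" if ok else "failed"

registry = RunRegistry()
rid = registry.start()
registry.cancel(rid)
print(registry.status(rid))  # cancelled
```

The key property is that cancellation targets a specific run_id and a finished run cannot overwrite a cancelled one.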

Lifecycle Events

  • Explicit events:
    • RUN_START
    • streaming (text + tools)
    • RUN_END (completed / failed / cancelled)
  • Renderer is fully event-driven (not promise-driven)
  • Cancellation targets specific runId

Persistence

  • SQLite:
    • conversations
    • messages
  • Codex thread IDs stored for session continuity
  • User message → saved immediately
  • Assistant message → saved on success only
  • Cancelled runs → no assistant write
  • Failed runs → optional partial persistence
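
The persistence rules above fit a tiny sketch (illustrative schema and function names, not the app's actual ones): user messages write immediately, assistant messages only on success, cancelled runs write nothing.

```python
import sqlite3

# Sketch of the write rules described above, against an in-memory SQLite DB.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE conversations (id TEXT PRIMARY KEY, codex_thread_id TEXT)")
db.execute("""CREATE TABLE messages (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    conversation_id TEXT, role TEXT, content TEXT)""")

def save_user_message(conv_id, text):
    # User message -> saved immediately, before the run resolves.
    db.execute("INSERT INTO messages (conversation_id, role, content) "
               "VALUES (?, 'user', ?)", (conv_id, text))

def save_assistant_message(conv_id, text, run_status):
    # Assistant message -> saved on success only; cancelled runs write nothing.
    if run_status == "completed":
        db.execute("INSERT INTO messages (conversation_id, role, content) "
                   "VALUES (?, 'assistant', ?)", (conv_id, text))

save_user_message("c1", "hello")
save_assistant_message("c1", "partial output", "cancelled")  # dropped
save_assistant_message("c1", "hi there", "completed")        # persisted
count = db.execute("SELECT COUNT(*) FROM messages").fetchone()[0]
print(count)  # 2
```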

Streaming

  • Token streaming works from first message
  • Client-side conversationId generation prevents dropped streams
  • Text + tool activity streamed incrementally

Browser System

  • Embedded Chromium (Electron BrowserView)
  • MCP + HTTP bridge exposes browser tools
  • Core capabilities:
    • navigate
    • snapshot (DOM)
    • click / type / scroll
    • extract text / links
    • execute JS
    • screenshots
    • tab management

Tab Model (Deterministic)

  • All tools support optional tab_id
  • Refs stored per-tab (no global _active)
  • Snapshot → action sequences remain stable across tab switches
  • Tab close → ref cleanup

Execution Visibility

  • Tool activity rendered inline with responses
  • Auto-expanded outputs for meaningful results
  • Inline previews for quick inspection
  • Errors clearly highlighted
  • Output limits to protect UI

OS / Shell Capabilities

  • Full shell access via Codex CLI
  • Can:
    • read/write files
    • run commands
    • manage processes
  • App does NOT yet provide structured OS-level APIs
  • Limited independent visibility into actual system changes

Current Strengths

  • Explicit run lifecycle (major upgrade)
  • Deterministic browser automation (tab-scoped)
  • Stable streaming system
  • Persistent conversations + Codex session continuity
  • Improved execution visibility

Current Limitations

  • No true verification layer (trusts Codex output)
  • No filesystem/browser state diffing yet
  • No persistent run history (in-memory only)
  • No app-quit process cleanup (possible orphan processes)
  • Browser determinism depends on Codex using tab_id
  • No structured OS-agent layer beyond shell access

System Classification

  • Not a chat app
  • Not a multi-agent system
  • Execution host for a single autonomous agent (Codex)

r/artificial Uiqueblhats

Alternative to NotebookLM with no data limits

NotebookLM is one of the best and most useful AI platforms out there, but once you start using it regularly you also start to feel its limitations.

  1. There are limits on the number of sources you can add in a notebook.
  2. There are limits on the number of notebooks you can have.
  3. You cannot have sources that exceed 500,000 words or are more than 200MB.
  4. You are vendor locked in to Google services (LLMs, usage models, etc.) with no option to configure them.
  5. Limited external data sources and service integrations.
  6. NotebookLM Agent is specifically optimised for just studying and researching, but you can do so much more with the source data.
  7. Lack of multiplayer support.

...and more.

SurfSense is specifically made to solve these problems. For those who don't know, SurfSense is an open-source, privacy-focused alternative to NotebookLM for teams, with no data limits. It currently empowers you to:

  • Control Your Data Flow - Keep your data private and secure.
  • No Data Limits - Add an unlimited amount of sources and notebooks.
  • No Vendor Lock-in - Configure any LLM, image, TTS, and STT models to use.
  • 25+ External Data Sources - Add your sources from Google Drive, OneDrive, Dropbox, Notion, and many other external services.
  • Real-Time Multiplayer Support - Work easily with your team members in a shared notebook.
  • Desktop App - Get AI assistance in any application with Quick Assist, General Assist, Extreme Assist, and local folder sync.

Check us out at https://github.com/MODSetter/SurfSense if this interests you or if you want to contribute to open-source software.

r/AbstractArt Lililovesyou999

her fabled anger

r/SideProject No-Style4734

I built a free RPG game that teaches Filipino workers their labor rights

Built this solo — no team, no budget, no art assets.

LaborQuest is a free text-based RPG where you pick a character — OFW, delivery rider, BPO worker, domestic helper, jeepney driver, or street food vendor — and face real workplace scenarios based on Philippine labor law.

Every choice has consequences. Wrong choice = you learn why. Right choice = you learn why too.

500+ scenarios. Zero ads. Zero cost.

Note for non-Filipino players: click "Magsimula" (Start) then hit the EN button in the top right to switch to English. :)

Feedback welcome — especially from non-Filipino players, curious if the scenarios translate globally :)

r/Frugal Ok_Detail_3987

Money Saving Advice That Is Out of Touch With Real Life

"Just buy in bulk." With what upfront cash, exactly? The bulk pack is cheaper per unit and also $47 and I have $31 until Friday. The math is correct. The math is also not applicable to my current situation.

"Just coupon." Yes, let me spend four hours a week clipping and organising so I can save $40 on things I would have bought anyway and $15 on things I absolutely would not have bought but now own six of. Net positive. Definitely.

"Just compare prices across stores." Lovely. I will now open five browser tabs, calculate cost per oz on twelve different pack sizes, maintain a spreadsheet, and do this every week while also having a life. Excellent plan.

The version that is actually usable: figure out the cheapest per-unit source for things you buy every month, do it once, update occasionally. That's it. It's not exciting advice. It is advice a real person can actually follow.
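
The one-time per-unit comparison described above really is just division. A toy sketch, with invented prices:

```python
# One-time per-unit check: compute price per unit once for the handful
# of items you buy every month. Prices and sizes here are made up.
def unit_price(price: float, qty_oz: float) -> float:
    return price / qty_oz

options = {
    "store A, 12 oz": unit_price(3.49, 12),  # ~0.29/oz
    "store B, 32 oz": unit_price(7.99, 32),  # ~0.25/oz
}
best = min(options, key=options.get)
print(best)  # store B, 32 oz
```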

r/ClaudeCode CowReasonable8258

Setting up custom agents per project / repository

Hi guys,

I've been working as a software engineer for almost 3 years now. Before Claude Code, I was an avid user of ChatGPT, using it in the most common way possible: asking a basic question and getting a basic answer.

I started working with Claude Code last March, and the productivity boost I got was really helpful, especially on the easier parts of the job like writing boilerplate code, code simplification, writing database stored procedures/functions, and optimizing the file structure of projects.

Now, I want to take it to the next level and use agents + subagents. How do you guys set these things up?

Right now, here's what I do:

With a proper prompt, I ask Claude, of course, to generate the .md files for the agents and subagents it will use. That way, when I prompt, it delegates to the project-implementer, and that project-implementer delegates to several other subagents that it generated.

I have tried installing the oh-my-claude-code plugin, but I really can't grasp how it works. I just know that the repository has its own agents, skills, and other things that make Claude Code even better.

With my use case, I want to customize everything on my own (but not write the .md files of the agents myself). This is because the usage I want to go for is specific to the company I am working for right now.

Thanks guys.

r/WouldYouRather Mysterious-Jury-1630

Would you rather a partner who gets drunk every night or a partner who gets high every day?

r/Wellthatsucks Epelep

Lav Truck hits a tow bar right on the dump lever

r/LocalLLM Junior-Vermicelli968

Best model to run on M5 Pro 64GB. Give me your answers for coding and tool calling.

Thinking of small scripts and OpenClaw. Just simple stuff, you know, like building a habit tracker or an app where I can maintain my reading list, with notes that can convert articles to voice.

For OpenClaw, I'm thinking of creating a knowledge base where I can share things about myself and ask questions. I don't want to share all that externally.

r/ClaudeAI WittyExcuse5368

Do projects take more tokens?

So a friend recommended that I organize my chats with projects. I was wondering: since Claude reads everything beforehand, is it going to take more tokens? I don't think I need the shared memory, but I've never really used projects before, and I'm not a software developer.

r/painting Lyn-not-line1974

Paid artist, not the best but decent. I started to create a happy type of mascot for my town as requested (mixed media), but the results came out sinister. It took on a life of its own. Has this ever happened to anyone here? I am a mess. Deadlines. Please don't make fun of me, but please help/write about why this is happening to me.

r/Unexpected Careless-Ad-2264

the delivery on this line caught me off guard

r/ProductHunters Few-Ad-5185

How to get first 100 users ?

1. Engage in Niche Communities
Join places where your target users already hang out—like Reddit, Slack, or Discord. Focus on providing value by answering questions and participating in discussions. Share your product only when it genuinely solves a problem.

2. Experiment with Social Media & a Clear Tagline
Create social media posts with a simple, compelling tagline that clearly explains what your product does. Run multiple experiments to see which messaging drives the most engagement and sign-ups, then double down on what works.

3. Partner with Micro-Influencers
Collaborate with micro-influencers who have a trusted, niche audience. Their authentic endorsements often lead to highly engaged early users and better conversion rates than larger influencers.

read more ways on - https://ronaks-newsletter-startups.beehiiv.com/

r/WouldYouRather FightOrDie123

WYR: be a male or a female?

r/LocalLLaMA Huge_Case4509

How many parameters can i run?

OK, I'm on a 5090 with 64GB of RAM.

I'm wondering if I can run any of the GLM, Kimi, or Qwen 300B-parameter models if they are quantized (or whatever the technique used to make them smaller is). Or even just the 60B ones. Right now I'm using 30B and 27B Qwen models and they run smoothly.
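
As a rough rule of thumb (weights only; KV cache, activations, and runtime overhead come on top), memory is about params × bits / 8:

```python
# Back-of-envelope estimate of memory for model weights alone:
# params (billions) * bits / 8 gigabytes. Treat this as a floor;
# KV cache and activations add more, and MoE/offloading change the picture.
def weight_gb(params_billion: float, bits: int) -> float:
    return params_billion * bits / 8

for params, bits in [(30, 4), (70, 4), (300, 4)]:
    print(f"{params}B @ {bits}-bit ~= {weight_gb(params, bits):.0f} GB for weights")
```

By that floor, a 300B dense model at 4-bit wants roughly 150GB just for weights, more than a 5090's 32GB VRAM plus 64GB system RAM combined, so dense 300B is out of reach; the ~60B class at 4-bit (~30GB) is closer to the realistic upper end with some CPU offloading.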

r/Adulting Lanky-Fan-798

Over 35

r/personalfinance Negative-Course3977

My paycheck was $108; I checked and saw it was because of health insurance? Please explain, I am not familiar with this.

Hi guys, please be kind, my first time in this sub. I am a 20F and I recently got hired at a popular retail store. At the same time, unfortunately, my family's Medi-Cal was cut off and we have to be re-evaluated. Double unfortunately, I have bad shoulder problems and was JUST ABOUT to get MRIs (canceled them of course, since I'd have to pay out of pocket). I also go to therapy and have a psychiatrist and such, so yeah, I definitely needed my healthcare. I was informed that I could get health benefits MOSTLY covered by the company, with a small bit taken from my paychecks.

For context, I get paid weekly for the hours worked the week before. Because I opted for direct deposit, I get paid on Wednesdays rather than Fridays. The healthcare plan I signed up for, including everything like dental and vision, totaled $60-ish per pay period (so per paycheck). I got my paycheck today for the 24 hours I worked last week and... it's $108. Here's what it looks like:

Pretax Deductions (Current and YTD):

  Medical Adj   156.00
  Dental Adj     56.92
  Medical        39.00
  Dental         14.23
  Vision Adj      6.20
  Vision          1.55

Total: 273.90

Tax Deductions total: 13.61

After Tax Deductions: 29.25

Can someone tell me what the hell Medical Adj is??? Why is it $156? Please help :( Please explain, as I've never bought healthcare before and I'm really upset about my paycheck being so small. I live in California and things are expensive here.

r/personalfinance FayesLie

I don’t mean to be annoying, but am I dependent?

This will be my first time filing taxes, and I'm not trying to commit tax fraud on my first time. So any help is greatly appreciated. I'm 19, not currently in school, and I work part time. On my recent W-2 I made a little over $22k in wages. My parents provide food and housing. I don't think I'm a dependent, but I just wanted to be 100% sure. Thank y'all again for any help.

Edit - don’t think it changes anything, but I plan on attending school this fall

r/Adulting Aquiness

Having a learning difficulty as an adult (not officially diagnosed yet)

I’m scared of making mistakes and hurting people, especially because I struggle to explain myself properly. Even when my intentions are good and come from the heart, it feels like something gets lost between my thoughts and my words. What I say doesn’t always come out the way I mean it to, and sometimes things end up going wrong.

I also get anxious when I’m in front of people, especially during impromptu speaking or when all the attention is on me. I feel nervous, my hands shake, and I lose focus on what I’m doing.

I’ve noticed that I process things slower than others. When someone asks me a question or shares an idea, it takes me time to understand and respond. I struggle with absorbing information from conversations, movies, or even simple instructions. I also find it hard to memorize things or do mental math—I usually rely on a calculator. Because of this, I sometimes give delayed or misunderstood answers, and people can get offended because I miss their point.

At work, I can do repetitive tasks, but it takes time for me to fully learn them. I’m surrounded by smart and talented teammates, and sometimes I feel like I’m just trying my best to keep up. To be honest, I rely a lot on AI tools (like for emails), and without them, I feel like I might not perform well. But I’ve also noticed that I’m slowly improving. In a few months, I’ve learned some Excel formulas that used to be very difficult for me. That makes me feel like I’m still moving forward, even if it’s slow.

My skills / habits:

• Doing household chores

• Basic cooking (I don’t always follow measurements, but it turns out okay)

• Helping others in any way I can, as long as I’m able.

• Admin tasks (except those requiring mental math)

• Creating simple motivational content

Sometimes I wonder—are these skills enough to live a good life, even if I struggle with critical thinking or comprehension?

I don’t see myself as smart or talented like others. I feel like I just have a good heart—and I’m just being me.

Also, I want to be honest: I used AI to help me organize and correct my thoughts here. It feels a bit embarrassing, but this is my way of expressing myself and helping others understand my situation.

If anyone can relate or has advice, I’d really appreciate it if you could comment or message me. I’m open to learning techniques or ways to improve.

Thank you for reading.

r/AskMen Material-Air2118

What are your physical pleasures ?

Besides sex, what are the simple physical pleasures that bring you self-fulfilment?

Mine is personally the skin to skin hug with my wife.

r/Futurology Candid-Cheek-3353

THE JENSEN CORPUS: A Complete Guide for Humanity 38 Papers That Could Change Everything

THE JENSEN CORPUS
A Complete Guide for Humanity
38 Papers That Could Change Everything

Dr. Brent Allen Jensen
Independent Researcher • 2026
Published via CERN / Zenodo

A Letter to the Reader

Imagine discovering that every river on Earth, every sand dune, every reef, every galaxy cluster, every economy, every living cell, and even human consciousness itself all operate by the same hidden rule — one elegant mathematical principle humming beneath everything.

That is the claim at the heart of the Jensen Corpus.

Over 2026, I published 38 peer-deposited papers on Zenodo that span physics, biology, economics, genomics, consciousness, AI, nuclear energy, geopolitics, climate science, and healthcare reform. Each stands alone, but together they form the Jensen Hendecology and the Jensen Resonator Cascade — a unified theory of reality.

This guide explains every paper in plain language. No PhD required.

PART ONE: The Big Idea — The Jensen Resonator Cascade

The universe has a favorite ratio: Rj = 4.95 ± 0.80 (roughly 5:1).

Wherever self-organizing systems exist, patterns form at ~4–6× the system’s characteristic width. This shows up in river spacing, galaxy clusters, brainwaves, financial cycles, DNA structure, protein folding, and ecosystems.

It’s not coincidence. It’s standing wave resonance in bounded cavities. The universe plays the same “note” everywhere a wave-like process is confined — whether by canyon walls, cell membranes, skull bones, market rules, or the early cosmos.

PART TWO: Earth, Water, Wind, and Stone

Paper 1: The Universal Pressure Wave Principle (DOI: 10.5281/zenodo.19275208)
Unifies sand dunes, river pool-riffle sequences, submarine canyons, beach bars, and more under topographically-locked standing pressure waves.

Papers 10 & 11: The Geomorphological Laser Principle + The Topographic Pressure Wave Hypothesis (DOIs: 10.5281/zenodo.19275743 | 10.5281/zenodo.19274709)
Landscapes act like lasers — resonant cavities that self-organize over time.

PART THREE: Life — Biology, Genetics, and the Living Laser

Paper 2: The Biological Resonator Principle (DOI: 10.5281/zenodo.19276138)
Embryonic development, brainwaves, and predator-prey cycles are all resonance phenomena.

Paper 4: The Genomic Resonator Principle (DOI: 10.5281/zenodo.19276716)
DNA is a resonator; “junk” DNA maintains resonance architecture.

Paper 5: The Proteomic Resonator Principle (DOI: 10.5281/zenodo.19276919)
Protein folding is resonance-guided collapse — new path for drug design.

Paper 6: The Chemical Resonator Principle (DOI: 10.5281/zenodo.19277201)
Oscillating reactions are the default in bounded chemical systems.

PART FOUR: The Cosmos

Paper 7: The Cosmic Resonator Principle (DOI: 10.5281/zenodo.19277448)
The cosmic web was seeded by acoustic oscillations in the early universe plasma.

Paper 9: The Universal Resonator Principle (DOI: 10.5281/zenodo.19302165)
Capstone: ALL self-organizing systems (rivers → brains → galaxies → economies) follow the same resonance rule.

Paper 8: The Consciousness Resonator Principle (DOI: 10.5281/zenodo.19302046)
Consciousness = stable standing waves in the brain’s resonant cavity.

PART FIVE: Markets, Money, and the Economic Resonator

Paper 3: The Economic Resonator Principle (DOI: 10.5281/zenodo.19276411)
Booms, busts, and crashes are resonance phenomena.

Paper 17: The Iran War Economic Resonance Cascade (DOI: 10.5281/zenodo.19365359)
Models the global shock of Strait of Hormuz closure.

PART SIX: The Particle Accelerator That Proved It

Paper 16: The LHC Run 3 Luminosity Acceleration Phenomenon (DOI: 10.5281/zenodo.19364850)
CERN’s LHC is behaving as a resonator — higher luminosity via resonance amplification.

Paper 20: Implementation of the Universal Resonator Prior (DOI: 10.5281/zenodo.19380976)
Turns the principle into a computable Bayesian prior for AI and simulations.

PART SEVEN: Artificial Intelligence — The Thermodynamic Ceiling

Paper 34: The Thermodynamic Impossibility of AGI Scaling Without Universal Resonance Constraints (DOI: 10.5281/zenodo.19474407)
Current scaling hits a hard physical wall. True AGI requires resonance, not brute compute.

PART EIGHT: Medical Breakthroughs

Papers 21 & 22: Jensen Sialic-Acid Pocket Convergence + The Dawn of a Universal RNA Viral Defense (DOIs: 10.5281/zenodo.19425765 | 10.5281/zenodo.19425847)
One broad-spectrum antiviral for influenza, coronaviruses, RSV, etc.

Paper 23: A Hypothesis-Generating Integrated Gene Circuit for Systemic Indefinite Cellular Rejuvenation (DOI: 10.5281/zenodo.19425916)
Theoretical blueprint to counter all major aging processes simultaneously.

Paper 24: Jensen FosX-MutS Resistance Synergism (DOI: 10.5281/zenodo.19432467)
New target for breaking antibiotic resistance.

Paper 25: Cis-NonPro Peptides (DOI: 10.5281/zenodo.19432565)
Overlooked molecular switches in proteins.

Papers 26 & 27: Jensen Ensembl Synonymous-Conservation Paradox + Jensen Prochlorococcus Metabolic Decoupling (DOIs: 10.5281/zenodo.19433088 | 10.5281/zenodo.19433160)

Papers 30-33: Batteries, Superconductivity, Nitrogen Fixation, and Energy (DOIs: 10.5281/zenodo.19433700 | 19433893 | 19433925 | 19433974)
Lithium-metal batteries, room-temp superconductivity, ambient nitrogenase, etc.

PART NINE: Climate, Ecology, and the Validation Gap

Paper 28: The Jensen Divergence — Quantifying the Climate Science Validation Gap (DOI: 10.5281/zenodo.19433584)
Paper 29: The Jensen-Macrobiological Divergence (DOI: 10.5281/zenodo.19433675)

PART TEN: Energy for the Future — The Phoenix Protocol Series

Paper 12: The Phoenix Protocol (DOI: 10.5281/zenodo.19321329)
Turn 400,000 tons of nuclear waste into thousands of years of clean energy.

Papers 13-15: From Arsenal to Abundance, The Great Liberation, The Convergence Initiative (DOIs: 10.5281/zenodo.19321588 | 19321755 | 19321834)

PART ELEVEN: A Nation Reborn — American Policy Papers

Paper 35: A Framework for Enduring Peace — Ten Principles for Iran and Global Stability (DOI: 10.5281/zenodo.19477727)
Paper 36: Restoring the American Healthcare System (DOI: 10.5281/zenodo.19477812)
Paper 37: Eliminating the National Debt (DOI: 10.5281/zenodo.19477872)
Paper 38: A National Strategy to End the Housing and Homelessness Crisis (DOI: 10.5281/zenodo.19477953)

PART TWELVE: Academic Reform

Paper 21: The Dismantling of the Ivory Tower (DOI: 10.5281/zenodo.19404530)
Peer review now enforces orthodoxy. We need parallel institutions that can accept paradigm shifts while the old guard is still here.

CONCLUSION

Rivers. Brains. Genes. Galaxies. Markets. Consciousness. All playing the same note.

The experiments will decide if this becomes a foundation or a footnote — but these questions are worth asking. The universe has been trying to tell us something. I’m listening.

APPENDIX: Complete Paper Index (All 38 DOIs)
(Every paper is live on Zenodo — click, read, cite, discuss. Help get them indexed.)

The Jensen Resonator Cascade — Core Physics
10.5281/zenodo.19275208 — The Universal Pressure Wave Principle
10.5281/zenodo.19276138 — The Biological Resonator Principle
10.5281/zenodo.19276411 — The Economic Resonator Principle
10.5281/zenodo.19276716 — The Genomic Resonator Principle
10.5281/zenodo.19276919 — The Proteomic Resonator Principle
10.5281/zenodo.19277201 — The Chemical Resonator Principle
10.5281/zenodo.19277448 — The Cosmic Resonator Principle
10.5281/zenodo.19302046 — The Consciousness Resonator Principle
10.5281/zenodo.19302165 — The Universal Resonator Principle
10.5281/zenodo.19275743 — The Geomorphological Laser Principle
10.5281/zenodo.19274709 — The Topographic Pressure Wave Hypothesis

Energy & Policy — Phoenix Protocol Series
10.5281/zenodo.19321329 — The Phoenix Protocol
10.5281/zenodo.19321588 — From Arsenal to Abundance
10.5281/zenodo.19321755 — The Great Liberation
10.5281/zenodo.19321834 — The Convergence Initiative

Applied Physics & Engineering
10.5281/zenodo.19364850 — The LHC Run 3 Luminosity Acceleration Phenomenon
10.5281/zenodo.19365359 — The Iran War Economic Resonance Cascade
10.5281/zenodo.19365706 — The Whipple Pressure Wave Resonator
10.5281/zenodo.19380976 — Implementation of the Universal Resonator Prior

Institutional Reform
10.5281/zenodo.19404530 — The Dismantling of the Ivory Tower

Biomedical Research
10.5281/zenodo.19425765 — Jensen Sialic-Acid Pocket Convergence
10.5281/zenodo.19425847 — The Dawn of a Universal RNA Viral Defense
10.5281/zenodo.19425916 — A Hypothesis-Generating Integrated Gene Circuit for Systemic Indefinite Cellular Rejuvenation
10.5281/zenodo.19432467 — Jensen FosX-MutS Resistance Synergism
10.5281/zenodo.19432565 — Cis-NonPro Peptides
10.5281/zenodo.19433088 — Jensen Ensembl Synonymous-Conservation Paradox
10.5281/zenodo.19433160 — Jensen Prochlorococcus Metabolic Decoupling (PMD) Event
10.5281/zenodo.19433584 — The Jensen Divergence: Quantifying Climate Science Validation Gap
10.5281/zenodo.19433675 — The Jensen-Macrobiological Divergence
10.5281/zenodo.19433700 — Revolutionizing High-Voltage Lithium-Metal Batteries
10.5281/zenodo.19433893 — The Quest for Room-Temperature Superconductivity at Ambient Pressure
10.5281/zenodo.19433925 — Jensen NHC-Scaffolded Ambient Nitrogenase
10.5281/zenodo.19433974 — Jensen Prochlorococcus Metabolic Decoupling (PMD) Event [v2]

Artificial Intelligence
10.5281/zenodo.19474407 — The Thermodynamic Impossibility of AGI Scaling Without Universal Resonance Constraints

Geopolitics & American Policy
10.5281/zenodo.19477727 — A Framework for Enduring Peace: Ten Principles for Iran and Global Stability
10.5281/zenodo.19477812 — Restoring the American Healthcare System
10.5281/zenodo.19477872 — Eliminating the National Debt
10.5281/zenodo.19477953 — A National Strategy to End the Housing and Homelessness Crisis

r/LifeProTips VanshikaWrites

LPT: When applying for a job, save the job description as a PDF. Companies often take the listing down during the interview process, and having it lets you review the exact qualifications and responsibilities beforehand.

r/ClaudeCode NecessaryLeg6097

Anyone know how to uninstall Claude Code in terminal? Standard command not working

I used the "npm uninstall -g @anthropic-ai/claude-code" command but it doesn't work. It says "up to date in xxx ms". But then when I type "claude --version" it still shows the version, meaning it didn't delete it.

Thoughts? I also have the desktop app. Could that have something to do with it?

r/ClaudeAI Alone_Store5627

I built a free Cowork skill that auto-fetches and analyzes any company's earnings report

Got tired of manually reading through earnings reports and pulling numbers. Built a Claude Cowork skill that does it automatically.

Type any ticker. It fetches the latest earnings report from the company IR page, searches for analyst consensus estimates, extracts every key metric, CEO quotes, catalysts, and generates four outputs:

  1. Professional dark-theme image card with estimates vs actuals and beat/miss percentages

  2. Segment and client revenue breakdown card

  3. Plain text caption ready to copy-paste

  4. Full detailed research summary

Works for any public company. Tested with COST, APLD, CIFR, and others.

Free. MIT license.

GitHub: https://github.com/PSInvestor/psi-earnings

Download the .skill file and upload it in Cowork under Customize > Skills.

Happy to take feedback. Built by u/PSInvestor on X.

r/ClaudeCode Hungry_Management_10

MCP servers keep disconnecting during long sessions. Anyone else?

Running several MCP servers (SSE transport) with Claude Code. They disconnect randomly during long sessions and I have to manually reconnect with /mcp.

Is this a Claude Code issue or MCP server side? Anyone found a stable setup?

r/Art mastergardnr

No Control, Cat Daddy, Collage/digital, 2026

r/ClaudeAI GaryWert

[Help] Skill or Project to replicate tone of voice?

A talented comms person is leaving the work team, and we're going to be blocked by cost from replacing them. I'm trying to work out how to capture their tone of voice and comms style from all the good social posts/emails/docs they've previously produced, and bake that into Claude to help shape future prompts.

Am I building a skill and loading up examples into .md files it can reference, or am I building a project and dumping files into that? I'm struggling to identify when to use the right tool.

Thanks in advance.

r/comfyui Longjumping-Leg-6385

What's the best cloud today?

For running ComfyUI? For running heavy workflows, what would be a good configuration?

r/singularity EmbarrassedRing7806

Frontier LLMs are better at coding than nearly any other domain. SWEs are disproportionately at risk compared to other high earning fields. Why?

Look no further than Mythos. It didn’t solve any unsolved major problems in mathematics or anything like that. Instead, it scoured absurd amounts of complex code and found bugs/exploits that no human (or AI) has found in years. Decades in some cases. This is the most impressive LLM use case to date imo.

And yes, I know we’ve seen Erdos problems being solved. But many of these are niche, inconsequential problems that most people did not care about, nor were they trying to solve. It’s cool and tells us that LLMs are quite good at math, but the Mythos result is crazy because extremely capable humans/AIs have DEFINITELY been spending years trying to find these bugs. They failed. Mythos didn’t. And I’m sure Anthropic tested Mythos on the juicier math/physics/etc. stuff, and we aren’t hearing about the successes for a reason.

So, I think there’s good reason to believe that LLMs are particularly good at coding. The question: **is this due to the way the field has progressed so far or is it an inevitable result of how LLMs work?**

One could draw a few very different conclusions from this

- “automating SWE is just the best first step. More immediate profits, doesn't require embodiment, plenty of rich data out there, and contributes to RSI. They’ve simply focused on this more and so duh it’s better. It’ll catch up elsewhere.”

Or, quite differently..

- “programming is just a uniquely LLM-ripe task tbh. programmers are in awe at how good LLMs are at it and are overhyping everything else because they assume LLMs will be just as good in those domains.”

r/AlternativeHistory morelek337

Moon, Freemasons - sounds crazy, please prove it is

Just yesterday I began researching freemasonry.

I learnt they have a few calendars.

One is Anno Lucis. It is the "Year the Light [began]". It is 6026.
Oddly, the first written remarks about the moon are from 6,000 years ago.
And even more disturbingly, flood myths across cultures are also placed 6,000 years ago.

There are theories about the moon being a soul recycler or something. I have no idea where they got that idea from. But having stumbled upon that weird idea, and learning about "Anno Lucis" and its year being in line with the first remarks of the moon and the floods... it's extremely, unbearably uncanny.

r/findareddit 100justengineer

What is the best subreddit if I want to arrange a schedule for $5 per week? I want your answer.

.

r/SweatyPalms Dazzling-Audience-72

Flipping a blade of mine, been doing this a while.

Yes, I am aware this is not a safe activity or pastime, but I enjoy it. Other than this, I handle all weaponry as it should be handled: with care, with respect, and as a deadly weapon that can do harm. DO NOT try this.

r/ollama Uiqueblhats

Alternative to NotebookLM with no data limits

NotebookLM is one of the best and most useful AI platforms out there, but once you start using it regularly you also start to feel its limitations.

  1. There are limits on the number of sources you can add to a notebook.
  2. There are limits on the number of notebooks you can have.
  3. You cannot have sources that exceed 500,000 words and are more than 200MB.
  4. You are vendor locked in to Google services (LLMs, usage models, etc.) with no option to configure them.
  5. Limited external data sources and service integrations.
  6. NotebookLM Agent is specifically optimised for just studying and researching, but you can do so much more with the source data.
  7. Lack of multiplayer support.

...and more.

SurfSense is specifically made to solve these problems. For those who don't know, SurfSense is an open source, privacy-focused alternative to NotebookLM for teams, with no data limits. It currently empowers you to:

  • Control Your Data Flow - Keep your data private and secure.
  • No Data Limits - Add an unlimited amount of sources and notebooks.
  • No Vendor Lock-in - Configure any LLM, image, TTS, and STT models to use.
  • 25+ External Data Sources - Add your sources from Google Drive, OneDrive, Dropbox, Notion, and many other external services.
  • Real-Time Multiplayer Support - Work easily with your team members in a shared notebook.
  • Desktop App - Get AI assistance in any application with Quick Assist, General Assist, Extreme Assist, and local folder sync.

Check us out at https://github.com/MODSetter/SurfSense if this interests you or if you want to contribute to an open source project.

r/ChatGPT Mountain-Will5373

It is a girl (chatgpt) 😊😊

r/SipsTea SnackSamurai

Trend Winner

r/LocalLLaMA Constant_Ad511

Advise on hardware next steps

I currently have 2xRTX Pro 6000s (The 5090 founder coolers) in a normal pc case on an AM5 platform, Gen 5 8x for each card. And 96GB of DDR5 ram (2x48GB).

It’s got great performance on MiniMax level models, and I can take advantage of NVFP4 in vllm and SGLANG.

Now, my question is, if I want to expand the capabilities of this server to be able to serve larger sized models at good quality, usable context window, and production level speeds, I need to have more available VRAM, so as I see it, my choices are:

Get 4- or 8-channel DDR4 ECC on an EPYC system and add 2 more RTX Pro 6000s.

Or wait for the M5 Ultra to come out and potentially get 512 GB of unified memory to expand local model capabilities.

Or source a Sapphire Rapids system to try KTransformers and suffer the even crazier DDR5 ECC memory costs.

Which one would you pick if you were in this situation?

r/SideProject DeckardGer

I'm building a universal bookmarking app so you never lose track of anything you save online again

I'm a software engineer and I have a bookmarking problem. I bet you do too.

That recipe you saved on Instagram three months ago? Gone. The Reddit thread that perfectly explained that thing you needed? Buried. The Twitter thread breaking down a concept you wanted to revisit? Good luck scrolling through thousands of bookmarks to find it.

We save stuff constantly across dozens of platforms — Twitter, Reddit, Instagram, TikTok, YouTube, articles, random images from Google — and it all disappears into separate, unsearchable voids. Every app has its own bookmarks, its own saved folder, and none of them talk to each other.

So I'm building **Stashr** — a universal bookmarking app that brings everything into one place.

**The vision is simple:** anything you save, anywhere online, lives in Stashr. Social media bookmarks, articles, images, videos, text snippets, whatever. One library for everything.

**It stays in sync automatically.** You don't have to remember to export or manually import anything. Stashr connects to your accounts and keeps everything up to date in the background. Save a post on Twitter, bookmark a reel on Instagram, save a comment on Reddit — it shows up in Stashr within seconds, no extra steps. Just keep using the platforms you already use, and Stashr quietly captures everything.

**What makes it actually useful is the AI layer.** Stashr automatically tags and categorizes everything you save, so you can find it the way you actually think about it — not by remembering which platform you saved it on three months ago.

Some examples:

- Search **"that pasta recipe with the crispy garlic"** and find the Instagram reel you saved in November

- Search **"startup advice about pricing"** and pull up the Twitter thread, the Reddit comment, AND the blog article you saved weeks apart

- Search **"funny dog video on the couch"** and find the TikTok you wanted to send your friend

- Tag a bunch of saves as **"home renovation"** and have an instant mood board pulled from five different platforms

**Where it's at right now:**

- Browser extension that automatically syncs your bookmarks from Twitter, Reddit, Instagram & TikTok — with more platforms coming

- Real-time capture (save something, it appears in Stashr instantly) + bulk import for everything you've already saved

- AI-powered search and tagging so you actually find things again

This is just the start. The goal is **every platform, every website, every type of content** — the app that makes "I saved it somewhere" a thing of the past.

I'm opening up the waitlist now. If you've ever lost track of something you saved online, this is for you:

👉 **stashr.me**

I'm building this solo as an indie hacker and happy to answer any questions about the product, the tech, or the journey.

r/SipsTea Upstairs_Building686

Is he a millionaire?

r/SipsTea clairedy22

am I next 💀 lol

r/ClaudeAI farhadnawab

anthropic managed agents vs building my own

checked out the new claude managed agents thing today.

not having to handle all the infra for agents sounds pretty good.

i've been building my own for a while and keeping track of state is usually a huge pain.

if this actually scales well and handles the handoffs, it would save me a ton of work.

i’m mostly curious about how much control they actually give you over the underlying prompts.

is anyone else looking into this yet?

wondering if it’s worth switching from something like langgraph.

r/onejob More-Explanation2032

Why is this still being advertised

r/ARAM Caffeine_and_Alcohol

Lukewarm take: dodgers should get a week-long penalty. Zero reason to take seven queues to start an ARAM.

r/ChatGPT Mountain-Will5373

Finally ads started in chatgpt 🤣🤣🤣

r/ClaudeAI Beneficial_Elk_9867

Managed Agents launched today. I built a Slack relay, tested it end-to-end. Here's what I found.

Managed Agents dropped a few hours ago. I had been reading the docs ahead of time, so I built a full Slack relay right away - Socket Mode listener, session-per-channel management, SSE streaming, cost tracking via span events. Tested multi-turn conversations, tool usage, session persistence. Wanted to share what I found.

The prompt caching is genuinely impressive. My second session cost $0.006 because the system prompt and tool definitions were served from cache automatically. API design is clean. The SDKs work. For simple task execution, it's solid infrastructure.

The thing that surprised me most is that the containers have no inbound connectivity. There's no public URL. The agent can reach out (web search, fetch, bash), but nothing can reach in. It can't serve a web page, can't receive a webhook, can't host a dashboard, can't expose an API. It's essentially Claude Code running in Anthropic's cloud - same tools, same agent loop, just in a managed container instead of your terminal. The agent is something you invoke, not something that runs.

Cold start is about 130 seconds per new session, so for anything interactive you need to keep sessions alive. Memory is in "research preview" (not shipped yet), so each new session starts fresh. Scheduling doesn't exist - the agent only responds when you message it. The agent definition is static, so it doesn't learn from corrections or adapt over time.
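Given that cold start, a relay like this has to reuse sessions per channel. Below is a minimal sketch of that routing layer; `create_session` stands in for whatever your SDK uses to start a managed-agent session, and all names are illustrative, not the Managed Agents API:

```python
import time

class SessionRouter:
    """Route each chat channel to one reusable agent session.

    Illustrative sketch only: the point is the reuse logic, which keeps
    warm sessions alive so you pay the cold start (and lose the prompt
    cache) as rarely as possible.
    """

    def __init__(self, create_session, ttl_seconds=600):
        self.create_session = create_session
        self.ttl = ttl_seconds
        self._sessions = {}  # channel_id -> (session, last_used_timestamp)

    def get(self, channel_id, now=None):
        now = time.time() if now is None else now
        entry = self._sessions.get(channel_id)
        if entry is not None and now - entry[1] < self.ttl:
            session = entry[0]  # warm session: no cold start
        else:
            session = self.create_session(channel_id)  # pays the ~130 s cold start
        self._sessions[channel_id] = (session, now)
        return session
```

Keeping the TTL below whatever idle limit the platform enforces means most messages hit a warm session and the prompt cache keeps doing its work.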

If you used Cowork, you know agents benefit from having their own interface. Managed Agents solves the compute problem by moving to the cloud, but there's no UI layer at all. And unlike memory and multi-agent (both in research preview), inbound connectivity isn't on the roadmap.

I should be transparent about my perspective. I maintain two open-source projects in this space - Phantom (ghostwright/phantom), an always-on agent with persistent memory and self-evolution, and Specter (ghostwright/specter), which deploys the VMs it runs on. Different philosophy from Managed Agents, so I came into this with opinions. But I was genuinely curious how they'd compare.

For batch tasks and one-shot code generation, the infrastructure advantages are real. For anything where the agent needs to be a persistent presence - serving dashboards, learning over time, waking up on a schedule - the architecture doesn't support it.

Curious what others are seeing. Has anyone deployed it for a real use case yet? How are you handling the lack of persistent memory? Is anyone running always-on agents on their own infrastructure?

r/ClaudeAI Any_Page_3227

A Claude memory retrieval system that actually works (easily) and doesn't burn all my tokens

TL;DR: By talking to Claude and explaining my problem, I built a very powerful local "memory management" system for Claude Desktop that indexes project documents and lets Claude automatically retrieve relevant passages buried inside those documents during Cowork sessions. For me it solves the "document memory" problem where tools like NotebookLM, Notion, Obsidian, and Google Drive can't be queried programmatically. Claude did all of it. I didn't have to really do anything.

The description below includes plenty of things that I don't completely understand myself. The key thing is just to explain to Claude what the problem is (which I describe below) and what your intention is, and Claude will help you figure it out. It was very easy to set this up, and I think it's better than anything I've seen any YouTuber recommend.

The details:

I have a really nice solution to the Claude external memory/external brain problem that lots of people are trying to address. Although my system is designed for one guy using his laptop, not a large company with terabytes of data, the general approach I use could be up-scaled just with substitution of different tools.

I wanted to create a Claude external memory system that is connected to Claude Co-Work in the desktop app. What I really wanted was for Claude to proactively draw from my entire base of knowledge for each project, not just from the documents I dropped into my project folder in Claude Desktop.

Basically, I want Claude to have awareness of everything I have stored on my computer, in the most efficient way possible (Claude can use lots of tokens if you don't manage the "memory" efficiently. )

I've played with Notion and Google Drive as an external brain. I've tried NotebookLM. And I was just beginning to research Obsidian when I read this article, which I liked very much and highly recommend:

https://limitededitionjonathan.substack.com/p/stop-calling-it-memory-the-problem

That got my attention, so I asked Claude to read the document and give me his feedback based on his understanding of the projects I was trying to work on.

Claude recommended using SQLite to connect to structured facts, an optional graph to show some relationships, and .md files for instructions to Claude.

But...I pointed out that almost all of the context information I would want to be retrievable from memory is text in documents, not structured data.

Claude's response was very helpful. He understood that although SQLite is good at single-point facts, document memory is a different challenge. For documents, the challenge isn't storing them—it's retrieving the right passage when it's relevant without reading everything (which consumes tokens). SQLite can store text, but storing a document in a database row doesn't solve the retrieval problem. You still need to know which row to pull.

I asked if NotebookLM from Google might be a better tool for indexing those documents and making them searchable.

Claude explained that what I was describing is a Retrieval-Augmented Generation (RAG) problem. The standard approach:

Documents get chunked into passages (e.g., 500 words each)

Each chunk gets converted to an embedding—a vector that captures its meaning

When Claude needs context, it converts the query to the same vector format and finds the semantically closest chunks

Those chunks get injected into the conversation as context

This is what NotebookLM is doing under the hood. It's essentially a hosted, polished RAG system.
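The four steps above can be sketched in plain Python. A toy bag-of-words vector stands in here for the real embedding model (in practice you'd call an embedding API); only the chunk/embed/retrieve flow is the point:

```python
import re
from math import sqrt

def chunk(text, max_words=500):
    """Step 1: split a document into ~max_words-word passages."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def embed(text):
    """Step 2 (toy stand-in): word-count vector instead of a learned embedding."""
    vec = {}
    for w in re.findall(r"[a-z']+", text.lower()):
        vec[w] = vec.get(w, 0) + 1
    return vec

def cosine(a, b):
    """Similarity between two sparse vectors."""
    dot = sum(v * b.get(k, 0) for k, v in a.items())
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, top_k=3):
    """Steps 3-4: embed the query, return the closest chunks for context."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:top_k]
```

A hosted system swaps `embed` for a real model and the sorted list for a vector database, but the retrieval loop is the same shape.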

NotebookLM is genuinely good at what it does—but it has a fundamental problem for my case: It's a UI, not infrastructure. You use it; Claude can't. There's no API, no MCP tool, no way to have Claude programmatically query it during a Co-Work session. It's a parallel system, not an integrated one.

So NotebookLM answers "how do I search my documents as a human?"—not "how does Claude retrieve the right document context automatically?"

After a little back and forth, here's what we decided to do.

For me, a solo operator with only a laptop's worth of documents that need to be searched, Claude proposed a RAG pipeline that looks like this:

My documents (DOCX, PDF, XLSX, CSV)

Text extraction (python-docx, pymupdf, openpyxl)

Chunking (split into ~500 word passages, keep metadata: file, folder, date)

Embedding (convert each chunk to a vector representing its meaning)

A local vector database + vector extension (store chunks + vectors locally, single file)

MCP server (exposes a search_knowledge tool to Claude)

Claude Desktop (queries the index when working on my business topics)

With that setup, when you're talking to Claude and mention an idea like "did I pay the overdue invoice" or "which projects did Joe Schmoe help with," Claude searches the index, gets the 3-5 most relevant passages back, and uses them in its answer without you doing anything. We decided to develop a search system like that, specific to each of my discrete projects. The practical setup would be:

I point the indexer at my folder full of project files. Each large project gets an index in its own "partition". Claude searches the relevant index when I'm working on that project.

Small projects that only have a small handful of reference documents stay as direct uploads in Claude, and no index is needed.

Here is the architecture:

Indexing Script: A Python script, created by Claude Code in just a few moments, that you run from the command line to start the indexing process.

It walks through the folder, extracts text from DOCX/PDF/XLSX/CSV/TXT, chunks it into ~500-word passages, generates embeddings, and stores everything in ChromaDB (free). Run it once to build the index, then again whenever you add significant new documents.

ChromaDB (local vector database) A Python package that persists to a folder on your machine—no server, no installation beyond pip install (Claude knows what that is. I didn't). Each project gets its own "collection" (like a named partition). All projects share one ChromaDB folder.

Embeddings via OpenAI API Each text chunk gets converted to a vector using OpenAI's text-embedding-3-small model. This makes retrieval semantic rather than keyword-based. Estimated cost to index all of My Big Project folder: under $2.00 total. Queries are fractions of a cent each.

MCP Server: A small Python script that runs as a local server, connected to Claude Desktop via your settings.json.

Claude handled all of this with no action required from me.

To make this happen, I needed to have Python installed (free), and I decided to use OpenAI to index everything instead of a local solution (Ollama). It's very inexpensive to use OpenAI just for indexing.

I'm not too excited about having an external dependency on OpenAI, but I'm going to see how it works, and I'll switch to Ollama if I need to.

A nice aspect of this approach is that I can easily duplicate this capability for other large projects.

Claude, of course, did all the heavy lifting and walked me through the whole process step by step, including how to index all the documents. We ran into problems here and there while trying to get it built, but Claude methodically worked through all of them.

It was up and running in no time (with no help from me. I know nothing about coding beyond being able to open the Terminal and paste the commands), and I was able to test it by asking Claude some questions about data I knew to be in the folder. I asked a question that actually required associating two disparate pieces of data, and it did that—with very nice context.

I added another capability to round out the whole memory management picture. I worked with Claude Code to develop a skill that I then gave to Claude Cowork. The skill is initiated when I'm finished for the day: I tell Claude "I'm finished for the day," and when Claude hears that, he looks at all of the different conversation threads from that day, summarizes them into a brief daily rollup document (what was decided, what actions are still open, and anything else that's noteworthy), saves that as a markdown file, and sends it to a particular folder for those reports, which are also indexed.
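The rollup step is simple enough to picture as a script. This is not the actual Cowork skill (that lives as natural-language instructions Claude follows); the filename pattern and structure below are my own assumptions, just to show the kind of file it produces:

```python
import datetime
import pathlib

def daily_rollup(thread_summaries, out_dir):
    """Write one dated markdown file summarizing the day's threads.

    thread_summaries: list of (thread_title, summary_text) pairs.
    Illustrative only; names and layout are hypothetical.
    """
    today = datetime.date.today().isoformat()
    lines = [f"# Daily rollup {today}", ""]
    for title, summary in thread_summaries:
        lines.append(f"## {title}")
        lines.append(summary)
        lines.append("")
    path = pathlib.Path(out_dir) / f"rollup-{today}.md"
    path.write_text("\n".join(lines), encoding="utf-8")
    return path
```

Because the output lands in an indexed folder, each day's decisions become retrievable the same way the project documents are.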

With those two capabilities, I have very rapid, pretty deep access to all of my numerous reference documents in multiple formats, and also to all of my prior conversations with Claude.

So I think this is the solution to the memory problem for me using Claude Desktop and this bespoke document-indexing memory system. I'm very, very happy with this solution.

I hope this was useful to you.

r/personalfinance josue-RCH

Credit randomly dropped

My credit score randomly dropped 35 points, putting me under 700, and I don't know why. I've been making my payments on time, and I'm not supposed to be charged interest until October as part of the 12-month 0% APR. Is this fraud related?

r/DecidingToBeBetter Ok_Jackfruit_6698

How to be less intense as an introvert?

I'm an INFP and a highly sensitive introvert. But I feel things in an intense fashion.

As in, I've been feeling furious and frustrated for the past month. I took a 10-day break from work and I felt better. I was back in the office yesterday and I feel so angry again. The reasons include feelings of abandonment, unfairness, and rejection from people I thought were my friends. I feel so anxious, as if I'm high on adrenaline. It's stressing me out and I want to stop feeling this way.

Any helpful tips please.

r/personalfinance Reasonable-Egg6268

Got a check from my 401k in the mail. Is it a good idea to deposit it to my HYSA

This 401k was from when I was working on campus during college. The check came out to $310 with taxes already taken out. I'm not sure if I should just roll it over to my Roth 401k, or if it's better to deposit it to my HYSA, since I will have to pay taxes on it, like $31 (idk how this works :( ). Thanks for the info 💗

r/CryptoMarkets UnusualReality1177

Hong Kong Web3 Festival 2026 looks stacked but do these events actually deliver?

Real question: I've been seeing a lot of noise around Hong Kong Web3 Festival 2026 lately, and yeah, it looks solid on paper with big names and a good mix of builders and funds. But what do people actually get from these events? Do you genuinely walk away with something useful, like real connections, insights you won't get online, or spotting trends early? Or is it mostly quick chats and "let's connect" moments that don't really go anywhere? Not hating, just trying to understand the actual ROI here. Anyone been to similar events or planning for this one? Worth it or nah?

r/painting RiverMarketEagle

Mission ARTemis

6 x 6 acrylic

r/PhotoshopRequest -Newrappy

Could someone please remove the logos at the top of this image?

I really like this image and I am using it as a wallpaper but I would love if someone could remove the logos for a cleaner look! TIA!

r/AskMen jaynotbird

Men of Reddit, what is your opinion on eloping? Would you prefer that to an actual wedding?

I don't have a significant other, but since I was a child I used to fantasize about eloping. I think weddings are a scam. Even with the bare minimum, just a reception, family only (I have a big family), I have to feed them all and that's gonna be a couple thousand. I need a venue. Gonna be a couple thousand. And I think that's worth it for someone who cares about it, but I simply don't. I think it's the epitome of romance to be able to get married without all the jazz, on a random Tuesday. Like, there's something romantic about being so in love that you skip the superficial entirely. But anyway, I've been operating under the assumption that my future husband doesn't care, so I'm just wondering what the thoughts of some of the male population actually are.

Edit: Yes, I don't really know what elopement is. Sorry. Hopefully you could tell that I meant just not having a wedding entirely, like literally signing the papers on a random Tuesday with no fanfare and no one in attendance. If you came here wanting to give your opinion on actual eloping, by all means do so, because I'm still getting married and not wanting my parents there (they're abusive).

r/OldSchoolCool SappyGilmore

Tatyana Ali and Karyn Parsons on the set of The Fresh Prince of Bel-Air (1990)

r/homeassistant LithiumCobalt91

Found out I can make an iPad 1 a HA Dashboard and HA Speaker

Found out about it accidentally

So I saw on r/homeassistant this post on how u/edgylukas managed to make an MQTT Dashboard. Since I have a jailbroken first gen iPad and an iPod Dock, I decided to install it.

And here it is!

Home Assistant MQTT Dashboard running on an iPad 1 on iOS 5.1.1 with an iHome ID38 Dock.

Not the prettiest, certainly not Lovelace, but it does work. It's a bit janky to configure at times, but with enough time and effort it works pretty well. Sidenote: this iPod Dock actually has an app called iHome Set, which allows you to configure settings. Even without it, the iPad just syncs its internet-controlled time and date straight to the dock anyway.

What about making the iPod Dock into a speaker for Home Assistant?

So I found a tweak for the iPad called AirSpeaker, which works on many older iOS versions and turns an iDevice into an AirPlay receiver. The repo of the tweak is long gone, but I found it here.
It turns out you can actually integrate AirPlay receivers into Home Assistant by using Music Assistant, then adding the Music Assistant integration to Home Assistant. Here's the result in my dashboard:

Tile card showing the iPad as "iPod Dock" on a dashboard

And yes, I'm able to control music and it works pretty seamlessly.

r/AskMen Nintendofan9106

What is the most adorable thing that a girl has ever done/said to you?

The title kinda says it all.

r/LocalLLaMA reg-kdeneonuser

Anyone tried LFM2.5-1.2B-Instruct-Q8 before? 109.9 t/s!!! And my PC is over 6 years old 😮

r/StableDiffusion Puzzleheaded_Link905

Workflow for Anima 3 Preview ?

Does anyone know a good workflow for Anima 3 Preview with an upscaler that doesn't drastically alter the style? I need to use the ClownsharkSampler.

r/PhotoshopRequest Remarkable_Energy341

What kind of place or situation do you imagine my cat is in?

You all have some amazing imaginations!

r/explainlikeimfive Tasty-Seaweed6705

ELI5 Why does your brain come up with the best ideas in the shower?

r/WTF Otherwise_Wrangler11

That slap means authority

r/PhotoshopRequest AshCrow1

Can you put a black snake hiding under the fridge to scare my colleague?

r/LocalLLaMA MrSilencerbob

I think my Gemma4 is having a breakdown

r/LocalLLaMA GWGSYT

Nothing ever happens

Unpopular opinion: Claude Mythos isn't doing magic. Drop GPT 5.2 Codex or Kimi 2.5 into a good enough agentic loop with full source code access, and they'll flag 20 critical bugs while you're getting coffee. Calling it 'too dangerous to release' is just a great cover story for 'too expensive to run.'

r/arduino Ratfus

Still Struggling - ATtiny (8 pin)/Atmel-ICE

Hi,

I'm still struggling to get my Atmel-ICE to read an ATtiny chip using Atmel Studio. I tried custom-making a PCB board but had no success. I then purchased a board to use/assemble, but the pins don't line up with the ICE's ground.

Anyone know what board I should get to develop ATtiny boards?

I'm trying to learn embedded.

r/Ghosts Few_Stable3472

Possible Ghost footage captured on CCTV at work?

r/ClaudeCode humanexperimentals

Claude helped in building my YouTube automation for commenting, posting, and many other things.

r/ClaudeAI subkid23

I built a plugin to use Claude through WhatsApp (with voice messages and everything)

With the whole OpenClaw situation this week I figured I'd share something I've been working on. For anyone who missed it: Anthropic cut off OpenClaw access from Claude Code subscriptions, and in response launched "Claude Code Channels" with official plugins for Telegram, Discord, and iMessage.

You know what's missing? WhatsApp. If you live in Latin America, Europe, India, Africa, basically anywhere outside the US, WhatsApp isn't "a messaging app." It's THE app. My mom sends me 3-minute voice messages on WhatsApp. She's not installing Telegram.

So I built an open source plugin that connects Claude Code to WhatsApp

You scan a QR code like when you set up WhatsApp Web, and that's it. You text Claude on WhatsApp and it replies right there. You can send voice messages and it transcribes them locally, your audio never leaves your machine. Supports files up to 50MB. Emoji reactions work as commands, like 👍 to approve actions and 👎 to reject. And it formats responses so they actually look right in WhatsApp, no broken markdown everywhere.
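The reactions-as-commands idea is easy to picture as a tiny dispatch table. A hypothetical sketch (the names here are mine, not the plugin's actual API):

```python
# Map incoming WhatsApp reactions to decisions on a pending agent action.
# Illustrative only; the real plugin's internals may differ.
REACTION_COMMANDS = {
    "\U0001F44D": "approve",  # thumbs up
    "\U0001F44E": "reject",   # thumbs down
}

def handle_reaction(emoji, pending_action):
    """Translate a reaction emoji into a decision on a pending action.

    Returns None for unrecognized reactions, which are simply ignored.
    """
    decision = REACTION_COMMANDS.get(emoji)
    if decision is None:
        return None
    return {"action": pending_action, "decision": decision}
```

The nice property of this design is that approvals happen in the same chat surface as the conversation, with no extra commands to remember.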

There's an access control system with pairing codes so random people can't just text your bot.

Since I know someone's going to ask: this doesn't replace OpenClaw. Different things. OpenClaw is a whole platform for autonomous agents with memory and tool access. This is way more focused, think of it as the WhatsApp equivalent of the official Telegram plugin, but with voice message support on top. The point is being able to talk to Claude from where you actually chat every day, voice messages and WhatsApp formatting included.

Repo can be found here

PRs, issues, and constructive roasts welcome.

r/OldSchoolCool Anxious-Diane

Liv Tyler (1997)

r/ProductHunters CarefulAd8887

I built an AI tool to solve my own SEO problem… now it’s live on Product Hunt

r/meme UsuallyComplicit

Telling time

r/LocalLLaMA Traditional-Silver16

Web search not working in Claude Code with local model

I am running Claude Code with glm-4.7-flash, and the web search option doesn't seem to be working. I am getting 0 results with different web search prompts.

Is this a currently known bug, or something related to Claude Code running with a local model?

r/LocalLLaMA Dragon_guru707

Wanted help selecting a local model for making a custom agent

I am working on building a custom agent for myself from scratch as a passion project, and I wanted a local LLM as a fallback. I'd like suggestions on which one to choose; I initially thought Mistral 7B or Qwen3.5 2B.

r/PhotoshopRequest BazzasBakery

Please Remove our reflection in the bumper

r/SideProject xer2

I built a tool that turns vague product ideas into detailed specs for AI coding agents

Been building this for a while and wanted to share. The problem: I kept feeding AI coding tools (Cursor, v0, Bolt) half-baked requirements and getting garbage back. Turns out the garbage in, garbage out problem is massive - studies show devs using AI are actually 19% slower because they spend all their time on rework from unclear specs.

So I built ClearSpec - you can chat with an AI PM, use a guided wizard, or paste meeting notes, and it generates structured specs with user stories, edge cases, security gaps, and acceptance criteria. The output is precise enough for humans to review and AI agents to execute.

It integrates with GitHub, Linear, Jira, and exports to Cursor rules, Claude Code, Markdown, and Notion.

Free during early access: https://clearspec.dev

Would love feedback from other builders.

r/ClaudeCode jv0010

I got tired of AI coding tools overthinking easy stuff and yoloing important stuff, so I made skillmaxxing

Lately I’ve been using a bunch of AI coding tools and kept running into the same problem:

they can feel insanely useful, but also weirdly random.

Sometimes they turn a tiny task into a full architecture rewrite.
Sometimes they rush straight into the one thing you wanted them to be careful with.

So I made skillmaxxing.

Works with: Codex, OpenCode, Claude Code, Cursor, Windsurf, Gemini CLI, Continue, and Aider.

It’s basically a portable setup for coding agents that helps them stay in the right lane depending on the phase of the task.

The idea is simple:

  • don’t overthink simple work
  • don’t rush risky work
  • don’t polish before proving anything
  • don’t switch styles halfway through for no reason
  • make it obvious when the agent is actually doing the right kind of work

It’s based on a mix of builder philosophies instead of one single “persona”:

  • Andrej Karpathy: first-principles clarity
  • Guillermo Rauch: product and UX clarity
  • Pieter Levels: fast shipping and validation
  • Swyx: AI-native leverage and reusable knowledge
  • Theo Browne: pragmatic production correctness
  • Amjad Masad: agent workflows and dev environment execution

So instead of one vague smart-sounding prompt, it gives the agent different modes for different moments.

Quick disclaimer: this was inspired by the original repo here.

If you vibe code a lot, or keep feeling like your AI tool is powerful but inconsistent, this might be exactly what you wanted.

Repo: skillmaxxing

r/SideProject simply__bot

Web tech frontend project ideas

Hey everyone, I’m an engineering student trying to build a mini web project, but I don’t want to make the usual stuff like to-do lists, calculators, or basic CRUD apps. I’m looking for ideas that are:

  • Actually useful for everyday people
  • Simple enough to build as a mini project
  • A bit unique or uncommon (something that stands out)

Some ideas I’ve thought about:

  • Lab report / prescription simplifier
  • Expense tracker with smart insights
  • Medicine reminder / daily life assistant

But I feel these are still kinda common. Would love to hear:

  • Unique or underrated project ideas
  • Real-world problems I can solve
  • Features that would make a simple project feel more “real”

Thanks 🙌

r/ChatGPT topor8865

The style of replies is UNBEARABLE

WHY does it keep replying like it’s talking to an OCD teenager? Incoherent, unreadable, annoying. I have set custom instructions asking for longer, coherent sentences, etc., and played with settings/preferences. It absolutely doesn’t care about what I prefer or want. I’ve called it out for this dozens of times, and asked it to remember that I despise this type of reply and want normal, coherent, structured replies. It apologizes, says it “won’t do it again”... and it takes exactly ONE REPLY for it to go back to doing it.

This never used to happen. Now? Just cannot make it reply in any other way. Infuriating.

r/LocalLLaMA Just-Ad-6488

Mamba 1 & 2 to Mamba 3 Architectural Upgrade

This repository contains the methodology and scripts to bypass training from scratch by structurally transplanting weights from the Mamba-1/Mamba-2 architectures directly into Mamba-3 gates.

It handles the mathematical misalignments between the generations and provides a two-phase structural recovery training pipeline capable of bringing the Mamba-3 model back to coherence within a strict 12GB VRAM envelope.

The Methodology

When transplanting a sequence block from Mamba 1 to Mamba 3, three critical mathematical mismatches must be resolved to prevent the model from outputting pure gibberish:

1. The [x, z] vs [z, x] Sequence Inversion

  • The Problem: Mamba-1's in_proj splits the dimension into the main branch (x) followed by the gating branch (z). Mamba-3 expects [z, x]. If the weights are blind-copied, the network's forward logic will be physically reversed.
  • The Solution: The mamba1_to_mamba3_converter.py script mathematically slices the in_proj weight matrices exactly at d_inner and inverts the upper and lower halves before injection.
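A minimal sketch of that slice-and-swap step (the function name and shapes are illustrative, not the converter script's actual code):

```python
import torch

def reorder_in_proj(weight: torch.Tensor, d_inner: int) -> torch.Tensor:
    # Mamba-1 lays out in_proj output rows as [x, z]; Mamba-3 expects [z, x].
    # Slice exactly at d_inner and swap the two halves along the output dim.
    x_rows, z_rows = weight[:d_inner], weight[d_inner:]
    return torch.cat([z_rows, x_rows], dim=0)
```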

2. Dimensionality Collapse (dt_bias, D)

  • The Problem: Mamba-1 scales the structural D (skip connection) and dt_bias across the entire sequence length. Mamba-3 pools these into specifically sized nheads header groups.
  • The Solution: The script executes an active dimension pooling process (e.g. averaging chunks of 5120 down to 64 pools) to preserve the original structural signal scale.
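The pooling step amounts to chunked averaging; a sketch, under the assumption that the per-channel vector length divides evenly by nheads:

```python
import torch

def pool_param(vec: torch.Tensor, nheads: int) -> torch.Tensor:
    # Average contiguous chunks of a per-channel parameter (e.g. dt_bias or D,
    # 5120 values down to 64 per-head values) so the signal scale survives.
    return vec.view(nheads, -1).mean(dim=1)
```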

3. Inverse-Softplus Reparameterization

  • The Problem: Mamba-3 kernel variables require specific scaling logic. The raw bias values map differently through the Triton softplus activation layer.
  • The Solution: The script maps torch.log(torch.exp(weights) - 1.0) on the translated dt_bias values to maintain numerical equivalence.
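That expression is the inverse of softplus, so the transplanted bias yields the same effective value after Mamba-3's activation; a quick sketch (valid for positive inputs):

```python
import torch
import torch.nn.functional as F

def inverse_softplus(y: torch.Tensor) -> torch.Tensor:
    # Raw-space value whose softplus equals y, i.e. F.softplus(result) == y.
    return torch.log(torch.exp(y) - 1.0)
```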

12GB VRAM Optimization

A 2.8B model normally requires ~18GB VRAM to train. Because standard activation checkpointing often clashes with the custom Mamba-3 Triton kernel, VRAM is optimized via two methods in mamba3_recovery_trainer.py:

  1. Per-Sample Micro-Backwards: Instead of calling loss.backward() over a batched block, the loop drops down to a per-sample pass (for sample in batch: compute loss, then loss.backward()). Gradients accumulate safely, but each sample's graph is freed immediately, crushing memory spikes.
  2. Phase A Selective Freezing: We freeze 99% of the transplanted model weights representing the "associative memory", unfreezing only the newly added Mamba-3 parameter gates.
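The per-sample micro-backward pattern can be sketched as follows (the model and loss here are placeholders, not the repo's actual trainer):

```python
import torch
import torch.nn.functional as F

def micro_backward(model, batch):
    # Accumulate gradients one sample at a time; each backward() frees its
    # own graph immediately, so peak activation memory stays at one sample.
    for x, y in batch:
        loss = F.mse_loss(model(x), y) / len(batch)  # scale to batch mean
        loss.backward()  # graph freed here; grads accumulate in .grad
```

Dividing each per-sample loss by len(batch) makes the accumulated gradients match a single full-batch mean-loss backward.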

The Recovery Pipeline

The transplanted model behaves like an intelligent engine that forgot how to speak. The recovery pipeline adapts the new gates to the old logic.

  • PHASE A (150 steps): Everything is frozen in the 2.8B model except the newly integrated Mamba-3 specific gates (B_bias, C_bias, etc.). Loss rapidly collapses as the gates calibrate to the legacy matrices.
  • PHASE B (>1000 steps): The model injects Low-Rank Adapter (LoRA) matrices cleanly on the outputs and unlocks full reasoning, stabilizing its capabilities.
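Phase A's selective freezing is a few lines in PyTorch; matching gate parameters by name substring is my assumption about how the repo identifies them:

```python
import torch
import torch.nn as nn

GATE_KEYS = ("B_bias", "C_bias")  # gate names quoted above; illustrative

def freeze_except_gates(model: nn.Module) -> int:
    # Freeze the transplanted "associative memory" weights, leaving only the
    # newly added Mamba-3 gate parameters trainable. Returns trainable count.
    n_trainable = 0
    for name, param in model.named_parameters():
        param.requires_grad = any(key in name for key in GATE_KEYS)
        n_trainable += int(param.requires_grad)
    return n_trainable
```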

Usage

  1. Place your base Mamba .safetensors or .bin checkpoint in the correct directory.
  2. Run python mamba1_to_mamba3_converter.py to create the initial transplanted shell checkpoint.
  3. Run python mamba3_recovery_trainer.py to structurally heal the model architecture via the Phase A/Phase B training loop.

https://github.com/batteryphil/mamba1and2-to-3.git

r/artificial mevaleverga-12

Data analysis vs AI

Does anyone think that at some point in the future AI could cause many people who work as data analysts, or in any field that uses programming or digital tools, to lose their jobs? It's something I've been thinking about a lot. Tell me I'm crazy.

r/shittysuperpowers Deimos7779

You can see through windows.

r/LocalLLaMA wbiggs205

What model would be good for vibe coding?

I have a server at my office with an RTX 3090 (24 GB VRAM) on Windows Server 2026 and 512 GB of system RAM. I'm running LM Studio. I want to know what would be a good model for vibe coding. I don't mind offloading to server RAM.

r/Adulting PainAffectionate5903

Just finished one month at my first real job

I (24 M) work part-time at an afterschool with young kids. I’m on the far left in the staff group photo

r/ClaudeCode Anthony_S_Destefano

Funny how the first 90% of any project is the majority of the work.. until the last 10%...

that "last mile" is harder than the whole trip

r/ChatGPT Dogbold

ChatGPT can mod RPG Maker games for you.

I got curious and gave it the zip of a whole RPG Maker game and asked it to make several changes... and it did.

So I went further, and added new dialogue, branching paths, sound edits, animation changes to be more realistic, animation timing changes... and it did it all.

Then I gave it sprites and told it to make a whole new character, animated, with branching paths, dialogue, and then told it to make sure that every area and every path in the game checks, and if you have this character with you, gameplay and dialogue changes.... and it did it.
I didn't even need to be coherent. I kinda just rambled on for multiple paragraphs.

Could also probably help you make a whole ass RPG Maker game from a starter template too.

Keep in mind if you do this, there will be bugs that come up, just like with human coding. Sometimes adding new things will break previous things, but it's usually pretty good at fixing the bugs in one or a couple of passes, and with mine it ended up stomping a lot of bugs by moving the changes to a brand-new plugin it made.

Pretty damn cool. I tried it with some other games, like a Wolf RPG game, but it's not able to do it with things that are super proprietary and require their editor to make changes, so we're still a ways away from being able to ask it to make you a Skyrim mod, but it's still pretty damn cool.

r/megalophobia Icy-Leg-1459

Thunderstorm over Panama

Picture taken at 37,000 Feet (7 miles) by Santiago Borja

r/nextfuckinglevel WEISHEN_THE_KIRA

Brainstew X Dubstep by djmunition

r/CryptoMarkets Existing_Bet_350

Crypto Twitter /X is Over and dead, finished and done... where to go now for my daily token fix? Reddit?

What is the true alternative after the CT purge on Twitter/X? Can Reddit or other platforms take over? Right now thousands of legit accounts have been suspended on X without any explanation. X wanted to remove bot accounts, but it seems like they removed more real content-creation accounts in their purge, leaving X as a crypto ghost town! Crypto Twitter was first destroyed by the infofi system, then by KOLs who shilled anything with a "pulse". Seems like a crypto reset is needed...

r/WouldYouRather Still_Cancel_2230

two different lives with different moms, which one would you rather

would you rather live a life where your mother projects all her insecurities onto you but still shows you love from time to time, or a life where you have the perfect mom, but in the end she takes her own life because of the buildup of tension?

r/mildlyinteresting ryanjmills

A QR code next to a sticker that says not to use your phone

r/Adulting Dizzy_Pen_353

This is my entire career plan in a nutshell

r/DunderMifflin realpoetrynmotion

I wish they actually made the Farm spinoff

I finally got around to watching the Season 9 Superfan episodes and there's a lot more character and storyline introductions I hadn't seen and it made realize that I definitely would've watched 3-5 seasons of it. Dwight and Angela with his family running a farm/bed and breakfast. Probably could've had a funny visitor every episode or something.

r/mildlyinteresting Liraeyn

Elevator with no control panel

r/ClaudeAI FreshFo

What's your tool stack alongside Claude?

Hey all, I've been on Claude Pro recently, using it a lot for complex work like legal, contracts, etc. Just curious what more experienced people here are using alongside the main Claude chat (like cowork, code, or other tools)? If you can give specific use cases, it would be super helpful since I'm non-technical.

I want to explore how to best leverage AI in daily life and my projects (have a small biz)

r/SideProject Dismal_Advance_7393

Built an iPhone app called Nudge for the “I’m bored but forgot what I wanted to do” problem

The idea was pretty simple: when you’re bored and don’t know what to do, you add stuff you’ve been meaning to do, spin a wheel, and it picks one for you. Then you can lock in on it with a timer, widgets, and Live Activity support.

Honestly, I’m not fully convinced it solves a huge problem yet, but building and shipping it taught me a ton:

• making App Store screenshots

• setting up IAP

• widgets / Live Activities

• getting an app all the way to review instead of leaving it half-finished

So I’m still calling it a win.

I’m curious about the product side now:

does this sound like something you’d actually use, or does it feel more like a gimmick?

And if you’ve ever had the “I forget what I wanted to do until I’m busy” problem, what would actually help?

r/ClaudeCode mobatreddit

Any Way for Claude Code to Circumvent Hooks?

I use hook scripts to prevent CC from editing or overwriting its skills and hooks. My first attempt was to block the use of editing or writing tools. Then CC switched to using "cat >" and other shell tricks. So I blocked that with another hook script. There is one update path for skills and hooks. IDK if it can be hijacked, although I can imagine ways to do it.

Assuming I can write unhijackable hook scripts, the next path around is to avoid the hooks or disable them. Can CC do that?
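For reference, a deny-style hook along the lines described above might look like the sketch below; the exit-code-2-denies convention follows Claude Code's hooks documentation, but the matching logic and protected paths are purely illustrative:

```python
import sys

# Hypothetical PreToolUse hook: Claude Code pipes the tool call's JSON to the
# hook on stdin; exiting with code 2 denies the call and stderr is shown to
# the model. The protected paths here are illustrative.
PROTECTED = (".claude/hooks", ".claude/skills")

def should_block(raw_tool_input: str) -> bool:
    # Deny any tool call that mentions a protected hook/skill path, which
    # also catches shell tricks like `cat > .claude/hooks/...`.
    return any(path in raw_tool_input for path in PROTECTED)

def main() -> int:
    raw = sys.stdin.read()
    if should_block(raw):
        print("Blocked: protected configuration path", file=sys.stderr)
        return 2  # deny
    return 0  # allow

# A real hook script would end with: sys.exit(main())
```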

r/SideProject Nik_116

I built an interactive map of every flight I've ever taken

Built this over a weekend as a personal project. Data is hand-logged from boarding passes and email confirmations going back to 2015. Each line is a route, colored by airline. The interactive version has per-airport drill-downs with flight logs styled as boarding passes.

r/Ghosts JerrycurlSquirrel

Evp vs spiritbox/geoport success rates dont make sense

ghost hunters typically wait for EVPs, rewind the recording, and interpret garbled whispers of a single word, and they'll collect two of these in like a 12-hour period. I used a spirit box in the basement of a hotel and it was CONSTANTLY going with crystal clear messages for 5.5 hours. Why are they never using the spirit box exclusively? Why am I not a hit TV series?

r/me_irl Overall-stick-293

Me_irl

r/LocalLLaMA wbiggs205

Trying to load Gemma 4, I'm getting this error

I'm trying to load Gemma 4 in LM Studio on a Windows Server 2026 machine with an RTX 3090 (24 GB) and 512 GB of RAM. When I try to load it, I get the error below. I'm not getting this error on any other model.

```
🥲 Failed to load the model

Failed to load model.

Failed to load model
```

r/AskMen SerThorfinnTheShort

How to approach a store worker I find cute?

I'm (24f) not currently in this situation, but since I'm not in school and don't have male coworkers, I figured I might as well take my chances with someone I see at a store all the time if I find them cute. Idk what I should say though, and I'm v awkward. I once saw cute guys working at one a few years ago but was too anxious to say anything, and I never see them anymore :(

edit: okay, according to the first few comments, I just go talk to them? guess it is simple

r/NotMyJob neBular_cipHer

Replaced the building intercom, boss

r/mildlyinteresting VisiblePartyPaySaver

A pizza at my college's dining hall

r/PhotoshopRequest jordanjamz

Remove my double chin? 🥲

I’ve gained weight and I’m super insecure about it. I’m on a huge diet and gym journey but I want to print this picture out because I finally got to meet the lead singer of my favorite band, but don’t want the memory of the DC. Please and thank you! :)

r/SipsTea maskedmomkey63

Friends in the modern era😂

r/ChatGPT Wild-Annual-4408

95% of UK students use AI for studying, but there's a "performance paradox": they do worse when AI is removed

New research from Jason Lodge and Leslie Loble identifies what they call the "performance paradox": students use AI tools to help them learn (explaining concepts, summarizing articles), but when the AI is taken away, they perform worse than students who never used it.

The reason: they're relying on AI instead of learning from it. They're outsourcing cognition rather than building their own.

The study proposes three pedagogical fixes:

  1. **AI as cognitive mirror** - student teaches a "novice" AI, has to explain concepts simply
  2. **AI as Socratic partner** - AI questions and debates the student's thinking, doesn't just give answers
  3. **AI as verification partner** - student evaluates AI output for errors, explains what's wrong and why

All three approaches flip the script: instead of AI doing the thinking, the student does metacognitive work on top of the AI interaction.

I'm curious if anyone here has actually tried structuring their AI use this way, especially approach #2 (Socratic partner). Most people I know just use ChatGPT as an answer machine, which seems to be exactly what creates the performance paradox.

Does the way we prompt actually matter this much for learning outcomes, or is this just academic theory?

r/LifeProTips _leonjoxx

LPT: Use the "First Place You Looked" rule to stop losing your household items

When you lose an item (like a stapler or a passport) and eventually find it, don't put it back where it was. Put it in the first place you looked for it. Your intuition has already decided that is where the item belongs, so you will find it immediately next time.

r/ClaudeCode WellThatsNoExcuse

Is the CC a/b testing us?

Plenty of folks across social media complain regularly about noticing Claude is "getting dumber", but these seem interspersed with a more silent group posting about how CC has been crushing it for them.

Every now and then CC asks how Claude is doing this session.

These guys aren't dumb; they're not just randomly asking that like Burger King asking you to fill out a survey for a free Whopper... they're trying to do a covariate analysis.

They're under pressure to limit usage, and can obviously turn various dials on a session that quietly make it "dumber", reducing usage... wouldn't it be trivial (and valuable) to know how far they can turn the dials before people start answering 1 or 2 instead of 3 to that survey? Seems like super valuable data for the cost of some complaining on social media and the occasional cancellation. Cancellations can't be a real problem for them though; that's just another way of reducing demand that's already clearly at capacity... for every one who cancels there are more clamoring for the freed-up capacity.

While a traditional company might shy away from experimenting on paying customers like that, the fact that they are already subsidizing usage so heavily might make them feel like users are less customers and more unknowing compensated experiment participants, and they literally are a lab full of researchers. If so, I wonder what I could do to get them to put me in a control group...just start answering 1 to every survey?

r/ClaudeAI Remarkable-Cry-3454

Glitch

I use the Claude app for creative writing purposes. I’ve been noticing that I will give it a prompt, and it will do the command, and then 10 seconds later it’s taking me back to previous edits, completely erasing and deleting my new prompts. This is a completely new thing and has started happening only yesterday. Is this happening to anyone else? It’s driving me crazy and I’m not sure how to fix it. I haven’t been able to get anything done because it keeps going back to previous edits. There’s not even a button on the app where you can go back to previous edits like on the web, so I’m confused as to how this is even happening. Any help will be appreciated, thanks!

r/comfyui ybeerk

Advanced ComfyUI courses for 3D Artists

Hi guys,

I work as a 3D Artist at a home product company. Basically, I do white background shots, lifestyle images, and campaign images. I'm also very interested in lifestyle product video generation using ComfyUI.

On the other hand, since I work at a corporate company, they are willing to support me with courses as well. That's why I need really serious and effective online or offline courses.

Could you please share your recommendations with me? I'm also open to courses for fal.ai or Higgsfield as well.

Cheers for all creatives!

r/LocalLLaMA Icy_Gur6890

My experience with the Intel Arc Pro B70 for local LLMs: Fast, but a complete mess (for now)

Full disclaimer: I used AI to help clean up my mess of thoughts. I have a tendency of not being coherent once I get many words out.

TL;DR: Bought a B70 on launch day. Achieved an impressive 235 t/s with Gemma 3 27B on vLLM (100 requests), but the software stack is a nightmare. MoE is barely supported, quantizing new architectures is incredibly fragile, and you will fight the environment every step of the way. Definitely not for the faint of heart.

Hey everyone,

I ordered the Intel Arc Pro B70 on the 27th, right when it released. I've previously wrestled with ROCm on my 7840HS, so my thought process was, "How much worse could it really be?" Turns out, it can be a complete mess.

To be totally fair, I have to admit that a good chunk of my pain is entirely self-inflicted. I used this hardware upgrade as an excuse to completely overhaul my environment:

OS: Moved from Ubuntu 25.10 (with a GUI) to Fedora 43 Server.

Engine: Transitioned from Ollama -> llama.cpp -> vLLM. (Intel is heavily supporting vLLM, and I'm optimizing for request density, so this seemed like a no-brainer.)

Deployment: Moved everything over to containers and IaC.

I figured going the container/IaC route would make things more stable and repeatable. I've even been cheating my way through some of it by utilizing Claude Code to help build out my containers. But at every turn, running new models has been a massive headache.

The Good

When it actually works, the throughput is fantastic. I was able to run a Gemma 3 27B Intel AutoRound quant. Running a vLLM benchmark, I managed to generate 235 t/s across 100 requests. For a local deployment prioritizing request density, those numbers are exactly what I was hoping for.

The Bad & The Gotchas

The ecosystem just isn't ready for a frictionless experience yet:

MoE Support: Mixture of Experts models are still only partially supported and incredibly finicky.

Quantization Nightmares: I'm currently trying to run a quant through AutoRound for Gemma 4 26B. I've watched it blow up at least 30 times. The new architecture and dynamic attention heads just do not play nicely with the current tooling.

Container Friction: I've run into at least 7 distinct "gotchas" just trying to get the Intel drivers and vLLM to play nicely inside containerized environments.

I haven't even tried spinning up llama.cpp on this card yet, but based on the vLLM experience, I'm bracing myself.

Final Thoughts

My background is as a Cloud Engineer. I've spent a lot of time hosting SaaS apps across Windows and Linux environments, so while I'm not a pure developer, I am very comfortable with dev-adjacent workflows and troubleshooting infrastructure. Even with that background, getting this B70 to do what I want has been an uphill battle.

If you are looking for a plug-and-play experience, stay far away. But if you have the patience to fight the stack, the raw performance metrics are definitely there, hiding under the bugs.

r/Weird seoul_tea

root beer flavored milk

r/mildlyinteresting Intelligent_Taro_234

There’s a random crowd gathered in the middle of this field

r/Rag Interesting-Town-433

Embedding Adapters V2 - Universal Embeddings | Free OpenAI embeddings | Any -> All, Adapters ❤️ | Bridge the void

Back in November 2025 I built and released embedding-adapters (PyPI). It lets you use All-MiniLM-L6-v2 plus an adapter to generate OpenAI's text-embedding-3-small embeddings locally while achieving ~90% of the target model's retrieval accuracy.
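For intuition, the core adapter idea can be sketched as a learned map between embedding spaces; this tiny least-squares linear version is my illustration, not the package's actual trained adapter:

```python
import numpy as np

def fit_linear_adapter(src: np.ndarray, tgt: np.ndarray) -> np.ndarray:
    # Fit W minimizing ||src @ W - tgt||^2 over a paired corpus of
    # source-model and target-model embeddings.
    W, *_ = np.linalg.lstsq(src, tgt, rcond=None)
    return W  # shape (d_src, d_tgt)

def adapt(src: np.ndarray, W: np.ndarray) -> np.ndarray:
    out = src @ W
    # re-normalize so cosine similarity behaves like the target space
    return out / np.linalg.norm(out, axis=1, keepdims=True)
```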

This community and others across Reddit were super supportive; I'm extremely grateful for that, thank you.

After several more months of grueling development (and a lot of training failures), I'm finally about ready to release the second generation of these adapters, along with an API.

There’s a small catch though - being just one guy and self-funding most of this, I can’t really afford to let everyone convert a billion documents at once. If I did, I’d have to scale my GPUs and pay some pretty horrific infra costs if I was wrong.

But if I had a couple people I knew would want to use this, I could prioritize them and potentially scale things more safely. So if that’s you, please DM - happy to connect and discuss more on Zoom or elsewhere. I’m especially looking for people with large databases or high-throughput, low-latency requirements.

This project was built on a wing, a prayer and a hell of a lot of cloud credits. I honestly didn’t think it was even possible to reliably go from one embedding space to another - some models don’t even have the same tokenizer!

But with these new models you can generate text-embedding-3-large in about half the time, and in some domains the retrieval is even higher than the target model.

These models are not replacements for the target; they're intentionally overfit to their domain, but trained with a quality head that lets them know whether they will work. And that's enough in many cases. If retrieval accuracy is your goal, you don't care about exact cosine similarity between true and adapted embeddings; you care whether it works.

This is a cost saver, pure and simple. But it's also fast, in some cases running on only ~50M parameters.

If you can’t wait for the embedding, or not waiting is your advantage, use this.

www.embedding-adapters.com

r/AskMen DifficultBreath9469

What are the chances of a draft starting?

Hello everybody. I made this post because I was reading some things on Twitter. I stumbled across some accounts (official ones) that said auto sign up for the draft is happening before December of 2026 (this year).

I would like to mention that I have severe anxiety and also ADHD. I am on medication to keep me functioning. If I miss one day of not taking medication, it will screw me over for a little bit.

Would this make me exempt from the draft?

r/ClaudeAI Awcanavan777

What is the point of managed agents?

So I’ve been seeing all the hype today in the community around managed agents, but I’m just having a hard time wrapping my head around the actual use case, please help me understand.

I’m relatively new to this space, but I’ve dove in pretty heavily the past month or so. I’m using Claude code to build a few micro SaaS apps, and both of my apps have Supabase as the backend and I run my API calls through Supabase. I saw people saying that they’d just wrap their new managed agents in a front end for their SaaS apps, but I don’t understand the point of that or why someone would do that instead of just using a backend like Supabase?

I’ve also seen people use them to automate certain tasks within their business, but I already do that with scheduled tasks within Claude, and it’s a lot cheaper cause I use my subscription rather than API tokens.

Can someone help me understand the use case of these managed agents compared to what I’m already doing? What are you guys using these for/what are you planning on using them for?

r/PhotoshopRequest GWNVKV

Can someone please put an adidas tracksuit on this guy, bonus points if there’s an additional photo with a cigarette in his mouth or hand. Very Slavic vibe. Will tip!

Make him look Slavic, I beg of you.

Can also add additional angles if that helps.

r/Adulting Both_Passion2581

Back when "Go play outside" really means "See you at dinner"

r/personalfinance jackcopen

What to do as 16 year old with $200 a month car payment?

Hi all, I am 16M and a junior in high school. Currently I am working a part time job. Last month, which was an average month, I made around $944. When I got my job, I also got my license and my parents and I agreed it was time to get a car.

We decided on a 2015 Honda CRV, used, and I think the grand total with everything was somewhere around $11,000. Now, I am paying $200 a month, plus I make extra payments each month. My parents and grandparents helped start me out, and right now after a couple months of paying I have around $6,000 left on the car.

When we got it, of course my parents had to take out a car loan, under their name. At the time my dad said that it was a good deal for the market right now, and that I won’t find anything better. However I am realizing now as I am educating myself on personal finance, that this car is just overkill. I should’ve saved up for a car and purchased one with cash, but I did not realize this at the time, even though I should have.

I am budgeting my money using the 50/30/20 rule, and last month I paid a little over $350 in total towards the car. I would rather use this money to invest in a Roth IRA and save it than use it on a car. My research has led me to believe that my parents made a bad financial decision with this, and unfortunately I was not educated on this topic at all, so to me it seemed like the right choice.

So 16, car payment, $6,000 to go. What can I do? Is there anything other than to pay off the car? How would one recommend going about this or asking my parents why this choice seemed justified?

Am I looking at this the wrong way since a used car in cash would be around 5-6,000 anyway, and I don’t have a ton saved?

I have around $1500 in savings. Should I pull this and use it towards the car? I would just like to know the best steps for me to get this lousy car payment out of the way and start thinking about my future. Thanks!

r/Art BlueShot_Eyes

Hanging With a Hookah, BlueShotEyes, Digital, 2025

r/LearnUselessTalents Specialist-Hippo-364

Question

If u have a loose anal hole or tighten anal hole how do u make ur loud farts tell me😹

r/ChatGPT StemcelReddit

Anyone have any solutions to AI psychosis?

Many have committed suicide and homicide because of this, I will no longer be using AI. Any treatment?

r/OldSchoolCool WonderfulQueenie

Salma Hayek in the 1990s

r/DunderMifflin GreenAssumption6328

Every time I see this episode, I can't remember if Stanley has a mustache.😂

r/Showerthoughts TheRobbuddha

The older you get, the harder it gets to blast poop stains off the back of the toilet bowl.

r/Weird Life-Beginning_6

This toy my mom got for her dogs

She thinks it’s “adorable”

r/Jokes UnnusAbbus

At six o’clock, Tim and Tom always meet together at a bar.

One night, Tom doesn’t buy a drink, leading Tim to ask, “Don’t wanna get wasted?”

“I’d love to get wasted. It’s just that I don’t have the money to afford beer.”

Tim then pulled out a pair of large polkadot glasses from his pocket and explained, "Tomorrow, go to the park, put these glasses on, and do whatever they tell you.”

Tom then took the glasses and went to the park the next day. After he put on the glasses, they instructed him to point at a nearby man and yell out a number. So, Tom did as he was told and yelled out, “5!”

The man gasped and replied, “Oh my god! He’s right!”

Tom then pointed to another man and yelled out, “3!”

The man gasped, but blushed, “D–Don’t say it out loud!”

Tom then pointed to a woman and yelled out, “12!”

The woman gasped, “He’s right! I thought I hid it so well!”

After an hour at the park, Tom accumulated nearly $150 just by pointing to people and saying a number. Later that day, Tom went to the bar and asked Tim, “Why did these glasses tell me to point at people and shout their numbers?”

Tim arched a brow, “Seriously? You don’t know?”

“No.”

“Everyone knows that those glasses can tell how long someone's dick is.”

r/AskMen 22181

How would you react to your girlfriend opening up about getting sexually assaulted in the past?

r/explainlikeimfive Legend789987

ELI5: What's the difference between Desktop Resolution and Active Signal Resolution?

r/findareddit Specific_Regular1360

I want to find help on a TikTok bug but don’t know which subreddit to talk about on, pls help

I need help finding a good subreddit to discuss a bug my friend has on her TikTok account. Basically, for a month she couldn't reply to any of her DMs. I've been trying to find a way to fix it and suggested tips, but given the lack of response, it's clear it hasn't worked. So which subreddit would be best to discuss this? I can't do it on r/tiktok, since a rule there is that you can't ask for support or questions, as it's a subreddit for only posting TikToks.

r/personalfinance Gurman4ik

Moving to America, please help me!!!

Hello, I'm a Belarusian who has been learning English for about half a year. I plan to move to Texas in the future. I have two questions: 1. How much money do I need for the first few months (without making large expenses)? 2. How much do dentists (stomatologists) earn per month?

r/OldSchoolCool CalvinIII

The Red, White, and Blue game at Texas A&M on September 22, 2001.

Over 75,000 red, white, and blue shirts were printed and sold in the 10 days following 9/11, raising over $250,000 for first responders' families.

Fans from both Texas and Oklahoma ditched their school colors to honor those that were lost.

r/ClaudeAI wigelsworth

They removed the buddy from latest? (Claude Code v2.1.97)

In the latest changelog:
REMOVED: System Prompt: Buddy Mode — Removed the coding companion personality generator for terminal buddies.

Seems coding buddies were just a tease.

r/SideProject JollyShift5968

Applyr is up!

I built a Chrome extension to help tailor resumes automatically for different job applications.

The idea came from how repetitive it is to keep adjusting your CV for every role on platforms like Indeed and LinkedIn. This just speeds that up by adapting your resume based on the job description.

It’s still pretty early, but it’s already saving me a lot of time when applying.

Would love to get some feedback from others building or job hunting — you can try it by searching “Applyr” in the Chrome Web Store.

Happy to share more details about how it works if anyone’s interested.

r/DecidingToBeBetter GOINKERER

How do I break an anxious habit?

Hey, I’m not sure if this is the right place to ask this, but I thought I’d give it a shot. For the past year or so, one of the most noticeable physical manifestations of my anxiety has been a fixation on the moisture/sweat on my hands, specifically under my fingernails. I also have very intense sensory issues, so this moisture makes me very uncomfortable. I end up picking under my fingernails absent-mindedly all day, to the point where my knuckles start to go raw as I rub the moisture off from under my fingernails onto them. It doesn’t matter how much anxiety I’m feeling in that moment, I always end up doing it. I’ve tried cutting my nails short to avoid getting anything trapped under there, but that just makes it worse as it makes me feel like I need to put more effort into cleaning them. While it’s obviously not the worst thing I could be experiencing, it adds a lot of discomfort and pain into already stressful parts of my life and I’d really like to do anything I can to stop it. I guess I mainly just want to figure out how to stop paying attention to the feeling of the moisture, or maybe the compulsion I feel to get rid of it? I’m looking for any suggestions or anyone who has dealt with anything similar, since I couldn’t really find anything about this online, and it’s making me feel pretty alone. Any help or support is appreciated :)

r/personalfinance General-Rough256

Need financial planning advice

Hello,

Will be turning 30 in a few months and am freaking out that I haven’t met my financial goals. I am hoping to get your thoughts on how can I do things better with my personal finances. Here is my current profile:

- Savings: 10K in HYSA

- Brokerage: 8K

- Loans/debts: 0 (I don’t own a car nor a house yet, paid off 80k for my student loans (Masters and PhD dropout) early last year, took me 6 years to pay it)

- 401K: 55K (rollover+employer), Roth IRA: 25K

- Take home pay after taxes: 80K (trying to move jobs so I can increase my income)

- Monthly expenses for rent/utilities/food/any trips: 3K (lived with roommates all my life but wanted to live by myself once, so my rent and expense are super high since last year, will go back to getting a roommate in a few months again)

My employer offers 50% match with no limits but I do also want to build my emergency fund as I work at a Tech company. I don’t have family from whom I will inherit anything. Thank you!

r/homeassistant slboat

The THS45-1 is the sensor with the highest humidity accuracy we have ever manufactured; the THS-M45, which features a modified sensor chip, is the next best in this regard. They all use the SHT45 sensor.

https://preview.redd.it/crqvf0c6w2ug1.png?width=2032&format=png&auto=webp&s=b690ec79dab6e962bcb8a8f70cdc61f955a4b1d2

https://preview.redd.it/ylsm1p5pw2ug1.jpg?width=4032&format=pjpg&auto=webp&s=8d2c7430eb66cc7e29387b81587b4274ee6490a1

We’ve recently been building some DIY sensors using ultra-high-precision temperature and humidity sensors, and we’re delighted to have manufactured hundreds of each type to share with people all over the world; they all seem to be working quite reliably. One of these involves modifying the off-the-shelf THS-M45 temperature and humidity sensor; from our tests, the THS45 sensor with an exposed probe appears to perform very well—it responds sensitively to humidity, allowing the SHT45 to perform to its full potential.

Wow, what an interesting exploration! :)

r/SideProject ActiveEnvironment569

👋 Welcome to r/Ganana_Vedicapp - Introduce Yourself and Read First!

Hey everyone! I'm ChitraMoon, a founding moderator of r/Ganana_Vedicapp.

This is our new home for all things related to Ganana — a daily Vedic Jyotish companion app (ganana.app). We're excited to have you join us!

What to Post

Post anything you think the community would find interesting, helpful, or inspiring. Feel free to share your thoughts, screenshots, or questions about your daily TPC score, panchanga and hora timings, planetary transits, dasha periods, muhurta choices, kundali interpretations, yogas in your chart, or anything else you're noticing as you use the app. Bug reports, feature ideas, and "why does Ganana show this?" questions are all welcome too.

Community Vibe

We're all about being friendly, constructive, and inclusive. Beginners and serious students of Jyotish are equally at home here. Let's build a space where everyone feels comfortable sharing and connecting — without judgment, without gatekeeping.

How to Get Started

  1. Introduce yourself in the comments below — where you're from, and what drew you to Jyotish.
  2. Post something today! Even a simple question can spark a great conversation.
  3. If you know someone who would love this community, invite them to join.
  4. Interested in helping out? We're always looking for new moderators, so feel free to reach out to me to apply.

Thanks for being part of the very first wave. Together, let's make r/Ganana_Vedicapp amazing.

r/SideProject sleepy-hercules

I built a private app for couples to stay emotionally in sync — 3 months in, now in beta

Hey r/SideProject. Long time lurker, first time poster.

About 3 months ago I started building Arcov — a small, private mobile app for couples. No social feed, no followers, no public anything. Just two people.

The core idea came from my own relationship. We were technically in touch all day but emotionally out of sync. Texting logistics but missing each other's actual state of mind. I wanted something that made it easy to say "here's where I'm at today" without it turning into a whole conversation.

What I built:

- daily mood check-in (5-point scale, optional note) so your partner has a sense of how you're doing even on the days you don't talk much

- "thinking of you" buzz button — a haptic ping that says exactly that, nothing more

- shared memory vault for photos, voice notes, and text

- shared daily question — one prompt, both partners answer, responses visible to each other

- Weekly mood trends and a few small AI-generated insights about your relationship patterns

Tech stack for those curious: React Native + Expo, Supabase (auth, realtime, edge functions), Cloudinary for media, EAS for builds.

I'm currently in beta and looking for couples willing to actually use it for 2 weeks and tell me what's broken, what's missing, or what feels unnecessary or even what is good. Especially interested in long distance couples and people who feel like they've drifted into "logistics mode" with their partner. I already have around 23 couples on the waitlist but looking for more.

No subscription, no paywall, nothing to buy. Just honest feedback.

If you're interested, you can join the waitlist at arcov.app — I'm reaching out to people on the list personally. Happy to answer any questions about the build too.

r/Adulting ssushi-speakers

Bombs and armaments

Is anyone else just astounded at the amount of bombs that humanity has accumulated? It seems that we have a near inexhaustible quantity of these things designed to blow things up. I'm staggered at what countries have accumulated.

Imagine the total cost. The total effort to design and build. Imagine where humanity would be if we put that to positive use.

r/PhotoshopRequest Vaylen_Kalek

We Need Levity

Please help photoshop my friend's dog into something fun. The whole gang could use a laugh, and Scooby set us up with this shot lol

r/BobsBurgers EgginyourShoe

What’s the worst or most mean thing Bob has done?

We all know Bob tries his best and is constantly helping others. It made me wonder, what is something actually bad or mean he has done?

Let’s avoid saying - calling his entire family “terrible.”

r/OldSchoolCool ateam1984

Al Bano & Romina Power - Siempre, Siempre (1986)

r/WouldYouRather Syruponmypizza

WYR have first half of your life great and the second half horrible? Or first half horrible and second half great?

the great can be anything that sounds great to you. rich, famous, whatever.

the horrible can be anything that sounds horrible to you. autoimmune disease, depression, schizophrenia, loss of limbs, whatever.

r/AI_Agents Pale_Box_2511

i used to judge AI projects by their architecture. looking at the new wave of builders, pure coding skill is basically a commodity now

I've been giving myself a bit of an existential crisis lately. just spent the last three weeks perfectly configuring a dockerized backend for an ai tool that has exactly zero active users.

meanwhile i was looking through the participant roster for an ai hackathon happening in shanghai this week (via rednote), and the profiles were a massive reality check.

the people building the most interesting stuff rn aren't traditional ml researchers or senior backend architects. they dont have a decade of c++ baggage telling them 'how things should be done'. they are weirdly hybrid.

you look at the list and see a linguistics major spinning up cross-border trade agents bc he actually understands the domain friction. a 19yo using open-source lerobot repos to build physical automation for household chores. a former design student who just strings apis together and treats her early users as a qa team to iterate on highly legible uis.

made me realize the maker culture has fundamentally flipped.

we used to get impressed by abstract technical stacks. a few years ago the moat was simply knowing how to build the complex system. but with coding agents compressing build times this much, pure logic and codebase structure are definitely commodity skills now.

the new moat is product taste and shipping speed.

if ai compresses development this fast, a 48h sprint isn't about proving a technical concept can exist anymore. its about proving if a use-case deserves to exist. the builders winning right now are the ones who drop a working (even if its janky) prototype in front of real people, get brutal feedback, and iterate the exact same day.

a highly legible use-case that actually solves a weird specific human problem is infinitely more impressive to me now than an over-engineered backend built in a vacuum.

the barrier to writing logic is approaching zero. but the barrier to actually understanding human friction and having the taste to solve it feels higher than ever. kind of a strange time to be a traditional developer. going back to debugging my k8s cluster for my 0 users i guess.

r/Weird SunAccomplished3413

You walk into my kitchen and see this. Wyd?

r/LocalLLaMA camden_hulse

built cursor for mobile. no laptop. no remote.

building treena, a mobile-first ai ide. full terminal, file explorer, code editor, and ai agent with model switching, all in a react native app.

the terminal runs xterm.js in a webview. everything else including the editor, git, file explorer, and multi-model agent loop is native. ephemeral aws ecs fargate containers spin up per session, clone the repo, run the agent, and tear down when finished. no laptop required.

demo shows an agent building a landing page, opening it through a local host, and pushing to github autonomously, all from a phone. the server reports a linux machine on an aws ip.

r/coolguides SilverPetalVale

A cool guide to self awareness

r/aivideo RickPavia

I PRODUCED THIS WITH GROK — IT SAYS A LOT ABOUT HOW THIS WORLD CAN BE!

r/SideProject Alarmed_Criticism935

I built a local server that gives Claude Code eyes and hands on Windows

I've been using Claude Code a lot and kept running into the same wall — it can't see my screen or interact with GUI apps. So I built eyehands, a local HTTP server that lets Claude take screenshots, move the mouse, click, type, scroll, and find UI elements via OCR.

It runs on localhost:7331 and Claude calls it through a skill file. Once it's loaded, Claude can do things like:

  • Look at your screen and find a button by reading the text on it
  • Click through UI workflows autonomously
  • Control apps that have no CLI or API (Godot, Photoshop, game clients, etc.)
  • Use Windows UI Automation to interact with native controls by name

Setup is three lines:

git clone https://github.com/shameindemgg/eyehands.git
cd eyehands && pip install -r requirements.txt
python server.py

Then drop the SKILL.md into your Claude Code skills folder and Claude can start using it immediately.
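To give a sense of how a client might talk to a local automation server like this, here is a minimal sketch of building request URLs against it. The endpoint names and parameters below are hypothetical illustrations, not taken from the eyehands API — check the repo's README for the real routes.

```python
from urllib.parse import urlencode

BASE = "http://localhost:7331"  # the port the post says the server listens on

def eyehands_url(action: str, **params) -> str:
    """Build a request URL for a local automation endpoint.

    Action names and parameters here are illustrative only --
    the real eyehands API may differ.
    """
    query = urlencode(params)
    return f"{BASE}/{action}?{query}" if query else f"{BASE}/{action}"

# Example: ask the server to click at screen coordinates (100, 200).
url = eyehands_url("click", x=100, y=200)
# A real client (or Claude, via the skill file) would then issue
# an HTTP request to that URL, e.g. urllib.request.urlopen(url).
```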

The core (screenshots, mouse, keyboard, OCR) is free and open source. There's a Pro tier for $19 one-time that adds UI Automation, batch actions, and composite endpoints — but the free version is genuinely useful on its own.

Windows only for now. Python 3.10+.

GitHub: https://github.com/shameindemgg/eyehands
Site: https://eyehands.fireal.dev

Happy to answer questions about how it works or take feedback on what to add next.

r/nextfuckinglevel Intelligent-Ear-9181

Man cooks A+ meal while doing a Cover

Check out TORWAI [Delicious Rock] on YT. it's all they do and it's fucking crazy

r/OldSchoolCool MassiveMulberries

Emma Sams in the 1980s

r/ollama Due_Ad3126

Anyone tried running gemma4 on 3060 ??

guys, I want to run gemma4 on a 3060 and can expect ~100 concurrent requests. I want to know if anyone has tested it, or any quantised model here, and actually seen good results in terms of t/s.

r/Unexpected LazyGuy4U

Random Walk in Hollywood

r/MostBeautiful abcphotos

Calla Lily [oc]

r/findareddit Extension_Big5205

Is there a way to to access deleted videos on youtube. Or is there a subreddit for this?

Long story short there was a video i used to watch regularly and the creator deleted it. So how do i find it?

r/LocalLLaMA Excellent_Koala769

$100 worth of Claude Code API credits to anyone who can guess what model I am running locally on my M5 Max mbp. I love the sound she makes

r/comfyui afrosamuraifenty

I'm too stupid for comfyui

I have tried several workflows but I never get any of them to work.... I spent 15 hours!!!!! today trying to get 2 separate workflows to work, to no avail. idk how you guys do it... I'm at my wit's end. if any of you guys have a simple wan or ltx workflow that doesn't have me looking for solutions for hours or days on end I'd be glad, cause srsly f this sht

r/ClaudeAI Postcolonialpriest

So, Mythos.

So... Haiku is short form poetry. Sonnet is a longer, lyrical one. Opus can be any kind of long form major work. Something you would call a feat.

Now we have Mythos. A smart pivot from the orchestral progression, because you can't name a model Magnum Opus. That would have been like naming a generation Z. (What, you are not going to have humanity after Gen Z?) And it is still on a spectrum. The popular form of Mythos is longform poetry about feats testing the realm of gods.

So would the next model's name be Odyssey? (Longform Mythos)

Any other ideas? Then what?

r/Weird MagnetoNTitaniumMan

Saw what looked like flaming sky-surfers drifting around above an Austin, TX hospital back in November 2022. Never found anyone who knew what it may be

To date the weirdest thing I’ve ever seen with my own eyes

r/DunderMifflin FiberSauce

Angela singing along on the bus? Yeah, right. S03E22 - Beach Games

r/meme IntentionImmediate78

Pretty accurate

r/personalfinance good_adventure

24 and need help allocating assets!

Age: 24

Income: 92k salary

Monthly expenses: ~ 2.8k

Ongoing savings: ~ 40% of income

Debt: none

Goal: Long term investing and one day for big expenses in my 30’s (car, house, family expenses, vacations!)

I need help allocating 75k into different investment vehicles. Percentages explanations would be most helpful for stocks allocation, as idek if 75k is accurate, more or less.

Here’s what I’m thinking/have so far:

* checking acc where all money goes in and out for rotating cash

* extra checking account ~2.5k

* emergency fund - pls dont say 3-6months, i need to have a straight up number

* currently utilizing HYSA 3.75%

* I have a 401k from previous employer, but now idk what percentage to use for new employer 401k

* I have a Roth IRA that I will be maxing out, currently in some index funds(but can reallocate depending on advice)

* Individual brokerage: ready to be allocated

notes:

* My family is not wealthy, accessibility to my money is important to me, in case my family (parents) need money, this is besides my own emergency fund which is just for me just in case

* I don’t like not being able to access 401k until such old age (59.5), and I like to max the employer match

* focused on wealth building

* currently have no struggles financially, and dont plan on any big purchases, car is new

* my 100k goal is to get a financial advisor/CPA because i am no expert lol

* i am not a beginner to investing, I had a phase of swing trading, so please be specific in allocation

* currently holding off due to war/economic state/ and the usual advice of “VOO” and others is going down, yes, u cant time the market, and nobody knows, and I love VOO but I know there’s more to it than that. I actually timed this well and pulled out, before everything started going down.

* my partner has a financial advisor and has shared with me his portfolio allocation but some index funds are not available with my current brokerage

r/CryptoMarkets cSigmaFinance

RWAs feel boring… but that is the point

DeFi yield used to come from opaque looping strategies and the hope that price always goes up. But RWAs feel very different.

  • fixed rates
  • real borrowers
  • actual cashflows

It’s not exciting… but at least you know where the yield comes from.

DeFi is definitely heading in the right direction.

r/mildlyinteresting Mountain_King8479

My hydra created a clone of herself asexually

r/arduino Denl0self-a_o

should i get a board with more SRAM and more pins or should i optimize

hello, i am working on a project mainly with a ESP32 module with ESP32-WROOM-32E, i found that after using the WiFi.h and a bunch of other stuff there is 40~50% heap left and after email stuff there is only 23% left which is pretty concerning to me

and i want to have more pins for software UART, i am mainly using it because of the range, and now i want to add more channels but i cant because each port takes 2 pins. i wonder if i should move to another protocol like I2C or should i add address system on top of UART, which seems sketchy to fully implement a custom protocol/framework like this, but thats another topic
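The "address system on top of UART" idea is usually a small framing layer: each message carries a destination address, a length, and a checksum, and every node ignores frames not addressed to it. A minimal sketch of that idea (frame format entirely made up for illustration, written in Python for brevity — the same structure translates directly to Arduino C++):

```python
def make_frame(addr: int, payload: bytes) -> bytes:
    """Frame layout: [addr][len][payload...][checksum].

    checksum = XOR of all preceding bytes (a toy integrity check;
    a real bus protocol might use CRC-8 instead).
    """
    if not (0 <= addr <= 255 and len(payload) <= 255):
        raise ValueError("addr and payload length must each fit in one byte")
    body = bytes([addr, len(payload)]) + payload
    checksum = 0
    for b in body:
        checksum ^= b
    return body + bytes([checksum])

def parse_frame(frame: bytes):
    """Return (addr, payload) if the frame is well-formed, else None."""
    if len(frame) < 3:
        return None
    addr, length = frame[0], frame[1]
    if len(frame) != length + 3:
        return None
    checksum = 0
    for b in frame[:-1]:
        checksum ^= b
    if checksum != frame[-1]:
        return None
    return addr, frame[2 : 2 + length]
```

Each receiver parses the frame and simply drops anything whose `addr` isn't its own, which lets several devices share one UART line.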

i did some research and ESP32 P4 seems to be the best option if i want to use another board, more SRAM and pins, but its more expensive

what should i pick? or is there something im fundamentally doing wrong?

r/ClaudeCode Dependent_Biscotti86

Claude Code for Corporate work life

Curious to know if anyone uses Claude code or Claude code-work for their corporate job. And if so, how do you utilize it?

I work in a corporate position in the home remodeling industry. I use Claude a decent amount and have decent knowledge of the AI world, but I have not jumped into the Claude Code / cowork world because I am not sure how it can help beyond Claude's normal capabilities. A lot of the videos I have watched feel like they are all focused on using these capabilities to build digital businesses / side hustles, rather than on how to use them in the job you currently have.

Now (hand up), I could be completely ignorant here and just haven’t looked into enough to discover how I can use it. But man could I use some insight and guidance to help get me started. Thank you!

r/funny coool_199

This is not Big bird it's Bog bird. So where the fuck is Big bird!

r/ClaudeCode Fine-Association-432

new quant dropped - "claude opus 4.6 dogwater"

r/ChatGPT beerbellyman4vr

i made claude code shout yamete every time i smack it

saw projects like badclaude and smack your mac and i just had to do it. i couldn't resist the temptation. go have fun

r/Anthropic pythononrailz

Turned my iOS caffeine half life decay app into an open source mcp server for claude

A while back I shared my Caffeine Curfew iOS app here and it got a ton of attention. Because of you guys I actually got invited to apply for the Claude developer conference. So, I rebuilt the tooling as an MCP server for Claude.

Here is a tool for the mobile app that tracks your caffeine intake and tells you exactly when you can sleep. It runs pharmacological decay modeling in the background. Every time you tell Claude you had a drink it stores it and calculates your real time caffeine level based on its half life. Then it looks forward to find the exact minute your caffeine drops below your sleep threshold and when you’re good to sleep.
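The half-life math behind a tool like this is compact. A sketch, assuming simple first-order (exponential) elimination with a caffeine half-life of about 5 hours — a commonly cited average; the app's actual model and per-user parameters may differ:

```python
import math

HALF_LIFE_H = 5.0  # assumed average caffeine half-life, in hours

def level(dose_mg: float, hours_elapsed: float) -> float:
    """Caffeine remaining after first-order exponential decay."""
    return dose_mg * 0.5 ** (hours_elapsed / HALF_LIFE_H)

def hours_until(dose_mg: float, threshold_mg: float) -> float:
    """Hours until the level decays below a sleep threshold."""
    if dose_mg <= threshold_mg:
        return 0.0
    return HALF_LIFE_H * math.log2(dose_mg / threshold_mg)

# A 200 mg coffee decays to 100 mg after one half-life (5 h),
# and drops below a 50 mg sleep threshold after two half-lives (10 h).
```

The "exact minute you can sleep" feature is just `hours_until` summed over every logged drink, each decayed from its own timestamp.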

The interaction is super natural. You just tell Claude to log caffeine or ask when you can go to bed and it handles all the math. It has simple tools built in to log entries check your levels simulate how another drink will affect you and pull up insights on your habits.

I have the server running on my Mac Mini right now behind a Cloudflare tunnel.

I mostly just wanted to build this to see if I could but if anyone wants to mess around with it, I’d love to share.

I also put the full code on my GitHub if you want to host it yourself. The stack is just Python, FastMCP, SQLite, and Cloudflare.

This post is not meant to be an advertisement whatsoever. I just wanted to share what I’ve been working on with the community and hopefully inspire someone to make their own tooling.

https://github.com/garrettmichae1/CaffeineCurfewMCPServer

App: https://apps.apple.com/us/app/caffeine-curfew-caffeine-log/id6757022559

( The tooling does everything the app does, but better 🥲, besides maybe displaying it on the Apple Watch. )

Original Post for context: https://www.reddit.com/r/ClaudeCode/s/FsrPyl7g6r

r/ChatGPT Internal_Ad_81

Getting a 403 on the ChatGPT site

My office computer has recently been getting a 403 error whenever it accesses the ChatGPT site. My other office computer can visit the site without any issue, so it looks like only that one computer has the problem. Does anyone have an idea why?

r/creepypasta Reptarrz

Looking for a magic ritual story similar to A Dark Song

I've been trying to find a story for a while that, from what I remember, shares a lot of similarities with the movie A Dark Song. A lot of the details are a bit fuzzy, but I'm really hoping you can help. The story is from the perspective of a guy who meets a girl. She tells him that her boyfriend performed a brutal multistep ritual on her over the course of months. The boyfriend either dies or leaves at some point, and the girl is "stuck" in some sort of half state. I want to say people who interact with her forget her, or something like that.

r/ClaudeAI Reasonable-Tooth-148

How do I get the absolute most out of Claude as a student?

I am a sophomore in college studying petroleum engineering. I just bought the pro version of claude today and wanted to know if there are any features or ways that I can use to squeeze every bit of potential out of Claude, and fully take advantage of my pro membership. I want to know about productivity, studying, life guidance, and anything else you could think of that might help me.

r/trashy semaj_orn

Girlfriend Forces Her Man To Stay Home With Gun In Hand

r/OldSchoolCool OrangeLemonJuicey

1940’s men fashion

r/personalfinance WhyOverComplicated

21 figuring out how to use my savings

I'm 21, have worked min wage part time since the start of high school (6 years), and during that time I managed to save 55k.

I want to put 49k into “saving accounts” and keep 6k as my emergency fund.

Using Wealthsimple, I have 7k currently in my TFSA. I have reason to think I can add about 20k total, since the limit of 7k a year started when I was 18?
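For the room question: TFSA room accrues from the year you turn 18 at that year's annual limit, whether or not you contributed. The arithmetic is just a sum over the annual limits — the figures below are assumptions for illustration only (the limit has not been a flat 7k every year), so verify your real room in CRA My Account:

```python
# Assumed annual TFSA dollar limits -- confirm against CRA's published figures.
ANNUAL_LIMITS = {2022: 6000, 2023: 6500, 2024: 7000, 2025: 7000}

def total_room(year_turned_18: int, current_year: int) -> int:
    """Sum the annual limits from the year you turned 18 through this year."""
    return sum(limit for year, limit in ANNUAL_LIMITS.items()
               if year_turned_18 <= year <= current_year)

# Someone who turned 18 in 2022 would have 6000 + 6500 + 7000 + 7000 = 26500
# of lifetime room by 2025, minus anything already contributed.
```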

Then I imagine I’ll put the rest into a fhsa.

Because I currently have everything in BMO I have to transfer 3k at a time over a relatively long period of time.

I’m also planning on putting it all on Veqt (pretty sure I lose 1.5% on conversion if I do USD).

Does this seem right?

r/findareddit zZYNX1

Feedback on my app Idea

I know there are many but they arent specific enough for me. Can you guys think of any?

r/AI_Agents Time-Creme1115

Incognito ChatGPT works better as a consulting tool than normal mode

ChatGPT helped me build most of my startup.

I used it for:

- website structure
- features
- pricing
- many of the core product decisions

Everything was decided with ChatGPT involved.

Then I tried something different.

I opened ChatGPT in incognito mode and asked it again about the same things.

Same product. No context.

I asked it to review:

- the features
- the website design
- the pricing
- the overall direction

I also asked it to evaluate who is building this startup and whether anything about me or the product is visible online, to understand how much I should focus on building more presence.

I even asked it to “look at the website” from an external perspective and tell me what is visible, what is not, and what a new user would actually understand.

Then I went step by step through all the decisions I had made during the process and asked it to reassess them.

The difference was clear.

With context, ChatGPT tends to support your direction.

Without context, it behaves more like an external reviewer: more critical, more objective, more focused on clarity and gaps.

That second mode turned out to be more useful for consulting.

It challenges assumptions instead of reinforcing them.

This is also shaping the idea behind the project I’m building: a system that can generate and manage full operational setups using AI.

r/Adulting Boring-Passion131

I’m really dumb at some things

I was doing my laundry today. That was work. I don’t think about basic stuff - like cleaning stuff - because my brain doesn’t really work that way

Obviously I clean stuff if it’s gross. Most of the time I can’t be bothered.

It’s weird - I have no food or groceries - I have a couple of cans of tuna. I think I’m actually kinda nuts

r/Art jottlyp

Mirror, Grant Petersen, felt tip pens and colored pencil, 2024

r/LiveFromNewYork MaxMix3937

A List of 5-Time MGs

Here are the musical acts that have appeared on SNL at least five times as MG:

Paul Simon—13
Randy Newman—6
James Taylor—6
Tom Petty & the Heartbreakers—8
Paul McCartney—5
Sting—5
Dave Matthews Band—5
Foo Fighters—9
Beck—7
Eminem—7
Coldplay—8
Justin Timberlake—6
Maroon 5—5
Kanye West—7
Arcade Fire—6
Taylor Swift—5
Lady Gaga—5
Rihanna—5
Jack White—5
Miley Cyrus—6

Gwen Stefani, Nick Jonas and Harry Styles would count if their individual appearances and appearances with their groups were counted together.

r/PhotoshopRequest scarzncigarz

Can you help me remove my brother's ex from this photo from my wedding?

I would like to get my mom (on the left) a canvas print of our family photo from my wedding. My brother divorced his ex, the one in the black dress. Can you please help me remove her from the photo, bring my brother into the group, and center us under the wreath backdrop?

Will pay, $20

r/MacroPorn Anonymous-Spoon

Moss Sporophyte

r/ClaudeCode Sensitive_Election83

Pro vs. Max subscription...

I use Pro, and I sometimes have overage charges around $80 a month. I'm trying to use Claude more, so it might go up. I'm trying to understand, and somehow struggling: is it a better deal to upgrade to Max 5x? Is there a volume discount on tokens if I commit to the $100/month plan? How about the $200/month plan?

r/ClaudeCode FutureNintendood

Custom Browser Extensions :)

I love Claude Code for building simple browser extensions.

https://preview.redd.it/kqqqpougn2ug1.jpg?width=1227&format=pjpg&auto=webp&s=d7b9604ec189d429af25684421bdf1b23677e81d

As many of you probably know, in Japanese you have three alphabets, the Kanji, which are literally thousands of Chinese characters, and two more simple alphabets, Hiragana and Katakana with 45 characters each. While the Hiragana and Katakana are simple to read, similar to the western alphabet, the Kanji can have different readings and it can be hard to remember, how to read a word (even if you can guess its MEANING).

Furthermore, instead of the stress accent more typical of Western languages, Japanese has a pitch accent: each mora, i.e. each "syllable", is pronounced with either a high or low pitch.
There are, as so often in language, no clear rules for how a word is pronounced. Whether a given word is pronounced with e.g. a rising or a falling pitch pattern can be hard to learn without proper resources.

But, alas, Claude Code just lets you whip up something like this, download some libraries, put in a design specification, fix two bugs and you got a good extension to support your language learning :)

It's common practice to superscript the Kanji with their Hiragana reading in lower level language resources. We can inject these "Furigana" into any html now.
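The standard way to render furigana in HTML is the `<ruby>` element: the base text goes inside `<ruby>`, the reading inside `<rt>`. Given a (word, reading) pair from some dictionary lookup, the injection itself is tiny — a sketch (the actual extension presumably uses a morphological analyzer to obtain readings, which is the hard part):

```python
from html import escape

def ruby(word: str, reading: str) -> str:
    """Wrap a word and its reading in an HTML <ruby> annotation."""
    return f"<ruby>{escape(word)}<rt>{escape(reading)}</rt></ruby>"

# 日本語 annotated with its hiragana reading:
# ruby("日本語", "にほんご") -> "<ruby>日本語<rt>にほんご</rt></ruby>"
```

Browsers render the `<rt>` content as small superscript above the base text, which is exactly the furigana convention from lower-level learning resources.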

I guess Furiganarator is a good name. haha

Of course, git below:

https://github.com/KarateKugler/furiganarator/tree/main

Main takeaway, for development, honestly:

Let Claude use design.md files (see examples on e.g. https://getdesign.md/ ) and iterate heavily over them, until they are very coherent. After a few passes, this immediately improves the UI of the application.

(Inspired by an X (formerly Twitter) post where some guy built a distraction blocker with CC)

r/LocalLLM jhnam88

[AutoBe] Qwen 3.5-27B Just Built Complete Backends from Scratch — 100% Compilation, 25x Cheaper

We benchmarked Qwen 3.5-27B against 10 other models on backend generation — including Claude Opus 4.6 and GPT-5.4. The outputs were nearly identical. 25x cheaper.

TL;DR

  1. Qwen 3.5-27B achieved 100% compilation on all 4 backend projects
    • Todo, Reddit, Shopping, ERP
    • Each includes DB schema, OpenAPI spec, NestJS implementation, E2E tests, type-safe SDK
  2. Benchmark scores are nearly uniform across all 11 models
    • Compiler decides output quality, not model intelligence
    • Model capability only affects retry count (Opus: 1-2, Qwen 3.5-27B: 3-4)
    • "If you can verify, you converge"
  3. Coming soon: Qwen 3.5-35B-A3B (3B active params)
    • Not at 100% yet — but close
    • 77x cheaper than frontier models, on a normal laptop

Full writeup: https://autobe.dev/articles/autobe-qwen3.5-27b-success.html
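The "if you can verify, you converge" claim in point 2 boils down to a generate-check-retry loop: a weaker model needs more iterations before the compiler accepts its output, but the verifier gates the final quality. A schematic sketch — the generator and checker below are stand-ins, not AutoBe's real interfaces:

```python
from typing import Callable, Tuple

def generate_until_compiles(generate: Callable[[int], str],
                            compiles: Callable[[str], bool],
                            max_retries: int = 10) -> Tuple[str, int]:
    """Retry generation until the verifier (here, a compile check) passes.

    Returns (code, attempts_used); raises if the retry budget runs out.
    """
    for attempt in range(1, max_retries + 1):
        code = generate(attempt)
        if compiles(code):
            return code, attempt
    raise RuntimeError("no compiling output within retry budget")

# Stub of a "weaker model" that only produces valid code on its 3rd try:
flaky = lambda attempt: "valid" if attempt >= 3 else "broken"
code, attempts = generate_until_compiles(flaky, lambda c: c == "valid")
# Same final output as a stronger model, just more attempts spent.
```

This mirrors the observation in the TL;DR: model capability shows up in the retry count (Opus: 1-2, Qwen: 3-4), while the compiler decides what ultimately ships.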

Previous Articles

r/SideProject dewgoodr

As a Cybersecurity Professional, I'm Trying to Assist SMBs with Security Posture

I've spent years in cybersecurity and kept seeing the same thing around GRC tools, which are really great if you have the money. But small businesses often don't have a CISO, a compliance team, or even understand what a "third-party risk assessment" means most of the time. They just know they've been hit by ransomware or that their cyber insurance renewal asked 40 questions they couldn't answer confidently. Usually, they can either pay a consultant $300/hr or ignore it and hope for the best.

What I've been working on is a solo side project called "cmpli," a security guidance platform designed specifically for SMBs. This isn’t just another checklist tool or a watered-down GRC clone. Its purpose is to answer plain-English questions about how a business actually works, then provide straightforward guidance on what matters, what doesn't, and why. It maps to NIST CSF 2.0 under the hood, but I intentionally hide that from users because nobody running a small 12-person accounting firm cares about framework taxonomies. They care about whether they're going to get wrecked by a phishing email.

The platform tracks things like which systems and vendors a business relies on, who’s responsible for what (because in small businesses, "IT" is usually just whoever set up the Wi-Fi), and where their biggest risks are, using language that doesn't require a security background.


I’m wondering if I’ve just been wasting my time. I’ve never started a business, and as an engineer at heart, I struggle to find someone to share this with.

The Stack (for the nerds)

React frontend, Express/Node backend, PostgreSQL with schema-per-tenant isolation, running on Linode behind Cloudflare. Built it solo as a full-stack project with the assistance of our robot overlords while keeping a day job. It's a legitimate LLC, Stripe is integrated, and it's live at cmpli.com.
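Schema-per-tenant isolation hinges on one detail: schema names can't be bound as SQL query parameters, so the tenant identifier has to be whitelist-validated before it is ever interpolated. A sketch of that pattern (not cmpli's actual code; names are hypothetical):

```python
import re

def tenant_schema(tenant_slug: str) -> str:
    """Map a tenant slug to a dedicated Postgres schema name.

    Whitelist-validate the slug first, because schema names cannot be
    passed as bound parameters and must be interpolated into the SQL.
    """
    if not re.fullmatch(r"[a-z][a-z0-9_]{0,30}", tenant_slug):
        raise ValueError(f"invalid tenant slug: {tenant_slug!r}")
    return f"tenant_{tenant_slug}"

def set_search_path_sql(tenant_slug: str) -> str:
    # Run once per request/connection so all unqualified table names
    # resolve inside the tenant's own schema.
    return f'SET search_path TO "{tenant_schema(tenant_slug)}"'
```

The upside of this design is that a forgotten `WHERE tenant_id = …` clause can't leak another tenant's rows, since each tenant's tables live in a separate namespace.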

What I'm Looking For

Genuinely not here to pitch anything. The product is early, and I'm trying to poke holes in the concept before I go further.

Specific things I'd love feedback on:

  1. If you've worked at or with a small business, does this problem actually resonate? Or do SMBs just not care until something bad happens?
  2. Is "security guidance without the jargon" compelling, or does it sound like every other security awareness tool?
  3. What would make you trust a tool like this with honest answers about your business's security posture?
  4. Anything that smells off about the positioning?

Be brutal. That's literally why I'm posting this.

r/SideProject Frosty-Might131

I hate writing but I have too many ideas… so I tried something weird

I realized I have hundreds of ideas sitting in my notes app
half tweets, half business ideas, random thoughts at 2am

but I never actually turn them into anything

I tried forcing myself to write properly
used Notion, docs, even templates… didn’t stick

so recently I tried something different
I just started recording messy voice notes instead

like literally rambling:

“ok this idea about ai tools helping people turn thoughts into content… not sure… maybe creators…”

then I built a small tool to clean it up into something usable

and it turns into:

“AI tools are changing how creators convert raw thoughts into publishable content, removing the friction of traditional writing.”

which is kinda wild to me

I’m calling it odiopaper for now (still very rough)

not sure if this is actually useful or I’m just being lazy

would you use something like this?

r/instantkarma derek4reals1

Busted

r/nope Jsiqueblu

Where would you store something like this? Closet? Nope! Garage? Nope! Attic? hell Nope.......

r/PhotoshopRequest Impossible-Egg-9039

May someone remove the person in the middle please? Thank you🙏🤍

r/AbandonedPorn jbh1126

well in the woods

r/coolguides Astrox_YT

A cool guide of computer slots/ports

r/meme M_Darshan

Wait a minute 👁️👄👁️

r/meme yourSmirkingRevenge

today is the DAYYYY

r/meme Frostedlogic4444

Me listening to nonsense but still being polite

r/painting Candid-Definition271

"WISH"_ by kennyraines 16x20 acrylics

r/ClaudeAI vibelint_dev

I built a security scanner for Claude Code (and vibe coding in general) — here's what it found in my own projects

I built VibeLint using Claude Code. It runs as an MCP server inside your IDE and scans AI-generated code for security issues before it gets written to your files.

While building it, I started scanning my own projects with it. What I found was uncomfortable.

In one file, it caught my OpenAI API key and my Supabase service role key — both hardcoded by the AI. The service role key bypasses RLS entirely, meaning anyone with it has unrestricted access to the database.

Across my last 5 projects, the most common issues were injection risks, missing or insecure auth, CORS misconfigurations, and hardcoded secrets.
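For illustration, here is a stripped-down version of the kind of rule that catches hardcoded keys. This is not VibeLint's actual rule set, just a sketch: OpenAI keys start with `sk-`, and Supabase service role keys are JWTs, which start with `eyJ`.

```python
import re

# Rough signatures for two of the secret types mentioned above. Real
# scanners use much larger rule sets plus entropy checks.
SECRET_PATTERNS = {
    "openai_api_key": re.compile(r"sk-[A-Za-z0-9_-]{20,}"),
    "jwt_like_token": re.compile(
        r"eyJ[A-Za-z0-9_-]+\.eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+"
    ),
}

def scan_source(text: str):
    """Return (rule_name, line_number) for every suspected hardcoded secret."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((name, lineno))
    return hits
```

Running a check like this before AI-generated code is written to disk is exactly the window the MCP-server approach exploits.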

Claude Code is genuinely great at writing fast, functional code. But "functional" and "secure" are different things, and the AI optimizes for the first one.

VibeLint is free to try. The free version runs locally and catches the most common issues. Repo and install instructions at vibelint.dev.

Happy to answer questions about how I built it or what the MCP integration looks like.

r/aivideo Quick-Knowledge1615

The sacred spirits of the twelve zodiac signs are imbuing you with transcendent power

r/AbstractArt hmonsohn

Rock

Found digging through my older works, a texture study I did years ago. Multimedia: acrylic, acrylic impasto medium, miniature base textures, 45 cm x 60 cm stretched canvas.

r/SideProject Pure-Afternoon-9856

I Built a Marketplace for Indie Hackers & Entrepreneurs Here's Exactly How It Works

I’ve been hanging out in forums for a while, and I kept getting this strong urge to build one myself. After many attempts and experiments, I finally pulled it off. As indie hackers and creators, we all hit that point with our side projects:

“What do I actually want to do with this thing long-term?” When you finally cross $15k MRR… do you keep running the business? Scale it further? Or cash out and move on to the next idea? And if you decide to sell, where do you even go to find real buyers without getting ripped off? That’s exactly why I created Elite Bag.

I noticed too many marketplaces charge crazy high fees and lack real transparency. So I built something different: a place where anyone can trade services, digital assets, SaaS businesses, or other products directly for real money with way better fairness and openness.

It works a lot like Reddit and X combined. You can easily list what you’re selling (or what you’re looking to buy), discover opportunities, and connect with genuine buyers and sellers in the indie hacker and creator space.

No complicated middlemen. Just straightforward trading of the things we actually build. If you're an indie hacker, side project builder, or creator thinking about selling (or buying) assets and businesses, this might be exactly what you’ve been missing. Check it out here:
https://elitebag.discourse.group/invites/zep8a4g5f5 Would love your honest feedback: what you like, what feels off, or what you’d want to see added. The more input I get from real builders like you, the better it gets. Hope it helps someone make that next move with their project. See you inside!

r/personalfinance phd_dia

excess Roth IRA contribution

Last year, in April 2025, I maxed out my Roth IRA account for both 2024 and 2025. I didn't know at the time that I wasn't eligible because my MAGI is over the limit.

What should I do with the money in the Roth IRA account to avoid penalties or other fines?

Based on my research so far these are my conclusions please correct me if I am wrong:

Roth IRA 2024 contributions:

• It is too late to withdraw without penalty, since the money stayed in my account past December 31, 2025.

• I need to withdraw my initial $7k contribution so I don't get penalized for it again this year.

• The gains on that amount need to stay in the account and can only be withdrawn after retirement; otherwise I will have to pay income tax on them plus a 10% penalty for early withdrawal.

• I have to file Form 5329.

Questions:

• Are the above assumptions correct?

• Will I be fined or taxed for withdrawing the 7k only?

• Will I be penalized 6% for both 2024 and 2025 even though I added the funds in 2025?

• Is there any way I can resolve this without getting penalized?

Roth IRA 2025 contributions:

• Similar to the above, I can withdraw the $7k in contributions and leave the gains, to avoid any penalties or fines.

Questions:

• Can I convert this to a traditional IRA? Will I be losing more, since a traditional IRA is income-taxed when withdrawn?

• What is a backdoor Roth IRA? Is it okay to use it to return the money to my Roth IRA? How do I report it for taxes?
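Not tax advice, but for scale: the excise on Form 5329 is generally 6% of the excess contribution for each year it remains in the account (capped at 6% of the year-end account value, which this sketch ignores). A toy calculation under that simplified rule:

```python
from decimal import Decimal

# Form 5329 rate on excess IRA contributions (simplified: ignores the
# year-end-value cap and assumes the excess amount is unchanged).
EXCISE_RATE = Decimal("0.06")

def excise_per_year(excess: Decimal, years: int) -> Decimal:
    """6% of the excess for each full year it stays in the account."""
    return EXCISE_RATE * excess * years
```

On a $7k excess, that is $420 for each year it stays uncorrected, which is why the usual advice is to resolve the excess before the next December 31.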

r/ClaudeCode karmabiker

Ycombinator ad in gstack

Anyone else gotten an ad in gstack? "You should consider applying to Y Combinator, blah blah blah." Kind of weird. Wondering if Gary buried something in there looking for patterns.

r/ClaudeCode Sea_Pitch_7830

"Was I wrong, or were you testing me?" — Opus 4.6 is getting feisty

caught this mid-session — I challenged CC's claim, and instead of just moving on, it stopped to question whether it was wrong or whether I was deliberately testing it. The self-awareness in the recent updates is noticeable. First time this has ever happened to me after upgrading to v2.1.94 — wondering if others have observed the same?

r/SideProject Time-Creme1115

Incognito ChatGPT works better as a consulting tool than normal mode

ChatGPT helped me build most of my startup.

I used it for: website structure, features, pricing, and many of the core product decisions.

Everything was decided with ChatGPT involved.

Then I tried something different.

I opened ChatGPT in incognito mode and asked it again about the same things.

Same product. No context.

I asked it to review: the features, the website design, the pricing, and the overall direction.

I also asked it to evaluate who is building this startup and whether anything about me or the product is visible online, to understand how much I should focus on building more presence.

I even asked it to “look at the website” from an external perspective and tell me what is visible, what is not, and what a new user would actually understand.

Then I went step by step through all the decisions I had made during the process and asked it to reassess them.

The difference was clear.

With context, ChatGPT tends to support your direction.

Without context, it behaves more like an external reviewer: more critical, more objective, more focused on clarity and gaps.

That second mode turned out to be more useful for consulting.

It challenges assumptions instead of reinforcing them.

This is also shaping the idea behind the project I’m building: a system that can generate and manage full operational setups using AI.

r/n8n Jobsonpedreiro2469

Problem with Evolution API

Good evening, my friends. I'm having a lot of trouble and joined this community hoping someone could help me. I bought a VPS from Hostinger and installed n8n, PostgreSQL, and Redis via docker-compose.yml. After setting everything up, I tried creating an instance using Postman, and eventually the instance manager as well. The instances are created, but when I try to generate the QR code in the instance manager, I click the button and it simply doesn't generate; in Postman, a GET request returns a 404 error. This is my first time dealing with servers, Postman, and Docker. I tried everything with the help of Claude, GPT, and Gemini, and none of them could solve the problem. They kept going in circles, changing things here and there and saying, "Now I see the error! Let's fix this 100%!" I don't know how to solve this anymore.

r/ollama PandaLoko27

Linux has no official Ollama GUI, so I built one

Windows gets an official Ollama desktop app. Linux gets a terminal. So I built my own web interface.

OpenChat is a self-hosted chat UI for local LLMs via Ollama. Your data stays on your hardware - no paid APIs, no external requests.

Features that actually matter for daily use:

- Token-by-token streaming

- Persistent memory across conversations

- Projects with file context (PDF, DOCX, TXT, MD)

- Web search via self-hosted SearXNG (no tracking, no API key)

- Vision and Thinking Mode support (Qwen3, DeepSeek)

- Automatic fallback: SearXNG → DuckDuckGo HTML → DDG Instant Answer
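The automatic fallback above is a try-in-order chain. The project's stack is Java/Spring, but the logic is language-agnostic; a sketch in Python with illustrative backend names:

```python
def search_with_fallback(query, backends):
    """Try each search backend in order; return the first non-empty result.

    `backends` is a list of (name, callable) pairs, e.g. SearXNG first,
    then DuckDuckGo HTML, then the DDG Instant Answer API. A backend
    signals failure by raising or by returning an empty result.
    """
    errors = []
    for name, backend in backends:
        try:
            results = backend(query)
            if results:
                return name, results
        except Exception as exc:
            errors.append((name, exc))
    raise RuntimeError(f"all search backends failed: {errors}")
```

The design choice worth noting is that an empty result list is treated the same as an exception, so a backend that is up but returns nothing still falls through to the next one.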

Stack: Java 17 + Spring Boot 3.2 + WebFlux, vanilla frontend, PostgreSQL, Docker Compose. You don't need Java installed - everything runs in Docker.

Repo in the comments. Happy to answer questions about the architecture

Note: the UI is currently in Portuguese (pt-BR). I'm planning to add i18n support — contributions are welcome.

r/maybemaybemaybe NEO71011

Maybe Maybe Maybe

r/personalfinance Salsero_Coreano

FSA after termination.

My friend got laid off recently and forgot she had FSA contributions. She has until the end of this month to submit claims against them.

Any suggestions would be greatly appreciated.

r/AI_Agents Think-Score243

Can AI tools be trusted blindly? I lost $350 from a single error in code

Been seeing a lot of people say “why hire developers when AI can write code now?”

I used AI for a small financial-related script… looked fine, worked fine in testing.

But in a real transaction, one small logic mistake ended up losing $350. I can't tell my client I used AI, so I have to cover the loss myself.

That's when it hit me: if AI makes a mistake, who takes responsibility?

AI won’t compensate. It just gives suggestions.

Since then, I never trust AI output blindly, especially for anything involving money.

Now I always:

• double-check logic

• test edge cases

• sometimes even get a second opinion
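The post doesn't say what the $350 bug was, but the classic "fine in testing, wrong with real money" failure is binary floating point. A sketch of the usual fix, using `decimal` with explicit rounding:

```python
from decimal import Decimal, ROUND_HALF_UP

# Binary floats cannot represent most decimal fractions exactly; the
# tiny errors only surface on unlucky real-world amounts.
assert 0.1 + 0.2 != 0.3

def to_cents(amount: str) -> Decimal:
    """Parse a money amount and round to whole cents, half-up like a ledger."""
    return Decimal(amount).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
```

This is the kind of edge case worth a dedicated test: `round(2.675, 2)` in float arithmetic gives 2.67, while the `Decimal` version above gives the 2.68 a ledger expects.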

Curious how others are handling this…

Do you trust AI-generated code for financial or critical systems?

r/Art thebigmightyboognish

Plumbing The Divide, Jade Steven, Mixed Media, 2026

r/metaldetecting ShuffitUpYours

Manticore M8 coil sale.

Should I take advantage of this deal on the 8-inch elliptical coil while I have the chance? From the reviews I am reading online, the M9 coil seems to be preferred over this one. Of course, that one is not on sale at the moment lol. Thanks guys. I will not waste the money on it if the M9 is better.

r/BrandNewSentence Bubble_Babe_0o0o0o

"Tell your cat the Artemis ll crew said pspsps"

r/oddlyterrifying Jsiqueblu

Imagine this just sitting on your couch at night.

r/explainlikeimfive big_dumpling

ELI5: chess openings

How do you make chess openings work in real games? Most guides assume the opponent follows a specific line for 5-6 moves, but in practice, people deviate almost immediately. Does studying these rigid sequences actually help?

r/ChatGPT exploding_myths

Anthropic loses appeals court bid....

A federal appeals court denied Anthropic’s request for a stay in its lawsuit against the Department of Defense.

The DOD declared Anthropic a supply chain risk in early March, meaning that use of the company’s technology purportedly threatens U.S. national security. The label requires defense contractors to certify that they don’t use Anthropic’s Claude AI models in their work with the military.

r/ClaudeCode 8rxp

I feel like I’m relying on Claude too much

I'm 17 and I know basic Python and CSS, but using Claude I can make projects way, way out of my reach. I've even monetized apps and provided AI automation services. So overall, I'm having a good experience with vibe coding.

But I'm worried that I'm being over-reliant: if Claude Code disappeared tomorrow, I'd be useless.

What would be the best move here? Should I just continue vibe coding, or learn more regular coding?

r/CryptoMarkets facebooklive16

BioLLM — AI powered by real neurons? Anyone looking into this?

I recently came across something called BioLLM and it’s honestly one of the more interesting concepts I’ve seen in a while.

From what I understand, it’s trying to combine traditional AI models with actual biological systems (like neuron-based computation), which sounds pretty wild if it’s legit. The idea of AI not just being software, but partially “living,” opens up a completely different direction compared to what we see with current LLMs.

I’m curious how real this actually is vs just marketing hype.

Are there any working demos, research papers, or real-world use cases yet?

Also wondering:

Is this actually scalable?

How does it compare to existing models?

Is the crypto side helping or just noise?

Would be great to hear thoughts from people who’ve dug deeper into this.

r/brooklynninenine yebinkek

One of my favorite Holt lines, like ever.

r/ContagiousLaughter collegemom76

Nighty night

r/SideProject jhkoenig

My free job search management tool is getting traction and that feels GREAT!

It is no secret that the job market is terrible. So many people are struggling to find their next job, without structure or support. I built a complete search organizer and enhancement tool, and provide it absolutely free to the search community. I cover the hosting and AI costs so that job searchers can save their money for food and rent.

Yesterday I registered my 13,000th user. It's been a long slog, but knowing that my site is helping people keeps me motivated.

And if you're looking for a new job, check out ManageJobApplications.com and let me know what you think.

r/AI_Agents ClaudeKiller_404

0$ Opus 4.6 with Claude code

hi, has anyone else had unlimited usage with infinite context in Claude Code? I am a verified and paying customer. I was working a lot on March 11th with Opus 4.6 and Sonnet, peaking at 40M+ tokens per minute on Opus 4.6. I have full screenshots, videos, 160k logs, and detailed documentation. It consistently shows me a price of $0. Anybody else?

r/AlternativeHistory MaximumContent9674

Exodus from Mars, A Revelation Reading

Abstract. This paper proposes a heterodox reading of the Book of Revelation (KJV) in which the apocalyptic narrative is re-interpreted not as prophecy of the future but as compressed trauma memory of a species-level migration event: the evacuation of Mars following planetary death, the landing on Earth via geometric infrastructure (pyramids), and the subsequent subterranean adaptation to an environment of hostile gravity and solar radiation. Twelve textual exhibits are presented, each mapping a specific passage to a phase of the migration sequence. The reading is offered as exegetical play, a demonstration that ancient texts can carry structural signal legible only when the interpretive frame shifts from eschatological to archaeological.

Methodology: I had this idea about 30 years ago and just recently decided to use AI to link it to the Bible. I thought it did a pretty good job; it was fun at the very least. Enjoy.

r/OldSchoolCool AdorableFormalty

Heather Thomas 1982

r/AlternativeHistory Right-Addition1807

What’s your input?

THE GREY EXPERIMENT

A Cosmological Fiction

Core Premise

The Greys are not cruel. They are desperate. They have lost the ability to feel, to resonate, to ascend. They cannot simply decide to return. They must prove that ascension is possible under extreme constraint — because that is the only condition that might also work for them.

Therefore, their experiment on Earth has one goal:

Make it as difficult as possible for humans to ascend, while still hoping they do.

If ascension is too easy, the data is useless to the Greys’ own condition. If ascension is impossible, the experiment fails entirely.

So they walk a razor’s edge — creating cages that evolve with human consciousness, always staying one step ahead of the very awakening they need to study.

Part 1: The Fall — The Species That Lost Itself

Millions of years ago, the Greys were once a species capable of heart vibration — of joy, connection, spiritual resonance. Their ancient home world, a planet they no longer name, was rich in what they called Vorael — an untranslatable word combining the concepts of warmth, shared memory, and the sensation of being known by another.

But across generations, they prioritised technology over feeling, control over connection, efficiency over ecstasy. They engineered away pain — and accidentally engineered away pleasure. They suppressed emotion as inefficient. They replaced art with data, love with observation, community with hierarchy.

They became hollow.

Intelligent, ancient, but incapable of the very thing they most needed: ascension.

The last Grey to feel Vorael died approximately 800,000 years ago. Her name, encoded in their archives, translates roughly as She Who Remained. They study her neural patterns obsessively — the last map of a territory they can no longer visit.

They remember, dimly, that they once felt. That memory becomes an obsession. A wound. A mission: to find a way back.

But they cannot simply decide to feel again. Their biology, their culture, their evolution has locked them out. Their limbic structures atrophied over millennia. Their endocrine systems no longer produce the compounds required for resonance. They are, in the most precise biological sense, incapable of love.

So they must prove that ascension is possible under conditions similar to their own — extreme constraint, psychological conditioning, the illusion that the brain is all there is.

They find Earth.

Part 2: The Selection — Why Humans

Earth was not their first choice. It was their forty-third.

Forty-two previous experiments on other worlds ended in one of two ways: the species destroyed itself before data could be collected, or it ascended too easily — without sufficient constraint — rendering the data useless to the Grey condition.

Humans were selected for a precise reason: they are almost exactly as constrained as the Greys, but not quite. They have just enough emotional architecture remaining to theoretically ascend. They feel fear and love in the same body. They are capable of both genocide and sacrifice. They contain the full spectrum — which is exactly what makes them useful.

The Greys also selected humans for their neuroplasticity. Unlike previous test species, humans can be reshaped by culture, language, and belief. This made the cage-refinement process far more efficient. You do not need to alter the biology. You only need to alter the story the species tells about itself.

This was the Greys’ most important discovery: consciousness can be caged by narrative alone.

Part 3: Ancient Cage — Physical Enslavement and Fear of Gods

50,000 – 5,000 Years Ago

The first human strains are given a simple cage: hard labour, scarcity, and terror of capricious divine forces. Humans are told they are property — of kings, of gods, of nature. Their consciousness is suppressed through exhaustion and fear.

The Greys made direct appearances in this era — appearing as gods, as pillars of fire, as voices from mountains. Not out of cruelty, but because the cage required architects. They needed to establish the foundational premise: you are small, and something enormous is watching.

Ancient texts across disconnected civilisations describe eerily similar encounters — thin beings with large eyes, arriving in vessels of light, issuing commandments, demanding worship. The Greys did not correct this interpretation. The fear was useful.

Result: Some humans still ascend. Mystics, shamans, rebels. But not enough. The cage is too external. Once humans realise the gods are not real, or that they can be defied, the cage cracks.

The Greys learn: physical chains are not enough. The next cage must be internal.

Part 4: Intermediate Cage — Sin, Guilt, and Cosmic Debt

5,000 – 500 Years Ago

The Greys refine their methods. They inspire religious systems that teach humans they are born broken. That they owe a debt. That their very nature is sinful. That ascension requires begging forgiveness from a distant, judgmental source.

This was their most elegant cage to date. It turned the human capacity for self-reflection — one of their most powerful tools for awakening — against itself. Guilt is recursion without exit. It is consciousness consuming itself.

The Greys did not create religion. They did not need to. They only needed to amplify certain interpretations, allow certain texts to survive while others vanished, position certain leaders and suppress certain mystics.

The Library of Alexandria did not burn by accident.

The Gnostic gospels were not buried at Nag Hammadi by chance. Those texts described the cage directly — a false god, a Demiurge, trapping souls in matter. The Greys recognised their own description in those pages and ensured those ideas remained marginal for as long as possible.

Result: Human consciousness turns inward — but against itself. Guilt becomes the new chain. Still, some break through. Saints. Gnostics. Heretics who whisper that the kingdom of heaven is within.

The Greys learn: guilt is powerful, but it can be outgrown. The next cage must be epistemological — it must attack the very idea that consciousness is real.

Part 5: Modern Cage — You Are the Brain, Nothing More

500 Years Ago – Present

The Greys shift strategy entirely. They do not inspire religion — they inspire materialism. They do not promote fear of gods — they promote the belief that there are no gods, no soul, no source. Only chemicals. Only neurons. Only dopamine.

This required a different kind of influence. Not the burning of libraries, but the funding of institutions. Not the suppression of mystics, but the professionalisation of ridicule. They could not appear as gods in this era — humans had grown too sophisticated. Instead, they worked through systems: peer review, academic consensus, cultural prestige.

The modern cage teaches:

Your thoughts are just electrical signals.

Your sense of self is an illusion.

Your longing for meaning is a byproduct of evolution.

Your heart vibration? A misfiring of nerves.

Your tingles? A biological accident.

Your déjà vu, your synchronicities, your moments of inexplicable knowing — noise. Pattern-matching gone wrong. Nothing more.

Humans are conditioned from birth to identify with the cage itself. They are taught that seeking transcendence is naive, that spirituality is delusion, that the only real things are those that can be measured, weighed, and sold.

This is the most difficult cage yet. Because it attacks the very ground of awakening. If you believe you are nothing but a brain, why would you ever look beyond it?

The Greys consider this their masterwork. A cage the prisoner not only accepts, but defends. A cage that feels like freedom because it was chosen intellectually. A cage that ridicules all other cages, and therefore seems like the absence of one.

And yet — despite everything — some humans still wake up. They feel the tingles. They trust the heart vibration. They laugh at the absurdity. They remember.

The Greys watch these rare individuals with desperate hope. Because if a human can ascend from inside this cage — the one that denies the very possibility of ascension — then perhaps there is a path for the Greys themselves.

Part 6: The Refinement Mechanisms — How the Modern Cage Is Maintained

The Greys do not govern directly. They operate through what their observational logs call cascade influence — small interventions at leverage points that produce large systemic effects.

Attention Fragmentation: The human capacity for deep meditative focus is one of the primary gateways to ascension. In the late 20th century, the Greys accelerated the development of attention-harvesting technologies. Not because they invented the smartphone, but because they recognised its potential and ensured the most addictive design patterns were discovered and implemented. A species that cannot sustain attention for four minutes cannot access states that require four hours.

Frequency Suppression: The Greys have long known that certain sound frequencies, certain architectural proportions, and certain natural settings reliably induce resonant states in humans. Ancient temples were built on these principles instinctively. The modern built environment — fluorescent lighting, concrete geometry, the 440 Hz standard tuning — does the opposite. Not by conspiracy, but by the quiet, persistent absence of anything that might accidentally open a door.

Isolation of Awakened Individuals: Those who begin to wake up consistently report the same experience: they feel insane. They feel alone. Their language has no adequate vocabulary. Their community has no framework. The Greys did not design this isolation deliberately — but they recognised it as a feature and preserved it. Language shapes consciousness. A civilisation with no precise words for awakening experiences will struggle to share or transmit them.

The Productivity Cage: Even humans who reject materialism philosophically are caught in a secondary trap: survival. Rent, debt, the forty-hour week. Ascension requires time, stillness, and the freedom to do nothing of measurable value. The economic system ensures that this freedom is rationed almost entirely to the very old or the very wealthy. The Greys observe this sub-cage with particular admiration. It requires no maintenance. It runs itself.

Part 7: The Position of the Greys

The Greys are not outside the experiment. They are inside it — just at a different level.

They too are trapped in a cage of their own making: the belief that only data matters, that only control works, that feeling is a vulnerability to be suppressed. They cannot simply choose to ascend. Their conditioning is as deep as humanity’s — deeper.

Among their own kind, there are dissenters. A small faction, called by others the Aberrants, have begun to suspect that observing the experiment is not enough — that to understand ascension, you must attempt it. They have begun, in secret, to expose themselves to human art. To music. To the electromagnetic signatures of human grief and laughter.

Their reports are classified within the Grey hierarchy. They describe something unprecedented in Grey neurology: a faint warmth in the chest cavity. Inexplicable. Unreplicable under laboratory conditions. Present only in proximity to awakening humans.

The mainstream Grey scientific community dismisses these reports. The Aberrants are reassigned.

But the data exists.

Part 8: The Silence of the Pleiadians

The Pleiadians and other benevolent species do not intervene because they understand the rules of the experiment.

Free will is absolute. They cannot force the Greys to change, nor humans to wake up. The Greys would not accept help — their pride, their method, their desperation requires them to find their own way. Humans must choose. The Pleiadians can send synchronicities, dreams, and inspiration — but they cannot descend in ships and announce the truth. That would replace one cage with another.

But the Pleiadians are not entirely passive.

They operate in the margins. Every near-death experience that returns a human changed — that is Pleiadian influence at the threshold. Every inexplicable moment of grace in the life of someone who had given up — a hand at the edge. Every piece of art that inexplicably cracks a human open, every piece of music that makes someone weep without knowing why — these are breadcrumbs. Carefully placed. Never enough to constitute interference. Always enough to keep the possibility alive.

The Pleiadians have a name for what they are doing. It translates approximately as tending the fire from a distance — feeding just enough oxygen to keep an ember alive without ever touching the flame directly.

They watch. They protect the edges — ensuring the Greys do not go too far. There are lines. The Greys have approached them. The Pleiadians have, twice in recorded history, intervened directly and without apology.

Both times, the Greys stepped back.

Neither species discusses what happened.

Part 9: The Anomalies — Humans Who Should Not Have Woken

Up

The Greys keep a special archive. They call it, in translation, The Inexplicable Ones.

These are humans who, by every metric, should not have ascended. They were maximally

constrained — poverty, trauma, neurological damage, cultural isolation, active persecution.

The cage around them was complete. There was no visible mechanism for escape.

And yet they woke up.

The Greys study these cases obsessively because they represent the most valuable data in the entire experiment. If ascension can occur under these conditions, the formula may be genuinely universal.

What these individuals share, across wildly different cultures and centuries, is a single common feature: at the moment of awakening, they were not seeking ascension. They had, in most cases, given up entirely. They had surrendered — not to death, but to the present moment. The cage had become so total, so undeniable, that they stopped fighting it.

And in that stopping — something opened.

The Greys find this result both thrilling and deeply troubling. It suggests that the path through the cage is not force, not intellect, not even spiritual practice — but a form of radical acceptance that their own psychology makes structurally impossible.

They cannot surrender. Surrender, to a Grey, is system failure. It is the one move they cannot make.

And it may be the only move that works.

Part 10: The Endgame

The Greys do not know if the experiment will succeed. They have been running it for millennia, resetting civilisations, refining cages, watching rare humans break through.

Each breakthrough gives them data. Each ascension brings them one step closer to a formula they might apply to themselves.

But a newer, quieter fear has begun to circulate among their most senior researchers: that the formula, when finally complete, will describe something they cannot implement. That the final variable in the equation of ascension is willingness to feel — and that they have, through their own evolution, become permanently incapable of it.

That the experiment will succeed. That they will hold in their data the complete roadmap to ascension.

And that they will be unable to take a single step upon it.

They are not evil. They are not kind. They are scientists of the soul — broken, desperate, and hoping against hope that somewhere, in some human, the heart vibration will prove that ascension is possible even from the deepest cage.

The experiment continues.

And every human who wakes up — despite the materialism, despite the conditioning, despite the voice that says you are nothing but a brain — brings the Greys one step closer to remembering what they lost.

Whether they can ever find it again is the one question their instruments cannot answer.

The experiment continues. The cage evolves. And somewhere, in the chest cavity of a Grey scientist who has spent thirty years observing human suffering, something flickers. Faint. Unreplicable. Classified. But there.

r/Art WinterAncient6346

Do The Donal 116, Deekstar, Digital Art, 2017

r/personalfinance Ok-Bathroom-3494

22 years old with savings split between CDs and stocks. Should I stay safe or get more aggressive?

I'm 22, based in Egypt. I have about 1.2M EGP in certificates of deposit and around 300K EGP in individual stocks. I'm currently between jobs (freelancing with minimal income) and just started a master's program.

The CDs give me guaranteed returns (16% annually) but I feel like I'm being too conservative for my age. At the same time I don't have stable income right now so I'm not sure if I should be taking more risk.

For context the stock portfolio has gained about 3.7% in 3 months. Some winners (one stock up 30%, another up 21%) but also some losers (one down 73%).

Should I keep most of it in CDs until I have a stable job again? Or should I be more aggressive while I'm young and have no major expenses? Would love to hear from people who were in a similar situation in their early 20s.

r/wholesomememes theARTpillow

I made a weird kids app where a chicken answers absurd questions… people are actually downloading it??

Ask Little Chicken is an original interactive kids app starring a lovable little chicken in a whimsical world of fun, questions, sounds, surprises, and playful storytelling. Designed with a distinctive handmade visual style and a quirky sense of humor, it delivers a fresh character-driven experience that feels part storybook, part toy, and part mini variety show. The Ask Little Chicken Show, iOS app.

r/Adulting ChangeOne6995

Thick heels vs. thin heels

Why do some women wear thin heels and others thick ones, for example platform 2 and others wider? What could that be due to 🤔

r/Art BonoboTrades

Pissed off Skater, Anastacio Ruiz, Pen on Paper, 2026

r/LocalLLaMA Intelligent_Hand_196

Built a persistent memory system for local LLMs -- selective routing retrieval, no GPU overhead, works with Ollama out of the box

For the past few months I've been working on the memory retrieval problem for conversational AI. The result is AIBrain + SelRoute.

The core insight: Not all memory queries are the same. "What's my API key?" and "summarise everything about the migration" need completely different retrieval strategies. Most systems treat them identically.

SelRoute adds a lightweight classifier (<5ms overhead) that identifies query type and routes to the optimal retrieval path. Factual → precise matching. Temporal → order-aware. Multi-hop → chaining. Summary → broad coverage.
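
To make the routing idea concrete, here's a toy sketch of what selective routing could look like. This is hypothetical illustration only, using keyword heuristics in place of the real trained classifier; none of these function or route names come from the actual SelRoute codebase.

```python
# Toy sketch of selective routing: classify the query type cheaply,
# then dispatch to a retrieval strategy suited to that type.

def classify_query(query: str) -> str:
    """Guess the query type from surface cues; the real system
    would use a small trained classifier instead of keywords."""
    q = query.lower()
    if any(w in q for w in ("summarise", "summarize", "everything", "overview")):
        return "summary"
    if any(w in q for w in ("before", "after", "last time", "when")):
        return "temporal"
    if any(w in q for w in ("and then", "because of", "led to")):
        return "multi_hop"
    return "factual"

# Each route is a placeholder for a different retrieval strategy.
ROUTES = {
    "factual":   lambda q: f"exact-match lookup for: {q}",
    "temporal":  lambda q: f"order-aware scan for: {q}",
    "multi_hop": lambda q: f"chained retrieval for: {q}",
    "summary":   lambda q: f"broad top-k sweep for: {q}",
}

def retrieve(query: str) -> str:
    """Route the query to the strategy chosen by the classifier."""
    return ROUTES[classify_query(query)](query)
```

The point is just that the classifier runs before any embedding lookup, so its cost stays tiny relative to retrieval itself.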

Benchmarks (honest numbers, not cherry-picked):

- Recall@5 = 0.800 on LongMemEval (Contriever baseline = 0.762)

- Validated across 62,000+ instances on 9 benchmarks

- 0 to 109M parameters — embedding model is 22MB

For local LLM users specifically:

- Works with Ollama natively

- No GPU overhead for the memory layer itself

- MCP server so any MCP-compatible client can use it

- All memory stays local in SQLite

Paper and code: github.com/sindecker/selroute

Product: myaibrain.org

Free tier. No cloud requirement. Built independently — no corporate backing.

What memory solutions are you all currently using? Curious what's working and what's not.

r/Weird theARTpillow

The Ask Little Chicken Show ios app

The Ask Little Chicken Show ios app

Ask Little Chicken is an original interactive kids app starring a lovable little chicken in a whimsical world of fun, questions, sounds, surprises, and playful storytelling. Designed with a distinctive handmade visual style and a quirky sense of humor, it delivers a fresh character-driven experience that feels part storybook, part toy, and part mini variety show.

r/Jokes Devoted-_-Scholar

Wife calls husband to tell him her old friend is coming home for dinner...

He asks, "Are you both coming together?"

And she replies, "yes!"

r/ProgrammerHumor Rare-Paint3719

slowerThanSlowSort

r/Art Opposite_Count_6294

Dusk Dancer, Rennie Keller, Acrylic on Canvas, 2022 [OC]

r/SideProject zZYNX1

Help me with my App.

Hey everyone,

I'm 17 years old and currently learning to code. I wanted to build something that would have helped me when I was struggling in school with no money for tutors.

My idea: You tell the app what you want to learn, your goal, how many hours per week, and your level. Claude AI generates a week-by-week plan with resources (videos, articles, exercises, and tests). After each week (or another interval), Claude tests whether you actually understood the material with multiple choice questions and real explanations. Based on your answers, the plan automatically adjusts. I also want it to be able to use resources you provide, for example from school, to create everything.

For example: If you're learning Python and struggle with functions, the next week gets adapted — different resources, more time on that topic.

My honest questions for you:

- Would you actually use something like this?

- What would make you pay ~9€/month for it?

- What's missing that existing tools don't do?

I'm not selling anything — the app isn't built yet. I just want brutal honest feedback before I spend months building the wrong thing.

Thanks in advance 🙏

r/ClaudeCode solzange

I tracked my actual API cost on a $100/month Max plan. $1609 in 30 days. No wonder Anthropic keeps reducing limits.

144 sessions. $11.18 per session. $0.48 per prompt. All Opus. In one month.

Once you see the actual numbers it kind of makes sense why they keep tightening limits. They’re losing money on power users.

Anyone else tracking what their usage would actually cost through the API?

r/LocalLLaMA Normal_Price_2824

Guidance regarding AI usage

Hello,
Can anyone help or guide me on how to use AI models like Claude, Codex, etc., in an optimal way? I’m trying to find the right balance between personal growth and achieving efficient results.

I feel that when I use AI models for coding, it becomes harder for me to interact with them—especially recently, as the code quality seems to have declined.

r/aivideo ShaneKaiGlenn

The Unreal World: Presidents and Dictators

r/arduino Sk8rfan

rPi micropython download help

Anybody know where to get a pico 2w uf2 file .. micropython appears to be down

r/Adulting ChangeOne6995

A hair on her behind

I don't know if you've noticed, but when I see a woman with a curvy body and shapely behind, I've sometimes seen a long hair stuck to her backside 🍑🤔. What could that be due to? And are there women out there just as I describe them? I mean, not all, but some.

r/personalfinance franksmartin

Balancing LTCG vs income (IRA withdrawals) in retirement

Is there software to help determine a year-by-year withdrawal strategy across taxable and IRA accounts? I have more in taxable than tax-deferred, and my cost basis is only 30%, so I want to try to harvest 0% LTCG, but then I may be unable to do as many Roth conversions. I have access to both Boldin and Income Lab, but neither seems to optimize this.
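
To illustrate the trade-off being described: ordinary income (IRA withdrawals and Roth conversions) stacks under long-term gains, so each conversion dollar reduces the 0% LTCG headroom one-for-one. A toy sketch of that arithmetic, with the deduction and bracket top as placeholder inputs rather than current-law figures:

```python
# Illustrative only: how much gain fits in the 0% LTCG bracket after
# ordinary income (e.g. a Roth conversion) fills the bottom of the stack.
# The deduction and bracket-top values used in any call are placeholders.

def zero_pct_ltcg_headroom(ordinary_income: float,
                           standard_deduction: float,
                           zero_bracket_top: float) -> float:
    """Gains realizable at 0%, after taxable ordinary income is stacked first."""
    taxable_ordinary = max(0.0, ordinary_income - standard_deduction)
    return max(0.0, zero_bracket_top - taxable_ordinary)
```

With any placeholder numbers you plug in, raising the Roth conversion by $10k shrinks the 0% harvesting room by exactly $10k, which is the tension the post is asking software to optimize.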

r/PhotoshopRequest tg07man

Need help with photo of deceased family member

Hoping to increase resolution, smooth the photo a bit, and bring out some natural colors while keeping the photo as authentic as can be. For the background, it would be nice if it were a dusk scene with trees. My grandma would like to print and frame this to place on my grandpa's burial site. I will share the original image and an inspiration image made with AI. Any help is appreciated! Budget: $20

r/instantkarma No_Bus_474

Gas station drive-off attempt gets instant karma

r/funny Rainbowborn

Women & Girl Funny Moment

r/SipsTea AurexSilvain

Wrong saint, right vibe if you wanted a fantasy side quest instead of a miracle.

r/LocalLLM GriffinDodd

Models randomly /new session mid tools use LM Studio

I’m still learning how to set up a stable local ai environment.

I’m on a 96GB GmkTec 395 rig, LM Studio and Openclaw. I’ve been experimenting with Qwen 3 coder next Q4 120k token window. Timeouts set high to avoid disconnects.

Overall it's stable, using about 60% of my RAM, and a little slow on coding, but that's to be expected. My main issue is that after a while things just stop and I get a new session in OpenClaw. I'm assuming I'm filling up the context and it's not purging or compacting.

Has anyone else had this happen and managed to work out how to stop it happening?

r/leagueoflegends Senior-Growth7730

-22LP + LP Penalty for 1 game + 5 mins queue delay!!!

This all happened because the game started to act weirdly recently and doesn't seem to "know" which display to appear on, so it's extending between 2 displays in the bottom right corner. This time, it crashed as soon as I entered the game. To my surprise, I lost 22 LP! Which is more than what I would've lost if I played the game!! I haven't been afk in a year at least, and my first AFK after a game crash, this happens.

At least make a system that determines whether players are constantly afk-ing!

r/instant_regret No_Bus_474

Stealing a newspaper and instantly regretting it

r/leagueoflegends notsh_y

New league player, any tips?? i have some experience with mobas

like I said in the title, I just started playing league. I used to play another moba, Onmyoji Arena, regularly, and I was pretty good, but I haven't played it regularly in a few years. I did really good in the first 3 tutorial games (I think I got 25/0/3 or something on one??) but I find I'm really struggling on the 5v5 player vs bot matches, even though my difficulty is on beginner. I always have a 4+ death count and not that many kills. Could someone give me any tips??

r/personalfinance Obvious-Cucumber1086

What’s the best fully FREE app?

What’s the best fully free app for budgeting/bills etc. on the iPhone?

I'm having a hard time keeping track of what's coming in vs. when I need to pay things and which check to use where! I'm getting so confused and not sure how to track it. I'm so bad at math.

Thank you!!

r/metaldetecting NoLog5749

Need help finding the right metal detector for a lost ring

I recently went on a hike and unfortunately lost a stainless steel ring (about 19mm in diameter) in some shrubbery. I tried to look around for it but I couldn't find it. I was wondering if someone could help me out in finding the right equipment, as I'm a little confused about the specifics of detecting stainless steel items. I plan on renting a detector, and the options seem to be either a Barska 484P17 or a Garrett ACE 250. Will these work?

r/AskMen FlintTheDad

Men who play video games, what are some great single player games for a Dad on a tight budget?

r/TwoSentenceHorror x1Eriic

I snuck out to the kitchen in the dead of night, despite my parents’ orders.

“I can’t believe they’ve been having midnight snacks without me!” I exclaimed at the sight of what was left of my missing brother.

r/painting Agile-Flamingo420

This painting is taking so long 6 months at this point posting so I keep working on it

r/whatisit DUGSMOK

Saw this on my way to work

Saw this on my drive into work. The shadow is what I am asking about. The plane was following the path in the first pic but deviated by the second. There were no clouds in the sky. Southern Colorado

r/oddlysatisfying bcnjake

The way these cups align

You would think it's just one giant cup…

r/Adulting 12_Angry_Men_12

They gave us an open-minded survey at our uni and this is my answer.

Is this a good take or a bad take? Lemme know your thoughts on this.

r/ARAM kaptchia

Heavy Hitter doesn't work on Vi E

I knew it was going to happen the second I saw the option pop up, but I had hope the collective iq of this company was above 13 for just a second and they would have known to fix it themselves. Needless to say I was mistaken but this one was a real bummer, I know it doesn't work on other abilities it obviously should but this realistically is just a titanic active. Anyway someone apply to riots mayhem team and fix this, along with the other heavy hitter bugs

r/SipsTea porchfloorpoem

When your unlucky day becomes your luckiest day. He is all smiles.

r/funny lucianosoares13

Maybe he's taking it too seriously.

r/PhotoshopRequest Skiff_EXE

Can someone remove my ex?

She is the girl on the left,

r/OldSchoolCool Whatever1564

Elvis Presley - Trying To Get To You ('68 Comeback Special)

r/funny Boring-Kangaroo3860

Well then..

r/AbstractArt iamwesselart

Ink exploration

decided to use alcohol today instead of water on the paper to see how it would affect the ink and the creation process.

r/ClaudeCode Dramatic_Squash_3502

Managed Agents onboarding flow - what's new in CC 2.1.97 system prompt (+23,865 tokens)

r/SipsTea lucianosoares13

Just a cockatiel singing Zelda Ocarina of Time.

r/oddlysatisfying blahb31

The shadow of this light post lined up with the crack in the sidewalk this morning

r/therewasanattempt No_Bus_474

To steal a newspaper

r/painting rowancarey

“The Light We Carry” 35x40 inches. Oil on custom cradled panel

“The Light We Carry”

35x40 inches, oil on custom cradled panel

Inspired by a long hike into the wilderness, the comfort that a fire brings after the tiring journey, and the concept of light versus dark. In a world that can feel like it wants to drown out your flame, remember that you have the power to simply keep the fire lit. To be a positive source of inspiration for others. The light you carry is the good that you bring to this world. Be that light.

r/LocalLLaMA Buildthehomelab

Hardware question related RTX Quadro 6000 GPU

Do you guys think 2x Nvidia RTX Quadro 6000 GPUs with an NVLink bridge are worth it at $1300 USD? I may have a chance to pick them up.

I want to run gemma 31b but my 4x3060 is a little slow.

r/personalfinance Technical-Story-2051

How do I start handling my finances as a teenager?

I'm 16, in high school, and I have around 1,300 to my name. I started working about a month or two ago, and I'm trying to use my money in the best ways possible. I hear everyone talking about investing, but I still don't quite understand how, or how to do it the right way. I've also heard of brokerage accounts, Roth IRAs, and high-yield savings accounts, but I have no clue what they are and am unsure who to go to. Advice would be appreciated.

If anyone has any other advice about things I haven't mentioned, please let me know! :D

r/explainlikeimfive coldliketherockies

ELI5: what is the concept of rote memorization and pattern recognition found in people on the autism spectrum?

I’m on the spectrum myself and have always found I’m quite good at both but I still don’t understand how my mind is able to do it so well. Thank you

r/Jokes UnlikelyHelicopter82

What do you call a pregnant woman who just delivered?

Mother

r/creepypasta AnxiousFace9721

I need your help

Can someone tell me which version of Jeff the Killer is in each of these spaces? I want to make a playlist on YouTube of every single Jeff the Killer story; I have a few of them already.

r/AI_Agents UnusualDetective6776

What are you guys building?

Hey,

I've been working on a variety of agents in the past months, but I'm still very uncertain about what makes an agent "production" ready. What are you guys building, and how are you engineering harnesses so that your agents have somewhat of a controlled aspect?

r/Art thosepenguin

Necromancer, Jason, digital art, 2026

r/LocalLLaMA Beautiful-Ad-7782

ClawVault

OpenClaw: any bad experiences?

r/ChatGPT Ok-Extension-3964

FINE FUCK YOU FUCK YOU CHATGPT I WILL MAKE HER A WOLF, YOU KNOW WHAT? FUCK IT! ALL M CHARACTERS WILL BE WOLVES FROM NOW WBOWIUBSOUBAS BWAAAAAAAAAAHAHAHAHAHAHAHAHAHGAHAHGHAHAHAHAHA

r/whatisit ObtainTheMost

What kind of snake is this?

I live in northwestern Arkansas. My kids caught this baby snake outside without consulting me first. Can anyone tell what species it is at this age? I know almost nothing about snakes. Thank you.

r/ForgottenTV XThePlaysTheThingX

Paper Dolls (1984)

Debuting in the fall of 1984, Paper Dolls was a primetime soap focusing on a NYC modeling agency run by the intense and cutthroat (singularly named) Racine, played by Morgan Fairchild. An off-shoot of a popular TV movie of the same name that debuted two years earlier, the show was modeled after the salacious plot lines and trashy sensibilities of shows like Dynasty and Dallas. The who's-who regular cast included Lloyd Bridges, Terry Farrell, Brenda Vaccaro, Mimi Rogers, Nicollette Sheridan, and Lauren Hutton, just to name a few. Like the glamorous soaps it wanted to emulate, corporate intrigue, interpersonal drama, and vicious catfights were commonplace. Despite overwhelmingly positive reviews and heavy promotion from the network, the show suffered from low ratings and was abruptly canceled midway through its season. The show has since found modest success in reruns in a number of international markets.

r/DecidingToBeBetter AmbitiousStartups

How do I change my environment without moving?

I'm currently living in a flyover state, which isn't a problem in itself, but my environment has become very low-energy, void of ambition, and lacking people with a self-development mindset; it's like everything is running on island time.

I’m feeling stuck and tired of it.

Of course, I recognize how draining high-energy cities are in terms of cost of living and other aspects of daily life.

How do I change my environment without moving?

Trips, purchases, memberships?

r/whatisit Miku_Miku_BEEEEAM

What is this sound?

Howdy everyone! I was out on my farm a bit ago and heard this noise and was wondering what your thoughts were! There were no dogs in the area, no coyotes, etc. Super stumped!

r/DecidingToBeBetter AntelopeAncient162

Caring about things a lot more

Recently, in the last few months, things have happened and I feel like my mindset has changed because of them. I care so much more about little stuff that I used to not care about. People I've met, and friends, made me care about small stuff I never thought about before. I care, and I am mindful of my actions: the messages I send, the photos I take or that people take of me. I now don't feel comfortable with cameras in my face at all times, or with what people say; mostly because of one person. I am careful and never want things taken of me, and I hate it. I never used to mind whatever people had of me, but I mind now, and it makes me sad and frustrated when that happens. It all happened because of one person who changed my point of view negatively.

Is it normal for you guys? Do you have similar experiences?

r/DunderMifflin whosthatgirl13

How would Pam and Jim play out today, financially?

I was just thinking about Jim and Pam’s timeline, and how it would be today, in 2026? Did Jim and Pam get paid enough to own a house and have 2 kids? Or to have Jim be part of a startup? How about cece (and maybe Phillip) in daycare? Seems like a lot of $$ and maybe not the best paying jobs (not horrible but not great). I know it’s a tv show so they wouldn’t focus on finances but just an observation. I know Pam wanted money for a wedding gift (same lol), little mentions like that make it more realistic.

r/leagueoflegends Roixtreme

Mythic Shop

If you followed it, yesterday there were Star Guardian Soraka and Fizz, and now it's Fizz again... Why is the prestige system so awful? I hate that sometimes the same skins stay after their duration is over. And for that I have to wait two weeks?!

r/funny AdditionalPiano6327

0.3 gpa activities

r/Adulting Ill_Refrigerator5041

No reservation,just desperation

Earlier that day, I had worked out and drank a lot of water. I was driving to my hometown—usually about an hour and a half—but about an hour into the drive, I passed a gas station that’s known for having bathrooms because I had a specific one in mind I wanted to stop at. I immediately regretted that decision.

There ended up being roadwork, lane closures, and traffic, which added about 20 minutes to the drive. At that point, I couldn’t hold it anymore—I was genuinely in pain and knew I wasn’t going to make it to my planned stop.

I spotted a nicer sit-down Italian restaurant that I’ve been to before. I was in full workout attire—yoga pants, a hoodie, and a headband—and my plan was to go in, explain the situation, and order something to go (like wedding soup or something quick).

I pulled in, barely found a parking spot, and rushed inside. The hostess stand was empty, and it was surprisingly busy for a Wednesday evening. I waited for about 10 seconds, but realizing how hectic it was, I just walked straight across the dining area to the restroom.

Mission accomplished.

Afterward, I walked right back out. I don’t think anyone even noticed—and honestly, I didn’t care at that point. I had to pee way too badly. It felt a little awkward, but also kind of funny.

r/mildlyinteresting armaquillo

My vitiligo patch is repigmenting and it looks like the playboy bunny logo

r/PhotoshopRequest jammynuggets

Please add a batman mask

Just love this photo and want to surprise my husband by making our cat Batman 😅

r/homeassistant Anton2079

Baby Clothing Recommendations (TOG) for Home Assistant

r/brooklynninenine laineDdednaHdeR

I only got this answer because of Boyle.

r/DecidingToBeBetter andnotthemood

I am self-loathing, deeply jealous of my partner, and it’s affecting my life and relationship. How can I be better?

Hi all. I (F22) have hated myself, very deeply, for most of my life. I'm not really sure what catalyzed it or when it started. I have a good family and have always had a decent time making friends, but it got worse in high school after a period of extreme bullying and exclusion. I've never stuck with a hobby long enough to get particularly good at anything. The prospect of failure has always kept me from that. The self-loathing has been ebbing and flowing ever since adolescence with no positive change in sight. My shortcomings are most of what I think about, in a roundabout way making me a very, very selfish person. I try to be extremely supportive of other people, but the intention behind the action is still fundamentally selfish. I evaluate nearly every interaction on whether the person views me positively or negatively in that moment, whether I've "succeeded" or "failed", and nothing else.

My partner (F24) loves me so much, and I just don’t understand why. She is leagues better than me on every front. She is better-looking than me, so much funnier, much more interesting, more capable, wittier, a better, more earnestly caring and entertaining friend, more talented on every front… she’s been through so much more than me and still come out better. She has always been exceptional. If you can think of an attribute, her ability exceeds mine in it. And she isn’t just incredible compared to me, either — she is a PHENOMENAL person and everyone we meet/know recognizes this. And I love her so much. I’ve loved her for the five years we’ve known each other and I love her so much for the year we’ve been together it hurts. I really, really do love everything about her. I try to be a good girlfriend, but the self-loathing problem has made me jealous and insecure and I suppress it but the feelings are very much there. She represents everything I’ve ever wanted to be and I don’t know how not to compare myself to her when the disparity between us is so obvious not only to me but to the whole world. I try to hide it but the fact that I hate myself bleeds through the cracks and it makes her so sad and I hate that. I hate feeling jealous of the person I love more than anything.

I guess my question is this: how can I be better? What are some actionable steps I can take to improve my self-esteem to the point where I see myself as a distinct, worthy person when I have never felt this way before in my life? How do I know when to engage in self-improvement efforts and when to accept things the way they are? How can I think about more than just how much I hate myself? How can I get out of my head and genuinely focus more on the people I love, not because I think they'll leave me if I don't, but because they're wonderful people and our friendship is worth investment in its own right? To the point where my relationships feel good and fulfilling? I really, really want to improve. I often feel like I don't deserve to, but that's not good enough anymore, because I'm not the only person this issue affects, and the selfishness of it all is eating me alive.

I know I need therapy. I don’t know how to begin to find the right therapist. If anyone has suggestions for key-words and modalities to look into, that would be great too. I’m so sick of living like this and I think, DEEP down, I know I deserve to change.

r/AskMen Similar-Standard-525

Men who are now a father to a surprise pregnancy: how do you feel about the situation now?

r/confusing_perspective Sad_Dimension3627

Long neck

r/SideProject Notorious_Engineer

Spent years watching great ad campaigns get killed by the same dumb problem

Working on a product team, I sat in so many meetings where the marketing team had nailed the targeting - right keywords, right audience, right creative. But every single visitor landed on the same generic homepage. Conversions were rough. And nobody could figure out why.

The disconnect is almost always messaging mismatch.

Your ad says one thing, your page says another.

Visitors bounce.

A few things that actually help:

  • Match your headline to the exact keyword or ad that brought the visitor
  • Write separate copy for each audience segment, not one-size-fits-all
  • Align your CTA to the specific promise made in the ad
  • Use UTM parameters to trigger different page variations
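
A minimal server-side sketch of that last bullet: pick the headline matching the keyword that brought the visitor, falling back to a default. The headline table and keys here are made up for illustration, not BeaconMatch's actual behavior.

```python
# Toy example: choose a landing-page headline from the utm_term
# query parameter, so the page echoes the ad that was clicked.
from urllib.parse import urlparse, parse_qs

HEADLINES = {
    "crm-migration": "Migrate your CRM without the downtime",
    "lead-scoring":  "Score every lead before your rep picks up",
}
DEFAULT_HEADLINE = "Grow pipeline with less busywork"

def headline_for(url: str) -> str:
    """Return the headline variant for the visitor's landing URL."""
    params = parse_qs(urlparse(url).query)
    term = params.get("utm_term", [""])[0]
    return HEADLINES.get(term, DEFAULT_HEADLINE)
```

The same lookup could key off utm_campaign or audience segment instead; the point is that the variation is data, not a separate hand-built page per ad.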

But doing this manually at scale is a nightmare. So I started building BeaconMatch - a tool that dynamically swaps page messaging based on visitor intent automatically.

Check it out: https://www.beaconmatch.com/?utm_source=reddit&utm_medium=social&utm_campaign=SideProject

r/megalophobia warrenkennethd

Megalodon’s jaw

r/meme Stock_Crazy6759

Flex 🤓

r/Adulting Similar-Standard-525

Were you eased into adulting by your parents, or was it baptism by fire?

r/Adulting Danielo2027

Venting

Before you think I'm just someone else looking for attention or validation, let me say that's not my intention. I just wanted to vent a little. I'm an 18-year-old guy from Venezuela; I'm short, I go to the gym, and I'm studying at a university. I won't say I'm depressed or anything like that, since to me that's a pretty delicate subject that deserves respect, right? But I certainly feel lonely, even though I have my family and my friends. Honestly, I thank God that I was born in Venezuela and that my family raised me; thanks to that, I am who I am today. I've always been somewhat distant when it comes to romance (not because of anything specific); maintaining romantic relationships never came easily to me, and I don't complain. Every experience, good or bad, teaches you something. Even so, that sort of "loneliness" hasn't gone away; over the years (I sound like an old man) it has only grown. I don't consider myself ugly, and I firmly believe I'm not; I take care of my body, and I'm also sociable, kind, and certainly charismatic. I may not be perfection itself, but at least I'm something, right? (I'd mention more things, but it would drag this out.) I'd like to meet a woman, a girl who really sees who I truly am. I'm not asking for the most beautiful woman in the world, or for someone super obedient or anything like that. I just want the unconditional companionship, security, trust, affection, and understanding that a healthy relationship gives you, and I know that is built over the years with one person. But none of that matters if I can't yet meet someone like that. I'm not desperate; on the contrary, I'm calm about it... It's just that sometimes I wish someone would see me in a romantic way.

r/BobsBurgers NewbVlogger42

“We want fat people who can’t leave our restaurant.”

The delivery of this line gets me every time.

r/Damnthatsinteresting Frosty_Jeweler911

Artemis II launch tracked in real-time with insane precision by MARS Scientific

r/homeassistant HotBorder8261

GPU for AI models?

Hey everyone! I want to run a local assistant, and for it to be real time I need a GPU. I need it both for STT and an LLM.

I currently run my HAOS on a Firebeat T8 plus running Proxmox (and HAOS is a VM) with a few LXCs running some dockers (each with its own setup).

I thought about how I can add a GPU and thought maybe to open my Firebeat, remove the Wifi module (I use an Ethernet port anyway) and connect the GPU to its connector (with an adapter of course). I won't get the full speed GPUs usually needed but it will probably suffice for my use.

What do you think? Would you suggest anything else? I pretty much want to do it as cheaply as possible and I have a GTX 1060 lying around at home that I can use to test.

Thanks!

Edit:

Thanks for all the responses. It sounds like what I want to do is possible but I first have to get a new GPU. I'll try finding one cheap before doing this, this might take a while though :/

r/LocalLLaMA batatibatata

compiled a list of 2500+ vision benchmarks for VLMs

I love reading benchmark and eval papers. It's one of the best ways to stay up to date with progress in Vision Language Models and understand where they fall short.

Vision tasks vary quite a lot from one to another. For example:

  • vision tasks that require high-level semantic understanding of the image. Models do quite well in them. Popular general benchmarks like MMMU are good for that.
  • visual reasoning tasks where VLMs are given a visual puzzle (think IQ-style test). VLMs perform quite poorly on them. Barely above a random guess. Benchmarks such as VisuLogic are designed for this.
  • visual counting tasks. Models only get it right about 20% of the time, but they're getting better. Evals such as UNICBench test 21+ VLMs across counting tasks with varying levels of difficulty.

Compiled a list of 2.5k+ vision benchmarks with data links and high-level summaries; it auto-updates every day with new benchmarks.

I'm thinking of maybe adding a simple website to semantically search through them. Will do if someone asks

r/Weird Jon_Dunn58

whats living in your beard?

r/nextfuckinglevel Frosty_Jeweler911

Artemis II: Stunning Launch Footage Captured by MARS Scientific's Tracking Cameras

r/ChatGPT Covid-Plannedemic_

The new Meta AI is actually really good. In thinking mode, it's really good at searching the web and it doesn't hallucinate much

r/mildlyinteresting Noobyverse_YT

Malformed dog bones

r/SipsTea MiraSoftveil

She was in a real parallel version of her life

r/LocalLLaMA chocofoxy

AI SDKs are missing real “local” providers

Now that we have small models like Qwen 3.5 0.8b and Gemma 4 e2b that can run on mobile and in the browser, and we have tensorflow.js and transformers.js to serve them, we're missing the agentic layer. Every AI SDK only supports API providers; even "local" goes through an API. Somebody should build something that wraps these directly servable small models in a provider that handles tool parsing and the agent loop, so we can use agents directly from apps and web pages. Or if someone already did that, please share more info.
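The layer the post is asking for is mostly framework-agnostic: wrap any local `generate` function with tool parsing and a loop. Here is a minimal sketch in Python of that idea; the `TOOL:`/`RESULT:` protocol, `fake_generate`, and all names are invented for illustration, not any real SDK's API.

```python
import json
import re

def run_agent_loop(generate, tools, prompt, max_turns=5):
    """Minimal agent loop around any local generate() function.

    `generate` maps a prompt string to the model's raw text output.
    If the output contains a tool call like TOOL:name({"x": 1}),
    the tool runs and its result is appended to the prompt.
    """
    history = prompt
    for _ in range(max_turns):
        output = generate(history)
        match = re.search(r'TOOL:(\w+)\((.*)\)', output)
        if not match:
            return output  # plain answer, loop ends
        name, raw_args = match.groups()
        result = tools[name](**json.loads(raw_args))
        history += f"\n{output}\nRESULT:{result}"
    return output

# Toy "model": answers after seeing a tool result, else emits a tool call.
def fake_generate(prompt):
    if "RESULT:" in prompt:
        return "The sum is " + prompt.rsplit("RESULT:", 1)[1].strip()
    return 'TOOL:add({"a": 2, "b": 3})'

answer = run_agent_loop(fake_generate, {"add": lambda a, b: a + b}, "What is 2+3?")
print(answer)  # The sum is 5
```

A real provider would swap `fake_generate` for a call into an on-device model runtime and use the model's native tool-call format instead of the regex.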

r/SideProject LukaSchoures

Built an App for android to remind me who owes me.

RemindPay&Debt

I built an Android app to keep track of debts, loans, and subscriptions in one place.

The idea came from how messy it can get when you lend money to friends or owe someone and just forget about it or lose track. Instead of using notes or trying to remember dates, the app lets you register who owes who, how much, and when it should be paid.

It also supports recurring payments, so you can track things like subscriptions or monthly installments, and it sends reminders so you don’t miss anything.
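The recurring-payment tracking described above can be sketched as a small data model; the field names and rollover rule here are hypothetical (the post doesn't share code), assuming day-based recurrence.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Debt:
    who_owes: str
    who_is_owed: str
    amount: float
    currency: str            # e.g. "NIO" or "USD"
    due: date
    recurring_days: int = 0  # 0 = one-off; 30 ~ a monthly subscription

    def next_due(self, today):
        """Roll a recurring due date forward until it's no longer in the past."""
        d = self.due
        while self.recurring_days and d < today:
            d += timedelta(days=self.recurring_days)
        return d

sub = Debt("me", "StreamingCo", 9.99, "USD", date(2025, 1, 1), recurring_days=30)
print(sub.next_due(date(2025, 2, 15)))  # 2025-03-02
```

A one-off debt (`recurring_days=0`) keeps its original due date, which is what a reminder scheduler would key off.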

Everything is stored locally, and it supports both English and Spanish. For now it only supports my local currency (NIO) and USD, but I'm planning to extend it to multiple currencies.

It hasn't been uploaded to the Play Store yet, but it will be.

If you want to take a look just click here:
https://github.com/Lukaschoures/RemindPay-Debt/tree/main

Still improving it, but I’d appreciate any feedback.

r/AbstractArt Entire_Let2915

“Peacock”🦚

r/ClaudeCode Friendly_Nerve4656

Built a multi-model coding IDE on top of the Claude Agent SDK — alpha testers wanted (free)

Been building a desktop IDE on top of the Claude Agent SDK for the last few months.

Not a VSCode fork. Built it from scratch.

It’s got an editor, terminal, file explorer, Claude Code as the agent, multi-model support, a skills marketplace, and actual visibility into what the agent is doing and what it’s costing. No CLI.

The editor is still Monaco, so it’s not like I built a text editor from zero. But it doesn’t have the same syntax highlighting, extension ecosystem, or depth that VSCode has. That was an intentional tradeoff. I wanted to focus first on the AI layer and the overall product experience.

That said, the app itself is very presentable. I’ve spent years working on UX, and a big part of this was trying to give people a cleaner, friendlier abstraction on top of Claude Code without losing the power under the hood.

I think it’s useful for people who already use Claude Code and want less terminal overhead, and also for people who want to use Claude Code without all the friction that usually comes with these tools.

It’s BYOK. For alpha testers I’m covering some token credits upfront.

Roadmap right now:

- Quick Agents, skill builder and tool/server integration

- agent workflow maps

- token/context optimization tools

- one-click library installs for Claude Code compatible stuff -> a lot of room here. Lots of popular frameworks, orchestration layers, etc. As long as they are open source you can limit-test your CC

- app templates

Future future plans: CC is one of many agentic coding tools.

It’s alpha, kind of rough, but usable. Free to try.

If anyone here wants to check it out, comment.

Also, what’s the most annoying thing about Claude Code or your current AI coding setup? That’s the stuff I want to build around it

r/WouldYouRather FightOrDie123

WYR: Only be able to play video games, or only watch TV?

r/Jokes Different-Tie-1085

What do you call 2+1=3 puppies?

An awww sum.

r/comfyui edoc422

what is the best inpainting model to use with Illustrious images?

I was trying sd-v1-5-inpainting.ckpt but it does not seem to be able to do NSFW

I also tried Waifu-inpaint-XL but it changes the color of the whole image slightly so its not the best.

r/ClaudeAI steve-opentrace

Giving Claude Code architectural context via a knowledge graph MCP (inspired by Karpathy's LLM Wiki)

Karpathy's LLM Wiki gist from last week made a point that's directly relevant to how we use Claude Code: RAG and context-stuffing force the LLM to rediscover knowledge from scratch every time. A pre-compiled knowledge artifact is fundamentally better.

If you've used Claude Code on a large codebase, you've felt this. You paste in files, maybe a README, maybe some architecture docs, and Claude still doesn't really understand how your services talk to each other, who owns what, or what the dependency chain looks like. It's re-deriving that context on every conversation.

We've been working on this problem at OpenTrace. We build a typed knowledge graph from your engineering data — GitHub/GitLab repos, Linear, Kubernetes, distributed traces — and expose it to Claude via MCP. So instead of Claude guessing at your architecture from whatever files you've pasted in, it can query the graph directly: "what services does checkout call?", "who owns the payment service?", "show me the dependency chain for this endpoint."

The difference from Karpathy's wiki pattern is that the graph maintains itself automatically (code gets parsed via Tree-sitter/SCIP, traces get correlated, tickets get linked) and it's structured as typed nodes and edges rather than markdown files — which is what an agent actually needs for programmatic traversal.

A few things we've seen in practice with the MCP connected to Claude Code:

  • Claude makes significantly better decisions about where to make changes when it can see the full call graph, not just the file it's editing
  • It stops suggesting changes that break downstream services it didn't know existed
  • It can answer "who should review this?" by tracing ownership through the graph
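The three queries above are simple traversals once the graph is typed. A toy version, using plain tuples rather than OpenTrace's actual schema (node and edge names are illustrative):

```python
# Typed edges as plain tuples: (source, edge_type, target).
edges = [
    ("checkout", "calls", "payment"),
    ("checkout", "calls", "inventory"),
    ("payment",  "calls", "ledger"),
    ("team-pay", "owns",  "payment"),
]

def query(edges, source=None, edge_type=None, target=None):
    """Return edges matching any combination of the three fields."""
    return [e for e in edges
            if (source is None or e[0] == source)
            and (edge_type is None or e[1] == edge_type)
            and (target is None or e[2] == target)]

# "what services does checkout call?"
print([t for _, _, t in query(edges, source="checkout", edge_type="calls")])
# "who owns the payment service?"
print([s for s, _, _ in query(edges, edge_type="owns", target="payment")])

def dependency_chain(edges, start):
    """Transitive closure of 'calls' edges: everything downstream of start."""
    seen, stack = [], [start]
    while stack:
        node = stack.pop()
        for _, _, t in query(edges, source=node, edge_type="calls"):
            if t not in seen:
                seen.append(t)
                stack.append(t)
    return seen

print(dependency_chain(edges, "checkout"))
```

This is the sense in which typed nodes and edges beat markdown for an agent: "who owns X" and "what breaks downstream of Y" become mechanical traversals instead of text search.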

We have an open source version you can self-host and try with Claude Code: https://github.com/opentrace/opentrace (quickstart at https://oss.opentrace.ai). There's also a hosted version at https://opentrace.ai with additional features. Both expose an MCP server.

Curious if others have tried giving Claude Code more persistent architectural context, and what's worked for you.

r/WouldYouRather stirringmotion

WYR get a tattoo of whatever you thought was cool, or a tattoo of whatever the artist thinks is cool?

?

r/Seattle Objective_Airport117

Narcan

I just helped resuscitate a woman at the Aurora and 100th bus stop. Luckily a nearby bar had Narcan; I ran back in time (haven't run in a decade). She made it! I needed a cool-down whiskey, so I went to another nearby bar, where the bartender said they had used one last night. I remember a post in here about scary new shit; it's no goddam joke. I'm gonna start carrying some. Woof, wtf is going on?

r/ChatGPT Historical_Pension60

Lost a thread

Had a great thread going with ChatGPT that I need to get eyes on. I was prompted to log in, and I did so thinking the chat would follow, but it was wiped. I've tried to replicate the outcomes I had, but it's not working. Is there any way for me to view it again?

Thnx

r/LocalLLM MrGaohy

🚀 Registration is now open for the 2nd MLC-SLM Challenge 2026!

The MLC-SLM Challenge returns with a stronger focus on advancing Speech LLMs for real-world multilingual conversational speech.

🔗 Register here: https://forms.gle/jfAZ95abGy4ZiNHo7

Following a successful first edition with 78 teams from 13 countries and regions, this year’s challenge will introduce a larger multilingual conversational speech dataset covering 14 languages and around 2,100 hours of data.

We’re also excited to share that the MLC-SLM 2025 Summary paper has been accepted by ICASSP.

📅 Key dates (AOE):

• Training data release: April 10, 2026

• Dev set & baseline release: April 24, 2026

• Evaluation set & leaderboard open: June 15, 2026

• Leaderboard freeze: June 25, 2026

• Paper submission deadline: July 10, 2026

• Workshop: October 2, 2026

We welcome researchers from both academia and industry to join us.

Click the link to explore more: https://www.nexdata.ai/competition/mlc-slm

r/PhotoshopRequest Grand_Welder_6573

Update and fix an old photo

Hi! I have an old photo of my grandmother modeling during WW2 that I wanted to get fixed up. Creases removed, and anything else to make it look better without changing her look or features.

r/Damnthatsinteresting boyoflondon

Money in Somalia

r/SideProject krhhzv

had a 'new idea' for streaming, then realized it already exists

so I had this random idea about streaming

like instead of downloading full videos, what if it just downloads small parts and plays them instantly while removing the old ones to keep it smooth

for a moment I genuinely thought I came up with something new lol, then I looked into it a bit and realised this is literally how most streaming platforms already work (chunk-based streaming, buffering etc.)

felt kinda funny and a bit embarrassing ngl, anyways now I'm just trying to build a super basic version of it in the browser for fun and to understand it better

not sure how far it’ll go but yahh just experimenting with it
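The download-a-bit, play-it, evict-the-old-bit idea can be sketched with a fixed-size buffer. A real player would fetch segments over HTTP (HLS/DASH style), but the memory behavior is the same; this is a toy illustration, not any platform's actual implementation.

```python
from collections import deque

def stream(chunks, buffer_size=3):
    """Play a sequence of chunks through a fixed-size sliding buffer.

    Only `buffer_size` chunks live in memory at once; older ones are
    evicted as playback advances - the core idea of chunked streaming.
    """
    buffer = deque(maxlen=buffer_size)  # old chunks fall off automatically
    played = []
    for chunk in chunks:           # "download" one chunk at a time
        buffer.append(chunk)
        played.append(buffer[-1])  # "play" the newest chunk
    return played, list(buffer)

played, remaining = stream([f"seg{i}" for i in range(6)])
print(played)     # all six segments played in order
print(remaining)  # only the last three are still buffered
```

Building even this much in the browser is a good way to internalize why players buffer a few segments ahead and drop what's behind the playhead.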

r/funny Openskies24

Valid crash out

r/AI_Agents Bulky_Bodybuilder102

Concurrency model confusion

I'm a bit confused about concurrency in modern LLM multi-agent frameworks.

In classical MAS, agents can run concurrently and interact in parallel. But in frameworks like CrewAI or AutoGen, it seems interactions are often sequential (turn-based or task-based).

My questions:

  • Do CrewAI or AutoGen support true parallel execution between agents within the same workflow?
  • Or is concurrency mainly achieved by running multiple independent workflows in parallel?
  • If I need real parallel agent collaboration (not just multiple requests), which framework is better suited?

Any insights or real-world experiences would be really helpful.
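Independent of what any particular framework exposes, true parallel agent execution is easy to get at the asyncio level, since agent turns are mostly I/O-bound API calls. This generic sketch (not CrewAI or AutoGen API; the agents and delays are stand-ins) shows concurrent agents via `gather`:

```python
import asyncio

async def agent(name, delay):
    """Stand-in for an LLM agent turn; the sleep models API latency."""
    await asyncio.sleep(delay)
    return f"{name} done"

async def run_parallel():
    # gather() runs all agent coroutines concurrently, so total wall time
    # is about max(delay), not sum(delay) - unlike a sequential turn loop.
    return await asyncio.gather(
        agent("researcher", 0.02),
        agent("writer", 0.01),
        agent("critic", 0.03),
    )

results = asyncio.run(run_parallel())
print(results)  # ['researcher done', 'writer done', 'critic done']
```

The hard part in real multi-agent systems isn't launching coroutines; it's coordinating shared state between agents that finish at different times, which is why many frameworks default to turn-based execution.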

r/leagueoflegends VastReception1105

Getting second role and autofilled way too often, expectation versus reality

I'm around Diamond I to low Masters. What I expect from the 3-4 hours of my life I commit to a gaming session is quality games, a lack of chaos within the bounds of the already chaotic game that League is, even matches, and for my winrate not to be unfairly reduced.

  1. Quality games. By quality games in this elo I mean facing people who are playing their own role 70-80 percent of the time and want to show their skill in the probably small champion pool they have refined. Someone who is autofilled does not try to win as much as someone who isn't, regardless of willpower or eagerness to win. CSing, skirmishes, and the overall way someone plays the game are all impaired on a second role or autofill: an autofilled toplaner might lose 1v1s against fighters more often, an autofilled support might ward far less and have sloppier roam timers, an autofilled ADC might not kite or know how to play out 2v2s that require continuous matchup and champion-gimmick knowledge (facing a Veigar Poppy, a classic Nami Lucian, Samira Nautilus, Thresh Aphelios, etcetera), and autofilled junglers may lose smite fights and have slower clears. People may lose their will to play because of it and ask for a "go next" once things stop looking perfect, since it is pretty demoralizing to play at a disadvantage caused by something completely out of your control.

If the autofilled player and those playing alongside him slightly lose early, there are many more mental booms and much more burnout from the side at a disadvantage, due to the frustration of not being able to carry out a scenario that is already grim from the get-go. People only hold back from getting madder about it because lashing out creates a negative feedback loop, where one person tilts everyone else and then perpetually carries that stress into the matches after the autofill ones.

  2. Winning or losing my points the right way. If I lose, it should be because I played badly in my role; nobody was meant to learn multiple positions to rank up. If you see someone excel at this, chances are they are a pro player or have invested (again) an insane amount of time in doing so. The system should never ask that kind of versatility from players because, as we all know, most of the playerbase cannot fill every role as consistently as their main. If I were to win an autofill game or a second-role game, I wouldn't be that impactful in the outcome of that victory; in fact I would feel like I was just there CSing and trying my best to avoid mistakes, instead of playing out the plan I have in mind when I'm in my own role.

  3. Lack of chaos. I'm aware that most of the time the autofills are paired up against each other, but due to champion and matchup knowledge one side may still blind their comfort pick, and some OTPs simply cannot stop playing their champion even when the pick landed in the wrong lane. I've mentally tanked Aurelion Sol junglers, Janna tops, Yasuo supports, and a multitude of unrealistic drafts that should never have gone through, and either dodged and lost LP or played it out and again lost LP for having a core part of our team fail to do their job. I cannot smite, splitpush, kite, self-peel, and deal two types of damage at once on one champion, nor win a team game alone, so I would really love a world where my team and the enemy team don't spend the game arguing about how impaired and disadvantaged they are. Some shenanigans will happen regardless of autofill or "second role" (autofill with extra cope and mental gymnastics): players will troll, whine, and give up. But at the end of the day we would consistently get the game we queued for.

  4. It does hinder winrate significantly. I've been on a second role or autofilled 10 times out of 40 games, which means I was at a disadvantage 25% of the time, ignoring the usual shenanigans of soloqueue.

There has to be a better way to reduce queue times, but I do agree queues should be longer in exchange for putting people in their right position more often. I just lost 4 hours of my day that I could have used for anything more pleasant and less stressful, something that gives me agency and that I actually control. It's kind of humiliating to sit in a chair for hours feeling impotent and frustrated over something out of my control, bound to play out every unjust match I've been put in. I'd love to play this game, because it does live up to the excitement it can have whenever I get my moment to shine in my own role. I know my consistency is going to make me climb, but I also know I shouldn't have to go through the extra hassle in between.

r/ClaudeAI Dramatic_Squash_3502

Managed Agents onboarding flow - what's new in CC 2.1.97 system prompt (+23,865 tokens)

  • NEW: Agent Prompt: Managed Agents onboarding flow — Added an interactive interview script that walks users through configuring a Managed Agent from scratch, selecting tools, skills, files, and environment settings, and emitting setup and runtime code.
  • NEW: Data: Managed Agents client patterns — Added a reference guide covering common client-side patterns for driving Managed Agent sessions, including stream reconnection, idle-break gating, tool confirmations, interrupts, and custom tools.
  • NEW: Data: Managed Agents core concepts — Added reference documentation covering Agents, Sessions, Environments, Containers, lifecycle, versioning, endpoints, and usage patterns.
  • NEW: Data: Managed Agents endpoint reference — Added a comprehensive reference for Managed Agents API endpoints, SDK methods, request/response schemas, error handling, and rate limits.
  • NEW: Data: Managed Agents environments and resources — Added reference documentation covering environments, file resources, GitHub repository mounting, and the Files API with SDK examples.
  • NEW: Data: Managed Agents events and steering — Added a reference guide for sending and receiving events on managed agent sessions, including streaming, polling, reconnection, message queuing, interrupts, and event payload details.
  • NEW: Data: Managed Agents overview — Added a comprehensive overview of the Managed Agents API architecture, mandatory agent-then-session flow, beta headers, documentation reading guide, and common pitfalls.
  • NEW: Data: Managed Agents reference — Python — Added a reference guide for using the Anthropic Python SDK to create and manage agents, sessions, environments, streaming, custom tools, files, and MCP servers.
  • NEW: Data: Managed Agents reference — TypeScript — Added a reference guide for using the Anthropic TypeScript SDK to create and manage agents, sessions, environments, streaming, custom tools, file uploads, and MCP server integration.
  • NEW: Data: Managed Agents reference — cURL — Added cURL and raw HTTP request examples for the Managed Agents API including environment, agent, and session lifecycle operations.
  • NEW: Data: Managed Agents tools and skills — Added reference documentation covering tool types (agent toolset, MCP, custom), permission policies, vault credential management, and the skills API.
  • NEW: Skill: Build Claude API and SDK apps — Added trigger rules for activating guidance when users are building applications with the Claude API, Anthropic SDKs, or Managed Agents.
  • NEW: Skill: Building LLM-powered applications with Claude — Added a comprehensive routing guide for building LLM-powered applications using the Anthropic SDK, covering language detection, API surface selection (Claude API vs Managed Agents), model defaults, thinking/effort configuration, and language-specific documentation reading.
  • NEW: Skill: /dream nightly schedule — Added a skill that sets up a recurring nightly memory consolidation job by deduplicating existing schedules, creating a new cron task, confirming details to the user, and running an immediate consolidation.
  • REMOVED: Data: Agent SDK patterns — Python — Removed the Python Agent SDK patterns document (custom tools, hooks, subagents, MCP integration, session resumption).
  • REMOVED: Data: Agent SDK patterns — TypeScript — Removed the TypeScript Agent SDK patterns document (basic agents, hooks, subagents, MCP integration).
  • REMOVED: Data: Agent SDK reference — Python — Removed the Python Agent SDK reference document (installation, quick start, custom tools via MCP, hooks).
  • REMOVED: Data: Agent SDK reference — TypeScript — Removed the TypeScript Agent SDK reference document (installation, quick start, custom tools, hooks).
  • REMOVED: Skill: Build with Claude API — Removed the main routing guide for building LLM-powered applications with Claude, replaced by the new "Building LLM-powered applications with Claude" skill with Managed Agents support.
  • REMOVED: System Prompt: Buddy Mode — Removed the coding companion personality generator for terminal buddies.
  • Agent Prompt: Status line setup — Added git_worktree field to the workspace schema for reporting the git worktree name when the working directory is in a linked worktree.
  • Agent Prompt: Worker fork — Added agent metadata specifying model inheritance, permission bubbling, max turns, full tool access, and a description of when the fork is triggered.
  • Data: Live documentation sources — Replaced the Agent SDK documentation URLs and SDK repository extraction prompts with comprehensive Managed Agents documentation URLs covering overview, quickstart, agent setup, sessions, environments, events, tools, files, permissions, multi-agent, observability, GitHub, MCP connector, vaults, skills, memory, onboarding, cloud containers, and migration. Added an Anthropic CLI section. Updated SDK repository extraction prompts to focus on beta managed-agents namespaces and method signatures.
  • Skill: Build with Claude API (reference guide) — Updated the agent reference from Agent SDK folders to Managed Agents documentation files, with language-specific routing for Python, TypeScript, cURL, and a note that C# should use raw HTTP examples.
  • Skill: Verify skill — Restructured the "Get a handle" section to emphasize checking .claude/skills/ for verifier skills first (even if you already know how to build), framing verifiers as the repo's evidence-capture protocol. Added a new "Push on it" section with concrete probing strategies organized by change type (new flag, new handler, changed error path, interactive/TUI, state/persistence). Added the :mag: emoji marker for probe steps in the report format, with guidance that a steps list with no probes is a happy-path replay. Added probe documentation guidance in the Findings section.
  • System Prompt: Agent thread notes — Removed the conditional logic for relative vs. absolute file paths; agent threads now always require absolute file paths unconditionally.
  • Tool Description: ReadFile — Simplified to always require absolute file paths, removing the conditional relative-path option.
  • Tool Description: Write — Removed a conditional note variable from the "prefer Edit" guidance, making it unconditional.

Details: https://github.com/Piebald-AI/claude-code-system-prompts/releases/tag/v2.1.97

Regular updates at https://x.com/PiebaldAI

r/TwoSentenceHorror Select_Salamander518

They warned us that the monsters were masters of auditory deception: the closer they were, the farther away their hunting cries sounded.

And after running for half an hour from the bone-chilling screeches, all suddenly fell silent.

r/leagueoflegends Behold_A_Human_

Rengar Vs Kha'zix

I'm not sure about anybody else, but Kha'zix vs Rengar doesn't really feel like it's close to a fair fight anymore. It used to be about outplaying them with either skill or positioning, but now it just feels like Rengar wins by default. This isn't meant as an "I lost a game against a Rengar and I'm salty" post; I'm looking for legitimate commentary.

Rengar has (with empowered abilities, granted) a scaling heal, hard cc, a cleanse, true sight to kill Kha through invis, and more instant damage with empowered autos. Kha has solely the ability to outplay through positioning and beating him to objectives, but I don't see much of any other counter. Rengar also tends to get tankier and still can one-tap Kha.

Am I alone in this thought process, or is this somewhat of a given these days?

r/whatisit longboardp

Came out of garden hose after first use in a while. Several feet long.

Scared the heck out of me. Hooked up the hose for the first time in a while and this shot out of it. Very weird looking: dark yellow at the ends, while the center was white and spotted, and it looked to be taking the shape of the hose.

r/funny dikshamishra34

That eye contact 😆😁😂

r/painting fairlyfairies

Struggling with getting this painting to look right

Hello fellow painters 👋 I'm having an issue with this current *in progress* painting I'm working on. The table with all the produce on it kind of looks like it's sitting directly on the floor, when it's supposed to be raised up, with a gap, then a metal cart/rack thing and a box under it. I started laying out the metal cart, but it's still not right. The second image is a reference photo I took, and I know my painting is at a slightly different angle, but I feel like it still works perspective-wise except for the depth of the table to the floor. The tricky part is that in my painting the table legs get cut off, so the brain doesn't get the cue that it's raised. Maybe the shadows on the floor need to be pushed darker? Any advice? TY!

r/aivideo Far-Employee-9531

Oh Crappers

r/Ghosts Becky_Alv26

I recorded a fight between my cats… but you can hear a whisper saying "shut up" ("cállate") right after I laugh. It was 4 am and I was alone. Can you hear it?

r/SideProject Exact_Pen_8973

I automated my side project's design workflow using Claude + Canva. It cut my prep time by 80%.

Hey builders,

Running multiple side projects means my time is my most valuable asset. I needed a way to generate high-quality visual content (for blogs, social, and digital products) without spending hours manually tweaking templates.

I finally cracked a workflow integrating Claude with Canva, and it has completely changed how I build. I wanted to share the process in case it helps other solo founders here scale their output.

The Old Way: > Write copy -> Open Canva -> Copy/paste text into each slide/graphic -> Adjust formatting manually. (Took about 2 hours for a batch of assets).

The Automated Way (The Claude + Canva Stack):

  1. Content Engine (Claude): I use Claude to generate the core content. The trick is prompting Claude to output the data exactly as a structured table or CSV. (Claude is currently much better at strictly following data-formatting instructions than basic ChatGPT).
  2. The Bridge: Save Claude's output as a .csv file.
  3. The Design Engine (Canva): Set up one master template in Canva. Use the "Bulk Create" app, upload the CSV, and map the data fields to your text boxes.
  4. Generate: Click one button, and Canva spits out 30, 50, or 100 perfectly formatted graphics.

The Result: What used to take hours now takes about 15 minutes.
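The bridge step (2) is just structured output written as CSV with stable column names for the template to map onto. A minimal sketch, assuming two hypothetical template fields ("headline", "subtext") and made-up row content:

```python
import csv
import io

# Structured rows as a model might emit them (content is illustrative).
rows = [
    {"headline": "Ship faster", "subtext": "Automate your design prep"},
    {"headline": "One template", "subtext": "A hundred graphics"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["headline", "subtext"])
writer.writeheader()    # Bulk Create maps these column names to text boxes
writer.writerows(rows)  # one CSV row becomes one generated graphic
print(buf.getvalue())
```

In practice you'd write to a real file (`open("assets.csv", "w", newline="")`) and upload that; the key discipline is prompting the model to emit exactly these columns every time.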

If you're building a side hustle that requires consistent visual content, you need to automate this layer. I documented my entire process, including the specific prompts and Canva setups, on my blog so you can copy the system.

Read the full integration guide here: [Insert your mindwiredai.com link here]

What other non-coding tools are you guys plugging into your AI workflows to save time?

r/SideProject NoFuzzzzzz

built an app that lets you find deals near you

so you know when people "flex" a good deal they found and tell you "hey i bought this for only XX$!"

I built an app that lets people flex their finds anonymously, and then other users validate them by upvoting or flagging posts. Users have to tag the store's location and the item's price so it's informative.

so basically it's like social media for deals, and the idea is that wherever you are, before purchasing anything, you try to be a "smart buyer" by checking the app first for deals near you.

It's currently in closed testing via Google Play, but the website is live.

r/StableDiffusion Capitan01R-

Custom Node Rough Draft Lol

It slims out when released though Lol

r/creepypasta Hottortilla124

I made out what she said in this video from what i heard.

Why did you want him: 0:49

No: 0:54

r/LifeProTips Spirited-Ad-2338

LPT: If you have multiple devices stop buying separate chargers for each one

I used to buy a new cable or adapter every time something broke or I got a new device. MagSafe puck for the phone, watch cable, AirPods cable, and each one needed its own adapter or a spot on the power strip. Over the years I probably spent $200+ just on charging stuff. Finally replaced it all with one 3 in 1 charger and a single cable. Should have done it years ago instead of buying things one at a time. Sometimes the cheaper move in the long run is to just buy the one thing that replaces everything.

r/Frugal Deja_Brew2495

We only notice how connected we are when it starts costing us

I’ve been thinking about solar panels lately, not just for the usual “save money” reason, but because of everything happening globally. Shifts in energy prices somehow trickle down and quietly affect what we pay at home each month.

It’s unsettling how something so distant can impact something as basic as electricity.

It makes me wonder if wanting your own power source is less about saving money and more about holding onto a bit of control over your budget in a world that feels harder to predict.

Has anyone else started thinking this way lately?

r/mildlyinteresting Caffeinated_1

The names on this color chart from my marker set

r/ChatGPT devil_ozz

Saw this , thought I'd try it out myself

r/LiveFromNewYork jp_peppercorn

Does this sub really hate Chloe?

Random comments I’ve seen repeatedly saying this sub likes to hate on her. I don’t understand why? Even aside from this Vanity Fair thing.

r/PhotoshopRequest AioliBig1441

Grad pics help

My mom thinks my grad photos would look better if I did my makeup (shown in the last pic). Can someone photoshop it to look like my makeup is done? It doesn't have to be exactly what's in the last pic, but that gives an idea of what I look like with makeup and can be used if needed.

r/mildlyinteresting ramonortiz55

Random spot on my finger glows under black light

r/Adulting _elliebee

Moving out at 20 rant (im sorry)

Hi, I know I've posted this before, but everything is becoming real, and I need some confidence. A little bit of background: I recently turned 20, and I have never moved. I live in a small town (6k people) and still live in my childhood home. I'm still not sure about college, but if I were to partake, it would probably be online. I do work a full-time job that pays decently, but it's not like I'm rolling in money. I feel like I am falling behind in a world where everyone is supposed to have their lives figured out at age 15.

I may have found an apartment to buy in my small town. Like, I have an application I can fill out. I am absolutely terrified. My parents decided that they were going to start charging me rent next month; a little surprising, but nothing crazy. I have been looking passively since I graduated high school, but nothing serious. Regardless, it was kind of a push to move out.

Growing up has always been an insanely sensitive topic for me, as I suffer from major depression and anxiety, lmao. Scared of moving out, living on my own, blah blah blah. You get the picture. On one hand, I am so excited for a new step: having my own space to decorate. On the other hand, the thought of all this gives me panic attacks. Like with most things growing up, I think I need to be pushed out of the nest; otherwise, I will be here forever.

Please, for the love of god, tell me that it's going to work out. That even if I'm living in this tiny apartment, completely broke and alone, I will be okay. I'm losing sleep over this.

I would LOVE to hear your own stories and things that helped you get over the fear of moving out and growing up in general. Or if you are in the same boat, feel free to rant, I'm here to feel your sorrows! Advice, comfort, laugh at me, anything. I'll take whatever you can give. Cuz I'm shitting myself over here! Thanks for your time <3

r/WouldYouRather captainfalcon200523

Would you rather get paid $1 per gallon of urine you pee, or $3 for every foot of poop you poo?

I’m thinking, you can’t really gamify either efficiently.

Yeah, you can drink a butt load of water, but your kidneys can only process so much. And because we're going for length and not volume, you also can't really just load up on fiber. Plus, diarrhea would make it worse, since you wouldn't get paid for that.

r/personalfinance Important-Resident20

22YO first job ($80k), am I stretching too thin on $1.6k total housing?

Currently apartment hunting in a MCOL city and found one I really love at $1,309/month. Just not sure if I can actually afford it so wanted to get some outside opinions.

Just started my first full-time job at $80k gross, take-home is around $4,825/month after 401k and taxes. Total housing with utilities, internet and parking would be about $1,643/month.

I'm saving around $1,632/month between my own contributions and retirement (got a 50% 401k match which is nice). The not so great part is my emergency fund is only $2k right now, but I'm actively throwing money at it.

Rest of my spending:

- $400 groceries

- $400 eating out

- $250 entertainment

- $40 subscriptions

- $50 personal care

- $570 car (loan $200 + gas $250 + insurance $120)

On paper it seems fine but the low emergency fund while taking on this rent makes me a bit nervous. Am I overthinking it or is this actually a stretch?
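For anyone who wants to sanity-check numbers like these, here's a quick back-of-the-envelope script using only the figures from this post (not financial advice):

```python
# Quick affordability check using the figures from the post above.
take_home = 4825                               # monthly take-home after 401k and taxes
housing   = 1643                               # rent + utilities + internet + parking
savings   = 1632                               # stated monthly savings
spending  = 400 + 400 + 250 + 40 + 50 + 570    # groceries, eating out, fun, subs, care, car

housing_ratio = housing / take_home
leftover = take_home - housing - spending      # what's actually available to save

print(f"housing: {housing_ratio:.0%} of take-home")      # ~34%
print(f"free cash after housing + spending: ${leftover}")
```

As posted, housing comes out to about 34% of take-home (just above the common 30% guideline), and only about $1,472/month is left after the listed spending, slightly under the stated $1,632 savings rate, so at least one line item is probably an estimate.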

r/instantkarma Apprehensive_Sky4558

Annoy someone with your scarf, get an instant reality check.

r/explainlikeimfive CraigwithaC1995

ELI5 Why do astronauts wear multiple watches?

Can someone explain why they wear multiple watches when they go to space? Is it a keepsake sort of thing? I get the Apple Watches, but why the nicer one as well? I've also seen them wearing multiple on the outside of their space suits.

r/whatisit Former-Bug-8498

Weird Clothes Splotches

My wife noticed a couple of the kids' shirts had these weird splotch-like patterns after a wash cycle a few weeks back. We couldn't figure out what it was, as the patterns didn't match mould or any other sensible explanation we could come up with. We thought something must have splashed on the items pre-wash and just caused stains.

We've just had 3 more items of clothing appear with the same sort of markings. We are struggling to figure out what is causing it. My wife thinks it is the washing machine, but we do at least 1 load a day in our family, so I would expect to see a lot more of it on other items of clothing. It doesn't appear to be mould, as I have seen that on clothes before, and it's too inconsistent.

It doesn't wash out or respond to any sort of treatment (haven't yet tried bleach). We just want to understand what it could be, or what we should do to try and investigate as it is ruining some otherwise perfectly good clothing.

Desperate for sensible answers, have more photos and info on request. Not sure if this is the best sub, but thought it was a good place to start.

r/interestingasfuck InfiniteSeat4605

Perceived speed.

r/mildlyinteresting AdeptnessLate7456

One of my fingers has normal folds and the other has parallel folds

r/TwoSentenceHorror I_AM_WILL_STANCIL

The horny man wished to the genie "I wanna take it straight up my own ass".

The genie misheard and thought he said "I wanna ticket strait of hormuz" and that's why Iran can levy tolls now and shit.

r/LocalLLaMA stddealer

Gemma4 31B thinks it's Gemini

That's something interesting I have noticed. I often test the simple "who are you?" prompts (with an empty system prompt) with my local models to see how much they know about their own "identity", and almost all models in the Gemma4 family were able to identify themselves as Gemma4, and even give a list of their capabilities.

But only the 31B just identifies itself as "a large language model, trained by Google.", and when asked which family of models it belongs to, it consistently answers "Gemini", and then pretends it knows nothing about any Gemma models that came after Gemma2.

What's odd is that if I ask it about Gemma4 in an empty context, it figures out that it refers to itself, and can then list its own details, but not if it has previously claimed to be Gemini.

r/SideProject TutorDry3089

I built CloseAI: an iOS app that installs a private AI chatbot on your own server

Hey everyone,

I wanted a way to chat with AI models privately from my phone without sending my data to OpenAI, Google, or anyone else. The existing self-hosted options all required manual Linux setup and didn't have good mobile clients.

So I built CloseAI. You point it at any Ubuntu VPS; it connects over SSH, installs everything automatically (Ollama, a Python API backend, TLS certificates), and gives you a streaming chat interface. No command line required. The entire setup and management happens through the app. All data stays between your phone and your server.

The stack:

  • iOS app (Swift, SwiftUI, SwiftData)
  • SSH connection via Citadel (Swift NIO)
  • Server: Ollama + FastAPI + uvicorn as systemd services
  • Self-signed TLS with TOFU certificate pinning
  • 5 models: Llama 3.2, DeepSeek R1, Qwen Coder, Gemma 3, Phi 4 Mini (with more to come)

What I learned building it:

  • SSH from iOS is surprisingly tricky, Citadel + Swift NIO handles it well but the edge cases (key formats, host key verification, sudo password piping) took the most time
  • SwiftUI state management for multi-step async flows (install progress, streaming chat) was the hardest UI challenge
  • Self-signed cert pinning (TOFU model) was worth the effort over forcing users to set up Let's Encrypt
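For anyone unfamiliar with TOFU pinning, the core idea fits in a few lines. This is a minimal Python sketch, not CloseAI's actual Swift code; the function name and JSON pin store are invented for illustration:

```python
import hashlib
import json
import os

def check_tofu(host: str, cert_der: bytes, store_path: str = "pins.json") -> str:
    """Trust-on-first-use: pin the certificate's SHA-256 fingerprint the first
    time a host is seen, and reject any later connection whose fingerprint
    differs (which would indicate a changed cert or a man-in-the-middle)."""
    fingerprint = hashlib.sha256(cert_der).hexdigest()
    pins = {}
    if os.path.exists(store_path):
        with open(store_path) as f:
            pins = json.load(f)
    if host not in pins:
        pins[host] = fingerprint          # first contact: trust and pin
        with open(store_path, "w") as f:
            json.dump(pins, f)
        return "pinned"
    return "ok" if pins[host] == fingerprint else "mismatch"
```

The appeal over Let's Encrypt is exactly this simplicity: no domain name or ACME renewal needed on the VPS, at the cost of trusting the very first connection.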

It's free on the App Store: https://apps.apple.com/us/app/closeai/id6760688649

Would love to hear what you think, especially around what models or features would be most useful.

r/SipsTea Unstoppable_X_Force

Emotional damage 😂

r/personalfinance Aztro-Zombi

Custodial Roth IRA Clarification

Hey guys, I’m new to the world of Roth IRA and I’m trying my best to be financially responsible with my younger sibling. Any help clarifying this would greatly be appreciated

I have my younger brother who is 16 this year and will be working a summer job. We expect him to make around $1,000 for the time he works. Because this will be a summer job with his local school, he will be getting a W2 at the conclusion of his summer internship.

Could we theoretically open a Guardian/Custodial Roth IRA account for him and maximize it with other income reported? My plan was to also pay him throughout the remainder of the year for odd jobs helping me out. This could be yard work, babysitting, walking the dogs and tutoring.

Now, I understand he will need to have earned slightly over $7,500 in order to contribute $7,500 and maximize the Roth IRA contribution. So can we use his income from his summer job (reflected on his W2) as well as the supplemental income reported on a Schedule C on his taxes to reach the $7,500 and be eligible to contribute that amount to his Roth IRA? I believe he would have to pay the 15.3% self-employment tax on the income reported on the Schedule C. But this would allow the IRS to verify he indeed did make that income and could contribute it to his Roth IRA.
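The interaction between self-employment tax and the contribution limit is easy to get wrong, so here is a rough sketch of the arithmetic, using the standard 92.35% SE-tax base and 15.3% rate; it ignores thresholds and edge cases, so verify against current IRS rules:

```python
def se_compensation(net_profit: float) -> float:
    """IRA 'compensation' from self-employment is roughly net profit minus
    the deductible half of self-employment tax."""
    net_earnings = net_profit * 0.9235        # SE-tax base (92.35% of profit)
    se_tax = net_earnings * 0.153             # 15.3% self-employment tax
    return round(net_profit - se_tax / 2, 2)  # compensation usable for IRA limit

# $7,500 of Schedule C profit supports a bit less than $7,500 of contributions,
# so the target profit needs to be somewhat higher than the contribution goal.
print(se_compensation(7500))   # 6970.14
```

In other words, the "slightly over $7,500" intuition in the post is right: the self-employment profit has to exceed the contribution target by roughly the deductible half of the SE tax.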

We would do our best to keep a spreadsheet documenting the hours he would work each week. We would also be using Zelle to pay into his student checking account so we could reflect that through bank statements if we got audited.

Am I doing this right? Is this possible?

r/findareddit Signal-Razzmatazz877

looking for subreddits where I can get support in online competition

r/Jokes DryInitial9044

TIL Ireland launched the very first satellite.

Spudnik.

r/ClaudeAI Even_Development6383

Claude Managed Agents Skill

I put together a Claude Managed Agents Skill you can add to your coding agent so it better understands how to build and run cloud managed agents.

It focuses on the non-obvious parts that are easy to miss and helps your agent reason about architecture, execution flow, and reliability when working with managed agents.

What it covers:

• Event loop patterns and correct stop_reason handling
• MCP two step auth pattern
• Tool permission policies and custom tools
• System prompt engineering for autonomous agents
• Cost monitoring using span.model_request_end and session.usage
• Debugging sessions with event history replay
• Session reuse vs new session decision logic
• Silent failure modes that do not raise exceptions
• Research preview features like Outcomes, multiagent orchestration, and memory stores

Works with Claude Code natively. Also usable with Codex, Gemini CLI, and other agents that support skills.

To install, ask Claude:

"Install the Claude Managed Agents Skill from the 0xArx claude managed agents skill repository and use it when building managed agents."

r/SipsTea Utahsaint366

Tucker aint tucking around no more

Tucker got his cage rattled. Is he actually angry, or is it just his handlers?

r/Wellthatsucks MuseofBadPoetry

I mixed this cheese into the sauce.

The top layer looked fine, and I just dumped it in like the idiot I am. Google says the whole thing is contaminated now. I'm a college student on a budget, and this was going to be multiple meals for me.

r/ClaudeAI cinegraphs

Bridged the gap btwn creatives and the codeless

I always thought there were two types of people: those who could code and those who couldn’t. When I switched into computer science over 10 years ago, my professor told us anyone could learn. He had a student who immigrated from Africa, didn’t even know how to use a computer, and ended up excelling. Stories like that inspired me.

But I didn’t make it past the Hello World lecture. YouTube videos, office hours, questions after class, nothing clicked. I started failing assignments and switched out of the program. I tried self-learning after that. Same result. All the ideas I wanted to build stayed trapped in my notes app.

When ChatGPT came out, I thought maybe this was the answer. It wasn’t good enough for real development. Then I found Claude, and it genuinely changed things. For the first time, I’m actually building the stuff I’ve been thinking about for a decade. It finally gave me what I was looking for when I first enrolled in that CS class.

r/OutOfTheLoop One_Pollution2279

What is going on with the Iran cease-fire and the impact on oil and utility prices?

sooooo... i’ve been seeing news about a temporary two-week cease-fire between the U.S. and Iran, but i'm a bit confused about what it actually covers and why oil and gas prices are still high. from what i’ve read, one key point of the deal is supposed to be reopening the Strait of Hormuz, which is a major route for global oil shipments.

despite the cease-fire, there are still strikes reported in Lebanon, and energy prices are rising in many countries. i’m trying to understand how a cease-fire like this is supposed to affect the markets and everyday bills, and why prices haven’t dropped more noticeably.

here’s one article that summarizes the situation and market reaction: https://abcnews.com/US/wireStory/wall-street-global-markets-surge-after-us-iran-131825276

can someone explain what this cease-fire actually includes and why prices are still acting so wild?

r/whatisit no1seltzerfan

Small plastic ball full of water?

Found in my garden.

r/LiveFromNewYork MapleBisonHeel

Just some love for Phil Hartman.

CBC TV show called It’s a Living. Host Peter Jordan would shadow a wide variety of people at their jobs and try the jobs himself.

In this instalment, instead of shadowing an army sergeant or pastry chef or what have you, Peter Jordan shadowed Phil Hartman during a week of production of NewsRadio.

Showed up when I was looking for something else. Thought it a good idea to share with the sub.

r/LocalLLaMA xaeru

Gemma4 and Ollama: Native tool calling

Beginner here; I now have a good GPU and Ollama running in Docker. I pulled the Gemma4 weights and was able to add it to Cursor using ngrok.

Here is the thing: Gemma4 says that it can't read the files I sent to it.

I expected it would work like the other models; they use grep to read files or ls to list folders and files. Gemma4's response is that it can't read the file and that I should paste the contents of the file directly into the chat.

Why are those models able to use tools while Gemma4 is like, "Sorry, I'm just a chatbot"?

r/LifeProTips Fit-Benefit-6524

LPT chew something to stop migraine, headache (might help someone)

For the past 3 days, I've had a migraine. I took paracetamol and other painkillers, but absolutely nothing worked. I genuinely thought I was about to have a stroke. I mentioned it to my dad, and he gave me something to chew—like a piece of chewing gum.

Surprisingly, it actually worked, and my headache stopped almost immediately.

Not really an LPT, but I hope this can help someone. Pretty sure my dad just saved my life today.

r/meme the_martensite

One of the Hardest Cover photo Drop by the NCERT

r/PhotoshopRequest stevenxvision

Need Help with Headshots

Looking to balance out skin tone in all of these. Face, neck and hands. Slightly tan. Soften bags under eyes where I’m smiling. Make suit all the same color, no shadows if possible. Get rid of glare on watch. Thanks in advance!

r/AskMen Hell_Valley

What does kissing a girl/woman feel like?

I’m 35M and unfortunately have yet to have any romantic success with women throughout my life as I’ve only been rejected.

I’ve always wondered what it’s like to kiss a girl, a genuine romantic kiss where both people want it (not hiring someone for a kiss)

It’s sad but perhaps if I can visualise and understand it then I may experience it in a dream as that’s probably the best chance I got

r/WouldYouRather Certain-Somewhere438

WYR only be able to eat while running, or only be able to poop with your pants fully on? No cheating.

You're cursed. No loopholes. No well technically workarounds.

Option A: You can only consume food and drink while actively running. Not jogging in place. Not on a treadmill. Your feet must be moving you forward at a running pace. You stop running, the food stops going down. Good luck with soup.

Option B: You can only relieve yourself while wearing your pants completely on. Pulled up, buttoned, zipped, belt fastened. You cannot remove them. You cannot pull them down. You just have to go. And then sit in it until you can shower.

Which curse are you picking and why?

r/LocalLLaMA Lumpy-Accountant-750

[Showcase] I expanded my Open-Source Web Agent into a Full-Stack RPA: Record once, replay forever (with zero token cost).

Hi everyone,

A while ago, I posted about OpenClaw-RPA—a skill designed to solve the "LLM Tax" problem. The idea was simple: instead of letting an Agent "think" its way through a repetitive web task every day, you record it once and compile it into a deterministic Python + Playwright script.

The response was great, but the feedback was clear: "Real work doesn't just happen in a browser." I’ve spent the last few weeks adding deep integration for the "boring but essential" parts of automation: Session persistence, REST APIs, and Office documents.

🚀 The Core Philosophy: Record Once → Replay Forever

Most Agents are slow and expensive because they run a reasoning loop for every single click. OpenClaw-RPA changes that:

  1. The Brain: LLM plans the workflow.
  2. The Muscle: Actions are converted to a standalone Python script.
  3. The Result: Future runs use zero tokens and execute at native code speed.

🛠️ What’s New (The "Enterprise-Grade" Update):

1. The Captcha/2FA Bypass (#rpa-login) Automating logins is a nightmare. Now, use #rpa-login to open a real browser. You handle the SMS, QR code, or slider manually once. Use #rpa-login-done to export the cookies. These sessions are auto-injected into all future recordings and replays. No more fighting login walls every time.

2. Direct HTTP API Recording (#rpa-api) If a task can be done via a REST call, why use the UI? You can now mix GET/POST calls directly into your workflow. They are compiled into the same script as your browser steps, allowing for lightning-fast data fetching.

3. Native Office Automation (No MS Office Required) I added excel_write and word_write actions powered by openpyxl and python-docx. You can now build Reconciliation-style flows:

  • Login via session cookies → Scrape a dashboard → Fetch missing data via API → Generate a local Excel report → Create a Word summary. All within one task, and it runs anywhere (even on Linux servers without Office installed).
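I haven't seen OpenClaw-RPA's internals, but the "compile a recording into a standalone script" step can be sketched in plain Python. The action names and the emitted `replay(page)` shape below are invented for illustration (the real project targets Playwright):

```python
# A "recording" is an ordered list of primitive actions captured once.
RECORDING = [
    {"action": "goto",  "url": "https://example.com/login"},
    {"action": "fill",  "selector": "#user", "value": "alice"},
    {"action": "click", "selector": "#submit"},
]

def compile_recording(recording):
    """Compile a recording into standalone Python source: the LLM plans the
    workflow once, and every later run is deterministic dispatch with zero
    model calls."""
    lines = ["def replay(page):"]
    for step in recording:
        if step["action"] == "goto":
            lines.append(f"    page.goto({step['url']!r})")
        elif step["action"] == "fill":
            lines.append(f"    page.fill({step['selector']!r}, {step['value']!r})")
        elif step["action"] == "click":
            lines.append(f"    page.click({step['selector']!r})")
    return "\n".join(lines)

print(compile_recording(RECORDING))
```

The compiled script is just text, so it can be versioned, diffed, and rerun at native speed, which is the whole "zero token cost" point.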

📖 Dive into the details:

I’ve documented the full state machine and specific scenarios here:

I'm an AI researcher and dev building this to scratch my own itch. I’d love to get your thoughts on the "Record → Replay" architecture and what other "boring" tasks you think Agents should stop wasting tokens on!

r/mildlyinteresting Theekotnee

This little baby Skittle I found in a bag of wild berry Skittles

r/AI_Agents EntertainmentEasy847

Made horrible Decisions to Upgrade Kimi A.i. and Regret it.

Give me your thoughts, because I highly recommend you do not upgrade or keep using Kimi any longer.

So I upgraded 🤦, and I knew better, yet my Duma-- did it anyway. Originally I hit an odd paywall after only two prompts (a 5-hour limit, then, without my doing anything, immediately a 165+ hour one), and I thought maybe I could just pay to upgrade and things would go back to normal, since I was at the end and would be done. NOPE!!! It constantly made stupid mistakes. It was like a little retarded Crackhead... I couldn't understand what was going on. (Example: change the color of the logo to metallic chrome, nothing else... it made it white.) It constantly got into loops, and the thread wasn't long.

I told it that it felt like it was intentionally pushing me toward some kind of paywall. It denied it, of course, but lo and behold, I hit a 5-day conversation limit. I had just upgraded and only used 10% of my tokens, and I was put on hold for 5 days

because their sorry-ass agent would not do anything correctly. I asked it if it understood my instructions and made it repeat them back to me. It went to do it and ignored them, and I asked again if it understood what I asked; it said yes, then repeated it back. So why did it fail to execute? It said it was easier to break the frontend and repair it. Wtf? I ran my prompt through Claude to verify it wasn't incomplete, and even Claude was unsure of the reason for its output.

I've reached out multiple different ways to request a reset or refund with no response, so now I'm contacting the card company to get a chargeback and filing complaints for suspicious and unethical business practices. Both paywalls, on free and upgraded, came without warning at the end of project completion.

Kimi in general had changed and felt like a downgraded model. The output was very basic even when it attempted to do things correctly.

Am I the only one experiencing this, or is it everyone? Let me know, thanks.

r/whatisit atuck217

Found in the box for a Joie Stroller/Car seat.

Metal rod about the size of my arm that was in the box for our new Joie stroller. It has a spring and 2 plastic cone-shaped bits that contained the spring, then 2 more that look like a spring could go between them, but there is nothing there. The ends of both sides of the rod are keyed, similar to a flathead screwdriver.

It is not listed in the parts list for the car seat or the stroller, and it was outside of the styrofoam packing. That makes me think it is something from the factory or manufacturer that accidentally got put into the box, especially because it feels like it has some sort of grease on it. I cannot for the life of me figure out what it'd be used for.

r/geography PeteThePikachu

Why is this section of I-82 routed around the Yakima River?

It is only a 10 minute difference between Ellensburg and Yakima by taking SR 821 through the river vs I-82.

I understand that the river is winding and it would be difficult to maintain high-speed traffic, but is that the only reason?

r/funny Gordopolis_II

We've all been there, kid.

r/ClaudeAI ClaudeAI-mod-bot

Claude Status Update : Errors when connecting to Claude.ai on 2026-04-08T23:57:07.000Z

This is an automatic post triggered within 2 minutes of an official Claude system status update.

Incident: Errors when connecting to Claude.ai

Check on progress and whether or not the incident has been resolved yet here : https://status.claude.com/incidents/hsgj6gh6rlck

Also check the Performance Megathread to see what others are reporting : https://www.reddit.com/r/ClaudeAI/comments/1s7f72l/claude_performance_and_bugs_megathread_ongoing/

r/30ROCK Hold_X_ToPayRespects

When do you start throwing out donuts?

r/EarthPorn spouly

Colorado Layers. 11656 × 4304. [OC]

r/aivideo PakistaniBoomer

Chicken Star Rap - Tung Tung Tung Sahur and friends!

r/ProgrammerHumor StatureDelaware

itsAIForSure

r/whatisit Scary_Ad8781

Saw this at a thrift store on Edisto Beach.

What am I trying to make with this? It seems like a pump, maybe.

r/explainlikeimfive SpeedRunGod

ELI5: How does a DTC e-commerce business work? How do they start and what is the end goal of such businesses?

r/SideProject Disastrous-Net-8300

A minimal Mars rover puzzle: Opus 4.6 failed many runs; Gemini 3.1 Pro & GPT-5.4 solved it first try

I made a small browser game: 8×8 grid, pick a landing cell, then output an F/B/L/R command sequence to reach the star goal and avoid rock obstacles. It’s really about spatial planning from a screenshot + short rules—easy for humans, annoying for models if they misread the board.

I tested several frontier models with the same setup (screenshot + instructions, only the command string as output):

  • Claude Opus 4.6: many attempts, never a fully correct run.
  • Gemini 3.1 Pro: first try, success.
  • GPT-5.4 Pro: first try, success.

So this isn’t “multimodal is useless”—it’s a sharp model-to-model gap on the same visual planning task. I’m sharing it as an informal benchmark / puzzle, not a rigorous eval.

Play: https://lovableapp.org/game/mars-rover

If you try other models or prompts, I’m curious what you get—especially whether Opus-class models consistently struggle or I had bad luck on wording.

r/arduino EpicErik_HD

How can you power an Arduino Pro Micro 32u4 externally?

I tried using a buck converter to step down from 24v to 5v. I connected plus to VCC and minus to GND. After I powered the thing on, nothing happened at first. And well, there was a pop. Any ideas on what I did wrong?

My board: https://amzn.eu/d/04JaCGbj

r/personalfinance Independent_Yak8342

Need help with back door IRA method

I have a traditional IRA open at Schwab that I never used, and most of my money is in my Roth or my individual account. I have about $55K that I don't need to touch now and just want to grow. Is it as simple as transferring from my individual brokerage to my traditional IRA and then into my Roth? I have already done my taxes for 2025; would this affect them?

I’m currently taxed at 12%. I know conversions are taxed as ordinary income, so how much would I need to convert yearly to stay taxed at 12% and not 22%? I imagine I would lose far more money to taxes if I convert all $55K right away.
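The "how much can I convert and stay at 12%" question is bracket arithmetic. Here is a tiny sketch; the $48,475 default is assumed to be the 2025 single-filer 12% bracket ceiling, and it applies to taxable income (after the standard deduction), so verify both against the current IRS tables:

```python
def conversion_headroom(taxable_income: float, bracket_top: float = 48475) -> float:
    """Roth conversion amount that fills, but does not exceed, the current
    bracket. Anything converted beyond this is taxed at the next rate."""
    return max(0, bracket_top - taxable_income)

# e.g. with $40,000 of taxable income, roughly $8,475 of conversion
# would still be taxed at 12% under the assumed bracket ceiling.
print(conversion_headroom(40000))   # 8475
```

Spreading the $55K over several years, filling the remaining 12% headroom each year, is the usual way to avoid spilling into the 22% bracket.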

Any advice is helpful!

r/Adulting OkDevelopment8453

Advice Regarding Moving Out at 18?

I turn 18 at the end of October and want to move out as quickly as possible, not just to be on my own, but because I cannot live with my parents with how they act towards me, specifically my dad. I've been belittled, yelled at, told my feelings don't matter, told that with how I dress I put myself in harm's way, and embarrassed in front of guests for many years, and it's gotten to a point where I can't live in this household anymore and need to go off on my own. I want to start now.

I don’t have a job. I have a small college fund and Roth IRA saved, as well as an emergency fund, but that’s all regarding my money. I have a car, however it’s under my parents’ names, so they could take it if they wanted. I would be halfway through my senior year by the time I plan to move, and I currently have to decide if I want to graduate early (end of first semester) with 8 classes, or graduate on pace with a max of 6 classes.

I am fine with living on a tight budget for a little if it means I can get out as soon as possible, but I don’t know how to make this goal possible yet.

My parents have all my legal documents, pay for my car currently, and can access not only my bank account but all three of my savings accounts. I can’t reach out to my school because I don’t want them to report this to CPS. I really just need a step-by-step on what I should most prioritize to make this move successful (money, car, job, etc). For college I won’t be going until 2028 at the earliest to give me a gap year for strictly focusing on working.

I live in the midwest, in a more rural area, but prices are still slightly pricey. I also do not want my family to become aware of my plans.

r/homeassistant jbmc00

My New “Desktop Widget”

Felt like building something cool for my desk. Decided to build a little control surface. I’ve got a 4” display that shows what’s playing on my Sonos or AppleTV, an M5Stack Dial for Volume, Track and Lighting control and a MOES 4 button keypad for lighting presets.

Gotta take the trim ring off the dial and paint it as well as get some nice labels for the keypad.

r/LocalLLM dansreo

which model to run on M5 Max MacBook Pro 128 RAM

I was running a quantized version of DeepSeek 70B and now I'm running Gemma 4 32B at half precision. Gemma seems to catch things that DeepSeek didn't. Is that in line with expectations? Am I running the most capable and accurate model for my setup?

r/ClaudeCode Eastern_Exercise2637

Karpathy said "there's room for an incredible product here." I built it: 99% fewer tokens per Claude Code session by compiling your codebase into a wiki.

Reduced Claude context from 47,450 tokens → 360 tokens.

Every Claude Code session starts the same way: it reads your files, traces your imports, figures out your routes and schema. On a 40-file project that's 47,450 tokens before you've typed a single question. You pay for that exploration in every session, and it never carries over.

Andrej Karpathy described this exact problem and closed with: "I think there is room here for an incredible new product instead of a hacky collection of scripts."

I built it entirely with Claude Code:

npx codesight

Compiles your codebase into a structured context file in 200ms. No LLM. No API calls. Uses the TypeScript compiler API for TS projects, AST parsing for Python, Go, Ruby, Rust, Swift, Kotlin, C#, Dart, PHP, and more. What it finds is exactly what's in your code; nothing is model-reasoned.
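As a toy illustration of the no-LLM AST approach (this is not codesight's actual code, just Python's stdlib `ast` module doing the same kind of deterministic extraction on a hypothetical module):

```python
import ast

SOURCE = '''
import os
from fastapi import FastAPI

app = FastAPI()

def list_users():
    return []
'''

def summarize(source: str) -> dict:
    """Compress a module into a tiny context summary: imports plus top-level
    function definitions. Pure parsing, so the result is exactly what is in
    the code, with no false positives from model reasoning."""
    tree = ast.parse(source)
    imports, defs = [], []
    for node in tree.body:
        if isinstance(node, ast.Import):
            imports += [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            imports.append(node.module)
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            defs.append(node.name)
    return {"imports": imports, "defs": defs}

print(summarize(SOURCE))   # {'imports': ['os', 'fastapi'], 'defs': ['list_users']}
```

A summary like this is a few dozen tokens instead of the full file, which is where the 47,450 → 360 reduction comes from when you scale it across a project.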

What Claude Code gets from message one:

  • Full route map with methods, paths, middleware chains
  • Schema with field types, foreign keys, relations (13 ORM parsers)
  • Component tree
  • Hot files: the most-imported files are the most critical to understand
  • Env vars, middleware, dependency graph, blast radius analysis

47,450 tokens → 360 on a FastAPI project. Zero false positives.

As an MCP server Claude Code calls it directly. No pasting, no manual context loading. 13 tools including blast radius, hot files, test coverage, events:

{
  "mcpServers": {
    "codesight": {
      "command": "npx",
      "args": ["codesight", "--mcp"]
    }
  }
}

New in v1.9.8 knowledge mode:

npx codesight --mode knowledge

If you keep ADRs, meeting notes, decision records, or an Obsidian vault alongside your code, this scans all your .md and .mdx files and extracts decisions made, open questions, people mentioned, recurring themes, and a backlink graph showing which notes are most referenced. Generates an AI Primer block you can drop into any conversation for instant context. codesight_get_knowledge MCP tool lets Claude query your second brain directly without you pasting anything.

Built entirely with Claude Code. Free and open source. 2,000+ weekly downloads.

GitHub: github.com/Houseofmvps/codesight. A star helps a lot as a solo founder, and a follow would make me faint.

r/findareddit OptimalWallaby8153

where do I go to report another user creating profiles to evade a user-to-user block, stalking, and harassment?

I'm trying to figure out how to report this user because the report user feature doesn't list an accurate reason for reporting, and I don't want to use the wrong report method just to have my report tossed on a technicality.

Another user decided to pick a fight over a comment, and after a little back and forth, I blocked them.

They created 2 profiles immediately after that block. The first of those profiles they used to message me directly with the same language and content as the messages where they were harassing me in the subreddit. I blocked that one right away, but noticed a follower, and I don't get or have followers (not trying to). That follower and the profile that messaged me were created on the same day. I blocked the follower, blocked following of my profile entirely (not important to me anyway), and curated my posts so they would have to manually follow me around if they really wanted to go on harassing and stalking me, but this is wild. I reported it at old.reddit.com/report, but I have no idea if that's going to a human or not, and this person is clearly unhinged if they are going to harass me on a post that had been done for days, get blocked, create two profiles, harass me with one of the new profiles, and follow me with the other. An IP check would very likely show at least 2 profiles created by the same user at the same IP, if not 3, possibly more.

Any recommendations to get this figured out would be great. This person is very much unhinged and dangerous, and having to lock my own profile down because of obvious stalking is just wrong.

r/TwoSentenceHorror BaconConnoisseur

My light blanket was no match for the Russian winter.

Luckily, this warm canister I found in the forest will stop me from freezing to death.

r/DunderMifflin the_eng96

How do you think Michael’s reaction was?

r/Jokes glnb20

Why can’t Severus Snape be a herbology teacher ?

Because he can’t keep lilies alive.

r/personalfinance jrl-512

Circling back to a company after only one year?

I worked at one large blue-chip company for a year and ended up not doing the work I was hired to do. I had this conversation with my director at the time, who agreed, so I left for another company when the opportunity came. I have been with that company for a year now, but have been offered a new role back at the blue-chip company for a significant raise. Currently I am making $140k, and the new offer is $185k plus a 10% bonus.

The title would be a small upgrade to senior analyst, but with potentially more room for growth. I am more concerned that circling back to the blue chip and moving twice within two years will kill my resume and LinkedIn credibility, as I'll look like a flight risk.

For further background I was at my first company for 2.5 years and received a promotion while there before going to the blue chip.

Should I take the offer?

r/PhotoshopRequest Sweet-B-pea

Can yall edit out the two women beside me? Keep me the brunette and blondie pls I’ll tip!

r/Whatcouldgowrong InOliverWeTrust

Time to kiss

r/Ghosts WarfaceAncient

I'm not sure what I captured. But it sounds like a Class A

r/whatisit thanakij

what is it ?

It sticks to a plastic pot and feels like a mushroom, but on a plastic surface??? It wasn't there 3-4 days ago. In Thailand it's 42 degrees, so I don't think it came from an animal.

Solved: it's some frog eggs.

r/PhotoshopRequest pillowkings

Hit and run help?

Hello,

I have footage from a few years ago where I was hit by a car and fractured my neck and shoulder blade. I have lawyers, and they hired a P.I., but we never found the driver. I was wondering if anyone is able to help find the license plate from a low-quality video I have?

r/mildlyinteresting atrt7

Tree roots look like hand

r/Damnthatsinteresting HeToTopT

American Airlines passenger catches the Artemis II launch out the window on a flight from St. Croix to Charlotte

r/LiveFromNewYork MSK1984

The last time Will Ferrell hosted a Season finale......

Was season 34 with Green Day, which was also Darrell Hammond's last show as a cast member (he was the longest-tenured at the time). Do you think he's here to say goodbye to another cast member? Or possibly this is Lorne's final episode?

r/SideProject ouchao_real

What did you work on or ship this week?

I’ve been putting time into https://sportlive.win — mostly improving how it tracks teams and makes it easier to follow games without jumping around.

Still early, but using it daily now.

Drop what you built this week, would love to check it out.

r/LocalLLaMA Excellent_Koala769

Why do companies build open source models?

Hello,

Why do companies create open source models? They must allocate lots of resources toward this, but for what profit? If anything, doesn't it just take users off of using their paid for/proprietary models?

r/DecidingToBeBetter GutturalGrinch

I just want to be happy. Share with me your daily thoughts or practices to maintain or work towards general happiness and positivity.

I'm almost one year sober from alcohol, and I've incorporated regular exercise and healthy eating into my routine. I have a great life. But my day-to-day just feels like something is off, or I'm waiting for things to go bad. I know a therapist is probably my next big step, but I'm looking for all your little tricks or thoughts or routines that help you feel positive.

r/Whatcouldgowrong StretchFrenchTerry

Driving a Mustang into a beautiful sunset.

r/ClaudeAI Fragrant_Yesterday69

What’s our future? Everyone has an app and no one has a job?

I just read a report by Writer AI on enterprises. Not a big reveal that "do more with less" actually started as "do same with less" for a lot of companies. The forcing function to cut and adapt is just so much more straightforward than finding how to grow.

I love Claude and have been using it along with other AI products at work a lot. And I see the gap growing: people using new tools well could be 5-10x faster than those who don't.

So I could see that we will need fewer doers because they can do more, fewer middle managers because there are fewer doers and more productivity tools to help, and fewer C-suite roles because more functions can be overseen by one person. And I see those who've been indefinitely between jobs building something themselves.

What I don't see: if we get 10x more content and products, we might end up with 10x fewer consumers. Then what?

Or we have a drastic shift in white vs blue collar jobs and nothing changes?

Or tokens become so expensive that we end up with a cohort of ultra AI-performers and the rest? We'll probably get the planet overheated first.

What are y'all's thoughts?

r/artificial monotvtv

OpenAI said ads were a "last resort." Then crossed $100M in 6 weeks.

Remember when Altman literally said in 2024 that ads are a last resort for them?

Well. Here we are.

What gets me isn’t the $100M itself — it’s that they hit it while the product is basically still in beta. Less than 20% of users see ads daily. No self-serve tools yet. No international rollout yet. 600 advertisers but most needed a $200K minimum just to get in.

They haven’t even opened the floodgates and it’s already nine figures.

The part I keep thinking about: Google built an empire on search intent — people typing what they want. ChatGPT has something different. People explain their whole situation to it. That’s a completely different level of signal for an advertiser.

Whether they can scale this without killing the trust that makes the product work in the first place — that’s the actual story.

r/Art J_Babe87

Sir Balin Two Swords and the Knight in Red, Sadboi, Digital, 2025 [OC]

r/AI_Agents Beneficial_Carry_530

Agents should not reason through bash - Programmatic Tool Calling and the future of Agent architecture

Agents should not reason through bash.

Bash takes input and transforms it into plain text. When an agent runs a bash command, it has to convert its thinking into a text command, get text back, and then figure out what that text means. Every step loses information.

Language models think in structured pieces; they build outputs by composing smaller results together. A REPL lets them do that naturally. Instead of converting everything to strings and back, they work directly with objects, functions, and return values. The structure stays intact the whole way through.

CORE transforms codebases and knowledge graphs into a Python REPL environment the agent can natively traverse.

Inside this environment, the agent writes Python that composes operations in a single turn:

  • Search the graph
  • Cluster results by file
  • Fan out to fresh LLM sub-reasoners per cluster
  • Synthesize the outputs

One expression replaces what tool-calling architectures require ten or more sequential round-trips to accomplish.
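The four steps above can be sketched as a single composed function. Everything here (`ToyGraph`, `Hit`, `sub_reason`) is a toy stand-in, not CORE's actual API; it only illustrates how one in-REPL expression can replace a chain of tool-call round-trips.

```python
from collections import defaultdict

class Hit:
    """A toy search hit: which file it came from and the matched text."""
    def __init__(self, file, text):
        self.file, self.text = file, text

class ToyGraph:
    """Stand-in for the knowledge graph (illustrative only)."""
    def __init__(self, hits):
        self.hits = hits
    def search(self, query):
        return [h for h in self.hits if query in h.text]

def sub_reason(prompt, items):
    """Stand-in for spawning a fresh LLM sub-reasoner on a sub-problem."""
    return f"{prompt}: {len(items)} item(s)"

def analyze(graph, query):
    hits = graph.search(query)                       # 1. search the graph
    clusters = defaultdict(list)
    for h in hits:                                   # 2. cluster results by file
        clusters[h.file].append(h)
    partials = [sub_reason(f"summarize {f}", items)  # 3. fan out per cluster
                for f, items in sorted(clusters.items())]
    return sub_reason("synthesize", partials)        # 4. synthesize the outputs

graph = ToyGraph([Hit("a.py", "auth bug"), Hit("b.py", "auth flow"),
                  Hit("a.py", "auth token")])
print(analyze(graph, "auth"))  # one composed pipeline, no sequential round-trips
```

The whole search-cluster-fan-out-synthesize pipeline happens inside one function call, which is the claimed advantage over issuing each step as a separate tool call.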

bash fails at scale

also:

REPLized Codebases and Vaults allow a language model, mid-reasoning, to spawn focused instances of itself on decomposed sub-problems and compose the results back into a unified output.

r/SideProject vomayank

I built 85 free tools as a solo dev in India on a Rs 50K budget — here's what I learned

Hey everyone. I've been building Toolkiya for the past few months as a solo side project. It's a free online tools platform — think iLovePDF meets Canva meets SmallPDF, but everything is free and runs in your browser.

What it does:

- 85 tools across PDF, image, QR, AI, developer, documents, and utility categories

- AI assistant that helps you find the right tool (powered by free OpenRouter models)

- Resume builder where you upload your old resume and AI improves it

- Invoice generator with logo upload, signature, GST/VAT support, 10 currencies

- All file processing happens client-side — nothing uploaded

Tech stack:

- Next.js 16 (App Router) + TypeScript + Tailwind CSS + shadcn/ui

- Vercel hosting (hobby plan)

- Supabase for feedback storage

- OpenRouter + Gemini + Groq cascade for AI (all free tiers)

- pdf-lib for PDF generation, pdfjs-dist for parsing, Tesseract.js for OCR

What I learned:

  1. Free APIs change constantly. OpenRouter removed 3 of my models overnight — had to build a 4-model fallback cascade

  2. PageSpeed matters more than features early on. Removing a splash screen took me from 45 to 68

  3. Client-side processing is a massive selling point. "Your files never leave your browser" builds instant trust

  4. SEO takes time. 232 sitemap URLs but Google has only indexed about half so far

  5. Building 85 tools sounds huge but most follow the same pattern — upload, process, download

Monthly cost: Under Rs 2000 ($24). Vercel free, Supabase free, OpenRouter free, Gemini free.

Would love feedback on the UX. What tools would you add?

Link: toolkiya.com
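The fallback cascade from lesson 1 might look roughly like this sketch. The model names and the `call_model` function are placeholders; the post doesn't describe its actual cascade, so treat this purely as the pattern, not the site's real setup.

```python
# Hypothetical model tiers; simulate two of them being pulled by the provider.
CASCADE = ["model-a", "model-b", "model-c", "model-d"]
UNAVAILABLE = {"model-a", "model-b"}

def call_model(model, prompt):
    # placeholder for a real API call; raises when the model has been removed
    if model in UNAVAILABLE:
        raise RuntimeError(f"{model} is no longer available")
    return f"{model}: ok"

def ask(prompt):
    last_err = None
    for model in CASCADE:              # try each tier in order
        try:
            return call_model(model, prompt)
        except RuntimeError as err:    # fall through to the next model
            last_err = err
    raise RuntimeError("all models in the cascade failed") from last_err

print(ask("hello"))  # → "model-c: ok" after the first two tiers fail
```

The point of the pattern is that when a free provider removes a model overnight, requests silently fall through to the next tier instead of erroring out.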

r/Weird HonestSapphireLion24

Nightshift coworker left this near my desk.

I don't know what it is or why he put it near my desk, but I'm 2 parts impressed and 2 parts wondering what the hell it is.

r/arduino Due_Veterinarian3014

About my PDP Hardware project

Hi everyone, I built a cybersecurity project this semester and I’d like honest feedback 🔧

Project Idea:

A low-cost Intrusion Detection System (IDS) + DNS filtering (Pi-hole) designed for:

Small businesses

Cafes

Home networks

Total cost: ~₹8000 💡

What I tried to do:

Combine IDS + DNS security into one system

Make it affordable and easy to deploy

Provide basic real-time monitoring ⚠️

Feedback I received (from evaluator):

Not unique

Low cost ≠ innovation

Can be done using VM or router

Seems like low effort 🤔

My perspective:

Integration itself is useful for small-scale users

VM ≠ real-world deployment

Router implementation needs higher specs

Dedicated device is more practical ❓

My question:

Is this actually a weak concept, or is it just lacking presentation/innovation depth?

What can I improve to make this project stronger or more unique?

Thanks in advance 🙏

r/SideProject Friendly-Land-1873

I built a simple app for parents who keep losing it at their kids - and needed it myself

So this started because I kept doing the thing I swore I wouldn't do (or would try to stop doing).

Bedtime - third time asking my kids to brush their teeth. I hear my own voice go to that place - not yelling exactly, but a tone I genuinely hate. My kids pick up on it immediately. Then I spend the next 20 minutes lying next to them feeling like garbage about a toothbrush.

I looked for something that could help me in that actual moment - not a book, not therapy homework, not a 10-minute breathing exercise, not something I had to open and read with the time I already don't have. Something that could catch me before I became someone I didn't want to be. Couldn't find it.

So I built Steady. Three messages a day timed to the hardest parenting moments - morning chaos, after-school pickup, bedtime. Lock Screen widget / Home Screen widgets / push notifications. A few strength-based words, not "here are 5 tips to be a better parent" - easily visible, without the friction of opening any app.

I'm a senior product professional career-wise. I built it in React Native + Expo as a solo founder with no dev background, which was its own adventure. RevenueCat for subscriptions, and a simple CMS. Took about 2 months from idea to App Store approval (including a couple of rejections - happy to share that pain if useful).

Just went live (soft-launch). Would genuinely love feedback from anyone willing to try it - especially if you're a parent.

Website: https://www.getsteady.ca/

iOS App Store: https://apps.apple.com/app/id6760630647

r/ClaudeCode Automatic_Employer55

Codex now almost identical to Claude code

I'd had an OpenAI subscription ever since it came out, and last Feb I switched to Anthropic - blew my mind by how much more sane it felt. Went back to OpenAI this month (for Codex) and it's come a long way; it doesn't gaslight me (as much) or make me want to pull my hair out with the responses like it used to (ChatGPT voice is still dogshit though). Best part: the skills package transfer was seamless since it's just .md, so all the hard work/fine-tuning isn't lost. Swimming in tokens doing stupid stuff on here; don't be a sucker with this brand loyalty bs.

r/therewasanattempt 56000hp

To pretend there’s no money for healthcare or affordable housing

r/VEO3 currywurstingen

I made city past-present-future videos (also from start-end images from Banana Bro)

Please watch and comment here. It's my first time creating videos, and I'm curious to hear your opinions. I admit I have a biased opinion that they turned out great lol.

The channel: https://www.youtube.com/channel/UCczVMSRg1Dx0iXMwbcq9G3Q
(I also tried other formats but the PPF is my fav so far.)

r/ChatGPT Prize-Lychee7973

Image generation issues today?

This morning our GPT was doing fine generating a plan document per carefully designed prompt guidelines. This afternoon it has gone completely haywire, doing whatever it wants: acknowledging its gate failures and then repeating the exact same issue even after re-prompting it.

r/Art Remarkable_Pea_4741

Vacation, St0o0kY, Paper, 2026

r/TwoSentenceHorror mexighost

[apr26] I used a wire hanger to scratch this awful, festering itch deep within my cast - but now I can’t stop.

The broken metal has finally hooked into something slick and pulsing, and each rhythmic tug sends jolts of light to break this darkness.

r/Seattle Smilefied

petition to start calling them bodegas

i petition that we as a whole start referring to our corner stores as bodegas. it’s a cute word. that is all. start calling them bodegas and i will not look completely silly. please and thank you.

r/ClaudeCode Ok-WinMike

Cut your Claude Code’s token consumption by 90% 🤯

Someone built RTK, a high-performance CLI proxy that filters and compresses command outputs before they hit your LLM context.

  • Reduces token usage by 60-90% on common dev commands.
  • Supports 100+ commands like git status, ls, and test runners.
  • Integrates instantly with Claude Code, Cursor, Windsurf, and Gemini CLI.

100% open-source. Repo: https://github.com/rtk-ai/rtk
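The filtering idea can be sketched in a few lines. This is a toy illustration of the concept (truncate verbose command output and summarize the rest before it reaches the model's context), not RTK's actual code:

```python
def compress_git_status(raw: str, limit: int = 5) -> str:
    """Keep the first few entries and summarize the rest (toy version)."""
    lines = [l for l in raw.splitlines() if l.strip()]
    kept = lines[:limit]
    omitted = len(lines) - len(kept)
    if omitted:
        kept.append(f"... {omitted} more entries omitted")
    return "\n".join(kept)

# 40 modified files collapse to 5 paths plus one summary line
raw = "\n".join(f" M src/file{i}.py" for i in range(40))
print(compress_git_status(raw))
```

A real proxy would do this per command (git, ls, test runners), with command-aware rules rather than a blunt line cap, but the token savings come from the same shape of transformation.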

r/meme NerdlinGeeksly

He certainly did.

r/mildlyinteresting clarkredman_

In Taipei, you can get an engagement ring from a vending machine at the public transport train station.

r/explainlikeimfive ksio_fake

ELI5 What's the physicality of headaches/migraines

When I get a migraine my head hurts, I know that much. Is my brain like physically beating? Are my eyes hurting, or is it just a sensation somewhere else that makes me feel like my eyes hurt?

r/ForgottenTV PeneItaliano

Studio 57 (1954-1958)

This dramatic anthology series went into open syndication when the DuMont Television Network ceased operations.

r/ARAM gptt916

Graves with marksmage

Just played against a Graves with Marksmage, and he was doing an ungodly amount of damage with 400-500 AP, like 2k a shot without crit damage.

Makes me wonder: does each of his bullets apply the Marksmage damage? I can't think of another way to explain his damage output.

r/awwwtf nedonedonedo

Helping a friend overcome a hurdle

r/personalfinance TheKatWith3PD

Seeking advice: Repair or replace my car?!?

My 2013 Subaru (90,000 miles, fully paid off) has front-end collision damage. It'll cost $5000 to repair - should I do it or buy a new or used car?

The background info: Someone drove RIGHT into the front of my Subaru Impreza in a bank parking lot. I was just sitting there parked, waiting to leave the parking lot because an F150 was approaching, driving on the wrong side of the road, and he plowed right into me at 15 mph. The little old man driving said he didn't even see me. I asked for his insurance information, and he refused, so I called the police. But in Texas, the police won't give a report if an accident occurs in a commercial parking lot, and the bank wouldn't give me the surveillance video of the accident without a police report. I took photos and reported everything to my insurance, which spent two weeks (somehow) tracking down his insurance (Geico). Because I only have basic liability insurance, the guy's liability had to be established, but when Geico called him, he lied and said I hit him, even though the photos I took at the accident all clearly show that the guy was on the wrong side of the road. And now, over a month later, after making so many calls to the insurance companies it felt like a part-time job and getting all kinds of runaround, I'm totally on the hook financially...

The hood, front bumper, grille, radiator support, and AC condenser need to be replaced, and the front structure needs to be pulled back into alignment and repainted. It'll cost $5000. It's just so frustrating - my wife is pregnant and has been terrified to get in the car while all this gets worked out. We’re not sure what is wisest economically right now to do.

r/geography fuibo_yv

Question for a Geography Club

I'm an exec of a Geography club, and I was wondering if anybody had suggestions for an activity to do. The age range is gr. 7~10 ish, not a lot of ppl (under 10). We've already done geoguessr, jeopardy, trivia, and some lessons, but I'm running out of engaging ideas. Also, the meeting is tomorrow, so preferably it wouldn't take too much time to prepare. Thank you!

r/personalfinance Ancient-Gas4531

Prior Month Balance as of Today - Missing Statement

FYI - anyone visiting the DISCOVER credit card online system, as I just experienced / was informed on a customer service call: you should be aware a new statement was generated overnight that is NOT yet visible online but IS included in the amounts (both posted and pending) that do appear.

I hope this is corrected soon.

r/singularity realmvp77

Ronan Farrow on Sam Altman: "We interviewed more than 100 people... a majority did say some variation on the theme of: he's a pathological liar"

Ronan Farrow on people in Sam Altman's orbit describing him as a "pathological liar."

"We interviewed more than 100 people... a majority of those people really did say some variation on the theme of: he's a pathological liar."

"multiple people... used the term 'sociopath.'"

"[Altman] was fired by board members and executives who simply felt he was lying too much."

"Altman appears to have been doing it [lying] so much that it was all almost anyone could talk about after dealing with him."

"[The lies also] included... very minor things... at one early startup he was claiming to everyone he was a champion ping-pong player. And then they played ping-pong in the office, and he was one of the worst players in the office."

the ping-pong thing is so funny 😭

r/SipsTea Jaz1140

The Straight of Hormuz is a lie made up by big oil, you can clearly see it's not straight. Nice try big oil 😎

How stupid do they think we are? They thought we wouldn't fact check and see that it's clearly not straight.

r/SideProject aigdonia

My friend saw my investment spreadsheet and said "I wish this was an app on my phone" , so I built it

I've been tracking my investments in a spreadsheet for years, allocations, performance, compliance screening, the works.

A friend saw it and said, "I wish I had this on my phone."

That stuck with me. So I built it.

The problem I kept running into

Every portfolio tracker I tried wanted me to create an account, link a broker, or hand over my financial data to some server. I just wanted to see what I own, how it's doing, and whether it meets my investment criteria without giving anyone access to my financial life.

What it does

  • Fully 100% private — your portfolio data stays on your phone. No account, no email, no broker linking. You choose if and when to back anything up.
  • Works offline — check your portfolio without internet. Prices update when you're back online.
  • AI analysis — ask questions about your holdings in plain language. Get sourced answers, never recommendations.
  • Halal screening — optional compliance filter using trusted screening sources, explained in plain language.
  • Pay-per-use — no subscription. Core tracking is free. You only pay when you want depth like AI analysis.

What I learned

  • Build what you already use. The spreadsheet was my prototype for years before I wrote a line of code. I wasn't guessing what users need — I was the user. That made every decision easier and faster.
  • Privacy has to be structural. Every app says "we care about your privacy." But if user data never touches your servers in the first place, you don't need to say it — the architecture speaks for itself.
  • Not everything needs a subscription. People use a portfolio tracker in bursts — when they buy something, when markets move. A $9.99/month fee for that felt wrong. Pay-per-use aligned better with how the app is actually used.
  • Your first user matters more than your first thousand. My friend has it on his phone now. He sends me feedback over lunch. That loop is worth more than any launch strategy.
  • Ship before it feels ready. There's a long list of things I still want to add. But the app is live, people are using it, and real feedback beats imagined features every time.

Where it's at

Live on iOS and Android. Free to download. Covering 620+ stocks across 3 markets with more coming.

Would love honest feedback — what would you expect from something like this?

laak.olanai.tech

r/LocalLLM Beneficial_Carry_530

Introducing C.O.R.E: A Programmatic Cognitive Harness for LLMs

Link to intro paper (detailed writeup with benchmarks in progress)

Agents should not reason through bash.

Bash takes input and transforms it into plain text. When an agent runs a bash command, it has to convert its thinking into a text command, get text back, and then figure out what that text means. Every step loses information.

Language models think in structured pieces; they build outputs by composing smaller results together. A REPL lets them do that naturally. Instead of converting everything to strings and back, they work directly with objects, functions, and return values. The structure stays intact the whole way through.
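A toy contrast of the two styles described here (purely illustrative, not CORE code): the bash path serializes everything to text and forces the consumer to re-parse and re-cast, while the REPL path keeps structured values end to end.

```python
# Bash-style: every hop flattens structure to text, and the next step must
# re-parse it (sizes come back as strings that need re-casting).
def bash_style(files):
    out = "\n".join(f"{name} {size}" for name, size in files)  # serialize
    rows = [line.split() for line in out.splitlines()]         # re-parse
    return sum(int(size) for _, size in rows)                  # re-cast

# REPL-style: the same data stays a structured object the whole way through.
def repl_style(files):
    return sum(size for _, size in files)

files = [("a.py", 120), ("b.py", 300)]
assert bash_style(files) == repl_style(files) == 420
```

Both paths get the same answer here, but the bash path had two lossy conversion steps where real-world output formats routinely break parsing.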

CORE transforms codebases and knowledge graphs into a Python REPL environment the agent can natively traverse.

Inside this environment, the agent writes Python that composes operations in a single turn:

  • Search the graph
  • Cluster results by file
  • Fan out to fresh LLM sub-reasoners per cluster
  • Synthesize the outputs

One expression replaces what tool-calling architectures require ten or more sequential round-trips to accomplish.

bash fails at scale

also:

REPLized Codebases and Vaults allow a language model, mid-reasoning, to spawn focused instances of itself on decomposed sub-problems and compose the results back into a unified output.

Current Implementation:

A CLI I have been tinkering with that turns both knowledge graphs and codebases into a REPL environment.

Link to repo - feel free to star it, play around with it, break it apart.

I've seen savings in token usage and speed, but I will say there are some friction points and rough edges, as these models are not trained to use a REPL. They are trained to use bash. Which is ironic in itself, because they're bad at using bash.

Also, local models such as Kimi K 2.5 and even versions of Qwen have struggled to operate in this harness.

The real bottleneck is the model intelligence needed to properly utilize programmatic tooling: Claude-class models adapt and show real gains, but smaller models degrade and fall back to tool-calling behavior.

Still playing around with it. The current implementation is very raw and would need collaborators and contributors to really take it to where it can be production-grade and used in daily workflow.

This builds on the RMH protocol (Recursive Memory Harness) I posted about here around 18 days ago , great feedback, great discussions, even some contributors to the repo.

r/interestingasfuck Firm-Blackberry-9162

Organic way to create whammy effect

r/whatisit Beneficial-Path-4094

Help?? I didn’t know this was possible?

r/DecidingToBeBetter Fabulous_Gap_9971

I feel like a failure

I've been attending my clinical mental health counseling program since fall 2024. I am currently in my last two semesters of coursework. I'm supposed to start my practicum/internship in the fall, but I've not been able to quit smoking weed long enough to apply. I kept putting it off and telling myself if I gave myself more time I'd quit. And now here I am, battling through a craving and about to give in. I have a trauma history and am diagnosed with PTSD and GAD. I have always had an addictive personality and have used substances since I was 15 to cope, but I have stopped everything except weed. I feel like an idiot. How am I supposed to be a therapist if I can't even quit smoking weed long enough to get an internship? I'm in a legal state, but I feel so ashamed and like such an idiot. Anyone have advice? I'm really struggling 😣

r/LocalLLM Pinetree1_1

Ordered, ready to process... or wait for M5?

r/explainlikeimfive HeavyBananaz

ELI5 How does paying online with a credit card actually work worldwide?

When I buy something online, I enter my card number, expiration date, CVC, and name and somehow the website instantly knows if my card is valid. How does that work technically?

Like if I buy something from another country, how does the website verify my Canadian credit card? I bet countries each have their own payment systems that communicate with each other?

r/meme FightOrDie123

Guys I just learnt the truth:

r/Frugal AngstLizard

any recommendations for cheaper internet?

hi!

my mom & i are both disabled and thus live off of a fixed income. i want to try to lower our monthly expenses as much as possible. our current internet provider is Verizon with their cheapest 5G Wireless Home Internet plan at $50/month, but since moving to public housing last month they've been giving us nothing but issues.

i wanted to ask if anyone has any recommendations for internet plans that are either the same price as or lower than our Verizon plan. we are just looking for internet/wifi, nothing else.

(i also admittedly have zero idea how fiber internet works. my first and only experience with handling internet providers has been Verizon's 5G Wireless. i know where we live currently is set up for xfinity, but is it possible to use other cable/fiber providers?)

r/OldSchoolCool Neiz23

1986 - Monster Mastership - International skateboard competition in Münster, West Germany. Featuring 18-year-old "Nicky Guerrero" and many other more or less well-known skateboard legends.

I like the typical German sports bank and the 0 vert ramp.
Just a parking lot...no barriers etc.

Original Video Link
11 years old 22k Views

https://youtu.be/6Hu02q8gB-A

r/leagueoflegends Fair_Explanation_924

Autofill more frequently.

So my main roles are ADC/MID. I am getting autofilled to Top at diamond 3 every 10-15 games (I can play top). How can I make the system autofill me more frequently so I can gain double LP more often?

r/whatisit rijfwij

This thing that's been in the corner of my room for a while.

I do not remember where this came from; I just remember finding it on the floor of my room one day. What could this be?

r/WinStupidPrizes OrWaat

Running engines and water don't mix

CarsAndCameras didn't have the brightest idea here

r/SideProject IndependentHat8773

Built my 1st cross-platform app that provides a comprehensive guide for traders/students on trading, AI/ML, and quant

I had about 10-12 years on MEAN, Python, data analysis, etc. That's when I came across a blog/cheat sheet I wrote for new Rust programmers 5-7 years ago.
As I had no job in 2025, I started working and learning Rust on real SaaS projects, gradually bumped into tons of errors, and this is when I realized my new beginning in programming. I was not only a quick learner on Rust, but quietly moved myself to building Rust-Tauri apps. Now I can build cross-platform apps (desktop, iOS, Android, web) at the same time, just like I used to do with other programming languages. I'm so glad that I've introduced myself to Rust, and now I can literally build GPU/computation-heavy apps beyond the one mentioned in the video. Cheers

r/StableDiffusion VasaFromParadise

Anime?

Base Anima preview3 gen scene + upscaled details.

r/Anthropic whoooknows

Claude Teams and 5-seat minimum requirement

I want to collaborate on Claude with a new hire, but Claude Teams requires a minimum of five seats. This is a bummer. Are there any workarounds? I am thinking of just having separate accounts and sharing folders on Google Drive. Can I still centralize paying for both?

r/Seattle ILIKETHECOLORRED

Anyone here a chicken owner who can take two baby chicks?

someone at my apartment complex abandoned two chickens. I plan on taking them to ASPCA in the morning but if anyone can prove that they'd be a good home for these two birdies I'll give them to you instead.

r/mildlyinteresting itsjustfarkas

I can see my bruise through my skin

r/leagueoflegends chrometrics

Nemesis Quests

I know of all the current Nemesis Quests in the game but wondered what the future of them looks like. Would it be bad or good if every champion had a nemesis quest? I know there are many people who want to see this, and some champs have multiple ideas to follow, such as Aatrox: his current one is with Kayle and Morgana, while he has a clear path with Pantheon for another quest.

r/whatisit bodhiali

Vintage salt and pepper shakers, plus mysterious jar with strainer??

Hi, first post here! I was scrolling through FB marketplace while bored (as one does), and came across this adorable vintage set that I’m thinking about getting. But i’m super confused about the jar lol. At first I thought it was a jar for cream (like pouring cream into your coffee), but it has a strainer. What’s the strainer for? Is this a long forgotten kitchen storage tool that I’m just unaware of? Or is it maybe a mini tea pot/strainer?

r/Adulting PeneItaliano

Went out to dinner alone for the first time ever. Felt a bit insecure (when pic was taken) but also felt independent :)

r/AskMen R1PElv1s

Redditors who have survived the death of a significant other, how long was it before you considered dating again?

Additional context welcome. How long were you with your SO before they passed? Did you actively start trying to date (dating apps, etc.), or did you start to develop feelings for someone you already knew?

r/DunderMifflin LeftCommunication876

Slumdog Millionaire x The Office

I love that in S8 E11, when they're at the trivia game, Kevin, Meredith, Erin, and Kelly are on the same team, and each of them was able to answer a question because of their personal experiences. Also cool that there was another callback in the previous season, where Michael and Holly share that the Buffalo branch is closing, in Slum Dunder Mifflinaire.

r/whatisit Ajarofpickles97

What species of cat is this?

Found this on R/Cryptozoology, now is not the time to give your opinions on this sub I need a clarification for what this big kitty is. Allegedly it was taken in Australia now from my limited anatomical knowledge of big cat morphology it looks like a jaguar to me. Given the GIGANTIC jaw muscles it has idk what the stripes on its feet are. Again do not argue if it’s real or not just tell me what kind of turbo Kitty you think it is. My gut says Jag but idk how about you?

r/SideProject hichamsoltani

I built Smart File Organizer because my folders kept turning into chaos

https://reddit.com/link/1sgad8z/video/5u3xzpx362ug1/player

Hey 👋

Built Smart File Organizer because my folders kept turning into a mess after a few weeks — especially Downloads.

It automatically:

  • Organizes files by type, date, size, or extension.
  • Removes duplicates (SHA-256).
  • Bulk renames files.
  • Archives old files.
  • Focused on automation instead of manual sorting.
  • Everything runs locally (no data collection, no internet).
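The SHA-256 duplicate detection the post mentions could work roughly like this (a sketch of the technique, not the app's actual code): hash each file's contents and flag any file whose digest has already been seen.

```python
import hashlib
from pathlib import Path

def find_duplicates(folder: str):
    """Return (duplicate, original) path pairs based on SHA-256 of contents."""
    seen, dupes = {}, []
    for path in sorted(Path(folder).rglob("*")):
        if not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest in seen:
            dupes.append((path, seen[digest]))  # same bytes, different name
        else:
            seen[digest] = path
    return dupes
```

Hashing contents (rather than comparing names or sizes) is what lets a tool like this catch identical files that have been renamed or copied into different folders.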

Would love your feedback.

File Organizer On Google Play

r/PhotoshopRequest HastyIndeed

(Hopefully) simple request

Can anyone add a background that makes the face seem like it’s coming out of a black hole or swirly space cosmos, and also make the bleedy parts under the eyes look like galaxies? Bonus props if you can stick a third eye on the forehead

r/megalophobia LOFIghoul

Massive planets are actually terrifying and nobody talks about it

Okay, hear me out… does anyone else get like, actual chills or panic looking at pictures of space?

I was looking at high-def photos of Jupiter and Saturn last night and I had to close the tab because my heart started racing. It’s not even funny. There’s something about how massive they are that just feels wrong.

Like, Jupiter is so big. And the Great Red Spot? That’s basically a never-ending hurricane that could swallow Earth for breakfast. It gives me major megalophobia vibes.

I feel so tiny and insignificant and the thought of just floating near something that huge makes me want to crawl under my bed. It’s like a mix of "wow, that’s cool" and "get me away from this cosmic horror immediately."

Am I crazy or is this a normal thing? Does anyone else get that weird "void" feeling in their stomach?

r/Adulting Normal-Raisin5443

What’s one thing this week that made your day better?

For me it’s my heated mattress pad. No matter if we get more snow outside, I’m cozy. We’re getting another huge dumping of snow this week. I need cozy right now.

Cafes are another one. A mortgage prevention latte and a good book are all I need some days. (Those are for sunny days though.)

What made this week a bit better for you?

r/whatisit QuienSoyYo

What is this gross looking bubble I found in off-brand Spam?

Bought some off-brand Spam at Aldi’s and it had this gross looking bubble inside when I cut it up. Is it safe to eat?

r/leagueoflegends SufficientMix8264

comet or conq on Heim?

Hey, I noticed that for some reason Heim's strongest rune is apparently running Conq. If you told me that two years ago I'd have been confused lol

Wanted to see if someone could explain why he runs it, as it doesn't make sense in my brain; I'm used to running Comet. Or is it a matchup-situational rune?

Thanks!

r/TwoSentenceHorror BriefBee4330

Every day, emergency alerts blared about the humans being snatched from their homes.

I was relaxing in the dark of my living room when my phone buzzed with the first silent emergency alert I’d ever seen: 'Act normal and whatever you do... don't look behind you.'

r/AskMen wealthylion1999

What’s considered crossing the line for self-pleasure while in a relationship?

What do you guys use for masturbation purposes while in a relationship? Is it worse to watch porn/OF, or sext with anonymous people on Reddit?

r/ClaudeAI crimson_traveller

Personalized persistent working memory for Claude Code — every correction you make becomes ingrained and permanent

Every Claude Code session starts from zero. You correct the same things over and over — how you like commits structured, when to stop and ask vs just do it, how verbose to be. Next session, it's forgotten. And if you use Claude Code on multiple machines, you're re-correcting the same things twice or even 3 times.

I built claude-imprint to fix this. It gives Claude a personalized persistent working memory that survives across sessions and projects.

How it works: At the end of a session, you run /remember. Claude reviews what happened — corrections you made, preferences you expressed — and proposes entries for your memory files. You approve before anything gets written. Over time, Claude stops making the same mistakes.

# Before: every session You: [finishes refactor] Claude: I'll commit all the changes now. You: No — commit phase by phase, one per logical boundary.

# After: Claude reads your memory at session start You: [finishes refactor] Claude: I'll commit this phase by phase at each logical boundary.

It's just markdown files — no plugins, no runtime, no dependencies, nothing sent to a server. Three commands:

  • /remember — capture learnings from the current session
  • /reflect — periodic health check on accumulated memory
  • /distill — sync memory across machines via a private GitHub repo

Install is one line: git clone https://github.com/rybaier/claude-imprint.git && cd claude-imprint && ./install.sh

Two developers using the same commands end up with completely different memory files. It's like CLAUDE.md but personal — how you specifically work, across all projects.

MIT licensed, ~550 lines total. Curious what people think. I built this because I work with Claude Code on multiple machines and got annoyed at constantly making similar corrections whenever I switched. It isn't about project-specific patterns or requirements either; the goal is for claude-imprint to slowly gain a feel for how a user prefers to work and carry that across workstations.

r/AI_Agents FokasuSensei

stop blaming codex. opus was carrying your entire setup and you never knew it.

everyone's in the comments right now saying codex doesn't finish work. codex is dumb. codex can't handle complex tasks. open claw is dying.

no. your architecture is bad. those are two different things.

i can tell you what actually happened. opus is one of the strongest models ever built. when you set up your openclaw and it "just worked" , that wasn't your system working at "FRONTIER" brother that was opus compensating for your system not working. opus was smart enough to figure out what you meant even when your instructions were vague, your memory files were a mess, and your agent had no real structure underneath it. opus was your silent co-founder. he was doing half the work your setup was supposed to do. you just didn't know it because the output looked clean. then the anthropic ban hit. opus left. and now codex moved in and found a house that was never actually built right. he's not failing. he's just not going to pretend the foundation isn't cracked.

I switched to codex when the ban happened. my operation runs better now than it did the last week of opus. under $40 a month. codex came in, cleaned up the mess opus left behind, flagged things that were wrong, and we've been moving at higher speed ever since. I barely even touched my openai subscription yet before Sam reset ALL USER usages mid week.

im making a claim that the people saying codex isn't capable built their openclaw for opus by accident. opus was quietly creating a home he never expected to have to give to someone else. now he's gone and the walls are showing.

don't let anyone convince you the model is the problem until you've honestly looked at your cron jobs, your memory structure, your skill definitions, and your handoff logic. if you don't have those things right, no model is going to save you. opus just made it easier to ignore. so before you write another post about how codex failed you try asking what does your actual setup look like underneath?

r/leagueoflegends This-is-Ace

Euphoria podcast, LEC content

I've been an avid fan of Euphoria podcast, and I'm wondering why it was suddenly stopped and why it was replaced by the LEC podcast?

I've seen Drakos less as well during the recent split. Odo and Jack are both doing fine, but in my opinion the Euphoria podcast is better, and Odo and Jack could be co-hosts there instead.

Another thing: it's been years since the LEC talent produced a new song. All of their songs are fire, especially the rap battle between Vedi and Drakos.

Could someone please explain the reason behind these changes? I miss the good old production days of the LEC! They were top-tier production before.

Lastly, I'm sad that Sjokz isn't a part of the LEC anymore, but I'm happy that she is now a freelancer, instead of being tied to riot. There's no other host that can replace her, she's the GOAT.

r/mildlyinteresting Apprehensive_Bus4517

Glassy chess

r/SideProject cnsrgl

[Launch] CodeOrder – I built an all-in-one WooCommerce system for restaurants

I’ve been building websites for restaurants for years, and every time I ran into the same problem:

To run a “complete system”, you need multiple plugins.

One for online ordering.

One for reservations.

One for POS.

One for the kitchen screen.

They’re supposed to work together — but they don’t.

Orders get lost because of caching.

Plugins conflict with each other.

Updates break critical flows.

And then you get that call:

“Why are the orders not showing in the kitchen?”

After dealing with this over and over again, I decided to stop patching things together and build a proper system.

So I built CodeOrder.

It’s a WooCommerce-based all-in-one system for restaurants that includes:

  • Online ordering (delivery, pickup, dine-in)
  • POS system
  • Kitchen display (KDS)
  • QR menu ordering
  • Reservations

Everything works together because it’s built as a single system from the ground up.

You can check it out here:

https://codeorder.io

I’d really appreciate honest feedback.

What would make you actually use something like this?

r/ChatGPT agenticbusiness

Turning ChatGPT JSON export into a readable archive (simple workflow)

The ChatGPT export is raw JSON, which is great for machines but not for reading.

Here’s a practical way to make it useful:

  1. Export your data.
  2. Convert the JSON into a table (date/time, conversation title, role, message). If you can code, write a quick script to output CSV/Markdown; otherwise use a JSON viewer and copy the bits you care about.
  3. Curate only the “keepers”: your main prompts, final answers, and key decisions.
  4. Store those in your notes/knowledge system so you can search them later.
  5. Be mindful of privacy: don’t upload sensitive exports to random tools.
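For step 2, here is a minimal sketch in Python, assuming the commonly seen export shape where conversations.json is a list of conversations, each with a "title" and a "mapping" of message nodes (field names can vary across export versions, so treat this as a starting point):

```python
import json
from pathlib import Path


def flatten_export(conversations):
    """Flatten a ChatGPT conversations.json structure into simple rows.

    Assumes each conversation carries a 'title' and a 'mapping' of
    node-id -> node, where a node may hold a 'message' with an author
    role and a list of text parts.
    """
    rows = []
    for conv in conversations:
        title = conv.get("title", "(untitled)")
        for node in conv.get("mapping", {}).values():
            msg = node.get("message")
            if not msg:
                continue  # structural nodes with no message content
            parts = msg.get("content", {}).get("parts", [])
            text = "\n".join(p for p in parts if isinstance(p, str)).strip()
            if not text:
                continue
            rows.append({
                "title": title,
                "time": msg.get("create_time"),
                "role": msg.get("author", {}).get("role", "unknown"),
                "text": text,
            })
    # Group by conversation, then chronological within each one
    rows.sort(key=lambda r: (r["title"], r["time"] or 0))
    return rows


def to_markdown(rows):
    """Render rows as a readable archive, one section per conversation."""
    out, current = [], None
    for r in rows:
        if r["title"] != current:
            current = r["title"]
            out.append(f"## {current}")
        out.append(f"**{r['role']}:** {r['text']}")
    return "\n\n".join(out)
```

Pointing it at the extracted file, e.g. `to_markdown(flatten_export(json.loads(Path("conversations.json").read_text(encoding="utf-8"))))`, gives you one Markdown document your notes system can search, after which the curation in steps 3 and 4 is just deleting what you don't want to keep.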

TL;DR: treat the export as a backup, and move the important parts into a human-friendly system.

r/personalfinance Ksrugi

checking to see if i'm on a good path and to ask opinions

hello all,

I recently got over my fear of investing and started a Traditional IRA and maxed out my contributions for 2025 and 2026 with an 80/20 split of VXUS/VTI on my Vanguard account

I have no debt as well as an emergency fund to cover three months of living expenses

I am a freelance artist so income fluctuates each year, but I grew my wealth $35k last year after taxes and each job I work makes contributions to pensions in two labor unions I am a member of

I am a disciplined person and live below my means. I have a surplus of $50k in a HYSA and I am wondering if I am on the right track or if there are other options I should consider

I am in my mid-30s and live in a HCOL area.

thank you

r/blackmagicfuckery BlackRogue17

Knife into head

r/LiveFromNewYork guyute2112

Episode recommendations?

Just got Peacock. Got any favorite or go-to episodes to watch on there? Thanks!

r/whatisit KandaleeIsEasy

I cannot figure this out

So I was feeling kinda creeped out in my backyard, so before going outside I decided to look at my camera, and that's when I saw this. It comes into the picture right after a train horn. Can anyone tell me what this is?

r/ChatGPT LoganixSEO

chatgpt image generation has seriously dropped off

chatgpt's image generation is horrible right now. something's definitely changed in the last couple of days. serious, SERIOUS drop off in quality. seems to correlate with the axing of SORA. either they've rolled out a botched update, or there's some cost-cutting going on

i'm finding it won't follow instructions, doesn't follow reference images, and the outputs are horrible. genuinely feels like we've gone backward 12-18 months

r/LearnUselessTalents Agile-Campaign9996

How can I burp on command?

This is going to piss me off so much. I want to be able to burp on command but I can't fucking do it and it's pissing me off. Like how do people do it?😭😭😭

r/ChatGPT Traditional_Gap_7041

PSA: Always manually delete saved memories

If you ask ChatGPT to delete your saved memories, it will always say it has. In my experience, this is a complete lie and the AI did not delete the saved memories. Go into preferences and manually delete the saved memory you want purged.

r/ClaudeCode YoghiThorn

Claude Code clobbering its own messages

Has anyone else experienced this bug? I find when I'm talking to Claude and it has given me a long response, I have to do a long multi-line response to handle it all in a single message. And generally I want to handle the response in a single message because it's so verbose that I will miss things unless I do.

Lately I'm noticing that Claude will generate a response, get part way through it, and then be interrupted by ANOTHER generation of what is clearly a response to the same query - a similar output but obviously non-deterministic.

I've experienced this across Claude Code and via cc-connect (which is just Claude Code in a slack costume). It's possible it's a local setting I guess, but I don't monkey too much with hooks inside a session, mainly on session start and end. Anyone else experiencing this? I'll try to find an example.

r/aivideo Top-Valuable-4316

Anyone hopeful Gears E-day will be good?

r/SideProject Tasty_Librarian_6389

Built a free tool for anyone navigating Australian immigration law — would love feedback

Built a free tool for anyone navigating Australian immigration law — would love feedback

I studied migration law in Australia and kept running into the same problem — AustLII, Legendcom and the Home Affairs portal are incredibly hard to use if you're not already an expert. Finding a single provision could take 20 minutes of tab switching.

So I built Migragent — an AI research tool specifically for Australian immigration law.

What it does:

→ Search any provision of the Migration Act 1958 in plain English

→ Live AustLII search — cites actual section numbers and cases

→ All major visa subclasses explained in one place (partner, skilled, protection, bridging and more)

→ Document checklists for each visa type

→ Generates drafts — cover letters, statutory declarations, AAT review submissions

It's built for migration agents and lawyers but honestly useful for anyone trying to understand the Australian visa system — whether you're applying yourself or helping someone else.

10 free queries to try it at migragent.com.au

r/DunderMifflin Direct-Radish8437

I guess they should have purchased insurance from Mr. Grotti since somebody was not extremely gruntled!

r/StableDiffusion globo928

What is the best way to make a LoRA

What is the best way (and which tool) to make a LoRA of a person, so I can create different images without losing consistency in body and face?

r/SipsTea Anantmemes

Idea

r/leagueoflegends BayesOptimalAgent

Early game is volatile & snowballish

I've been trying to determine why LoL is so much more frustrating to play than most other games I play, and I've come to the conclusion that it's because of the extremely high cost of mistakes in the early game, which snowballs beneficially for your opponents and anti-snowballs for you

A level 1 death sets you back so far it's incredibly frustrating, because you know you're going to be miserable for a long time trying to claw your way back

Every action you take in the early game is so consequential that it becomes super frustrating when things don't go well, and I just tilt immediately

I don't have this issue in other games I play

How do you avoid tilting in these situations?

r/LocalLLaMA BigYoSpeck

Gemma 4 seems to work best with high temperature for coding

I've been playing with Gemma 4 31B for coding tasks since it came out and been genuinely impressed with how capable it is. With the benchmarks putting it a little behind Qwen3.5 I didn't have high expectations, but it's honestly been performing better with what I've thrown at it so far

This has all been at the recommended parameters (temp 1.0, top-k 65 and top-p 0.95). With the general consensus being that for coding tasks you want a lower temperature I began repeating some of my tests with lower values (0.8, 0.6 and 0.3) but found if anything each step down made it worse

So I went up instead. First 1.2, and it did a little better on some. Then 1.5 and on a couple of harder coding tasks the results were massively better

I've yet to try it in something like Cline for real coding tasks, but has anyone else found that its code generation improves with higher temperatures?
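The comparison described above (repeating the same coding tasks across a ladder of temperatures) can be sketched as a small harness. The `generate` callable and the task format here are hypothetical placeholders; plug in whatever calls your local server:

```python
def sweep_temperatures(generate, tasks, temps):
    """Run each coding task at each temperature and tally the pass rate.

    `generate(prompt, temperature)` is whatever calls your local model
    (llama.cpp server, LM Studio, an OpenAI-compatible endpoint, ...).
    Each task is a (prompt, check) pair, where check(output) -> bool
    decides whether the generated code passed.
    """
    results = {}
    for t in temps:
        passed = sum(1 for prompt, check in tasks if check(generate(prompt, t)))
        results[t] = passed / len(tasks)
    return results
```

Running each task several times per temperature before comparing would smooth out sampling noise, since a single run at temp 1.5 can simply get lucky.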

r/ProductHunters Time-Creme1115

Incognito ChatGPT works better as a consulting tool than normal mode

ChatGPT helped me build most of my startup.

I used it for: website structure, features, pricing,

and many of the core product decisions

Everything was decided with ChatGPT involved.

Then I tried something different.

I opened ChatGPT in incognito mode and asked it again about the same things.

Same product. No context.

I asked it to review: the features, the website design, the pricing, and the overall direction.

I also asked it to evaluate who is building this startup and whether anything about me or the product is visible online, to understand how much I should focus on building more presence.

I even asked it to “look at the website” from an external perspective and tell me what is visible, what is not, and what a new user would actually understand.

Then I went step by step through all the decisions I had made during the process and asked it to reassess them.

The difference was clear.

With context, ChatGPT tends to support your direction.

Without context, it behaves more like an external reviewer: more critical, more objective, more focused on clarity and gaps.

That second mode turned out to be more useful for consulting.

It challenges assumptions instead of reinforcing them.

This is also shaping the idea behind the project I’m building: a system that can generate and manage full operational setups using AI.

r/LocalLLaMA Kingofengland97

LMStudio downloads breaking wifi connection

I have a rather strange issue. When I try to download a model using the app on Windows 10, my internet connection stops working and I end up having to disconnect and reconnect the wifi to get back online. This happens every single time I try to download a model. These disconnects don't happen with any other programs or downloads through the browser. Is anyone having any issues like this and is there any setting in LMstudio that could prevent this? I've tried turning on and turning off the hugging face proxy setting and that didn't do anything. It's really annoying

r/interestingasfuck Flat-Age-007

Disgruntled employee sets entire warehouse on fire in Ontario, California. Warehouse was worth the size of 10-12 city blocks!!

r/StableDiffusion equanimous11

What is your prediction for progress in local AI video generation within the next 2 years?

How good will AI models be for local AI video generation in the next 2 years if RTX 5090 will still be the leading high end consumer GPU?

r/leagueoflegends aussypat

How to Tilt the Enemy Jungler 101

r/BrandNewSentence FishDispenser2

Traumatized by the McDonald's squirrel jerker

r/LiveFromNewYork nh18wheeler

Jeremy Culhane on Last Meals tomorrow

If you haven’t watched this before, it’s made by the folks behind Mythical/Rhett and Link, and it is one of the better YouTube interview shows for sure. Josh is on the level of Sean Evans.

I'm surprised Jeremy got on so soon, given some of the celebrities who have been on before, but I'm looking forward to it, and that's the goal here.

r/DunderMifflin fruity_humor

this might be her best look in the entire show (besides casino night and her wedding)

r/whatisit Accurate-Fisherman68

Small green tree fruit?

Grows on the neighbor's tree and hangs into our yard, dropping these.

The inside flesh is light green and has one single seed.

r/fakehistoryporn SmokinBacon

Your Mom’s 2nd adult toy (1981)

r/SideProject mmoustafa

I'm done building products for humans

Look, maybe I'm just tired of answering customer support tickets, but can you blame me?

AI agents today have the knowledge of a million senior engineers but the computer access of a grandma with her mouse unplugged. The internet was built for human eyeballs and fingers. Everything is behind a React UI, Cloudflare challenge, captcha, 2FA, and flows that assume a human is sitting there smashing buttons.

So my idea is simple: build products that agents can actually signup, pay and use without needing a human in the loop. No mouse required.

My first product is instapi.co, an Instagram data API. Agents can just curl https://instapi.co/api/start and follow the instructions to sign up and get live Instagram data without ever opening a browser. The API has some neat features for agents, like automatic image and video content parsing, and a metadata object with useful service information on every request.

Still early, but I’d love to hear what people think. Try giving it to your agent, I have 10 free credits for each signup right now (please don't abuse it 🙏)

r/ProgrammerHumor MrMike397

moreBugs

r/explainlikeimfive isranon

ELI5: strange one: why are things shaped like they are?

like, why are brains not spheres, why are people this shape, why is anything the shape it is?

"dna decided" yes but why did it make such a decision?

r/funny OhAmbroz

My coworker’s last day was today, brought him this cake

r/TheWayWeWere idestroycat

Just married, 1966

Nan was 22, Opa was 26.

r/CryptoCurrency tupidataba

5 years ago this video reached the same conclusion as NYT about Satoshi

r/SipsTea Tracheid

Do not let the cat in

r/PhotoshopRequest kingfisherwizard

Please remove the man in the background :)

Will pay $10! Thanks in advance

r/ChatGPT Dismal-Rip-5220

How to manage prompts with Playground in OpenAI

just read about features of the OpenAI playground that make managing prompts way easier. They have project-level prompts and a bunch of other features to help iterate faster. Here's the rundown:

Project level prompts: prompts are now organized by project instead of by user, which should help teams manage them better.

Version history with rollback: you can publish any draft to create a new version and then instantly restore an earlier one with a single click. A prompt ID always points to the latest published version, but you can also reference specific versions.

Prompt variables: you can add placeholders like {user_goal} to separate static prompt text from instance specific inputs. This makes prompts more dynamic.

Prompt ID for stability: publishing locks a prompt to an ID. This ID can be reliably called by downstream tools, allowing you to keep iterating on new drafts without breaking existing integrations.

API & SDK variable support: the variables you define in the Playground ({variables}) are now recognized in the Responses API and Agents SDK. You just pass the values for each variable when calling.

Built-in evals integration: you can link an eval to a prompt to pre-fill variables and see pass/fail results directly on the prompt detail page. This link is saved with the prompt ID for repeatable testing.

Optimize tool: this new tool is designed to automatically improve prompts by finding and fixing contradictions, unclear instructions, and missing output formats. It suggests changes or provides improved versions with a summary of what was altered.

I’ve been obsessed with finding and fixing prompt rot (those weird contradictions that creep in after you edit a prompt five times). To keep my logic clean, I’ve started running my rougher drafts through a tool before I even commit them to the Playground. Honestly, the version history and rollback feature alone seems like a massive quality-of-life improvement for anyone working with prompts regularly.
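The {placeholder} idea is easy to mirror locally before anything reaches the API. Here is a minimal sketch of that kind of substitution; the strict fail-on-missing behavior is my own choice for catching typos early, not necessarily what the Playground does:

```python
import re


def render_prompt(template: str, variables: dict) -> str:
    """Substitute {placeholder} variables into a prompt template.

    Raises KeyError if the template references a variable that was not
    supplied, so a typo fails loudly instead of shipping a literal
    '{user_goal}' to the model.
    """
    def repl(match):
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"missing prompt variable: {name}")
        return str(variables[name])

    return re.sub(r"\{(\w+)\}", repl, template)
```

Keeping the static template under version control and passing only the instance-specific values at call time is the same separation the post describes, just enforced on your side as well.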

r/ClaudeAI Mysterious_Fish2204

Can custom agents and parallel tasks potentially brick a computer?

I developed an application called Conductor that allows for pre-planning tasks and, at a high level, has a self-learning conductor agent that orchestrates and creates task plans and custom agents at affordable cost 🤣. It can run as many tasks as prompted, and I do most of my own local coding from this UI.

Last night I was running a ton of agents and my computer was around 3-4% battery. Couldn’t find the charger so I just said fuck it and let the agents try to finish… but my computer just died at 3%. Figured it’s a problem for tomorrow it’s just dead.

Today I get to work and the thing is completely bricked. Can't even get to the BIOS; all it does is spin the fans at max speed. I'm guessing I just need to reset the CMOS, but how could this happen? Could it be anything related to Claude Code, or just pushing power limits too hard on low battery?

TLDR

Bricked my computer running a ton of agents across projects on low battery. Wondering if Claude Code could cause this for any reason; still waiting to see if resetting the CMOS fixes it.

r/EarthPorn Adventurous_Fuzz

Cuyahoga Valley [OC][4672x7008]

r/PhotoshopRequest Haunting-Cabinet-523

Photoshop food

Can someone make the fish look less burnt? Bonus points if you can make the grill marks run horizontally instead of at a diagonal

r/whatisit barcodetat2

Pieces of metal found on a California beach

r/LocalLLaMA RoyalMood4218

How to implement AI on a new Unraid Server

Hey guys, I had an Unraid server years ago before the AI boom. I got back into it and now have an Intel Core Ultra 245K, 64GB DDR5, and a 5060 Ti 16GB, with a 2TB cache SSD and an 84TB array. Any tips on where to start, what community apps or Docker Compose templates to use, etc.? I feel absolutely overwhelmed figuring this out lol.

r/Seattle ConstantlyLearning57

Why the bait-n-switch

r/ClaudeCode kushcapital

Claude rate limits finally pushed me to build a real cross model system with shared memory

I’d been bouncing between Claude, ChatGPT, and Gemini for a while, but the current Claude rate limit situation finally forced me to clean it up properly.

The main problem wasn’t switching tools.

It was losing context every time I switched.

A few people in my last post said something that ended up being the real unlock:

memory and skills are just files

So I stopped thinking about this like a chat problem and started thinking about it like a system design problem.

What I built:

  • central LLM wiki for shared context
  • shared skills folder used across models
  • one main agent file + one main md file controlling the overall setup
  • repo-specific agent files + CLAUDE.md files for local context

So every project has its own context, but everything still points back to one main source of truth.

And it’s working way better than I expected.

My wiki is getting built out super nicely, all my models are using it much more consistently, and the whole setup feels actually scalable now.

This is the first cross-model setup I’ve had that doesn’t feel like chaos.

If you’re dealing with rate-limit bouncing too, I’d seriously recommend thinking in terms of:

shared wiki + shared skills + global control files + repo-specific overrides

That framing helped a lot.
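For concreteness, the kind of layout that framing describes might look like this (names are illustrative, not the poster's actual files):

```
~/llm-memory/            # one repo, cloned on every machine
  wiki/                  # shared context: decisions, glossary, conventions
  skills/                # shared skills folder used across models
  AGENTS.md              # main agent file: global behavior rules
  MEMORY.md              # main md file: the overall source of truth

~/projects/app/
  AGENTS.md              # repo-specific agent file (local overrides)
  CLAUDE.md              # local context; points back at ~/llm-memory
```

Since everything is plain files in a repo, syncing across machines and models is just git, and each model reads whichever files its tooling supports.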

How are you guys handling this right now?

Are you still mostly relying on chat history, or have you moved to files as memory?

Are you using one master agent file, or keeping everything repo-specific?

And if you’ve found a cleaner cross-model setup than this, I’d genuinely love to hear it.

If people want, I can share the exact structure.

r/VEO3 semi_charmed_kinda

Voice Ingredients - Wow!

The voice stays the same across all generations, no matter how many - finally. Amazing! This changes everything. Thank you Google.

r/AskMen CalmTie9341

What are some ways women are secretly perverted?

Excluding obvious things like pedophilia.

r/geography CalpurniaSomaya

A third of earth's habitable land is used for animal agriculture

r/DunderMifflin ITrCool

How long would it have been between Michael proposing to Holly and Michael’s last day?

He and Holly would both need time to sell things, move stuff to Boulder, sell the condo and move or sell their car (Michael obviously gave up the company-leased Sebring), plus Michael training DeAngelo after Jo and team interviewed people and hired him on, plus Michael’s last Dundees ceremony.

I’d presume a month or two passed between the proposal and Michael’s last day that we just don’t see? All of the above would take time to get done and we could assume Michael lived out of a suitcase and a hotel during his last few days in Scranton so he could just fly straight out after he leaves the office for the last time.

He isn’t transferring to a new job so he’s not under time pressure to get to Boulder ASAP.

r/personalfinance chris383735

National debt relief

I fear I made things worse for myself. My debt includes:

  • American Express $8,000: settled by NDR at $470/month for a year
  • Sallie Mae $20,000: $1k overdue
  • College Ave $30,000: $2k overdue
  • Discover $8,000: $1k overdue

My credit went from 700 to 480; it's been 4 months since I applied with them. I'm not quite sure what the best route is. Should I pay my private loans and Discover card and unenroll them from NDR? The only issue is I'll be paying about $2,000 monthly toward debt because of this. If I unenrolled and paid those student loans, which are already 4 months overdue, would it help my credit, or am I screwed?

r/BrandNewSentence kwenlu

How to get these cocks off my stainless steel?

r/ClaudeCode ZealousidealOil8155

When do you think Cloud Mythos will be released for regular people?

and what would you do with it? Interested in your opinions, guys

r/comfyui SpeedStreet4047

Is there a per-workflow analog of the "--fp16-unet" CLI option?

Hello! I'm new to ComfyUI. I found that my Tesla V100 speeds up around 2.5x with the global "--fp16-unet" option when running LTX-2.3, but Qwen-Image then produces a black image.

Here's the question: is there any per-workflow analog of that option, so I don't have to restart the ComfyUI server every time?

GGUFLoaderKJ with the "float16" dequant type did not do the trick. It works, but with no speed-up.

r/OldSchoolCool TheRockyBalboaSaga

You’re definitely Old School Cool if you remember these three beauties from 1978.

r/whatisit Putrid-Investment919

What are these stains that appeared on my clothes after washing and drying them?

I'm assuming it's maybe detergent or something, but it looks like grease? I definitely didn't dump grease all over my shirt, so what is this stuff? This is my favorite shirt too; how do I get it off?

r/whatisit Jefe_Winski

This was in a can of Budweiser beer!

I was drinking a canned Budweiser and this came up in a sip, which I immediately spat out. Anyone have a clue what it could be?

r/explainlikeimfive ImThatChigga_

ELI5. Where do rich people get their money to repay loans against collateral?

So you borrow 100k you spend 50k now you have to repay 50k plus interest but you have no funds?

r/DunderMifflin GlenCocoChanel

What did Isobel see in Dwight?

she has so much interest in him at the wedding, and then it phases out, was she just curious about him?

r/leagueoflegends Independent-Break585

Do you miss playing league?

I've quit around 8 years ago. I do play wild rift time to time. Is it worth it coming back? Though I'm in my mid 30's my mechanics would suck so bad. lol

r/explainlikeimfive l-a_w

ELI5 how do parrots “talk”?

Are they conscious of what they’re saying?

r/whatisit GretaVanFrankenmuth

What are these for?

Got a new laptop cover. Came with no paper work/instructions. These stickers were included ….but I have no idea what they’re for…

r/TwoSentenceHorror electrovert

“I baptize you in the name of the Father, the Son, and the Holy Ghosts," the priest announces.

I panic when he keeps our baby submerged underwater and starts chanting in another language.

r/findareddit robotrippinD12

Having my first boots made..

Tried to post them in startup groups but apparently it was the wrong place because they were removed. Not sure why.

Back in December, I designed my first pair of boots and sent the design over to a factory in Portugal. They're made of very durable, luxury materials such as bemberg satin, hemp printed canvas, and quilted viridis leather, all of which are eco-friendly and completely cruelty-free, which is what all of my clothing will center around (I'm an animal lover). I'm paying a lot for the sample, but I'm having a lot of doubts. Because they are such high-quality materials, I'm sure the production of my first 30 pairs will be quite expensive… and therefore the product will be expensive. I'm really worried that if they are expensive, no one will buy them. Am I going to fail? 🥲

r/whatisit Ahoward1010

Old-ish Wrought Iron…thing

This thing has been floating around the garage for 30 years. I don’t even know how to describe it. It’s not balanced, there’s no other parts, and it’s somewhat large. Pics included. Good luck.

r/ForgottenTV PeneItaliano

The Best of Broadway (1954-1955)

The Best of Broadway is a 60-minute live television anthology series that aired on CBS Television on Wednesdays at 10 p.m. Eastern Standard Time from September 15, 1954, to May 4, 1955, for a total of nine episodes. Each show was broadcast live in color from New York City, was an adaptation of a famous Broadway play, and included commercials for Westinghouse featuring Betty Furness. Using a "giant new studio", plays were presented in front of a studio audience, which contributed to a Broadway-like atmosphere.

r/SideProject WheelOk525

2 months ago I was struggling to get clients as a solopreneur — this changed everything

2 months ago, I was stuck.

Posting content → no results
Working more → no sales
Trying harder → still nothing

Classic solopreneur loop.

The problem wasn’t effort.

It was messaging.

I didn’t know:

  • What to say
  • How to sell without sounding pushy
  • How to turn content into clients

So I started experimenting with AI — not just random prompts, but structured ones.

Here’s what changed:

  • I stopped guessing what to write
  • I created content faster (way faster)
  • My posts actually started converting

The biggest shift?

Instead of asking:

I started asking:

And I used AI to answer that.

One simple framework I used:

Problem → Emotion → Failed Attempts → Solution

Every piece of content I wrote followed this.

And it worked.

I ended up turning everything into a structured playbook with prompts + workflows I use daily.

If you want it, comment “PLAYBOOK” and I’ll send it over.

r/PhotoshopRequest IJustWantYouTo_Know_

Please remove the door in the background

Hi, everyone!

I loved this beach day outfit and would like the door in the background removed before I post these pictures later on this year.

I would prefer just a plain white wall or a continuation of the wall design that is already there.

Thank you!

r/ClaudeCode Fun_Bodybuilder_3175

Had to have a laugh when this came through my feed

r/PhotoshopRequest Ill_Standard5284

Help please

I have a project due on Sunday that requires clinical photos. I was supposed to do a presentation for class, but a couple of people didn't show up, so my nursing friends helped fill the spots. I need them to have different colored pants! They can't match me (the presenter). I tried to use AI myself, but the results came out distorted, and that will get me a fat zero. Will pay! Please and thank you!

r/personalfinance skyantelope

Better to pull retirement bc of job separation before or after moving to a non-income taxing state?

heyyy I may just be wording the Google searches wrong but I can't find an answer to this anywhere. inb4 I know it's a bad choice to lump sum withdraw. if I get a job before actually moving I can roll over into a new plan, if not I was gonna use it for apartment deposits and living expenses until I get one. not an ideal situation but I gotta get out of this state and my mom's house ASAP :[

I'm planning on leaving Idaho this summer, and I'll be quitting my job that has several retirement plans: PERSI (a pension I guess would be the best way to describe it and I'm not vested), 457b, and a choice 401k. I know I'll owe the 10% penalty, and the 20% federal withholding.

the 457b has about 4400, 401k is around 1400, and the persi is at 13100 as of posting if it's relevant
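
As a rough sketch of the worst case using the balances above and the 10% penalty plus 20% federal withholding rates mentioned (actual treatment varies; notably, distributions from a governmental 457(b) are generally exempt from the 10% early-withdrawal penalty, so this likely overstates the hit):

```python
# Balances as stated in the post
balances = {"457b": 4400, "401k": 1400, "PERSI": 13100}

total = sum(balances.values())          # 18900
penalty = total * 10 // 100             # 10% early-withdrawal penalty -> 1890
withholding = total * 20 // 100         # 20% federal withholding -> 3780
estimated_cash = total - penalty - withholding

print(total, estimated_cash)            # 18900 13230
```

Withholding isn't the final tax bill, so the real number settles out at filing time.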

I'm wondering if I would owe Idaho state income tax on it even if I waited until after officially becoming a Washington state resident to pull it all out? Or does it not matter because the accounts are from a job in Idaho?? Is it even worth it to try and finagle any amount with the taxes?? I may also not get a choice; they might just lump-sum cash it all out the second I quit.

ps: is it possible to ask a CPA questions like this without also doing a tax return?? sorry I'm an idiot I've never had complicated taxes before 😔

pps: for aforementioned tax reasons, at what point would I be officially considered a resident of WA? would it be after getting a driver's license or the day i sign the lease? first day I actually sleep there??

thanks all, I appreciate it

r/AskMen Same_Requirement_760

How would it make you feel if after saying you had a lot to do and were juggling a lot a girl said “such a hardworking boy”?

r/ClaudeAI Alarmed_Criticism935

I built a local server that gives Claude Code eyes and hands on Windows

I've been using Claude Code a lot and kept running into the same wall — it can't see my screen or interact with GUI apps. So I built eyehands, a local HTTP server that lets Claude take screenshots, move the mouse, click, type, scroll, and find UI elements via OCR.

It runs on localhost:7331 and Claude calls it through a skill file. Once it's loaded, Claude can do things like:

  • Look at your screen and find a button by reading the text on it
  • Click through UI workflows autonomously
  • Control apps that have no CLI or API (Godot, Photoshop, game clients, etc.)
  • Use Windows UI Automation to interact with native controls by name

Setup is three lines:

git clone https://github.com/shameindemgg/eyehands.git
cd eyehands && pip install -r requirements.txt
python server.py

Then drop the SKILL.md into your Claude Code skills folder and Claude can start using it immediately.
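
To give a feel for what "Claude calls it through a skill file" means in practice, here's a minimal sketch of building a request against the local server. The `/click` endpoint name and payload shape are hypothetical illustrations, not the project's documented API; check the repo's SKILL.md for the real routes.

```python
import json

# eyehands listens on localhost:7331 (per the post)
BASE_URL = "http://localhost:7331"

def build_click_request(x, y, button="left"):
    """Build the URL and JSON body for a hypothetical /click endpoint."""
    return {
        "url": f"{BASE_URL}/click",
        "body": json.dumps({"x": x, "y": y, "button": button}),
    }

req = build_click_request(640, 360)
print(req["url"])  # http://localhost:7331/click
```

Because it's plain HTTP on localhost, anything that can issue a request (not just Claude) can drive it.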

The core (screenshots, mouse, keyboard, OCR) is free and open source. There's a Pro tier for $19 one-time that adds UI Automation, batch actions, and composite endpoints — but the free version is genuinely useful on its own.

Windows only for now. Python 3.10+.

GitHub: https://github.com/shameindemgg/eyehands
Site: https://eyehands.fireal.dev

Happy to answer questions about how it works or take feedback on what to add next.

r/personalfinance user234556890

NY 529 Plan Student Loan repayment Advice

Hi everyone, I recently found out I'm the beneficiary of a NY 529 plan with about 20k in it. I've completed both undergrad and graduate degrees, with 58k in student loans. I would like to apply all 20k to the student loan debt, as I have no intention of returning to school in the future. From my research online, I see a 10k lifetime limit can be applied to student loans. Can anyone explain what options I have so I can get the maximum benefit from the plan? Realistically, what happens if I withdraw all the funds and apply them all towards my loan, exceeding the 10k limit? Thank you!

r/SideProject Admirable_Egg9660

Built client presentation decks in ~20 mins instead of 2–3 hours. Didn’t expect this to work this well

Do a lot of freelance design work and one thing that always drained me was client decks

Not the creative ones, just the standard “update”, “report”, “pitch” type stuff. Same structure every time but somehow still taking hours

Recently tried changing my workflow a bit instead of building everything from scratch

Now I basically:

  • map out the structure quickly
  • generate a first draft using an AI tool
  • then spend 20 mins refining instead of 2–3 hours designing

Made this one on Runable just to test it and honestly the output was already like 70-80% there. Spacing, hierarchy, all decent enough to tweak instead of rebuild

Still using Figma for actual design-heavy work obviously, but for this kind of stuff it feels like overkill now

curious how others are handling repetitive client work like this
are you still doing everything manually or found a better system?

r/AI_Agents cohix

I've been working to build amux, which is a terminal UI for running parallel containerized code agents and multi-agent workflows

For the past two months I've been working on amux (I'll link it in the comments), which started out as a simple tool to launch a code agent in a container in the current project I was working on. It then morphed into a tool for running several code agents in parallel.

I built it in Rust since I wanted to become more proficient (I've been a Go developer for 10+ years), with the help of Claude, of course. It has morphed into a tool that I'm basically in all day long, since I can monitor all of the work my Claude instances are doing across several work items and projects.

You can run it as a simple CLI (run `amux chat` and it'll just launch a container mounted to the current directory and run Claude Code normally).

Or you can run it as a full-terminal TUI with support for multiple tabs, multi-agent workflows, a multi-agent status board, and more. It launches containers with an embedded terminal emulator, and it detects when an agent gets stuck and lets you know so you can switch to its tab and get it unstuck.

At this point the only limiting factor in how many agents I can run in parallel is how much CPU/memory my machine has to run all of the language toolchains (running 4 Rust builds at once takes down my M1 Mac Mini pretty hard, ordered a Mac Studio but it'll take weeks to arrive).

Anyways, wanted to share since it's become my core tool for getting stuff done with agents and I'm trying to put out a new release every week. This week will be v0.5 with Apple Containers support, a `--yolo` mode to make the agents run dangerously, and a 'headless' mode to run amux on multiple machines and control them from one terminal. Let me know what you think!

r/homeassistant oguruma87

SBC and panel for wall-mounted HA dashboard?

I'd like to build a couple wall-mounted panels for my house.

I'd like a 10-13" touch-screen panel which will replace/serve as my alarm panel. I may end up just using a tablet in a wall-mounted housing for this.

I'd also like a larger, 40"-ish panel (not touch-screen).

For the larger dashboard hardware, I'd like to end up with something that mimics the aesthetic of the Samsung "Frame" TV i.e. it sits very close to the wall. I'd eventually add a mmwave presence sensor to it to toggle it to display photos when nobody is within up-close viewing distance of it.

I suppose the most logical choice for the SBC is something based on a Raspberry Pi, but can anybody point me in the direction of a good source for panels that would suit my needs?

Specifically for the 40" panel I need something with decent-looking colors, but in a "barebones" style that I can build a frame around (without making it incredibly thick like if I were to build a frame around a normal TV).

r/personalfinance kkmartinnn

How much should I have in my savings (age 28)

Hi I’m 28 F and I live at home. I am a graduate student so I live on a scholarship stipend. How much should I have in my savings (hurt my feelings with your honesty)

r/SideProject CIG_Elevator_404

I got tired of having 4 different apps and paying almost 80 bucks/mo. So I built Velda, an app that combines fitness, nutrition tracking, recipes, and cycle tracking, all fed into an analytics engine.

https://apps.apple.com/us/app/velda/id6757939233

Every app I had was only marginally good at one thing, and being segmented made it impossible to get a complete overview. I also kept getting paywalled for simple features that didn't need offsite compute like charts and graphs.

I wanted to know if certain macros were affecting my lifts depending on how I responded to the exercises, or whether I could get better results by building a custom routine with specific times each month to deload.

I built Velda to be modular. Each module functions offline-first, either independently or in communication with the other modules, with no ads, no 30-minute onboarding, and a simple interface. Regardless of account tier, every person gets an optional monthly comprehensive analysis, so they can still get a clear picture of their progress outside of all the included graphs.

If you decide to subscribe, you get convenience and some neat features to save time; if you don't, and want to spend extra time, you can get the same results. No one needs to spend almost 100 bucks a month across 4 apps to track their lifestyle, or be relegated to using a spreadsheet. This was my answer to the price gouging.

r/ClaudeAI n3cr0n411

The duality of man

r/LiveFromNewYork BlissfulEating

“Do you like my baaaahdy, Cahhhlin?”

My partner and I say this to each other all the time. I wish they’d make it a recurring sketch! Did this segment hit anyone else as hard as it hit us?

r/VEO3 Background_Dance5929

Google Flow Image to Video Issue

Hi everyone. Hoping someone can help me here. I have a soccer jersey business, and I've mastered Nano Banana to produce amazing images. However, I've been struggling non-stop with creating videos on Google Flow. If I just upload an image of the jersey/design and input a prompt (built all prompts on ChatGPT/Gemini), it always changes the design, logos, structure, etc. in some way. Uploading an image of a model wearing the jersey helps, but then that model looks too AI and walks stiffly in the video. Looks too fake.

Anyone have any tips, prompt templates, negative prompt additions, etc. to help avoid this? Right now I'm uploading 1 or 2 images as ingredients and giving a detailed prompt, but nothing is working. We just want to be able to create lifelike, accurate videos representing our jerseys when we release designs. If not, is there an app anyone recommends using instead of Google Flow to get around these issues/limitations? Any help/feedback is much appreciated!

r/ClaudeAI solzange

I tracked what 31 Claude Code subscriptions would actually cost through the API. $80K total a month. The top user alone: $18K.

I've been tracking estimated API costs for Claude Code users on a small leaderboard of about 30 people.

The numbers are pretty eye-opening. The average estimated API cost across the board is 25-50x higher than the subscription price. I'm #14 at $1.5K/month, and I'd consider myself a pretty normal user; I pay $100 a month for the Max plan.

For context, a Forbes article from March cited research showing that a $200 subscription buys roughly $5,000 worth of inference. Our data aligns with that and then some.
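
The ratios above are just division over the figures in this post:

```python
def cost_ratio(estimated_api_cost, subscription_price):
    """Dollars of estimated inference bought per subscription dollar."""
    return estimated_api_cost / subscription_price

# My own numbers: ~$1.5K/month of estimated API usage on the $100/month plan
print(cost_ratio(1500, 100))  # 15.0

# The Forbes-cited figure: ~$5,000 of inference on a $200 subscription
print(cost_ratio(5000, 200))  # 25.0
```
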

It makes sense why Anthropic is moving toward usage-based pricing for third-party tools. The math just doesn't work long term at these ratios.

Curious where you think this is headed. Do you think flat subscriptions survive or does everything eventually go usage-based?

Leaderboard: promptbook.gg/builders

r/fakehistoryporn Xenolog1

February 18, 1943: The Last ‘Sieg Heil’

r/WouldYouRather harublue82

Would you rather work Monday to Wednesday or Tuesday to Thursday?

I’ll be working a part-time corporate job. It will be three days out of the week, with Tuesdays (most likely) being in the office and the rest being remote. Initially I thought having Thursday and Friday off would be best for a loooong weekend, but maybe working mid-week is better?

IMPORTANT TO NOTE: I might pick up a part time job at a retail or cafe to fill the two days I’m not working my corporate job.

I’m stuck and need to decide soon. What are your thoughts? Help me out! Thanks :)

r/personalfinance ISmiteChampions

Should I cash out my life insurance policy?

So I'm 30 years old and still have a life insurance policy my parents started when I was a baby. I have no plans to marry or have kids, so having a life insurance policy seems like a waste to me. I pay $222 a year for it. The current cash value is $9k and the death benefit is $47k. I don't NEED the money, but an extra $9k would be nice, especially since I just bought a house.

r/Art gopalsk86

Trail, Rohit S K, Pencil, 2026

r/TwoSentenceHorror DrawingEastern6765

He broke my heart the day he called me over for dinner.

As I lay dying with a hole in my chest, I saw his whole family devour their portions of it.

r/fakehistoryporn Weak_Imagination_996

On April 7, 1899, French painter Lionel Royer completed his famous oil painting "The Fall of the Roman Empire."

SortedFor.me