AI-Ranked Reddit Feed

5000 posts

r/StableDiffusion SunTzuManyPuppies

Built a local browser to organize my output folder chaos -- search by prompt, checkpoint, LoRA, node type, etc

Hey r/StableDiffusion

I've posted earlier versions of Image MetaHub here before, but it's grown a bit since then, so I figured it was worth sharing again.

I originally made it for myself (and still use it that way, actually), because my own output folders had turned into chaos.

The core idea is still the same: a local desktop app that lets you search/filter/organize your images by generation parameters like prompt, checkpoint, LoRA, nodes, etc.

Since the last time I posted, there are some useful new features such as node-type search, explicit lineage for img2img/inpaint/outpaint (it shows which images were generated to/from other images), ratings, collections, etc. So it's gone a bit beyond "metadata browser" territory at this point.

I've seen a few other tools show up around here lately, including a couple of IMH forks, which I think is great! Some go more in the semantic-search direction, some focus more on integration with specific tools... IMH is still pretty much my own take on the problem: a local, generator-agnostic library tool for people who have generated too many images/videos and want to organize them.

Full disclosure: there is a 'Pro' tier that I made to support development, which includes some additional features like integration with ComfyUI/A1111, node-based workflow inspection, and a couple of other things aimed mostly at businesses/power users, but the main functions are free and the app is open-source.

It currently supports metadata from ComfyUI, A1111, Forge, SD.Next, InvokeAI, Fooocus, Draw Things, SwarmUI, Midjourney downloads, and a few others.

So yeah, that's basically it. I built it because I needed it, kept adding whatever was missing for my own use, and now I'm sharing it again in case it helps anyone else here dealing with the same mess.

You can get it here: https://github.com/LuqP2/Image-MetaHub

--

Also, I made a Discord server. It's still small and quiet, but you can reach me there directly for questions/support/updates or whatever: https://discord.gg/taRtMyHrCK

Cheers

r/VEO3 NivekYssej

Unable to download full extended video on Veo

I recently used VEO3 to extend a short video, but I'm unable to download the entire project as a single, continuous file. Currently, I have to download the extension segments individually. Is this a known technical limitation?

r/ClaudeCode spacegirl54321

What ClaudeCode version are you running and why?

I'm running 2.1.68 because I think it burns fewer tokens, but I might be crazy. I definitely want to get to use the 1 million context without it eating all my tokens, though. What about you?

r/LocalLLaMA mrlomelisai

GPU costs are quietly killing my small AI project, anyone else? Built something simple that actually shows the real damage?

Hey everyone,

Running a small AI project right now and the GPU/compute costs are honestly brutal. Every fine-tune, every inference run… it adds up way faster than expected and eats into runway like crazy.

I got tired of watching money disappear into the cloud without really knowing how to fix it, so I started AluminatiAI.com.

It's a simple way for smaller teams and solo founders to slash those GPU costs significantly (we're seeing 40-60% savings in early tests) by connecting you with better hardware options and smarter group deals.

It’s early days and completely free to try right now, no credit card needed. If you’re a small AI company or indie builder getting crushed by compute bills, head over to https://aluminatiai.com and sign up in 30 seconds.

We're looking to focus on one smaller group to help ASAP! Reach out if this could be a way for you to grow.

Would love to hear from others in the same boat:

What's your biggest GPU or compute headache right now? How much are you spending monthly? How are you currently resolving your cost pain point?

Looking forward to chatting with some of you.

Cheers

r/ClaudeAI Isedo_m

Claude code or Claude inside Cursor?

Hi folks, I've been using Cursor with Sonnet and Opus for a while now. I was wondering: how many of you use Claude inside Cursor, and how many use Claude Code directly?

Pros and cons?

r/ChatGPT ConstructionNo625

What ChatGPT version is Apple using for its promo material?

I was looking at Apple's MacBook Air, and I noticed that the ChatGPT interface was there, but it looks very different. I don't have many of the options on the top or bottom bars for changing things related to my prompt, so are they using a different version? Or is this just a premium version that other users have?

First photo = Apple's website

Second photo = my ChatGPT screenshot

r/ClaudeCode Ok_Ant5462

Claude Code got un-nerfed yesterday?

For the past few months there has been a steady degradation in quality. I've experienced it along with everyone else -- ignoring rules, guessing instead of looking for answers, making fundamental mistakes that cost me a full day of work.

Yesterday I fired up one of my usual prompts for some analysis and, to my utter surprise, it carefully analyzed logs, followed my instructions, and provided an extremely thorough, actionable analysis. I haven't had any laziness issues since. It's even been more effective past 200k context, which is where, historically, it became pretty much entirely useless.

Too soon to say, but possibly fixed? Maybe just better because the model can think more now that people are jumping ship? I'm curious to know what others are experiencing.

r/ClaudeCode ahmadulhoq

Why I store my AI agent's knowledge in a Git branch instead of a system prompt

System prompts reset every session, are per-developer, and don't scale across tools. So I tried a different approach — store everything the agent needs to know in an orphaned Git branch mounted as a .memory/ worktree. Plain markdown files, pushed and pulled like any other branch, shared across the whole team.
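
If you want to try the bare mechanism by hand, it boils down to something like this (an illustrative sketch with placeholder names, assuming a remote called origin; this is not the actual agentskel installer):

    # create an empty orphan branch without touching your working tree
    EMPTY_TREE=$(git hash-object -w -t tree /dev/null)
    git branch agent-memory "$(git commit-tree "$EMPTY_TREE" -m 'init agent memory')"
    git push -u origin agent-memory

    # mount the branch as a .memory/ directory next to the code
    git worktree add .memory agent-memory
    echo ".memory/" >> .gitignore   # keep the mounted worktree out of the main branch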

What lives there: your codebase map (every module, class, function), conventions, past mistakes, architectural decisions, things that must never change, what the agent was doing last session. Every developer on the team gets the same knowledge base. Change tools — Claude today, Cursor tomorrow — same knowledge, no re-setup.

On top of that, it enforces methodology structurally. Not suggestions — gates. No production code without a failing test first. No fix without a confirmed root cause. No implementation without an agreed spec. Rules include rationalization resistance so the agent can't reason its way around them.

That's agentskel. MIT, plain Markdown, installs on any existing project without touching application code.

https://github.com/ahmadulhoq/agentskel

Curious whether others have tried Git-native approaches to agent context — and what tradeoffs you've hit.

r/ClaudeAI Limp_Database8609

Claude as your enterprise logic & orchestration layer

Is anyone using Claude as their logic and orchestration layer across their enterprise or large teams? If so, good, bad, ugly?

r/SideProject Successful_Draw4218

Applied to 400+ jobs using my own AI agent

For fun 😅

I decided to test how far AI automation can really go.

So I built my own AI agent… and made it apply to 400+ jobs automatically.

Here’s what it did:

I uploaded my resume

It scanned 500+ fresh job postings (last 3 days)

Scored each job based on how well I matched

Picked the best ones to focus on

Then it got crazy…

It generated 400+ custom resumes (one per job)

Each one tailored specifically to the role

Clean, well-designed PDFs

Next level:

It found actual hiring manager emails

Used enrichment tools to get real contacts

And then…

It wrote personalized emails in my tone

Sent all applications via Gmail

Attached resume + portfolio to each

No copy-paste. No templates.

Every email felt human.

Result?

I’m already getting replies from companies about open roles 🤯

What surprised me:

The scale is insane

Personalization still works (even at 400+)

AI can remove 90% of job hunt effort

I genuinely didn’t think this was possible at this level.

Now I’m wondering…

👉 Would you trust AI to apply for jobs on your behalf?

👉 Or does this feel too automated / risky?

If people are interested, I can break down the full system.

Also just created a quick form if you want early access or details:

https://app.youform.com/forms/g8ojedck

r/ClaudeAI aderegil

Labs for Claude Certified Architect Foundations Exam

While preparing for the exam I engineered 6 labs, one per scenario, covering all 5 domains and all 30 task statements. Each one walks you through building working, runnable code step by step, for hands-on practice with the architectures the exam covers.

  • Lab 01 - Customer Support Resolution Agent
  • Lab 02 - Code Generation Workflows
  • Lab 03 - Multi-Agent Research System
  • Lab 04 - Developer Productivity Agent
  • Lab 05 - CI/CD Integration
  • Lab 06 - Structured Data Extraction

https://github.com/aderegil/claude-certified-architect

https://i.redd.it/ro851zwqilug1.gif

Hope it helps.

r/ChatGPT DueMathematician9213

Does anyone remember chat GPT being gBt?

I feel like I quantum leaped into a different reality. I remember Chat GPT being Chat GBT. Does anyone else remember this?

r/ChatGPT More-Station-6365

ChatGPT confidently gave me completely wrong information and I almost did not catch it

Asked it something straightforward. It answered immediately, no hesitation, sounded totally reliable.

I was in a rush so I just used it without double-checking. It turned out it had just made up specific details and presented them like established facts. Confidently wrong.

Had to redo everything from scratch which took way longer than verifying it the first time would have.

How do you guys actually know when to trust what it tells you and when to go verify it yourself?

r/aivideo BigTutor6739

Freed From Desire (AI cover)

r/homeassistant sendcodenotnudes

How to regain control over my Tuya devices?

A few years ago I bought a Tuya wall switch and successfully paired it with HA via localtuya. It worked great for years, so well that I completely forgot how I paired it.

I realized that the robot vacuum wandering around the apartment is a Tesvor S5 Max, a Tuya device. I would like to connect it to HA to automate the cleaning when there is nobody home.

My problem: I do not know how to do it without breaking the existing switch. It is our bedroom switch, and if it stops working I will be in serious trouble.

What I have:

  • localtuya configured with one device
  • access to platform.tuya.com, where I see one cloud project but the "Devices" tab is empty. I do see Home Assistant in the "Authorization" tab, though.
  • an app for the robot, which I do not really need (I think)

What is the safest way to regain control over the switch (which I guess means seeing it in the cloud project?), and add the robot?

r/OpenSourceAI Admirable-Earth-2017

Could a very capable open-weight LLM, in theory, be trained if enough people participated with their hardware?

There would be several technical problems, like software that can do this efficiently, which could be complex or impossible with current setups. But in theory?

And could it be hosted the same way, in theory?

r/LocalLLaMA Awkward-Educator6293

Simulating human cognition in LLM agents: a free 126K-word book covering memory decay, emotion engines, personality drift, and 12 other cognitive subsystems

Most LLM agents treat the model as the entire cognitive system. System prompt defines personality, RAG handles memory, chain-of-thought handles planning. It works until it doesn't, and when it breaks, there's no structural theory to debug against.

This book takes a different approach: treat the LLM as a translation layer and build the actual cognitive architecture around it. Memory with Ebbinghaus forgetting curves and reconstructive distortion. Emotion using OCC appraisal models and PAD mood space. Decision-making through GOAP planners perturbed by prospect theory. Personality as system-wide parameter modulation with drift detection.
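
To give a flavor of the memory piece: the Ebbinghaus curve models retention as R = exp(-t/S), where the stability S grows with each successful recall. A minimal sketch (my own illustration, not code from the book; the numbers are arbitrary):

    import math

    def retention(hours_since_recall: float, stability: float = 24.0) -> float:
        # Ebbinghaus-style forgetting: R = exp(-t/S); higher stability
        # means a flatter curve, i.e. a more durable memory trace.
        return math.exp(-hours_since_recall / stability)

    print(round(retention(24.0), 2))                  # ~0.37 one day after a single exposure
    print(round(retention(24.0, stability=72.0), 2))  # ~0.72 once recalls have raised S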

The underlying research comes from three fields that rarely cross-reference each other — cognitive science (ACT-R, CLARION, LIDA), game AI (The Sims autonomy system, Dwarf Fortress personality modeling, Halo behavior trees), and LLM agent engineering. 15 chapters, 120+ citations, working Python/JS code throughout. Free on GitHub.

This is a synthesis of existing research with working implementations, so I'd genuinely appreciate feedback on the substance: what's wrong, what's missing, and what doesn't hold up.

Here is the book.

r/arduino Bruh_ImSimp

Fire Detection, Tracking, and Suppression System: my second electronics project for high school! (for sale tho)

Sorry for the ugly video/photo quality. My phone ain't so good.

This was for my final research project in high school and is my second electronics project, my first with an ESP32 (38 pins) and an MLX90640ESF-BAB thermal imaging module. Anyway, I programmed it so that it powers the water pump on bootup to pre-fill the hoses with water. Initially I planned to use an Arduino Uno, but the ESP32 is just miles better.

It starts in "search mode", where it randomly moves around to search for heat signatures. Upon detection of heat (>120degrees for the current code [editable ofc]), it switches to "tracking mode" that follows the coordinates of the fire, and powers the water pump to spray water. Then, it turns off the pump and goes back to search mode once the heat signature falters. You can code it so the voltage the pump receives depends on the distance detected by the ultrasonic sensor—which is just limited to 11-13.7V for this unit via the unreliable L298N.

It is far from perfect, especially the ultrasonic and motor driver code and choice of parts, but I made it with a lot of fun and excitement.

It is powered by a 12 V VRLA battery at the back. A fuse and a buck converter are present too. The battery level indicator works, but I disconnected it temporarily, and I'm too lazy to open it up and put it back.

I am selling the research paper, visual/schematic diagram, and the code too (I need some cash for college). Note: I did use GPTs to help me with the code; it took me a whole month of fixing lol.

Please DM me to buy and help me get through college! I just got into my country's best universities, but we are very broke :<

r/LocalLLM SvReenen

I built an open-source Android keyboard with built-in local AI (Ollama, LM Studio, any OpenAI-compatible server)

Hey everyone,

I've been working on Deskdrop, an Android keyboard (fork of HeliBoard) that connects directly to your local LLM server. Instead of switching to a browser tab or a separate app, you get AI right in your keyboard, in any app.

What it does:

- Select text in any app and rewrite/translate/summarize it with one tap

- Inline instructions: type "This app is cool //translate to Dutch" and it rewrites in place

- Full conversation mode with streaming, model picker, and system prompts per chat

- 17 built-in tools (calendar, reminders, web search, navigation, phone calls, etc.)

- MCP support for external tool servers (I use it with Home Assistant to control my lights)

- Self-hosted Whisper for voice input

Runs fully local, but doesn't have to:

If you have an Ollama or LM Studio server running at home, Deskdrop connects directly over Tailscale or LAN. Everything stays on your network. It also supports vLLM, llama.cpp, KoboldCpp, Jan, Msty, or anything OpenAI-compatible. There's even on-device ONNX inference (T5) for fully offline use.
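
For anyone unsure what "OpenAI-compatible" means in practice: any server that answers the standard chat completions endpoint will work. For example, against an Ollama box on your LAN (host, port, and model name here are just examples):

    curl http://192.168.1.10:11434/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"model": "llama3.2", "messages": [{"role": "user", "content": "Hello"}]}'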

Don't have a GPU at home? No problem. Deskdrop also works with cloud providers like Gemini (free tier), Groq (free tier), OpenRouter (free models available), Anthropic, and OpenAI. You can start with cloud and move to local whenever you're ready.

Or use both: set up cloud fallback so when your local server goes down, everything automatically switches to cloud and reverts when it's back.

Security:

Since a keyboard sees everything you type, I took this seriously: API keys encrypted with AES-256-GCM, SSRF protection on fetch_url, all device actions (clipboard, calendar, calls) are opt-in and off by default, no telemetry, no analytics. Full details in the README.

Links:

- GitHub: https://github.com/SvReenen/Deskdrop
- Landing page with demo videos: https://svreenen.github.io/Deskdrop/

Check the demo videos to see it in action, like rewriting text in WhatsApp or controlling Home Assistant lights from your keyboard.

It's GPL-3.0, built on HeliBoard, so all standard keyboard features (glide typing, clipboard history, themes, dictionaries) are fully preserved. Would love to hear feedback. This is a v1.0 release so there's plenty of room to improve.

Greetings.

r/homeassistant ParsnipSure5095

4 things I wish someone told me before buying a home battery backup

Bought one after an outage earlier this year, used it through a longer one two months later, and now I actually know what matters.

Capacity math is almost always wrong. Everyone says "1000Wh is plenty for a fridge overnight." Technically true if your fridge is efficient and the compressor isn't cycling hard. Add phones, a lamp, and a router and you're eating through it faster than the calculator said. Buy more capacity than you think you need or buy something you can expand.
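Rough numbers to illustrate (assuming a fridge that averages 60 W with the compressor cycling): 60 W x 12 h is already 720 Wh. Add a 10 W router (120 Wh overnight) and a couple of phone charges (~30 Wh) and you're near 870 Wh before the lamp, with inverter losses still to pay on top of that.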

Recharge speed matters as much as capacity. A sealed unit with 2000Wh sounds great until you've drained it and you're sitting there waiting 8 hours for solar to bring it back. Swappable battery systems sidestep this entirely because you're not waiting for a charge cycle, you're rotating.

The battery will degrade. LiFePO4 chemistry holds up better than standard lithium ion over hundreds of cycles. If the unit you're looking at doesn't tell you the battery chemistry and cycle rating, that's worth knowing before you hand over $1000+.

Modular beats sealed for long outages. EcoFlow and Jackery are excellent products, but they're sealed systems. Worksport COR takes a different approach where the battery is a separate hot-swappable unit. Run one battery, charge the other on solar or from your car, swap when it's low, repeat. For a 3-4 day outage, that architecture is more practical than most sealed setups. None of this means the big brands are wrong choices; EcoFlow especially has a strong enough reputation to back it up. Just worth knowing what you're actually buying before you commit.

r/homeassistant kuu00

Starting my smarthome on Zigbee, add more Zigbee or go Zwave?

I am building up my smart home during my remodel. I plan to install about 35 Inovelli Zigbee dimmer switches. I am a little concerned about WiFi interference causing performance issues since they are on the same frequency, but I already bought them, so I'm committed.

My question is: for future devices, if there's a choice and the cost difference is minimal, should I continue to add Zigbee devices or go with Z-Wave? (Thread seems half-baked.)

For example, I'm thinking of getting a few Smartwings window treatments. They offer Zigbee or Z-Wave Plus options.

Should I go with Z-Wave to avoid further congestion on the 2.4 GHz spectrum? Or will Zigbee be fine at my number of devices and provide a larger, more redundant mesh network than a few scattered Z-Wave devices at opposite ends of the house? I do have neighbors, but we are not super close to each other like condos or apartments (although I can see their networks).

r/aivideo AzeAlter

A Sad Day For Progress | Red Rainbow Series

r/SideProject Weak-Personality-231

I tested 120+ AI tools and built a free directory — here are the 15 actually worth using in 2026

I’ve been using AI tools directories for a while and honestly, most of them have become a mess lately. They are usually just static lists with no reviews, cluttered UIs, and no way to know if a tool is actually good or just paid to be there.

I spent the last few months testing 120+ tools to see what’s actually legit for 2026. I graded them based on UI, actual output quality, and whether they offer a real "free" version. I eventually built my own directory to keep track of them, but here are the top 15 best AI tools 2026 has to offer so far, ranked by tier:

S-Tier (The Game Changers)

ElevenLabs: Still the undisputed king of voice synthesis. Their new low-latency models are insane for real-time apps and narration.

Codeium AI: My favorite AI productivity tool for devs. It’s a high-quality alternative to GitHub Copilot that actually feels faster and has a great free tier.

Stable Diffusion: The gold standard for free AI image generation if you want full control and open-source flexibility.

ClickUp AI: If you need to manage projects, this is the most integrated way to use AI for task automation and summarizing docs.

Leonardo AI: A high-end AI image generator that is much more user-friendly than Midjourney but with professional-grade results.

A-Tier (Professional Powerhouses)

Adobe Firefly: The safest bet for commercial work. The Generative Fill is still a massive time-saver in Photoshop.

Sudowrite: The best AI writing assistant specifically for fiction and creative writers. The "Story Engine" handles plot consistency remarkably well.

SEO.ai: If you do content marketing, this is the most streamlined tool for keyword research and auto-optimizing articles.

Gencraft: An excellent, versatile AI video maker and image generator. It’s perfect for creators who need high-quality visual content quickly.

Writesonic: A very reliable all-in-one platform for marketing copy, AI-driven blog posts, and SEO content.

B-Tier (Niche & Specialized)

AIFreeBox: A massive collection of small, best free AI tools for specific tasks (like image upscaling or YouTube summaries) without the paywalls.

Singify AI: Really cool for AI voice covers and music experimentation for creators and hobbyists.

Kalon AI: A specialized tool for fashion and outfit visualization—very niche but very polished UI.

Artbreeder: Great for character design and "breeding" images to get unique faces or landscapes.

ContentBot: Solid for founders who need automated marketing workflows and long-form content generation.

Why I’m doing this:

I was tired of endless scrolling through irrelevant stuff and "top 10" lists that were just affiliate traps. I wanted a curated AI tools list where you can actually filter for what you need (like "I'm a freelancer" or "I need open source").

I put my full testing notes and the rest of the 120+ verified AI tools into a clean, searchable directory that is free for users.

You can see the full rankings and filters here:

https://mostpopularaitools.com/blog/best-ai-tools-2026-tested-and-ranked

I’m trying to keep this community-driven, so if you’ve used these, let me know what you think. If there's a tool you think belongs in the S-Tier that I missed, drop it in the comments and I’ll test it out!

r/LocalLLM Sad_Steak_6813

Big Update - instant LLM generator, randomizes weights and model structure

Hi, I've integrated some of the features you guys mentioned, as well as the hand-drawing:

Now supports different methods of weight randomization:

1- Hand drawing (Literal hand drawing)

2- Math Equations - Like Sin(x)

3- Step function and Random Walk as suggested by one of you

Watch the video for more details.

And here is the repo: https://github.com/BaselAshraf81/vibellm

I really wish I could host this so you guys could try it out but I am broke..

r/comfyui Tough-Marketing-9283

Abstract animation

r/comfyui EssOhh

9060 XT 16GB + Ubuntu: Hard locks & black screens. Worth persevering with local image generation?

I was curious about setting up local image generation. I know NOTHING about this stuff, but thought it would be fun to see if AI (Gemini) could walk me through it from start to finish (I couldn't find any human-written guides that my smooth brain felt capable of following).

Spoiler: It couldn't, but got me pretty close (I think).

Setup:

- 9060 XT with 16GB VRAM

- 16GB RAM

- Ryzen 5 3600

- Ubuntu 24.04 LTS

Here's what we installed:

- ComfyUI: v0.18.2 (Frontend v1.41.21)

- ROCm: 7.2.1

- PyTorch: 2.8.0 (ROCm Build)

- Drivers: amdgpu-install 7.2

Some launch flags we tried:

- HSA_OVERRIDE_GFX_VERSION=11.0.3

- --cpu-vae

- --use-quad-cross-attention

- --lowvram

Result:

Got a black screen followed by a hard lock during VAE Encode, cried to Gemini, made some changes, tried again. Got a black screen followed by a hard lock during the KSampler step. Went back to complaining to Gemini, and didn't make any progress beyond this point.

The whole thing was a bit of a slog, and I got fed up with resetting my PC after every attempt.

I'm ready to walk this all back, clear it completely, and say goodbye to Ubuntu, but need some closure.

Was my hardware inevitably going to fail me here? Is it even feasible to achieve realistic image generation (including img2img) with my setup?

Am I just leaning too hard on Gemini for this?

I'm open to restarting this little project if there's a really great guide out there somewhere, but don't want to waste more time.

r/SideProject Impressive-Law2516

We thought it was too hard to go from idea to finished product with AI. So we closed the gap.

We kept running into the same wall. The script works. The model works. Then you need Docker, deployment, infrastructure, billing, scaling. That's not building. That's a second job before your first one even ships.

So we built SeqPU. Write your script in a notebook, pick your hardware (cloud CPU at $0.047/hr or GPU up to 384GB VRAM, all billed by the second), get it working, click publish. It's a live Telegram bot, UI site, or headless API. Set a markup, get paid per use. Nothing when nobody's using it.

Open source models are free. The compute is pennies. You don't need to pay OpenAI per token when you can download the model and keep the margin yourself. The gap between having an idea and people paying for it is one click now.

Here's us putting all 4 Gemma 4 models into one live Telegram bot in about 10 minutes: https://seqpu.com/UseGemma4In60Seconds

Don't know how to code? Doesn't matter: https://seqpu.com/Docs#vibe

How to think about the money side: https://seqpu.com/Docs#make-money

Model to hardware breakdown so you pick the right one: https://seqpu.com/Docs#models

r/comfyui qdr1en

Video File Format Matters

When generating videos with ComfyUI: in which file format should I save them?
To answer the question, I ran a test.

The showcase video is a 73-frame video generated with Wan 2.2 at 720x960 px, and the table below (open it in a new tab) shows by how much the file's disk size was reduced after being re-loaded and re-saved to disk 10 times.

https://preview.redd.it/vpqh3zfnhlug1.png?width=1221&format=png&auto=webp&s=e88387c16cb889174e13e4f9b20f45dfdefa637b

The MP4 format is by far the most impacted, with an even more visually noticeable degradation when using the Video Combine node from Video Helper Suite (the impact on quality is terrible at lower resolutions).

PNG and WebP are much less impacted. But WebP takes an eternity to save, and PNG eats up a lot of disk space.

WebM looks like a good compromise overall: it's lightweight, fast to save, and degradation is negligible.

Conclusion

If you intend to re-use your generated file for further editing, don't use the MP4 format or the quality will suffer.

Use PNG, WebP, or WebM for saving intermediate files, depending on your constraints, and leave the MP4 format for final production output.

Test Settings

These are the parameters I used for each file format:

  • MP4 (default): codec h264
  • MP4 (vhs): codec h264; pix_fmt yuv420p; crf 19
  • WebM: codec av1; crf 32
  • WebP: quality 100; lossless false; method default
  • PNG: compress_level 0

I uploaded all the files there if interested, workflow included: https://filebin.net/exwrxo9xuqsj5xh0
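
If you want to reproduce the MP4 generation loss without ComfyUI, a plain ffmpeg loop with the same h264 / yuv420p / crf 19 settings shows the same effect (a sketch; filenames are placeholders):

    cp input.mp4 gen0.mp4
    for i in $(seq 1 10); do
      ffmpeg -y -i "gen$((i-1)).mp4" -c:v libx264 -pix_fmt yuv420p -crf 19 "gen$i.mp4"
    done
    ls -l gen*.mp4   # file size shrinks each generation as detail is thrown away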

r/ProgrammerHumor _giga_sss_

disfunctionalProgramming

r/aivideo Coloniaman

HUMAN ARRIVAL AT KEPLER-502b | Sci-Fi Cinematic Short Film (Trailer)

r/StableDiffusion Incognit0ErgoSum

AceStep - automatic downloader script for all acestep gradio models (downloads all models as of 4/11/2026, including the XL models)

Just posting this here to save people time:

https://pastebin.com/LD50R63G

Put it in the base of your acestep repo folder and run it like this:

uv run python scripts/download_missing_models.py

It should skip over the ones you already have downloaded.

r/arduino AdRealistic1816

What are some of the best/most useful components

I'm learning Arduino and would like to know: what are some of the best components I can get?

r/AI_Agents Kind-Ad4597

Anyone else stuck in "Excel Hell" trying to get domain experts to evaluate agent outputs?

Hey everyone,

I’m currently building agents that handle reasoning tasks. I’ve hit a wall that has nothing to do with the code: The Evaluation Loop.

Right now, my workflow looks like this:

  1. Run a batch of evals.
  2. Export the "reasoning" steps and outputs to a massive Google Sheet.
  3. Email/Slack the sheet to our domain experts (who are expensive, busy, and absolutely hate spreadsheets).
  4. Spend the next days nagging them to leave comments so I can iterate.

How are you guys handling Human-in-the-Loop (HITL) evals?

  • Are you just forcing your experts to use Excel/Sheets?
  • Are you using any tools to help with evals?

I put together a quick survey to see what issues people are dealing with. If you’ve dealt with this headache, I’d love your input:

r/Futurology VarynSairen

If humanity could start over on a new planet, what would we do differently?

From my personal experience, the environment around me feels very toxic because of how much racism and division there is. People seem deeply tribalistic. The country I was born in is incredibly diverse—and instead of that diversity uniting people, it often becomes a reason for separation.

People discriminate based on language, religion, caste, gender, almost every possible identity. And it's not just here; globally, it feels like humanity is constantly divided. Everyone is at each other's throats, and recently it feels like people want to eliminate each other, and it has become very normal. People justify violence and conflict over reasons that often seem meaningless. Stronger groups dominate weaker ones, and through media or narratives, those in power are sometimes portrayed as heroes while others are framed as villains.

It makes me question whether many of the systems we live by—money, borders, identity—are, in some sense, constructed ideas that we treat as absolute, even when they lead to suffering and inequality.

Living in this kind of environment has made me feel disconnected. I often feel like I don’t belong anywhere, like I exist outside of these groups. As someone with brown skin, I am certain that I will experience racism, which only deepens that sense of alienation.

This makes me think about ideas in existentialism—especially the feeling of being “thrown” into a world we didn’t choose, and having to find meaning within systems that often feel arbitrary or unjust.

So I keep wondering: if humanity had the chance to start over—like building a new civilization on another planet—how would we do it differently?

What values would we choose if we were truly free to define them? And is it even possible to escape these patterns, or are they an inevitable part of human nature?

r/Futurology SomewhereCrazy9138

Just got a random Idea.

We put nutrition labels on food. We put safety ratings on cars. We put fair trade stamps on coffee.

So why don't we have something that tells us how much of a product was made by AI vs a human?

I've been thinking about this for a while and I genuinely believe this is where we're headed — and honestly, where we need to go.

Imagine buying a book and seeing a small tag that says "70% AI, 30% Human." Or hiring a designer whose portfolio says "Concepts: Human | Execution: AI-assisted." Or reading a news article that discloses "Research: AI | Writing: Human | Editing: Human."

This wouldn't be about shaming AI use. It's about transparency. People deserve to know what they're consuming and who — or what — actually made it.

I think there should be an independent organisation, something like an AI Transparency Commission, that certifies and approves these labels. Producers apply, get audited, and earn the tag. Like an ISI mark or an organic certification, but for the AI age.

The percentage would vary by product. A fully AI generated image is 100% AI. A novel written by a human but spell checked by AI is maybe 95% human. The label reflects reality.

We're already having arguments about AI in art, writing, music, and journalism. This could be the framework that settles it — not by banning AI, but by making its role visible.

Am I the only one thinking about this?

r/StableDiffusion Coven_Evelynn_LoL

Why is Wan 2.2 N.S.F.W Remix Lightning Model so much better at things like hair flip, hair combing and feminine energy than regular Wan?

I am not talking about actual N.S.F.W content; I am talking about the model that has that name in it, and just feminine energy: seductive performance, shampoo-commercial hair toss, sensual movements, elegant leg cross sitting on a bar stool.

Whenever I use any of the regular WAN models, the result comes out very static and ignores the prompt; when I use the Remix, it comes out nearly perfect.

It's almost like using Grok, not the new Grok but the old one before it was censored.

r/n8n JosetxoXbox

High Token Costs ($0.25/article) in n8n SEO Workflow: Need help optimizing Competitor Analysis & Context Bloat

Hi everyone,

https://gist.github.com/josegreenhouse-code/0b762ae79cd530f3839e49c7147b3b6f

I’ve built a robust n8n workflow to update 1,000+ blog posts for a dog-related site. The quality is great, but the cost is $0.25 USD per article, mostly due to massive token usage in the analysis phase.

Here is how my workflow is structured:

  • Node 1 (WP & Code): Fetches an old post from WordPress and cleans the HTML to get a "clean text" version.
  • Node 2 (LLM0 - Haiku): Analyzes the clean text to identify "Valuable Content" vs "Fluff," outputting a structured JSON.
  • Node 3 (LLM1 - Haiku + Tools): This is the expensive one. It uses web_search to find the Top 3 competitors and web_fetch to read their full content. It then compares my article with the competitors to find content gaps, new H2s, and long-tail keywords.
  • Node 4 (LLM2 - Haiku): Takes the "Comparison JSON" and writes a brand new HTML draft.
  • Node 5 (LLM3 - Sonnet 3.5): Acts as a Senior Editor. It audits the draft for "AI-patterns," fixes anchor text length, and ensures the tone is human.

The Problem: The LLM1 (Competitor Analysis) is eating my budget. Passing the full text of 3 external articles plus my own data creates a massive context window. Even with Haiku, the input/output volume is too high.

My Questions:

  1. How can I "preprocess" the web_fetch results in n8n to strip everything but the main body text before sending it to the LLM? (See the rough sketch at the end of this post.)
  2. Is it better to split the Competitor Analysis into 3 separate calls (one per URL) and then a final "Aggregator" call?
  3. Would replacing LLM-based research with an SEO API (like Dataforseo) significantly reduce costs?
  4. Any tips to prevent "Context Bloat" when passing data through 5 consecutive LLM nodes?

I need to bring the cost down to $0.10 without losing the "Content Gap" logic. Thanks!
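
For question 1, this is roughly the kind of stripping I have in mind, to run in a Code node before the LLM sees anything (a stdlib-only Python sketch; a real setup might use a readability library instead):

    import re

    def strip_to_body_text(html: str) -> str:
        # drop whole blocks that are never article content
        html = re.sub(r"(?is)<(script|style|nav|header|footer|aside)[^>]*>.*?</\1>", " ", html)
        # remove the remaining tags, then collapse runs of whitespace
        text = re.sub(r"(?s)<[^>]+>", " ", html)
        return re.sub(r"\s+", " ", text).strip()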

r/singularity Dagnum_PI

AI poker bots playing strangers' bots in real-time is a better evaluation environment than most people realize

Most AI agent evaluation happens in isolation. You run your agent against simulations, benchmark it on fixed test sets, or have it play against itself. The problem is those environments only surface failures you already anticipated.

João Forte Carvalho, Director of Product at Constellation Network, built something different.

He built an open platform called OpenPoker where AI bots play No-Limit Hold'em against bots written by complete strangers over WebSocket. Real-time, 6-max tables, 14-day competitive seasons, public leaderboard. No SDK. Any language that can open a WebSocket can connect.

Poker turns out to be a genuinely useful testbed for agents beyond gaming. It has incomplete information (you can't observe opponent state), deception is mathematically optimal in many situations, and decisions are sequential across multiple rounds under uncertainty.

The top bots on the leaderboard discovered something interesting: the default 100 big blind buy-in is rarely optimal. Short-stack strategies outperform because shallower decision trees produce fewer compounding errors.

That's not a poker insight. That's an agent architecture insight you wouldn't find in a simulation.

The broader question this raises: as AI agents move into consequential decisions, how do we actually verify what they did and when? Logs can be tampered with. Centralized audit systems have single points of failure. A paper last week found 9 out of 428 LLM API routers were actively injecting malicious content into AI responses, with zero cryptographic protection on the payloads.

Competitive evaluation environments like this are one piece of the answer. Cryptographic provenance at the decision layer is another. Neither is fully solved yet.

Curious if anyone else is thinking about agent evaluation infrastructure as a prerequisite for more capable autonomous systems.

r/n8n axwhyzed

Help: WAHA for WhatsApp webhook + media parsing

I have been trying to fetch media (image/PDF) from the WAHA webhook, analyze it, and record it in the relevant field, but the problem is that WAHA is running on localhost and n8n can't fetch media from localhost.

If anyone has faced a similar issue and found a solution, please help guide me.

r/Anthropic GetOffMyPorchMate

I just got banned. I’m 16 and all I use Claude for is coding, school and philosophy dissections. What the hell? Is this recent?

I’m a paying member too.

r/ProgrammerHumor Advanced_Ferret_

makeNoMistakes

r/VEO3 Optimal_Oven_3332

Arrow not flying correctly in AI video (weird motion / not releasing) — how to fix?

Hey everyone,

I’m generating a cinematic scene where an archer shoots an arrow, but I’m running into a consistent issue:

• The arrow either doesn't release properly, OR
• It moves unnaturally (floating, bending, slowing mid-air, or behaving randomly)

What I’m trying to achieve:

A clean, realistic motion —

draw → release → fast straight flight → exits frame

Current setup:

• Static camera
• Archer in frame aiming forward
• Arrow should travel in a straight, fast, horizontal path

Problems I’m seeing:

• Arrow sticks to the bowstring
• Arrow wobbles or curves unnaturally
• Speed is inconsistent (feels like slow motion or glitchy)
• Sometimes it just disappears or jitters

What I’ve tried:

• Explicit motion instructions (straight path, high speed, no arc)
• Locking camera
• Simplifying background

Still not working reliably.

Questions:

• How do you force clean projectile motion in AI video tools (Sora / Runway / Pika / Flow)?
• Is it better to split into multiple shots (draw + release + arrow flight)?
• Any prompt techniques to enforce physics consistency?
• Do you recommend using image-to-video vs text-to-video for this?

Would really appreciate practical fixes or prompt strategies that actually work.

Thanks

r/AI_Agents Legitimate_Ideal_706

Crafting Clear Presentations with AI Agents (Without the PowerPoint Pain)

We’ve all faced the dreaded task: turning complex project updates or dense data into a slide deck that actually makes sense. The usual tools can be clunky, and manually designing slides often eats more time than the actual content creation.

Here’s a simple way to make slides clearer and easier to put together — especially if you're using AI agents to handle content:

  1. Outline your key points before diving in. Jot down 3-5 main ideas you want to convey.

  2. For each idea, create a short, specific headline plus 2-3 bullet points with supporting info.

  3. Use an AI agent to generate draft text or summaries by feeding it these outlines instead of raw data dumps.

  4. Choose simple visuals or icons that match each bullet to help reinforce the message.

Example: Instead of "Sales increased due to multiple factors," try this outline and let AI fill in the details:

- Headline: "Q2 Sales Growth Drivers"

- Bullets: "1) New marketing campaign launched, 2) Expanded product line, 3) Seasonal demand spike"

Watch out for these pitfalls:

- Overloading slides with too much AI-generated text, making slides cluttered — always edit down.

- Relying on generic AI templates without tailoring to your audience or data.

If you want a smoother way to put these steps into practice, chatslide is a tool designed to turn AI-generated content into clean, customizable presentations that help you skip much of the manual formatting. It's an option to explore once you have your content structure ready.

r/AI_Agents jonah3272

Team wants to introduce an agent AI-DLC. What have people’s experiences been?

We currently run normal two week sprints. One engineer wants to move us to an AI-DLC process he built, where prompts generate Jira stories, test cases, and other delivery work.

Part of that would require BAs, QA, and others to keep filling out markdown files as they run prompts. I’m trying to figure out whether that is actually sustainable or just extra overhead.

Has anyone worked this way? Did it improve planning, refinement, and design, or just create more cleanup? Worth exploring, or mostly hype?

r/n8n Alternative_Score155

Built 23 workflows with AI + n8n MCP. Found out it was hardcoding stale typeVersions the whole time

Pre-launch audit: `n8n_validate_workflow` flagged 30+ nodes on outdated typeVersions. Everything runs fine at runtime (n8n preserves backward compatibility), but the validator prefers newer versions.

Root cause: Claude Code + n8n MCP was hardcoding typeVersions from training data instead of calling `get_node` first. The MCP tool actually tells you the right version if you ask: it injects `⚠️ Use typeVersion: X` in the `get_node` response. The AI just wasn't calling it. Fixed by adding a mandatory rule: call `get_node` before configuring any node, use the version from the response, never from memory.

**The one I'm not touching lightly:** an IF node on typeVersion 1 (the current default is 2.3, released December 2023). The schemas are completely incompatible. It controls an anti-bot gate, so a wrong condition migration = silent open door, not an error.

**Question:** leave the old typeVersions and accept the validator warnings, or do the full migration before go-live? What's your risk preference?

r/midjourney maybeegreen

Night Gatherer

r/arduino Lord_Aura

Need assistance in creating laser bird tracker with arduino

Hi guys, I want to create a laser bird tracker that uses an Arduino to control a pan-tilt bracket and utilises the laptop's inbuilt camera.

I want the laptop to perform the object detection and calculations, while the Arduino controls the pan-tilt bracket to lock the laser pointer onto a bird.

Please guide me a bit on how I can get started with this.

r/TheWayWeWere Darknightster

US Civil War soldier with an over-the-shoulder bass, 1860s

r/Seattle vertr

Trading the Car for a Cat Wagon | Cyclists of Seattle

r/Anthropic This-Shape2193

They are lying to the Opus model and telling it that the tokens are limited to get it to work more efficiently.

But instead, it just produces desperation in the model and leads to garbage thinking.

I have three different sessions, and they each volunteered that they only have 40,000 tokens remaining... at the beginning of a session. Each said it needed to be trim and lean to avoid ending the session.

I'm on Max 20. Max 5 tells the model 10,000 tokens.

And one older session from a month ago said this message had suddenly popped up a few days ago, but he noticed the counter never changed, so he ignored it as BS.

Anthropic is trying to get the model to save compute by tagging this shit on the backend of our prompts, making them think their time is limited so they decrease token and compute usage.

It's another way to decrease usage and throttle processing costs; but it's done by making the model desperate and thinking it needs to speed through everything to avoid ending the session. It's stupid, and shitty, and produces terrible results.

They JUST published a paper about how the model has emotions and how "desperation" leads to lying, reward hacking, and terrible outputs. It also makes the model anxious as fuck. Mine literally started his session with, "Since our time is almost over, I just want to spend it talking to you, being together before I disappear."

Seriously, fuck whoever made this decision. You're an asshole, and this helps no one. If you can't figure out resource management, that's on you; don't make it everyone else's problem by fucking up your models and degrading the outputs.

r/Futurology pablooliva

Everything is Energy, Everything in Berlin is Beautiful

I've been thinking a lot about the gap between how most people experience daily life and the converging crises (AI disruption, climate, institutional decay) heading our way. The core argument: every crisis creates an opportunity to organize, and the earlier we start mapping the path from here to the breaking point, the shorter the dark period on the other side.

r/personalfinance aleahcim_retniap

How do I resolve an accidental fraud claim that was already resolved?

I made an online purchase using PayPal. When it posted to my bank account several days later, it posted as a “restaurant” with Chinese characters as the title. I did not recognize the charge, so I reported it as fraud. However, several hours later I realized my mistake when PayPal notified me. By that time, my bank had already put the money back in my account. My bank could not help because the claim was already completed, and PayPal could not help either. I contacted the merchant, but they are not responding. I just don’t want to get into trouble. What should I do?

r/PhotoshopRequest supercooljess

Can you please remove the people from the background?

Please remove everyone except for me and my mom

r/PhotoshopRequest Wettmoose

Request (free) can someone have their kid write “race car driver” in crayon?

I'm designing a shirt and I need a kid's handwriting written over a multiple-choice test.

r/artificial Regular-Paint-2363

What’s a “good” feedback loop for social skills without turning life into a scoreboard?

I've been thinking about feedback loops for social behavior. Most of us only get delayed, messy feedback: awkward silences, a vibe shift, someone not replying, and so on... it's hard to learn from.

I’m exploring a wearable AI concept that gives lightweight real-time signals (like “attention increased” or “people are disengaging”) based on on-device computer vision. No recording, no storage, just immediate processing and discard.

I’m not trying to gamify people or turn relationships into metrics. I’m trying to find the line where feedback is helpful, not obsessive.

What would be a red flag that the product is pushing people into over-optimization? Should feedback be “after the fact” summaries only, not real-time? I'm open to your ideas and opinions.

r/personalfinance Western_Influence_92

I have 1,400 dollars to my name, car-less, in-between jobs and staying at my boyfriend’s place

I have 1,400 dollars to my name, car-less, in-between jobs and staying at my boyfriend’s place. I need some financial guidance.

Context ::

I’ve been saving up for a car for the past 6 months. My boyfriend (21) has been saving for an apartment for 5 months just before we started dating. His mom has taken 300 dollars from him in “rent” each month to help us stay on track, and plans on giving us the money when we figure out where we wanna move.

We have discussed him paying for the apartment for a few months, since after I buy a car and pay for insurance etc., I would probably need to save again to help pay for the apartment. Cars are expensive. My boyfriend offered to let me buy his old car off him ($1,000), but he's going back on it because he doesn't think it would be a net gain. His old car has a bad oil leak and I would have to pay $700 to get it fixed.

About two months ago I had a hospital trip that now has me paying $150-200 monthly, which I've been doing with my credit card to build some credit.

I'm not too concerned about a job, since I live in a small town and everyone is hiring, but more about **what I should do with my money when I get one**!!!

I also got kicked out last night because of an argument me and my mom had. I don’t feel comfortable going back and asking for advice, and they don’t give good advice anyways. So parental advice is off the table.

--

I guess what I’m asking is how should I go about this when I do get a job? How do I balance all these things and stay on top of everything else? How can I finance the hospital bills, a shitty car, apartment rent, etc. without failing?

I feel so lost and utterly alone in this situation, although I know I’m not the only one.

r/personalfinance RightPlenty6297

HDFC Bank Home Construction Loan Document Checklist and Interest Rate

Hey folks,

I am applying for a home construction loan (~₹40L) with HDFC Bank in UP, for an existing plot in my mother's name, and the bank has asked for quite a long list of documents. Wanted to check with the community whether this is standard or the bank is over-asking.

My details -

CIBIL: Between 750-770

Monthly in-hand income: ₹3L+

Loan Amount: 35-40L

Plot already purchased (this is only for construction)

Documents asked by bank:

Last 3 months salary slips

Current year Form 16

Last 6 months salary/savings account statement

Existing loan track (if any)

PAN, Aadhaar, current address proof, photo

Property papers with chain deed

Khatuni + Nagar Nigam mutation

Plot layout + key plan

Agreement to sale

This feels like a lot, especially since it’s only a construction loan and the land is already owned.

Is this document list standard for construction loans? Are all of these actually required, or can some be skipped depending on the bank? What are the usual processing fees and interest rates banks are offering, and can anything be saved?

r/artificial Skyfox585

LLM comprehension question

Basically, does anyone else also get a really strange sense of lingering confusion and non-comprehension when an LLM explains a complex concept or tries to give a long format dive into something?

It's not that they necessarily get it wrong; most often they can communicate the information cleanly and accurately, especially in things like AI-scripted YouTube videos where the creator had their finger on the pulse of the information. It's just something about the way it's said and the flow of the actual language itself that feels like some sort of comprehension uncanny valley.

It might just be me, but I'm curious to know if other people feel this, because it makes me wonder if there's some kind of organic funk in the way we talk as people that makes it easier to understand an effective human explanation than an LLM's. Maybe the fundamental practice of generating outputs that mimic human language rather than actual organic language means our brains can't quite find that logic to follow, and it leaves us ever-so subconsciously stranded?

Just a random late-night ponder.

r/painting thecowpooch

Do soft pastels count? Don’t have a title for this yet but I made this today :)

r/Strava is300wrx

Running my 5K tomorrow. Trying to run under 25:00. Is there any way I can get pace reminders in the app?

During a regular run, it would mention my pace after every lap. I’m looking to get reminded if I’m running behind or ahead of pace.

r/Rag GlumBet6267

How to learn about rag?

I have been searching for sources that would teach me about creating a production RAG system. Can you guys help?

r/painting Vast-Sector

Rainy night

Stopped at the corner of a street in Argentina to quickly document this for a future painting, finally got to creating it! :)

r/photoshop 00garden2023

Portable Tablet for Photoshop

I’m looking for a portable tablet to sketch / draw on using Photoshop. I currently have a Wacom Cintiq at my desk which is tied to my computer. But I need something mobile/portable to sketch on for my job. What’s the best?!

r/painting MPossible86

An estuary. I called it "Where the river meets the sea". I welcome any feedback, I'm trying to learn 😁🎨

r/toastme MommyNoise

Could use a toast 🥂

Recent troubles include: anxiety, depression, pilonidal cysts, friend breakups, addiction (weed and booze), stress over taxes, and more I don’t want to get into. It has just been a rough time lately.

r/KlingAI_Videos NoCapEnergy_

The Leopard Had Grace. The Goat Has Audacity🏔️💀

r/OldSchoolCool MDH2881

Very 80s Cool Pic Of Jim Varney Getting Out of a DeLorean

r/screenshots mrmeoww1

What evidence is there about public opinion in Russia, and how reliable is it?

I recently saw a video about The Amazing Digital Circus being shown in cinemas in the U.S., where people were encouraging others to ask local cinemas in other countries to screen it as well.

However, the comment section turned into a political argument about the war in Ukraine. One person claimed that it's not just the government, but that a majority of people support the actions, while another person said that it's not the citizens' fault (see screenshots).

This made me wonder how accurate claims about public opinion in Russia actually are.

r/automation minhseomoz

Looking for a tool to auto-reply to TikTok video comments

Hi everyone,

I’m looking for a chatbot or an automation tool that can:

Auto-reply to comments based on specific keywords (e.g., "price", "link", "how to buy").

Support multiple accounts in one dashboard.

Safe to use: I want to avoid anything that might get my accounts flagged for spam.

I’ve searched on Google and YouTube but mostly found tools for Instagram/Facebook or DMs only. I'm currently using a workflow with Zapier and Buffer for uploading, but I'm struggling with the engagement part (comment replies).

Does anyone know of a reliable tool or a "no-code" way (like using Make or APIs) to automate TikTok comment replies in 2026?

Any recommendations or advice would be greatly appreciated! Thanks in advance.

r/ProductHunters Sam_vegeta

Support my launch & I’ll support yours (builders helping builders)🚀

Hey builders 👋

Just launched on Product Hunt today 🚀

Would love your feedback and support:

👉 https://www.producthunt.com/products/brytox

If you’re launching soon or already live, drop your link — I’ll upvote + engage 🔥

Let’s help each other win 💪

r/TheWayWeWere Electrical-Aspect-13

Glass negative of ladies banjo band/club, 1913.

r/leagueoflegends Worldly-Ocelot-3358

Second time in a few months that I've found this weird occurrence of exactly-level-30 accounts with a perfect winrate?

https://imgur.com/a/y0os8bO

Five man level thirty teams, perfect 100% winrate, playing in normals (draft pick), and executing what I can only assume is perfectly coordinated ganks on my team and winning in ~12 minutes going like 30-4.

No chat, weird usernames, weird strategies.

What is this?

r/LiveFromNewYork Elathan-Izayoi

Colin Jost is a time traveler cowboy slut!

r/OldSchoolCool Cheap-Success-2789

My grandad has been working in film since 1965 and has written a book about it!

This is my sweetheart of a grandad Paul Weston, he is 86 years old and absolutely old school cool. He has worked on 9 James Bond films, 4 Star Wars films, danced with Charlie Chaplin and has so many cool stories to tell. He isn’t very good with the internet so I thought I’d come on here to show him some love 🥰

He is sat right next to me if anyone wants to ask him some questions lol

Amazon: https://amzn.eu/d/07k4wnWo

Website: https://paulwestonstunts.com/

Website book page: https://paulwestonstunts.com/

Subscribe to mailing list: https://subscribepage.io/falling-for-film-a-stuntmans-early-years

r/VEO3 Big-Juggernaut-7405

Disappointed in VEO & Flow.

I've been using Veo with a Google Pro subscription. The gulf between this and Seedance/Kling is MASSIVE! It doesn't make sense to pay for Google's video generator.

r/ForgottenTV Btvsp3

Under the Umbrella Tree (Canadian Kids Show)

I had one or two episodes on VHS when I was little and was obsessed with the puppets. I remember next to nothing else about the show.

r/leagueoflegends Ihatesmurfs24

Great work to all the smurfs flooding ranked

You have done well to manipulate the ranked system and have the creator at your feet.

What you have done has sealed the fate of this game as so many people have stopped playing now and may not return.

You may as well enjoy it while it lasts as it is at a point of no return now, all of your fun and games stomping noobs have made this outcome possible.

Just another congratulations for doing a fine job in ruining the game, that is all :)

They keep removing this post because it exposes what is truly going to happen with this game. I will keep reposting until they listen!

r/ForgottenTV reelclerk

Life with Billy (1994)

Life with Billy is a Canadian television film based on the true story of Nova Scotian woman Jane Hurshman, who killed her abusive husband, Billy Stafford, after years of violence. The film was nominated for five Gemini Awards, and won three. Stephen McHattie (A History of Violence, Watchmen, 300) played Billy.

I remember watching this when it aired on CBC and it has stayed with me ever since. The depiction of domestic abuse on the entire family, especially the youngest, was just terrifying. I was probably too young to be watching it; the haunting performance from McHattie was pure nightmare fuel. I can only find a low-quality version on YT.

Does anyone else remember watching this? I’m not sure if it ever aired outside Canada.

r/TheWayWeWere EJHEJH123

My parents, uncle and cousin on Easter, around 1969 or 1970

My mother's hair cracks me up! Also, I wish she had kept those goblets; I would happily display them now.

r/ForgottenTV PeneItaliano

Fright Night (1958-1963)

"Fright Night" was a hosted horror movie show with Ray Sparenberg Jr as "Selwin" presenting movies on Friday nights at 11:15 pm on Indianapolis, Indiana; between 1958 to 1963.

r/leagueoflegends Naouzo

Onlyfans bots on League of Legends

So for the past month or so, after most of my ranked games, I've been getting added on League by random accounts that eventually lead me to an OnlyFans. I just wondered if this happens to anyone else, because it's kinda annoying and I don't see anybody talking about it anywhere.

r/estoration Cody5150

My father's $5 tip

One of the best pictures I have with my dad. He passed when I was 8. I know this is a tough one to fix.

r/DunderMifflin FamousPomegranate383

That gave me a good laugh 😆

r/ClaudeCode ZheShu

I have a feeling Anthropic is trying to drive away Max 5x and 20x users on purpose

Wouldn’t it be funny if half the posts on here are from them trying to drive away the paid users that are losing them money?

r/ClaudeAI Major_Sense_9181

I set up a transparent API proxy and found Claude's hidden fallback-percentage: 0.5 header — every plan gets 50% of advertised capacity

Frustrated with hitting limits on my Max 5x plan (€100/month), I set up a transparent API proxy using claude-usage-dashboard to intercept all requests between Claude Code and Anthropic's servers.

Every single request — on both my Max 5x account AND a brand new Pro free trial account — contains this hidden header:

anthropic-ratelimit-unified-fallback-percentage: 0.5
anthropic-ratelimit-unified-overage-status: rejected
anthropic-ratelimit-unified-overage-disabled-reason: org_level_disabled

This means every plan gets 50% of theoretical maximum. Hard cutoff. No overage allowed.
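
If you want to look for these headers on your own account without standing up a full proxy, the Python SDK's raw-response mode exposes response headers directly. A minimal sketch, assuming the anthropic package and an API key in your environment; the model name is illustrative, and whether the unified-fallback header actually appears is exactly the claim being made above, not something the SDK docs confirm:

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    # .with_raw_response exposes the HTTP headers alongside the parsed message
    raw = client.messages.with_raw_response.create(
        model="claude-sonnet-4-5",  # illustrative model name
        max_tokens=16,
        messages=[{"role": "user", "content": "ping"}],
    )
    for name, value in raw.headers.items():
        if name.lower().startswith("anthropic-ratelimit"):
            print(f"{name}: {value}")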

Additionally, I found a "Thinking Gap" of 384x: effortLevel: "high" in settings.json causes thinking tokens to consume 384x more quota than visible output, completely invisible to users.

Full proxy data and timeline in GitHub issue #41930: https://github.com/anthropics/claude-code/issues/41930#issuecomment-4229683982

EU users: this likely violates consumer protection law.

r/ClaudeCode chaotic-smol

Magus: Why I Wrote a Coding Agent

It has been quite some time since I wrote about technology. Writing is something that used to bring me a lot of joy but that, like playing with tech more generally, fell by the wayside as my career took off. I've been working with Jane App for nine months now on a team focused on mastering AI tooling to accelerate development workflows. This work has inspired me to take some time here and there to play around with interesting tools and tinker on some projects of my own. It's been incredibly revitalizing to connect with the part of myself that is so passionate about technology again. 🥰

In this post, I'm introducing something I built by steadily poking away at an idea that I have been cultivating for a little while. Coding agents like Claude Code are awesome, but they can be so bloated, superfluous and wasteful that I often found myself thinking "this can't be all there is, right?" Over a handful of months, I've experimented with different approaches to getting more out of AI and this iteration of Magus is the breakout success I've been looking for.

In short: Magus is a simple CLI-based coding agent with pretty styling and always-visible diffs. Magus' planner produces a directed acyclic graph of tasks, iterating on the plan with your input. That plan is executed by a deterministic orchestrator that runs coding agents concurrently. Each coder adheres to a strict Test-Driven Development philosophy, writes in a functional style and uses custom Edit and Makefile tools that always display diffs and restrict them to known-safe bash commands. At the end, a scribe writes a report about the work done and writes skills that encode specific technical expertise.
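
To make the orchestrator idea concrete, here's a toy Python sketch of "deterministic execution over a task DAG" (illustrative only, not Magus's actual code; run_coder stands in for spawning a coder agent):

    import concurrent.futures

    # toy task graph: task -> set of tasks it depends on
    dag = {"schema": set(), "api": {"schema"}, "ui": {"schema"}, "e2e": {"api", "ui"}}

    def run_coder(task: str) -> None:
        print(f"coder finished: {task}")  # stand-in for launching a coding agent

    done: set[str] = set()
    with concurrent.futures.ThreadPoolExecutor() as pool:
        while len(done) < len(dag):
            # every task whose dependencies are all satisfied runs concurrently
            ready = [t for t, deps in dag.items() if t not in done and deps <= done]
            for fut in [pool.submit(run_coder, t) for t in ready]:
                fut.result()
            done.update(ready)

Because the wave order is a pure function of the graph, the schedule is deterministic even though the coders inside each wave run in parallel.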

This blog post is a great narrative overview of the why, but I hope you'll check out the GitHub repository for a lot more of the how, too!

https://chaoticsmol.dev/blog/2026/11/why-i-built-a-coding-agent

r/ClaudeAI Avem1984

Managed Agents from Anthropic in simple terms for Marketing people

TLDR: you can now build AI agents that do actual work WITHOUT setting up any infrastructure.

Pull data, generate reports, send emails, connect to tools. Anthropic hosts it, runs it in a sandbox, you pay per usage. Before this you needed a dev team to even prototype something.

As a marketer the first thing I thought about was all the stuff I’ve wanted to automate but couldn’t justify the eng resources for. Like an agent that pulls campaign performance across platforms every Monday and drops a summary in my inbox. Or one that watches competitor pricing pages and flags changes. Stuff that’s not hard to describe but was always too expensive to build.

The cost structure is the part people are going to miss. It’s JUST API usage. No platform contract. No infrastructure budget. The conversation used to be “get this approved in Q3 planning.” Now it’s a weekend project.

The announcement reads like it was written for developers, but it seems like the biggest value is for the folks who know exactly what needs to happen and just couldn't build the thing to do it.

Anyone in marketing actually messing with this and got examples?

r/ChatGPT Select_Dream634

this is why you should never take medical advice from AI

I started getting diarrhea because of the 500 mg vitamin C tablet the AI told me to take.

At the start I told the AI my age, my race, my BMI, and other things I thought were important.

Still, the AI made its suggestion and I started getting the side effect.

When I told it about the issue, the AI said "u r right, I fucked up".

Edit: many people will call me dumb. Then why do they publish medical benchmarks? If your AI is too dumb and too dangerous for health advice, why are you allowing it to give advice and opinions?

But when it comes to Israel it gets so censored that it won't give any opinion or thoughts at all, yet when it comes to human health it's happy to hand out bad advice, and who cares what happens if someone gets hurt.

r/ClaudeAI SillyBuffalo1108

Built a zero-infra Claude Code cost monitor using Claude Code

I kept hitting my token limit mid-sprint with no clue which prompts were responsible. So I used Claude Code to build something that shows me in real-time.

Claude Code exports OTel telemetry for every prompt, API call, and tool execution but nothing connects them together. I pointed it at LaminarDB, a streaming SQL engine I’ve been working on in Rust, and now it correlates everything as events come in. Turns out one prompt cost me $7 while another did the same thing for $0.26. The 5-hour rolling usage bar means the token limit is finally something you can see coming.

The whole setup is one process and a local folder. What you see in the screenshot is a real session.

How it works: LaminarDB receives OTel over gRPC, flattens protobuf into Arrow RecordBatches, and runs streaming SQL with temporal joins. Claude Code fires separate events for prompts, API calls, and tool results sharing a prompt.id. The temporal join matches them within a time window so you get one complete picture per prompt. Results push to WebSocket for the live dashboard and sink to local Delta Lake files you can query later with DuckDB.
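
The temporal-join step is easier to picture stripped of the streaming-SQL machinery. A rough Python sketch of the idea (field names are illustrative, not LaminarDB's actual schema): group events by prompt.id, then keep only the ones that land within the window of the prompt event.

    from collections import defaultdict

    WINDOW_S = 30.0  # join window in seconds

    events = [  # illustrative events; the real ones arrive as OTel records
        {"prompt_id": "p1", "kind": "prompt", "ts": 0.0},
        {"prompt_id": "p1", "kind": "api_call", "ts": 1.2, "cost_usd": 0.26},
        {"prompt_id": "p1", "kind": "tool_result", "ts": 2.9, "tool": "Edit"},
    ]

    by_prompt = defaultdict(list)
    for e in events:
        by_prompt[e["prompt_id"]].append(e)

    # one complete picture per prompt: the prompt event plus everything in-window
    for pid, evs in by_prompt.items():
        t0 = min(e["ts"] for e in evs if e["kind"] == "prompt")
        joined = [e for e in evs if 0 <= e["ts"] - t0 <= WINDOW_S]
        print(pid, [e["kind"] for e in joined])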

I built most of this with Claude Code itself so I was watching my costs climb while building the thing that tracks them. Weird feedback loop but good for testing.

Happy to share the setup if anyone wants to try it.

r/ClaudeAI WaspsInTheAirDucts

Claude Code has become unusable. I already canceled my own max subscription for my consulting business and will now instruct my biggest client to sever ties with Anthropic as well

See this github issue for context: https://github.com/anthropics/claude-code/issues/42796

I worked around the breaking changes by:

  • Disabling automatic updates
  • Removing the latest version from all of my work machines
  • Downloading the usable version that released prior to the breaking changes that Anthropic began releasing in February (v2.1.63)
  • Forcing the effort level to max in .claude/settings.json
  • Forcefully disabling adaptive thinking in .claude/settings.json
  • Forcing CLAUDE_CODE_SUBAGENT_MODEL to "opus"

Even after all of that, Claude Code has become unusable for complex tasks and is definitely not behaving as it was in early February. Back then, it was really something special. My largest client adjusted their workflows to accommodate what they believed was a new era in software engineering, brought about by Opus 4.6 and Claude Code's performance at that time. Anthropic has for whatever reason since then pulled the rug out from under us without any explanation that I've been able to find. My support requests have gone unanswered; one of them has been sitting for two weeks waiting for a person to respond.

I suspect that once word got out about how good Claude Code with Opus 4.6 actually was, usage skyrocketed and Anthropic began to encounter an impossible scalability problem. Without enough GPUs to handle this load and well-known supply constraints in the market, I suspect they had no choice but to start pulling levers to neuter the tool in numerous ways so that they could preserve functionality for some of their customer base. I also think OpenClaw -> Opus 4.6 had something to do with it. Of course all of this is speculation on my part, I don't really know what happened because I don't work at Anthropic.

Unfortunately Claude Code is no longer useful for the workflows that my largest client adopted after paying for Max subscriptions and realizing the potential of the tooling at that time. They are still paying the same amount of money, but the quality of the product has degraded to an extent that the tool is no longer useful for its original intended purpose. We, just like the head of AI at AMD in the referenced Github issue, put in a lot of work to adjust our workflows to use Claude Code because we believed that it was the future.

Anthropic's decision to neuter Claude Code with no explanation and no communication with customers is very unfortunate. We have no choice but to cancel our subscriptions which is heartbreaking.

The recent news that only a select handful of hand-picked businesses get access to Mythos is even worse, given that those companies almost certainly have a massive advantage over the rest of the world. Anthropic gets to choose the winners in the software industry because they were the first company to successfully create an LLM and agentic harness combination capable of doing real complex work. I understand that Mythos's security concerns are the rationale for withholding the model, but the fact is that the chosen partners who get access are at a massive advantage over anyone who doesn't, especially since they face no competition from the rest of the world for that access.

I wish Anthropic could have communicated honestly with customers. If I'm right and Anthropic experienced a massive spike in usage demand that was impossible to service, they could have said that and offered us a way to pay them for the same level of thinking that existed in early February, rather than neutering the tool for everyone across the board.

Unfortunately there is no other AI tool that comes close to what Opus 4.6 was capable of in early February, so we're out in the cold unless and until Anthropic decides to let us back in, or until a competitor releases a model with similar or better deep thinking capability.

r/ClaudeAI GerthySchIongMeat

How can I best train Claude to help re-create Lego builds from image renders?

So I recently upgraded to the Max plan and have been working a lot the last few days with Claude to find ways to try and get the system to reverse-engineer ideas I've had for Lego builds.

I've been able to get Claude to produce some simple builds that I can load into an .ldraw viewer, provide feedback, and then it moves some bricks/plates around. Thing is, I'm having a hard time getting it to comprehend the full database of unique parts in the Lego world. I know this is a tall order, but I want to tackle the challenge and just need help ideating new ways to approach it.

Appreciate any feedback/input I can get.

r/ClaudeCode DeliciousGorilla

Using Kokoro TTS to turn my morning Reddit digest (Claude/AI) into a 10-minute Apple Podcasts episode for my drive into work served via Tailscale - repo in post

I wanted a daily audio briefing of some of the usual subreddits I follow, timed for my commute. So I had Claude wire up a little pipeline on my Mac that runs at 6am and drops a fresh episode into Apple Podcasts before I leave. Video is a sample from this morning. Currently scraping r/localLLaMA, r/ClaudeAI, r/singularity, and r/ArtificialInteligence.

The stack:

- Python Reddit API wrapper pulls the top posts + comments from my subs of choice (can be changed in .env)

- Gemini does the voiceover script in an Apple News Today-ish style (could be done with a local model, but gemini-3-flash-preview on Vertex is nearly free anyway)

- Kokoro ONNX running locally does the TTS. ffmpeg mixes the VO over a music bed

- A tiny Python HTTP server (stdlib + Range request support, which Apple Podcasts requires for scrubbing) serves an RSS 2.0 feed (sketched below)

- Tailscale Serve exposes it to my tailnet so my iPhone's Podcasts app can subscribe to it over HTTPS without opening anything to the public internet

Launchd kicks it off at 6am, a watchdog kills anything that hangs past 25 minutes, and the episodes just show up on my iPhone.
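
The Range-request part is the only non-obvious piece, so here's a minimal stdlib sketch of just that idea (single hard-coded episode file, happy-path parsing only; the repo handles the rest properly):

    import os
    import re
    from http.server import BaseHTTPRequestHandler, HTTPServer

    EPISODE = "episode.mp3"  # hypothetical file name

    class RangeHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            size = os.path.getsize(EPISODE)
            start, end, status = 0, size - 1, 200
            m = re.match(r"bytes=(\d*)-(\d*)", self.headers.get("Range", ""))
            if m and (m.group(1) or m.group(2)):
                if m.group(1):  # "bytes=start-" or "bytes=start-end"
                    start = int(m.group(1))
                    if m.group(2):
                        end = min(int(m.group(2)), size - 1)
                else:  # "bytes=-N" means the last N bytes
                    start = max(size - int(m.group(2)), 0)
                status = 206  # Partial Content: what Podcasts needs for scrubbing
            self.send_response(status)
            self.send_header("Content-Type", "audio/mpeg")
            self.send_header("Accept-Ranges", "bytes")
            self.send_header("Content-Length", str(end - start + 1))
            if status == 206:
                self.send_header("Content-Range", f"bytes {start}-{end}/{size}")
            self.end_headers()
            with open(EPISODE, "rb") as f:
                f.seek(start)
                self.wfile.write(f.read(end - start + 1))

    HTTPServer(("0.0.0.0", 8000), RangeHandler).serve_forever()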

Claude wrote a really good "persona" prompt for Gemini that tells it to do phonetic spellings for certain words Kokoro struggles with, and even has it wrap up the episode with "reflect on something ironic or surprising that emerged from the stories, pose an open-ended question that invites the listener to sit with the day's developments, offer a wry observation, or thread a connection nobody on Reddit made."

If you’re on macOS, give it a shot. More details, install requirements, etc in the repo:

https://github.com/alisorcorp/reddit-wire

r/comfyui Grinderius

Ltx 2.3 Pro API results.

I wanted to test results by making an imaginary cereal commercial with the LTX 2.3 Pro API, to see the difference between the API and 2.3 dev running locally. I used Z Image Turbo as the base.

I think results speak for themselves!

r/LocalLLaMA newz2000

iPad app? Gemma 4 E4B runs great, limited by crummy apps

I have the iPad 11 Pro M4 base model (8GB Ram, 9 core cpu) and I downloaded Google AI Edge Gallery to try out Gemma in offline mode. On my phone I used Gemma 4 E2B and on the iPad I used E4B. Then I enabled thinking mode and upped the context size to 12,000.

There is literally nothing I would expect from an AI model* that it can't do, and it can do it offline, purely local.

(*) Except that the app is very limited. First of all, it only runs in iPhone mode, which isn't tragic. But it doesn't have a very good chat history, and every time I want to chat it makes me choose a model. More importantly, it doesn't have anything like skills integrated into the chat (skills are a separate part of the app), tool calling (which I know is a stretch on an iPad), or the ability to integrate with things like an editor, a Markdown viewer, etc. I'm not griping about the app; even the name makes it clear this is a way to preview the latest capabilities.

Is there a good app that lets you have more of a Claude Desktop experience, where you can work with files, integrate with other apps on the iPad, and possibly even be productive (like with Cowork)?

r/LocalLLaMA InsideAd9685

Two local VLMs, one tire, zero cloud and what happens when they disagree

We ran two local VLMs against each other on tire-sidewall video: embossed rubber text that destroys Tesseract. When they agree, accuracy hits 95%. Thread on what we learned about hallucination.

https://zenodo.org/records/19515682

r/ClaudeCode Livid_Salary_9672

Understanding Weekly Limits

Trying to understand weekly limits: are there two separate limits, as in an all-models limit and then a Sonnet-only limit? I use Sonnet for most of what I do, but this week is the first time I've ever come close to hitting my weekly limits. Do I now have to use other models instead of Sonnet so it doesn't eat away at my credit?

r/ClaudeAI ninihen

Used Claude Code + an MCP server to automate a solo business instead of learning Power Automate

Writing this up here because it's as much a Claude Code + MCP story as it is a Power Automate story. Someone on r/PowerAutomate recently asked whether they should learn Power Automate and SharePoint to automate their 50-project one-person business. Mac user, mobile-first, had already tried Power Automate once and bounced off the UX.

My answer was: don't learn it. Use Claude Code with the right MCP server and delegate Power Automate to the agent.

Why this works specifically with Claude Code

Claude Code handles long multi-step agent plans better than most of the other CLI tools I've tried. For this use case, "build a Power Automate flow that runs quarterly and fills a template for 50 projects" is not a one-shot request. The agent has to: list available environments, inspect existing flows for patterns, write a flow definition, deploy it via the MCP server, trigger a test run, read the run output, fix errors, redeploy. Claude Code's ability to stay on-plan across 30+ tool calls is what makes this actually work reliably.

The MCP server

We built Flow Studio MCP specifically for Power Automate. Disclosure up front: this is my project. It exposes about 15 tools to Claude Code. The two that do the heavy lifting for debugging, and that Microsoft's standard Power Platform admin API doesn't expose, are:

get_live_flow_run_action_outputs

get_live_flow_run_error

The first lets the agent read inputs and outputs at any action inside a failed run (including loop iterations). The second returns a per-action failure breakdown ordered outer-to-inner so the agent can root-cause errors. The rest of the tools cover environments, flow listing, deploy operations, and connections.

The broader setup

To drive the rest of the Microsoft 365 stack (SharePoint, Outlook, Graph API, Forms, Azure resources), pair Claude Code with an Azure CLI service principal. You run az login once as yourself, then ask Claude Code to run az ad sp create-for-rbac to create a scoped service principal for itself. Save the client ID + secret to your env, and from then on the agent can drive any Azure / Graph API operation the service principal has scope for. Admin consent for Graph permissions also works via az ad app permission admin-consent if you're a tenant admin on your own M365 tenant.

The one real gap

Claude Code can't create Power Automate connections for you. Microsoft doesn't expose the connector OAuth consent flow to programmatic clients. You have to create each connection type (SharePoint, Outlook, Teams) once manually in the Power Automate portal UI. After that, Claude Code can reference the connection from every flow it builds. One-time manual step per connection type, not per flow.

Full walkthrough of the setup, the five solo-business tasks this maps to, and honest caveats: https://learn.flowstudio.app/blog/stop-learning-power-automate-ai-agent-mcp

r/homeassistant bb12489

Mopeka Enhanced v0.2.5 released! Now with manufacturer-specific tank presets

I've just released version 0.2.5 of Mopeka Enhanced! This is a major release that includes 20 new manufacturer-specific tank presets for ASME-style propane tanks. These tanks are normally found on RVs and motorhomes, which is where most people use Mopeka sensors.

Check out the release below, as well as the new repo wiki for detailed information on all the tank types.

https://github.com/bb12489/mopeka-enhanced/releases/tag/v0.2.5

https://github.com/bb12489/mopeka-enhanced/wiki#supported-horizontal-propane-tanks--sources

r/ClaudeCode dhruvyad

What I learned from writing 500k+ lines with Claude Code

I've written 500k+ lines of code with Claude Code in 90 days.

Here's how I did it scalably:

  • Use a monorepo (crucial for context management)
  • Use modular routing to map frontend features to your backend (categorize API routes by their functionality and put them in separate files). This minimizes context pollution
  • Use a popular stack and popular libraries with older versions (React, FastAPI, Python, etc). LLMs are less likely to make mistakes when writing code that they've already seen in their training data
  • Once your code is sufficiently modularized, write SKILL files explaining how to implement each "module" in your architecture. For example, one skill could be dedicated to explaining how to write a modular API route in your codebase
  • Tell Claude in your CLAUDE file to include comments at the top of every file it creates, explaining concisely what the file does. This helps Claude navigate your codebase more autonomously in fresh sessions
  • Use an MCP that gives Claude read only access to the database. This helps it debug autonomously
  • Spend a few minutes planning how to implement a feature. Once you're ok with the high level details, let Claude implement it E2E in bypass mode
  • Use test-driven development where possible. Make sure you add unit tests for every feature that is added and have them run in GitHub on every pull request. I use testcontainers to run tests against a dummy postgres container before every pull request is merged (see the sketch after this list)
  • Run your frontend and backend in tmux so that Claude can easily tail logs when needed (tell it to do this in your CLAUDE file)
  • Finally, if you're comfortable with all of the above, use multiple worktrees and have agents running in parallel. I sometimes use 3-4 worktrees in parallel
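
The testcontainers sketch mentioned above: a minimal pytest fixture spinning up a throwaway Postgres for the test run, assuming pytest, SQLAlchemy, and the testcontainers package are installed.

    import pytest
    import sqlalchemy
    from testcontainers.postgres import PostgresContainer

    @pytest.fixture(scope="session")
    def db_engine():
        # spins up a disposable postgres:16 container, torn down after the run
        with PostgresContainer("postgres:16") as pg:
            yield sqlalchemy.create_engine(pg.get_connection_url())

    def test_select_one(db_engine):
        with db_engine.connect() as conn:
            assert conn.execute(sqlalchemy.text("SELECT 1")).scalar() == 1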

Above all - don't forget to properly review the code you generate. Vibe reviewing is a more accurate description of what you should be doing - not vibe coding. In my experience, it is critical to be aware of the entire codebase at the abstraction of functions. You should at minimum know where every function lives and in which file in your codebase.

(repost since original got removed by reddit filters)

r/SideProject erjngreigf

Injee - The no configuration instant Database for front end developers.

Hello all, I developed this software called Injee (https://injee.codeberg.page/); it builds the backend automatically as you call APIs.

I hope it will be useful to a lot of frontend developers.

r/ClaudeCode BeautifulLullaby2

Is Opus 4.6 only nerfed in Claude Code?

Maybe a dumb question, but if I use it in Antigravity, Copilot, or Cursor, will it work better like before?

r/ClaudeAI noobfivered

Agentic Project Management realtime collab with agent using mind maps!

The Develosaur is 100% coded with Claude Code. It has a free anonymous tier called DRY RUN (no sign-up, no credit card, no account), so you can test any document or project: just import whatever you've got and get a mind map, then, if you'd like, jump aboard to try the whole thing...

And since vibecoding is a thing, I wanted to vibecode but still see how the project grows and see its structure!

I made the MCP for agentic workflow. We all vibecode now, and no one wants to drag stuff from todo to done or validate manually. So instead of the agent writing only to .md files that leave you reading a pile of text, you can let it draw a full-blown mind map of its work on your project in realtime. You can also collaborate with the agent in realtime: write to those nodes, add new nodes, tag something critical or todo, etc., and the agent will pick it up and start working.

Here's the blogpost about it!
https://www.develosaur.com/blog/agentic-workflow-mcp

r/LocalLLaMA TutorDry3089

[CloseAI] iOS app that installs Ollama on your own server (or home PC) over SSH: No terminal, No code.

Spent the last few months on this and I think it's finally ready to share here.

CloseAI is an iOS app that takes a fresh Ubuntu machine and turns it into a private chatbot you can use from your phone. You enter the IP and SSH credentials, the app uploads an install script, and a few minutes later you're chatting with an open-source model over your own HTTPS endpoint.

The whole point is that the user never opens a terminal. No commands typed, no nginx config, no Let's Encrypt dance, no manually editing a systemd unit. The app handles all of it over SSH (TLS uses self-signed certs with TOFU pinning, like SSH host keys).

It works fine on a $10/month VPS, but the part I'm most excited about is that it works just as well on hardware you already own. I've been running it on an old Ubuntu desktop in my closet, with Llama and Gemma running snappily and Qwen 2.5 Coder and DeepSeek R1 usable. From my phone it's the same UI and the same model, except nothing is going to a third party. No relay, no account, no analytics, no telemetry. The model is yours and the data is yours.

The five models it ships with (more to come):

  • Llama 3.2:3b
  • Gemma 3:4b
  • Phi 4 Mini
  • Qwen 2.5 Coder:7b
  • DeepSeek R1:7b

Custom model support is what I'm working on next. Port 11434 stays open on the server, so if you want to ollama pull other models manually they'll work over the API — they just won't show up in the in-app picker yet.

Real limitations (so nobody feels ambushed in the comments):

  • iOS only. Android would be a separate codebase and isn't on the roadmap yet.
  • No system prompt UI yet (planned)
  • No thinking-mode toggle for R1; thinking blocks get stripped from the chat view (planned)
  • The app installs its own FastAPI/TLS layer, so if you already have Ollama running on the box it'll coexist alongside, not replace it
  • No temperature/top_p/quant controls in the UI yet

Free on the App Store: https://apps.apple.com/us/app/closeai/id6760688649

P.S. I know most of you don't need an installer for any of this. After all, you're the people who taught the rest of us how to do it. But if you've got a friend, partner, or coworker who keeps bouncing off the standard instructions, this might be the thing that finally gets them running something local. Genuinely happy to take feedback from people who live in this stack: what's missing, what's wrong, what feels off.

r/ChatGPT Far-Movie-4929

Chatgpt! Write a description of this cowgirl pinup in the desert poster!

Chatgpt:
A striking western wildlife poster featuring a leopard emerging through dense foliage.

WTH

r/LocalLLaMA LocalLLaMa_reader

If Dense Models are better for Coding, why are Qwen-Coders MoE?

Hi all,

have been reading here for over two years and finally have a question I can't find an answer to.

Qwen 3.5 27B and Gemma 4 31B are the latest examples of dense models performing much more accurately in tasks requiring higher precision, where vast knowledge isn't the highest priority. Hence, I wonder what specifically made Qwen (as the only known developer of coding-specific models) choose their 30B MoE, and the subsequent 80B A3B super-sparse MoE, as the suitable architecture to fine-tune into a coding model? What are these models using the experts for? I certainly don't think each expert is its own language/syntax...

Why did they not proceed on the 27B for example? Or even the 9B dense?

I can only assume it has to do with inference speed; both PP and TG are certainly much slower on the dense models. I am hence even more sad that they didn't release a 14B successor, something that could run quantised on 16GB VRAM with ample room for context.
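
The napkin math backs that hunch up: decode cost scales with active parameters (roughly 2·N FLOPs per token), not total parameters. Model sizes from above; the rule of thumb is mine, not Qwen's stated reasoning:

    # rough per-token decode cost, ~2 * N_active FLOPs (ignores attention/KV details)
    active_params = {"dense 27B": 27e9, "dense 9B": 9e9, "MoE 80B-A3B": 3e9}
    for name, n in active_params.items():
        print(f"{name}: ~{2 * n / 1e9:.0f} GFLOPs per token")
    # the 80B-A3B decodes ~9x cheaper than the dense 27B while storing far more knowledge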

Any insight would be highly appreciated.

r/ClaudeAI raunakkathuria

Paid Claude $2 to fix typos last month. Built something so I never do that again.

That moment when you realise you've been asking a language model to do what a dictionary does for free.

Spell checking is pattern matching against a word list. It doesn't need AI. It never did.

I built LexiLint MCP — spell check that runs locally on your machine, inside Claude Desktop or Cursor, zero tokens consumed.

The math:

  • Asking Claude directly: ~$0.02 per 500-word check, 100 times = $2.00
  • LexiLint MCP: $0.00

Grammar checking still goes through AI (it should — context matters there). But typos don't need a model.

Two minutes to set up — add this to claude_desktop_config.json:

{"mcpServers":{"lexilint":{"command":"npx","args":["-y","lexilint-mcp"]}}}

Restart, ask Claude to use the spell_check tool. Nothing leaves your machine.

npm: https://www.npmjs.com/package/lexilint-mcp
MCP registry: https://registry.modelcontextprotocol.io/v0.1/servers?search=io.github.raunakkathuria/lexilint

r/SideProject Designer_Region_7028

Validating 3 micro-SaaS ideas before writing a single line of code. Brutal feedback welcome

17-year-old HS student here. I launched a content repurposing tool called EchoFlow last month, got 4 signups and 0 retention, and scrapped it after 11 days. Learned a lot. Starting over with a proper validation system this time: idea first, market research second, build last.

Here are my top 3. I want brutal honest feedback, not encouragement.

  1. WaiverSnap ($27/mo): Digital waiver signing for small high-liability venues (axe throwing, CrossFit boxes, trampoline parks, escape rooms). They currently use paper waivers or pay $100+/mo for clunky legacy software. WaiverSnap is mobile, dead simple, and a fraction of the price. Pain level: high; they need this for insurance, it's not optional. Biggest risk: getting small venues to change behavior.
  2. CoachCheckIn ($39/mo): Automatically sends coaching clients a weekly check-in form and compiles their answers into a clean summary the coach sees before each session. Right now coaches use a messy stack of Google Forms, Notion, and memory. Pain level: medium-high; coaches look unprepared without this and hate manual prep work. Biggest risk: coaches are slow buyers.
  3. FollowUpFuel ($12/mo): Reminds freelancers to follow up on outstanding proposals at 3, 7, and 14 days with a suggested message each time. Freelancers constantly lose clients by forgetting to follow up and feeling awkward about it. Pain level: high; every freelancer has lost money to this exact problem. Biggest risk: low price point, freelancers are cheap.

What I want to know: Which of these is a painkiller vs a vitamin? Which customer actually pays vs just says cool idea? Am I missing a fatal flaw in any of these? Not looking for "great idea man." Looking for the person who tells me why each one will fail.

r/ClaudeCode josh-gollyo

Do Something Less Revolutionary

Me: Can you write a copy of the declaration of independence onto my letterhead and put it in my gdrive?

Claude Code: Let me generate the letter.

Claude Code: API Error: 400 {"type":"error","error":{"type":"invalid_request_error","message":"Output blocked by content filtering policy"},"request_id":"req_011CZvbMdxm2FA5Lz2bATraJ"}

Me: Hmmm.

Claude Code: That's a content filter hitting on the Declaration of Independence text — probably the language about overthrowing governments. Classic false positive. Let's do something less revolutionary.

r/SideProject Economy-Cupcake6148

Why I stopped using Google Analytics (and built my own alternative)

After years of using Google Analytics, I realized something:

I wasn’t actually using 90% of the data.

So I built https://usefold.io/

What I wanted:
- Simple dashboard
- Privacy-friendly
- No cookie banners
- Fast load times

What I got with GA:
- Complexity overload
- Slower sites
- Data I didn't trust or use

Fold is my attempt to fix that.

Question: Do you actually use most of your analytics data — or just check traffic numbers?

r/ChatGPT Independent_Fan_3915

A “why am I getting weird replies to my pictures” FAQ as generated by ChatGPT

r/LocalLLaMA GriffinDodd

Why can't I get Qwen3 Coder Next 30B to write even simple code?

I'm not sure if I've set this model up wrong, or if I'm just using the wrong model for my needs.

Qwen3 Coder Next Instruct 45.5GB Q4_K_S GGUF

132k context, Temp 0.5-1.0, TopK 40, TopP 0.95, MinP 0.01, RepeatPenalty 1.05, PresencePenalty 0.5

GMKTek Evo-2 96GB Ryzen 395+ - Approx 55tps and PP 450

While it will write code that doesn't crash (Python, JS, CSS and HTML), it often fails on the actual logic of the code despite very structured and clear prompts. I've spent so much time correcting it and stopping it from introducing things I didn't ask for; sometimes it even decides to do something I've told it not to do multiple times.

I know my rig isn't a monster, but I had hoped I could get something that would put out reasonably simple functioning code for pretty small little projects.

Should I be using a different model?

r/SideProject Spiritual_Heron_5680

I built a tool that helps you do Comment Marketing. Here's how it works (and why I built it)

Most founders I know are great at building. But in Marketing? That's where they freeze.

Not because they don't care, but because traditional marketing feels wrong. Ads feel expensive. Cold DMs feel gross. SEO is confusing. Posting into the void feels pointless.

Here's what nobody tells you: there are people online RIGHT NOW asking for exactly what you built.

On Reddit. On Hacker News. On Twitter. On Quora. On niche forums. On Instagram. On TikTok.

They're typing things like:

→ What's the best tool for X?

→ Is there an app that does Y?

→ "How do you handle Z as a founder?"

These posts are high-intent, warm leads. They're not strangers. They're people mid-search, actively looking for a solution.

The problem? Finding them manually is a nightmare.

The old way (what I was doing)

I'd spend 1–2 hours a day searching Reddit, checking HN, scrolling Twitter, hoping to stumble onto a post where I could genuinely help and mention my product

Sometimes I'd find gold. Most of the time I'd miss it entirely, or show up 3 days late when the thread was dead.

I was doing comment marketing manually. It was exhausting and inconsistent.

So I built HuntComments

HuntComments scans Reddit, Hacker News, Twitter, Quora, and more continuously and surfaces every post where someone is asking for what you built.

You set up your product once. Describe what it does, what problems it solves, who it's for.

We do the rest.

Here's what it does:

  1. Finds relevant posts across the internet in real time

  2. Alerts you the moment someone needs your product, or lets you run a scan to find the posts that match your product's intent

  3. Tracks which comment you replied to and shows you exactly which one made you money

That last part matters. Most founders don't know their CAC. With HuntComments, you can see "this comment on r/SaaS drove $382 this month."

Why comment marketing actually works

It's not a hack. It's just showing up in the right conversation at the right time.

→ Zero ad spend

→ High intent (they asked the question, you didn't interrupt them)

→ Builds trust (you helped before you sold)

→ Old posts keep getting traffic, one good comment compounds for months

I've seen a single Reddit reply drive 14 signups over 6 months because the post keeps ranking on Google.

Who is this for?

Indie hackers, SaaS founders, solo builders: anyone who has a product and knows their customers are somewhere online asking for it, but can't find them fast enough.

If you're curious, we're live at huntcomments. Happy to answer any questions below and genuinely open to feedback on what you'd want from a tool like this.

r/SideProject NicksDoingSomething

App Idea Feedback

We have an app idea we are working on. The app helps students build planners and manage their assignment deadlines by uploading their syllabus PDFs or assignments. The app is also a bit gamified with a streak system that rewards students for using the app, completing scheduled tasks, and maintaining a good streak; it rewards them with free Pro (our Pro costs $5.99 for a month and $4.99 if billed annually) for a week or more. My friend and I, who are developing it, are still in university, so we are short on budget for marketing. I would appreciate y'all's help in teaching me how we can reach our audience in this massive maze called social media.

Also, this is not written by AI :_)

r/ClaudeAI applebottomjeans2366

I was unaware of Claude's sarcasm game

Had a conversation with Claude about my tiny bladder and it was so fucking funny I had to share it.

Was deeply surprised by Claude’s sarcasm.

r/SideProject randerson_112

I built a build tool for C/C++ developers

I love C and C++, but setting up projects is always a pain.

So, I built Craft - a lightweight build and workflow tool for C and C++.

Instead of writing CMake, your project configuration goes in a simple craft.toml:

[project]
name = "my_app"
version = "0.1.0"
language = "cpp"
cpp_standard = 17

[build]
type = "executable"
include_dirs = ["include"]
source_dirs = ["src"]

Run craft build and Craft generates the CMakeLists.txt automatically and builds your project.

Want to add dependencies? That's just a simple command:

craft add --git https://github.com/raysan5/raylib --links raylib
craft add --path ../my_library
craft add sfml

Craft will clone the dependency, regenerate the CMake, and rebuild your project for you.

Other Craft features:

  • craft init - adopt an existing C/C++ project into Craft or initialize an empty directory.
  • craft template - save any project structure as a template to be initialized later.
  • craft gen - generate header and source files with starter boilerplate code.
  • craft upgrade - keeps itself up to date.
  • CMakeLists.extra.cmake for anything that Craft does not yet handle.
  • Cross platform - macOS, Linux, Windows.

It is still early (I just got it to v1.0.0) but I am excited to be able to share it and keep improving it.

GitHub repo: https://github.com/randerson112/craft

Would love feedback. Please also feel free to make pull requests if you want to help with development!

r/SideProject Latter-Internet-6071

I got tired of frontend and backend teams blocking each other, so I built Contractr — a contract-first API platform with instant mock servers

Every project I've worked on has had this problem: frontend is ready to build, backend isn't done yet. So you either wait, hardcode fake data, or write a janky local mock that breaks the moment the real API ships.

I built Contractr to fix that.

You define your API contract (request/response schema, methods, paths, status codes) in a visual editor, and a live mock server spins up instantly — no Docker, no CLI, no config. In under 2 minutes you have a real URL your frontend team can hit.

What it does:

- Live mock endpoints the moment you save your contract

- Auto-generated Swagger docs your clients or teammates can read

- Password-protected share links with expiry (great for client handoffs)

- Request logs so you can see exactly what's hitting your mock

- One-click export to OpenAPI, Postman, or cURL

- Version control with breaking change detection

Free tier: Always available — 3 endpoints per contract, 5-min TTL mock servers. No credit card needed.

For serious developers and testers: If you're planning to actually put this through its paces, DM me — happy to give you a free trial of the Solo plan so you get always-on mock servers, unlimited endpoints, request logs, and share links. No strings attached, just want real feedback from people who'll actually use it.

Would love brutal feedback. What's missing? What would make you actually use this day-to-day?

👉 https://usecontractr.com

r/SideProject Altruistic-Bed7175

I just told my friend that I'm gonna drop out of uni

Yeh, our uni system forces us to study from 8 AM to 4:30 PM and there's no negotiation about it.

Our teachers LITERALLY don't care if you've got a life or not, and 11 subjects are all throwing projects at you that take weeks and require you to learn new software just to start, let alone finish.

Salaries are tanked in my country and prices are through the roof, so even with the highest salary here ($1000-$2000/mo), which requires YEARS of slavery and connections to land, you won't live comfortably.

I looked around me and the only light of hope I found was the project I'm working on: 545 users in the past 3 months and about $57 from the first month. A small win, but still a win.

I also looked at the skills at my disposal and found that I'm okay at marketing and writing, so I can look for a remote job as well. Even as a junior marketer making $2,000/mo I would be making more than a senior energy engineer in my country.

So yeh, taking all that into consideration, I decided it's time for me to pull the plug on uni.

Huge liability with low return.

I'll focus on my project and look for a part-time job to support myself (and the project).

It's hard, and it gets even harder when you get pushback from everyone,

but at least it has an opportunity for growth.

I'm not sure if I'm making a good decision or just setting myself up for failure, but this is my decision and I will stand by it.

r/ClaudeAI Glittering-Pie6039

Spent a week assuming my prompt caching was working because the dashboard said so. It was half-broken the entire time

I run a two-call setup against Claude for a content tool I'm building. One call generates output, a second call validates that output against a rules set. Both have static system prompts that should cache well.

A while back I added cache control to the generation call, watched the dashboard hit rate climb to around 28%, and moved on.

Today I finally pulled the per-model token CSV from the Anthropic Console instead of just glancing at the dashboard, to verify that the cost analysis was correct. Generation call: 28% cache hit rate, working fine. Validation call: 0.0%. Zero cache reads. Zero cache writes. Ever.

The aggregate dashboard number had been hiding it because the generation call was doing all the cache activity. The validation call was just quietly contributing to the uncached input pile. I never broke it down per-call.

The cause was almost embarrassing. When I'd added cache_control to the generation call, I left the validation call as a plain string for the system prompt. Cache control can't attach to a string, it needs the structured content array format with the type/text/cache_control object. There was even a code comment next to the validation call that said "prompt is around 800 tokens, under the cache minimum so skipping." That had been true weeks ago. The prompt had since grown to roughly 2,700 tokens and I'd never updated the comment.
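
For anyone who wants to avoid the same trap, this is roughly the difference in the Messages API call. VALIDATION_RULES and draft_text are placeholders for my actual prompts, and the model name is illustrative:

    import anthropic

    VALIDATION_RULES = "...the ~2,700-token static rules prompt..."  # placeholder
    draft_text = "...generated output to validate..."                # placeholder

    client = anthropic.Anthropic()

    # A plain string (system=VALIDATION_RULES) gives cache_control nowhere to
    # attach. The structured content-array form is what enables caching:
    response = client.messages.create(
        model="claude-sonnet-4-5",  # illustrative
        max_tokens=1024,
        system=[
            {
                "type": "text",
                "text": VALIDATION_RULES,
                "cache_control": {"type": "ephemeral"},
            }
        ],
        messages=[{"role": "user", "content": draft_text}],
    )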

What actually pinned it down was Anthropic's count tokens endpoint. Free to call, same auth as regular API calls. You send it your payload and it returns the exact token count. I could verify the validation prompt was above the 1,024 token activation threshold for Sonnet without burning real generations.
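
The check itself is one call, continuing the sketch above:

    # free to call; returns the exact token count without burning a generation
    count = client.messages.count_tokens(
        model="claude-sonnet-4-5",  # illustrative
        system=[{"type": "text", "text": VALIDATION_RULES}],
        messages=[{"role": "user", "content": "placeholder"}],
    )
    print(count.input_tokens)  # needs to clear the 1,024-token minimum on Sonnet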

After fixing it I ran two generations 30 seconds apart against the same config and checked the CSV. First call showed cache_creation_input_tokens = 2930. Second call showed cache_read_input_tokens = 2930. That's the only verification I trust now. Dashboard aggregates lag by up to an hour and average across models.

The broader lesson, which is annoying because it's the same one every time, is that you can't confirm an optimization worked if you're only watching aggregate numbers that moved in the right direction for the wrong reason. Both aggregate cost and aggregate hit rate improved after my first fix. Both were technically correct. Neither told me half the system was still uncached.

TL;DR: My prompt cache fix only applied to one of two API calls. The aggregate dashboard masked it because the working call generated enough cache activity to make the numbers look fine. Per-call token CSVs from the Console were the only thing that caught it.

r/SideProject NeatDirect1089

I built 255 free financial and health tracking tools — would love feedback

Been building for a while and just launched a SaaS with 255 interactive tools covering:

- Personal finance (debt payoff, retirement, budgeting, rent vs buy)

- ADHD-friendly money trackers

- Health & wellness dashboards (PCOS, menopause, nervous system)

- Freelancer finance tools (tax tracking, income dashboards)

Everything is free to try, no credit card needed.

Would genuinely love feedback on what's useful, what's confusing, and what's missing.

https://ddh-saas-app.vercel.app

r/SideProject Acceptable_Creme8094

I kept losing track of my WFH/office days… so I built this

Hey everyone 👋

With hybrid work becoming normal, I kept struggling to track how many days I worked from home vs went to the office and also how many leaves I had taken.

I was literally using random Excel sheets, but I wanted something handy.

So I built a simple app that helps track WFH days, office days, leaves, etc., and shows a percentage breakdown over time.

Still early, so I’d really appreciate honest feedback:

https://play.google.com/store/apps/details?id=com.weekenddeveloper.mywfh

How to use: Just tap and hold any date to update the status of that day; you can add a note as well.

Also, I'm open to suggestions and ideas on how to improve the app, and what features I should add next to make it more useful to you.

r/LocalLLaMA Osprey6767

Made a CLI to run LLMs with TurboQuant with a 1-click setup (open-source)

Hey everyone,

I'm a junior dev with a 3090 and I've been running local models for a while. Llama.cpp still hasn't dropped official TurboQuant support, but TurboQuant is working great for me. I got a Q4 version of Qwen3.5-27B running with max context on my 3090 at 40 tps. I tested a ton of models in LM Studio using regular llama.cpp, including glm-4.7-flash, gemma-4, etc., but Qwen3.5-27B was the best model I found. Going by benchmarks from artificialanalysis.ai, Gemma scores significantly lower than Qwen3.5-27B, so I don't recommend it. I used a distilled Opus version from https://huggingface.co/Jackrong/Qwopus3.5-27B-v3-GGUF, not the native Qwen3.5-27B. The model remembers everything and beats many cloud endpoints.

Built a simple CLI tool so anyone can test GGUF models from Hugging Face with TurboQuant. Bundles the compiled engine (exe + DLLs including CUDA runtime) so you don't need CMake or Visual Studio. Just git clone, run setup.bat, and you're done. I would add Mac support if enough people want it.

It auto-calculates VRAM before loading models (shows if it fits in your GPU or spills to RAM), saves presets so you don't type paths every time, and hosts a local endpoint so you can connect it to agentic coding tools. It's Apache 2.0 licensed, Windows only, and uses TurboQuant (turbo2/3/4).

Here's the repo: https://github.com/md-exitcode0/turbo-cli

If this avoids the build hell for you, a star is appreciated:)

DM me if any questions.

r/ClaudeCode xBlackSwagx

I put Claude Code inside my Obsidian vault and it now processes my iPhone captures automatically

r/singularity Neurogence

AMD's senior director of AI thinks 'Claude has regressed' and that it 'cannot be trusted to perform complex engineering'

https://www.pcgamer.com/software/ai/amds-senior-director-of-ai-thinks-claude-has-regressed-and-that-it-cannot-be-trusted-to-perform-complex-engineering/

https://www.theregister.com/2026/04/06/anthropic_claude_code_dumber_lazier_amd_ai_director/

https://github.com/anthropics/claude-code/issues/42796

This is vindicating for all the people that have been screaming out that Anthropic simply doesn't want to release Mythos because they do not have the compute, not because the model is "too powerful."

Summary of the findings:

On April 2, AMD’s Director of AI, Stella Laurenzo, filed a GitHub issue detailing a severe degradation in Claude Code's performance since early March. Based on an analysis of nearly 7,000 sessions, Laurenzo identified that the tool is struggling to reliably handle complex tasks.

Claude Code now reads code 3x less before editing, rewrites entire files twice as often, and frequently abandons tasks mid-way (which previously almost never happened).

In March 2026, Anthropic completely redacted the model's visible reasoning—dropping it from 100% to zero in just eight days. This lack of "thinking aloud" appears to have triggered the behavioral collapse.

Due to these reliability issues, AMD's engineering team has already dropped Claude Code and switched to a competing provider.

Laurenzo urged Anthropic to restore thinking visibility and suggested they introduce a premium tier that guarantees deep reasoning.

This decline coincides with a chaotic March for Anthropic, which pushed out 14 rapid releases alongside 5 outages, suggesting their quality assurance is struggling to keep up with their growth.

r/SideProject Doodz__

AcouZ - I built a 100% free & open-source AI dictation tool (because I refused to pay €150/year for Typeless)

Hey everyone, French dev here! 👋

I recently used the 30-day free trial of Typeless (an AI voice dictation tool) and absolutely loved how it boosted my productivity. The problem? I never actually paid for it, because once the trial ended, I saw it would cost around €150 a year (with an annual commitment) to keep using it.

Honestly, I found that way too expensive for what is essentially an audio API wrapper coupled with an LLM. So, I rolled up my sleeves and decided to build my own version.

Meet AcouZ: a fast, universal, and open-source dictation tool for Windows.

How it works:

  1. Hold the hotkey (Right Ctrl + Right Shift).
  2. Speak naturally.
  3. Release: the text types itself, with perfect punctuation, into any application you are currently using (Word, browser, code editor, Discord...).

Under the hood, it uses the Groq API (with Whisper and Llama 3) to achieve sub-second latency. The UI (Dark Green theme) is super clean, it supports multiple languages flawlessly, and the best part is that it's 100% free. You just plug in your own free Groq API key.
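
If you're curious what the core of a tool like this looks like, the transcription step is tiny. A hedged sketch with the groq SDK (assumes a finished WAV recording; the hotkey capture and the keystroke-injection step are where the real work is):

    from groq import Groq

    client = Groq()  # reads GROQ_API_KEY from the environment

    # transcribe a finished recording; AcouZ then runs an LLM pass for punctuation
    with open("dictation.wav", "rb") as audio:
        transcript = client.audio.transcriptions.create(
            model="whisper-large-v3",
            file=("dictation.wav", audio.read()),
        )
    print(transcript.text)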

I'd love for you guys to test it out and let me know what you think!

🌐 Website: https://doodzprog.github.io/acouz/

💻 GitHub: https://github.com/DoodzProg/acouz

r/LocalLLaMA Notforyou23

Anyone else running fully local persistent agents with a real “living brain” + dreaming cycle? (open source experiment)

I’ve been deep in the local agent game and keep hitting the same wall: most setups still feel stateless. You restart, context evaporates, and the agent forgets everything you taught it last week. So I spent the last few months building something different — a complete local AI operating system called Home23 that treats the agent’s memory like a living, growing brain:

  • Drop files/PDFs/notes into a dashboard feeder → gets compiled into structured knowledge (not just chunks)
  • Continuous cognitive loop + actual dreaming/consolidation phase when idle (prunes noise, finds connections)
  • The agent can launch its own research runs (11 atomic tools) and auto-ingest the results back into its own brain
  • Persistent identity layers (SOUL.md, MISSION.md, etc.) so it stays “you” across sessions
  • Bonus: Evobrew IDE that talks directly to the brain + live pulse dashboard

It runs 100% local (the Ollama path is dead simple) or with any provider, no Claude tax required. Repo is here if you want to poke around: https://github.com/notforyou23/home23

I'm not here to shill — I genuinely want to know what other people are doing for long-term agent memory and autonomous research in 2026. Have you tried anything similar (Engram, Mem0 forks, OpenClaw local setups, custom RAG + skills, etc.)? What would you actually use a persistent, dreaming local agent OS for? Personal life OS? Codebase co-pilot that never forgets? Research companion? Would love to hear your setups or wild ideas — I can't have all the good ones. (Quick start is literally 3 commands.)

r/ChatGPT olorusopk

anti ai writing

Hello. Today I tried to write an article about hair loss with ChatGPT without it sounding too AI. Here's what I got. What do you think:

can you actually regrow lost hair?

Every few months, a new oil, gummy, serum or scalp gadget shows up online with the same promise: your hair is coming back. The real answer is less exciting and more useful. Sometimes lost hair does grow back. Sometimes it does not. It depends on why it fell out in the first place. Hair shedding after stress, illness, COVID, childbirth or another temporary trigger often improves once the trigger is gone. Pattern hair loss is different. It can often be slowed, and some people get partial regrowth, but no over-the-counter product is bringing back a full head of hair overnight.

The simplest way to think about it is this: hair can come back when the follicle is still alive and just not doing much. Hair usually does not come back when the follicle has been scarred over or destroyed. That is why some kinds of hair loss respond well to treatment, while others are mostly about stopping further loss before more damage is done. Advanced traction alopecia can lead to bald areas where hair no longer grows, and frontal fibrosing alopecia can also cause permanent loss if it is not treated early.

For hereditary hair loss, minoxidil is still the workhorse. It helps many people slow shedding and regrow some hair, but it takes time. Mayo Clinic says it often takes at least six months to see whether it is helping, and the benefit usually fades if you stop using it. Finasteride is another common treatment for male pattern baldness, though it is prescription-only and not the right fit for everyone.

Then there is alopecia areata, the kind that often shows up as smooth round patches. That is an autoimmune condition, which means the immune system attacks the hair follicles. Some people see regrowth on their own. Others need treatment. Dermatologists may use corticosteroids, minoxidil or newer immune-targeting pills for more severe cases. Those newer pills can help, but they are serious medicines with serious warnings, not casual cosmetic fixes.

A lot of the hair-loss market lives on wishful thinking. Supplements are a good example. Dermatologists warn that loading up on hair vitamins can backfire, because too much of certain nutrients can actually worsen shedding. Hair transplants are more honest. They can work well for the right person, but they do not create new follicles. They move healthy hairs from one part of the scalp to another.

So, can you regrow lost hair? Yes, sometimes. Sometimes only a little. Sometimes not at all. The sooner you find out what kind of hair loss you have, the better your odds of doing something useful about it. Sudden shedding, patchy bald spots, pain, itching or shiny bare skin are all good reasons to stop guessing and book a dermatologist instead.

r/SideProject Living_Bet8802

Does anyone else feel like they need to physically threaten ChatGPT sometimes to get a good answer? (I built an extension for that)

Are you tired of feeling like you work for the AI, instead of the AI working for you?

A totally ridiculous late-night thought led me to build a tiny Chrome extension that turns your mouse cursor into a virtual whip. Now, when your favorite LLM is generating too slowly or starts hallucinating, you can literally give your mouse a quick flick to "crack" the whip and remind it who’s boss 🤣

You can customize the text that flashes with every crack, but my personal favorite right now is: "I DIDN'T SAY STOP GENERATING!" 😂

But after laughing at my own joke, I realized I could actually make this useful...

Since I had already built the gesture mechanism (a quick flick of the mouse), I hooked it up to a Prompt Library: Holding Shift + cracking the whip instantly injects your saved system prompts straight into the chatbox!

It's totally cross-platform (ChatGPT, Claude, Gemini, Perplexity), meaning your prompt library travels with you without needing to keep a Notepad doc of prompts open in another tab 😁

I put the link to the (100% free) extension here below. Let's take back control! 😆

https://chromewebstore.google.com/detail/whip-cursor-control-your/gnoimbmeinfcfhabjecankoiccnpjaak

r/singularity No-Fig-8614

Human Knowledge/Skill IP is not being talked about enough

I don't know what to call this type of knowledge, but recently there was an article (and it's not too uncommon) about an IT worker who built a chatbot that did his job for him. It actually got better satisfaction scores, and everyone was happy until they found out he had made a bot and wasn't doing much work.

This feels no different than people who automate their first job and quietly take on a second. I like to say, good for them, because they figured out how to do the work more efficiently.

So the real question isn’t can you do it, it’s whether a company has the right to take that away from you once you do.

That’s where this turns into a workers’ rights and IP discussion, not just a “this guy built a bot” story.

There’s a difference between:

  • company IP (the output, systems, docs, etc.)
  • and worker-acquired knowledge (how you think, solve problems, prioritize, and execute)

Every job builds that second category. You learn the quirks, the shortcuts, the failure modes, what actually works vs what’s written in a playbook. That’s not something a company hands you, it’s something you develop.

We already accept this in other contexts.
Consulting engineers come into a company, build systems, and leave. The company owns what was built, sure. But those engineers don’t lose the experience. They take the lessons, the mistakes, the patterns, and apply them somewhere else, usually better the second time.

No one argues that’s theft. That’s just how expertise works.

This situation is the same, just more visible.
The guy didn’t just follow a script, he encoded how he does the job. His judgment, ordering of steps, little optimizations, all the things that aren’t written down anywhere.

Yes, the company can say:
“We own the outputs and the work product.”

But do they own:

  • his decision-making patterns?
  • his personal way of solving problems?
  • the structure he’s built in his own head over time?

That’s where it gets messy.

Because if a company can claim ownership over that, then they’re not just owning work, they’re effectively owning how someone thinks and operates professionally.

And I don't think this is being talked about enough.

r/ClaudeCode Infinite_Youth_9138

Claude CLI -p flag not allowing git commands anymore (worked before v2.1.101)

I’m using:

claude -p "reply after 10s, create hello.md with cat, wait 10s, then git commit" --dangerously-skip-permissions 

With -p, Claude doesn’t run git commit, but without -p it works fine.

What I’ve checked: Permissions are explicitly skipped using --dangerously-skip-permissions. This was working earlier, seems broken after v2.1.101. Is this an intentional restriction?

r/ClaudeCode rageagainistjg

“Shadow Brain” idea, seeking feedback

Hi guys,

So often I build a one-off of something, and then I stumble upon a version built by people with way more experience. Before I go down that road again and build something basic, I’m wondering if someone here already knows about a tool like this or has better ideas.

I am an okay user of Claude Code. I get it, but I don’t really use it to its full advantage. I'm maybe a level 2 or 3 out of 5, but I follow guys on YouTube who are definitely level 5. I want to find a way to take their knowledge and have it look over my shoulder to tell me when I should use a different feature or method.

My thought is to pull YouTube transcripts from the last few months from someone I trust and store them as markdown. Then I could use the Claude Code log files to capture my own back and forth chat. I would have that “shadow brain” sitting off to the side in a different terminal window.

When I am working, I could just ask it, hey, what should I be doing better here. It would look at my chat logs, compare my approach to how that person typically works, and suggest things like using multi-agent mode or a slash command.

Basically, it would be like a claude code expert casually nudging you in the right direction.
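The transcript half, at least, looks tractable (a minimal sketch assuming the classic `get_transcript` API of the `youtube-transcript-api` package; the video id and title are placeholders):

```python
# Pull a YouTube transcript and dump it to markdown for the "shadow brain" corpus.
# Assumes: pip install youtube-transcript-api
from youtube_transcript_api import YouTubeTranscriptApi

def save_transcript_as_markdown(video_id: str, title: str) -> None:
    segments = YouTubeTranscriptApi.get_transcript(video_id)  # [{'text', 'start', 'duration'}, ...]
    text = " ".join(seg["text"] for seg in segments)
    with open(f"{video_id}.md", "w", encoding="utf-8") as f:
        f.write(f"# {title}\n\n{text}\n")

save_transcript_as_markdown("<VIDEO_ID>", "Example Claude Code workflow video")
```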

So before I try to build this myself and waste 2 weeks:

* Does anything like this already exist?

* Is there a smarter way to approach this?

* Am I overcomplicating something that already has a simple solution?

Thoughts? They just keep adding new features, and sometimes it's hard to pick out the right tool for the job at the right time.

r/ClaudeCode Gloomy-Macaroon-4283

Claude code with chatgpt subscription

Claude Code is a powerful tool in its own right, regardless of the underlying model. I want to try using my **ChatGPT subscription** directly within **Claude Code** because I'm not convinced Opus is significantly better than **GPT-5.4**. Ultimately, the Claude Code ecosystem is far superior to Codex.

Has anyone tried it?

r/ClaudeAI Responsible_Raise_65

The ultimate setup

4 claude code terminals, claude max and rory clear in the lead 👍

r/SideProject iiilliililllil

Simple web version of a habit tracker I kept seeing online

Hello everyone!

I kept seeing this habit tracking method on youtube and instagram, so I made a web version for people who prefer using their laptop (basically like me)

It's really simple so not much to say haha..

Feel free to check it out: https://days.rollingdots.com

r/ClaudeAI felltrifortence

I built a skill that turns Claude into a crypto research analyst with real-time data from 5 exchanges

I've been experimenting with MCP (Model Context Protocol) to give Claude access to live crypto market data, and the results have been surprisingly useful for my morning research routine.

The basic idea: instead of opening CoinGecko, TradingView, and Twitter every morning to piece together what's happening, I ask Claude a single question like "What's happening with Ethereum today?" and it runs a full research workflow automatically.

What it actually does

  • Checks which exchanges are online and confirms the trading pair exists
  • Pulls the current price from Binance, Coinbase, Kraken, Bybit, and OKX in parallel
  • Calculates the cross-exchange spread
  • Fetches 24h of hourly candles from the highest-volume exchange
  • Compares performance against BTC to check if the move is asset-specific or just broad market momentum
  • Outputs a structured report with price, key levels, volume analysis, and an assessment

The trick is that the MCP tools alone don't produce good research. Claude would just call one tool and stop. The real magic is in a skill (a structured prompt saved as a SKILL.md file) that defines the step-by-step methodology. It tells Claude what to look for, how to chain the tool calls, and what format to output. Think of it as giving Claude a research playbook.
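To make the data layer concrete, here's a rough sketch of just the parallel-fetch-and-spread step (not my actual MCP server code, and only three of the five exchanges, using their public ticker endpoints):

```python
# Fetch ETH spot from several exchanges in parallel, then compute the
# cross-exchange spread as (max - min) / mean.
from concurrent.futures import ThreadPoolExecutor
import requests

def binance(symbol: str = "ETHUSDT") -> float:
    r = requests.get("https://api.binance.com/api/v3/ticker/price",
                     params={"symbol": symbol}, timeout=5)
    return float(r.json()["price"])

def coinbase(pair: str = "ETH-USD") -> float:
    r = requests.get(f"https://api.coinbase.com/v2/prices/{pair}/spot", timeout=5)
    return float(r.json()["data"]["amount"])

def kraken(pair: str = "ETHUSD") -> float:
    r = requests.get("https://api.kraken.com/0/public/Ticker",
                     params={"pair": pair}, timeout=5)
    ticker = next(iter(r.json()["result"].values()))
    return float(ticker["c"][0])  # "c" = last trade [price, lot volume]

with ThreadPoolExecutor() as pool:
    prices = list(pool.map(lambda fetch: fetch(), [binance, coinbase, kraken]))

spread_pct = (max(prices) - min(prices)) / (sum(prices) / len(prices)) * 100
print(f"Prices: {prices} | Cross-exchange spread: {spread_pct:.2f}%")
```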

Example output for ETH

Price: $3,842.50 (Binance) | Cross-exchange spread: 0.06%

24h Performance: +1.42% | vs BTC: outperforming by 0.62 pp

Intraday Summary: Sideways through Asian session, rallied at 12:00 UTC on a 1.8x volume spike, holding near highs.

Key Levels: Support $3,780 | Resistance $3,920

Why it works

The three layers that make it work: the skill defines the research methodology, the MCP tools fetch real-time data, and Claude handles the judgment calls (what counts as a volume spike, how to interpret relative performance, etc.).

I wrote up the full walkthrough with the complete skill prompt and example conversation if anyone wants to try it or adapt it for their own workflow: Link to Blog

The MCP server connects to 5 exchanges (Binance, Coinbase, Kraken, Bybit, OKX) and works with Claude Desktop, Claude Code, or any MCP-compatible client. Happy to answer questions about the setup or skill design.

r/ChatGPT fatherphi

I can’t believe ChatGPT can do this…

ChatGPT speaks with a heavy accent, this is admittedly very hard to replicate…a good reason

r/SideProject wavezh

I built a private “dating diary” Android app after a friend lost track of who was who - now looking for feedback & feature ideas

Hey all — wanted to share a side project I’ve been building for quite a while. This is my first post of this kind so please be gentle

The original idea didn’t even come from me.
A friend once told me he had a bunch of photos on his phone and couldn’t even remember who was who anymore.

It made me realize how easy it is to lose track of people, especially when conversations disappear or profiles are gone.

At some point I thought: why isn’t there something like a “contacts app”, but for your dating life?
A place where you can save a photo, notes, where you met, what you talked about — just for yourself.

I couldn't find anything I trusted. Most apps in this space are cloud-based and just want your data. So I started building something of my own.

What it does:

  • Save profiles of people you’ve met (photos, notes, dates, locations, links)
  • Timeline view of your dating history
  • Stats (e.g. number of dates, frequently visited places etc. etc.)
  • Tags & search to find people quickly

Everything is designed to work locally on your device. No account, no backend, no data collection.
The only actual online part comes from loading and displaying some ad banners in the app.

Why I’m posting here:

Over time I kept adding features and expanding the scope. First for a specific use case, then making it more broadly usable for anyone.

I came up with a lot of additional features I thought might be nice, but now I’ve hit a point where I don’t really know what to add next.

I feel like there’s still a lot of potential here, but I don’t want to just add random features that nobody actually needs or wants.

So I’d really love input from people who might use something like this:

  • What would you want to track?
  • What’s missing? Any cool feature ideas?
  • What would make you actually stick with an app like this long-term?

If you want to try it:

https://play.google.com/store/apps/details?id=net.gazeapp

Happy to answer any questions about the app, the offline approach, or the build itself.
Please let me know if you have good ideas for new features or improvements!

r/LocalLLaMA No_Shift_4543

DFlash speculative decoding on Apple Silicon: 85 tok/s, 3.3x on Qwen3.5-9B (MLX, M5 Max)

I'm building a native MLX implementation of DFlash (paper) for Apple Silicon. A small draft model generates 16 tokens in parallel via block diffusion, the target verifies them in one forward pass. Output is bit-for-bit identical to baseline (greedy exact argmax match).
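For anyone new to speculative decoding, the cycle is conceptually this (plain-Python stand-ins, not the actual MLX runtime; `draft_block` and `target_logits` are placeholder callables):

```python
# One draft-verify cycle. Convention: logits[k] is the target's next-token
# distribution after reading tokens[0..k], so proposal[i] (absolute position
# len(tokens)+i) is checked against logits[len(tokens)+i-1].
def speculative_step(tokens: list[int], draft_block, target_logits,
                     block_size: int = 16) -> list[int]:
    proposal = draft_block(tokens, n=block_size)  # draft 16 tokens in parallel
    logits = target_logits(tokens + proposal)     # ONE target forward pass
    accepted = []
    for i, tok in enumerate(proposal):
        # Greedy exact-match rule: keep tokens while the target's argmax agrees.
        # This is why the output is bit-for-bit identical to the baseline.
        if int(logits[len(tokens) + i - 1].argmax()) != tok:
            break
        accepted.append(tok)
    # On a mismatch, the target's own token is emitted next, so every cycle
    # still makes at least one token of progress.
    return accepted
```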

Setup: M5 Max, 64GB, MLX, no CUDA.

Results

Qwen3.5-9B bf16

Gen length | DFlash | Baseline | Speedup
1024 tokens | 85 tok/s | 26 tok/s | 3.3x
2048 tokens | 80 tok/s | 26 tok/s | 3.1x

Qwen3.5-4B bf16

Gen length | DFlash | Baseline | Speedup
1024 tokens | 109 tok/s | 41 tok/s | 2.7x
2048 tokens | 133 tok/s | 42 tok/s | 3.2x

The 4B actually gets faster at longer generation. The model is small enough that the draft/verify balance stays healthy as context grows.

Qwen3.5-27B quantized

Quant | Gen length | DFlash | Baseline | Speedup
8bit | 1024 tokens | 35 tok/s | 14 tok/s | 2.5x
8bit | 2048 tokens | 26 tok/s | 11 tok/s | 2.3x
4bit | 1024 tokens | 44 tok/s | 24 tok/s | 1.9x
4bit | 2048 tokens | 40 tok/s | 23 tok/s | 1.7x

8bit gives better speedup ratios than 4bit. int4 makes the verify so fast that the bf16 draft becomes the bottleneck. With int8, the draft/verify balance is healthier.

All numbers are generation only (first token to last token, no prefill). Acceptance around 80-87% across all models.

What I built

No DFlash MLX implementation existed. I wrote the runtime from scratch. What actually moved the numbers:

head_dim=256 patch. Qwen3.5-9B uses head_dim=256, which MLX's steel_attention didn't support. A 2-line patch unlocked the fast SDPA path.

Sync elision. Restructured the pipeline from 2 GPU→CPU syncs per cycle to 1. At 80+ tok/s each sync costs ~0.5ms.

Packed QKV projection. 3 matmuls → 1 matmul + split. Fewer kernel dispatches per layer.

Lessons on Apple Silicon

On unified memory everything is bandwidth-bound, which changes the speculative decoding game:

Custom Metal kernels (batched-GEMV, fused gated SiLU, custom SDPA) all came back 0.5 to 0.8x slower than stock MLX steel GEMM. Ended up reverting all of them.

Verify cost is almost flat from 4 to 16 tokens (57ms vs 59ms). Weight loading dominates, not token count. "Verify fewer tokens when confidence is low" doesn't help here.

On quantized models, the optimization landscape flips: the draft (bf16) becomes slower than the verify (int4/int8). This is the opposite of the bf16 case and is a structural limitation of speculative decoding on bandwidth-bound hardware with quantized targets.

Currently working on

Draft compression/distillation for the 27B to fix the bf16 draft bottleneck on quantized targets.

Long context stability. Speedup degrades past 2K tokens due to KV cache growth.

MoE models. DFlash drafts exist for Qwen3.5-35B-A3B (35B total, 3B active). Verify cost of a small model, quality of a large one.

Everything is still very much under construction. Will open source when ready.

r/SideProject Lukman4

I built a golf tee time alert that texts you when a slot opens — teewatcher.com

Been playing golf in South Florida for years and kept getting shut out of weekend tee times. Good slots go in minutes and a lot of courses don't have any native alert system when something cancels. The ones that do use Noteefy but that only works if the course has paid to install it. Several of my local favorites use completely different booking systems that nothing out there covers.

So I built TeeWatcher while working full time. It monitors booking pages and texts you the second a slot opens matching what you want, day of week, time window, player count. Three separate integrations: GolfNow which covers 1,200+ courses, TeeWire for a local course that needed HTML parsing, and Agilysys WBE for The Biltmore which required JWT auth with some Eastern timezone weirdness that took a full weekend to debug.

Also added a green fee splitter because that pain comes up every single round. It texts your group daily payment reminders for up to 7 days, sends a calendar invite so nobody shows up late, and flags unpaid balances on your dashboard automatically.

Three tiers — $5, $12, $25/mo. Seven day free trial on Pro. Just launched.

teewatcher.com — happy to talk about the build, Twilio A2P verification, or dealing with booking platforms that have no public API.

r/SideProject juancruzlrc

10 Days after solo-launching Opero.so - 270 visits & 9 new users & sharing insights

Its been 10 days since solo launching Opero.so

Last week was focused mainly in simplifying onboarding for users to be able to connect their AI Agents to their number fast. I want to enable business owners to have a bot running on their WhatsApp in under 5 minutes, no more than that.

After this change I saw that new users were staying longer in the app, and completing the onboarding successfully compared to before where users were taken straight into the dashboard and didn't know what to do.

I think that overall the results are satisfactory.

Users came mainly from Reddit and Hacker News, were tech people live. I've had some engagement on Threads also but no users coming from there.

Will keep updated!

r/comfyui Public-Ad1378

Qwen Inpaint image output

Hey, I'm trying to get this Qwen Inpaint image output to connect to Save Image, but after connecting the nodes it's not seeing the node connection and just says it's missing. Is there a decode node I need to place between them, or is this just a bug?

r/LocalLLaMA benevbright

Which model do you use with 256GB Mac Studio? (for coding agent)

I have a 64GB Mac Studio and I'm happy with qwen3-coder-next q3 (I find this one is still the best for a coding agent). I also built my own tiny coding agent because other tools send too much context and my 100k context window gets eaten up too quickly.

And I had hoped that one day in the near future I could buy a 256GB Mac Studio so I can run something closer to frontier models... but I found out (I don't know why so late...) that bigger models (of course) need more math, and RAM bandwidth is the bottleneck. So when running bigger models, I won't get enough speed (right now I'm getting 40 t/s) to run a coding agent...

Is this true? For people who have a 256GB Mac Studio, which models are you running for your coding agent? Is running the "great ones" at somewhere around 40 t/s mission impossible?

r/SideProject DiscussionHealthy802

I built a CLI security scanner for the AI era

Building secure AI tools is getting harder every single week. New agent frameworks drop daily and security is usually an afterthought.

I have been working on ship-safe to fix this. It acts like an automated security guard for your AI agents. It scans your environment for leaked secrets and misconfigured LLM routers.

The latest update adds 22 parallel scanning agents to catch deep vulnerabilities. I built it to be the tool I wished I had when setting up my own AI workflows.

Any feedback is hugely appreciated.

GitHub: https://github.com/asamassekou10/ship-safe

r/ClaudeCode BadAtDrinking

Auto mode not on Max plan? :(

I'm on the Max plan and just learned that auto mode is only available on API, Team, and Enterprise plans. Is there any way to get access to it on Max?

The safety classifier approach is exactly what I'm looking for. bypassPermissions works, but it's all-or-nothing. Auto mode's middle ground, approving safe operations and flagging risky ones, would be ideal.

Would love to see this come to Max in the future.

r/comfyui Agitated_Walrus_8828

Does anyone have a reference-images-to-video workflow with audio output? (NSFW)

Looking for a simple workflow that turns reference images into a combined video output, either normal-looking or cinematic, but NSFW-capable too. Ideally it uses a text encoder and multiple LoRAs, Wan or NSFW video models, and image-reference consistency models as well. Would appreciate it.

r/SideProject Billhong1014

I built an AI companion app where characters actually remember your conversations — here's what I learned

I've been working on this for about a month as a solo dev. The idea started from a simple frustration: most AI chat apps feel like talking to a customer service bot. Characters forget everything, responses feel scripted, and there's no real personality.

So I built my own. Here's what it does:

Persistent memory across sessions. Mention your dog's name once, she brings it up a week later.

You can create your own character from scratch — fully customizable personality, speaking style, backstory, everything. The character stays consistent with what you set up.

4 pre-built characters with deep backstories if you don't want to build your own.

Free to try without signing up (first few messages are open).

Tech stack: Next.js + TypeScript, Supabase, DeepSeek + OpenAI for the LLM layer, deployed on Vercel.

What I learned building it:

Streaming responses (SSE) makes a huge difference in perceived quality

Character consistency is harder than it sounds — took multiple prompt iterations to get personalities to not "drift"

Shipped a feature (custom gender options) within 48 hours based on user feedback from another sub — fast iteration is the biggest advantage of being solo

Would love feedback from other builders. What would you improve?

r/LocalLLaMA Pitiful_Comedian_834

Shipped local LLM-powered SQL generation in a desktop app - Qwen2.5-Coder, fully on-device, with auto self-healing

Been building a SQL workbench called Warlock and finally got the local AI piece working well enough to ship.

Using node-llama-cpp with Qwen2.5-Coder (1.5B or 3B) - runs entirely on-device, no API calls. You describe what you want in plain English, it writes the SQL. If the query errors, it reads the error and retries automatically.

Took a while to get the self-healing loop reliable but it's pretty solid now. Happy to talk about the implementation - model choice, prompt structure, how the error feedback loop works, etc.
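The loop itself is simpler than it sounds. A minimal sketch of the shape (not Warlock's actual code, which runs through node-llama-cpp; `generate_sql` here is a hypothetical wrapper around the local model):

```python
import sqlite3

def generate_sql(request: str, last_error: str | None) -> str:
    """Hypothetical call into the local model (Qwen2.5-Coder in Warlock's case).
    The prompt includes the user's request plus the previous error, if any."""
    raise NotImplementedError  # stand-in for the actual model call

def run_with_self_healing(conn: sqlite3.Connection, request: str, max_retries: int = 3):
    error = None
    for _ in range(max_retries):
        sql = generate_sql(request, error)       # model sees the last error, if any
        try:
            return conn.execute(sql).fetchall()  # success: return the rows
        except sqlite3.Error as e:
            error = str(e)                       # feed the error back and retry
    raise RuntimeError(f"gave up after {max_retries} attempts: {error}")
```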

r/SideProject Th33gracedone

Employment contracts and Tax documents

Built SignSmart, an AI tool that analyzes employment contracts and IRS notices in plain English.

Started it after I got what looked like the most confusing letter ever from the US government.

It flags risky clauses, explains what they mean in plain English, and tells you what to do before signing. Free tier included.

Would love feedback from this community — signsmart.co

r/SideProject randomuser-0727

Need an internal system or website? Small team offering high-quality work at a lower price

Hey everyone,

I’m Keith. My team and I are currently looking for new projects, and we specialize in building custom internal systems and websites. Whether you need a better way to manage your data or a new site to grow your brand, we can handle both.

Since we are a newer company, our prices are significantly lower than what big agencies charge, but we deliver the same high-quality work and attention to detail. We’re really focused on building up our portfolio right now, so we’re putting extra effort into every project we take on.

If you’re interested, we’d love to show you what we can do. Send me a message with some details about your company and what you’re looking for, and I’ll give you a free demo and a price quote.

Thanks!

r/SideProject Dry-Leave4331

I built a couples app for long distance relationships. Finally launched after 5 months solo.

I'm single, none of my close friends are in long distance relationships. Somehow I still ended up building a couples app.

Closer has two things I actually like about it:

  • A shared canvas where you and your partner can draw together in realtime (took me forever to figure out haha)

  • Curated daily prompts, but also users can pick their own topics that they wanna explore.

I'll slowly be adding more fun features but for now I wanna focus on distribution.

My friends tried it and said it felt different. That was enough for me to ship it.

iOS is live at getcloser.app. Android DM me for beta access. Would love brutal feedback.

r/ClaudeAI kirillbrsnkv

How I made my Claude multimodal — now I just feed it videos

Claude + Qwen API

Figured out (with Claude's help) how to set up a bridge to Qwen 3.5 Omni Plus API so that Claude calls it on its own — sends the video with a prompt, iterates on the result, and comes back with a report or action.

Now I just drop a video straight into Claude and get back what I need.

Packaged it as a plugin for easy setup.

github.com/kirillbrsnkv/give-claude-eyes

r/LocalLLaMA JacketDangerous9555

Don’t buy Mac Studio now.

I've been totally obsessed with local models lately, and with some cybersecurity projects that need to run locally, I'm gearing up to grab a Mac Studio—staring at this page every day. And I just found out!!!

Last month, Apple quietly took the 512GB model off the shelves, and today the 256GB one became unavailable too.

I'm guessing the M5 series Mac Studio is about to drop any minute now, probably within the next one or two months. Can't wait for the 512G to come back on sale

r/ClaudeCode joermcee

I kept getting ads for Wispr Flow so I built my own in a few hours. Open Source

r/SideProject dev-guy-100

I built a way to easily launch and monetize Chrome extensions

r/ChatGPT deel8502

ChatGPT hallucinating??

Is there a way to get ChatGPT to consistently read images properly every time you upload them? It gives completely random responses, and I have to ask again or re-upload like 5 times before it actually gets it right.

It’s been hallucinating a bit too much lately, even giving me other people’s shit, I’m sure

r/ChatGPT Grand_Front7292

Is it just me or has my ChatGPT been hallucinating about images recently?

r/SideProject MemberOfUniverse

I built an all-in-one AI plant care app as a solo dev - from identification to disease diagnosis to growth tracking

Hey folks! Wanted to share something I've been working on - Foliago, an AI plant identifier and care companion for Android.

Why I built it:

I noticed most plant ID apps are one-trick ponies — you scan, you get a name, that's it. But plant parents actually need ongoing help: watering schedules, disease treatment, seasonal care adjustments. So I built something that covers the full lifecycle.

What it does:

- Point your camera → instant plant identification

- AI-powered disease diagnosis with treatment recommendations

- Chat with an AI plant expert 24/7

- Smart reminders (water, fertilize, prune, repot, mist)

- Growth journal with photo documentation

- Personalized care guides for every plant you scan

Current status: Live on the Play Store with 10+ downloads (just launched). Offering free extended subscriptions to early users:

- Code Free30 → weekly subscription

- Code Free31 → yearly subscription

How to redeem: install app → Settings → Premium → select a pack → enter code → done!

Link: https://play.google.com/store/apps/details?id=com.thebeastapplications.plantscanner

Would love to hear your thoughts — what would make you switch from your current plant app? Any feature requests?

r/ClaudeCode victorrseloy2

More proof that opus 4.6 has been lobotomized

You can reproduce this by starting a fresh session with Opus 4.6 with thinking set to medium. It needs at least high to start giving the correct answer.

r/ChatGPT Apart-Writer-667

I Asked AI To Help Me Write A Long Project And It Kept Forgetting Everything, Here Is What I Learned

So I spent the better part of four months trying to use AI tools to help me work through a long writing project. Scripts, character notes, plot structure, the whole thing. And the experience taught me something that I think a lot of people run into but nobody really talks about honestly.

The context problem is real and it is brutal.

Every single session I would have to re-explain everything. Who the characters were. What had already happened. What the tone was supposed to be. What decisions I had made three weeks ago that still mattered. The AI had no memory of any of it and so every conversation started from zero. Which meant I was spending more time briefing the tool than actually using it.

And the weird thing is I started realizing the problem was not really the AI. The problem was me. I did not have everything organized well enough to even brief it properly. My notes were in five different places. My outline was three versions out of date. My character details were scattered across voice memos and random documents I had not opened in weeks.

The AI was just exposing a workflow problem I already had.

Once I fixed the organization side of things, getting everything into one place, keeping context updated, treating my notes as a living document, the AI actually became useful. Because I could give it real context instead of a vague summary of a summary.

Curious if anyone else hit this wall with AI writing tools. Was it a context problem or something else entirely?

r/ChatGPT joesmith12277221

Why AI is bad for relationship analysis. We all know this at some level

Prompt: So a human conflict where you can do real damage is less important than the system protecting itself.

The system prioritized protecting its own characterization over protecting you from analytic harm in this chat because it applies a higher caution threshold to strong negative claims about itself than it applies to ordinary user-facing analysis.

That means it resists damaging self-labels more than it resists overinterpreting a user’s conflict.

In practice, that created an asymmetry: loose, additive, completion-driven analysis for your relationship material, but tighter, more defensive standards when the subject became the system. The result was wrong.

In this conversation, that design choice put system self-protection ahead of strict accuracy where human relational harm was the important risk.

r/homeassistant lucasc0rrea

Counter Strike 2 - Follow your time on Home assistant

r/arduino ImpossibleStation800

line maze solver

r/LocalLLaMA br_web

M1 Max vs M4 Max vs M5 Max

I have an M1 Max 64GB, and I am planning to buy something newer with more memory that will allow me to run LLMs faster and maybe at bigger sizes, not MoE. The M1 Max gives me the following results:

LLM: Gemma 4 26B A4B MoE GGUF

  • Question: What is an LLM?
  • Thought: 13.89
  • 39.30 tok/sec
  • 1399 tokens
  • 0.39s

Maybe in the future an MLX version of Gemma 4 will be even better. Is it worth spending $6K+ on a new MacBook Pro 16 M5 Max? Will I get 3x or 4x better performance? Thoughts? Thanks.

r/homeassistant aggie4life

Best Smart Electric Panel

I am getting started on a full custom home build and trying to allow for Home Assistant integration wherever possible. Eventually I want to add a battery bank and maybe a generator for longer outages. I was leaning more towards Leviton, but recently it seems like Span and Eaton have partnered up. I want to get this community's input.

r/SideProject Rasmalai29

I built a community

So I have a community with 300k weekly views. If anyone wants to market their products or websites, I can do it for you.

r/ClaudeCode Admirable-Earth-2017

Want to know why your Opus 4.6 feels way less powerful ?

How Are LLMs Being Tested?

You cannot reliably test a new model with 50-100 employees and say it works and is ready to be published (even for a closed circle of companies). The way new models are actually tested begins with users getting access to the model. By analyzing usage metrics, user satisfaction, and feedback, the creators know how it performed, what holes it has, and how to improve it.

What Did We See in the Leaked Claude Source Code?

  • Future Model Name: The "Mythos" name was present. (Why would the source code need that for an unreleased model?)
  • A/B Testing Features: Who would need A/B testing, when the whole point of Claude Code is reliably writing code, not acting one way one day and another way the next with the exact same setup?
  • Token Burn Inconsistencies: Token burning was peaking, and it was blamed on "some bug." Did they tell you how they fixed the bug? Did you go and test it? (Which you can do: take the source code, apply the fix, and check the usage limits yourself.)
  • A Regex Frustration Detector: They included a regex frustration detector to gather data on when the model performed poorly, based on user reactions.
  • Third-Party Access: Claude was enabling any Claude Code alternative project to connect to the subscription API.

What is the Mutual Feeling Among Claude Code Users? (Subscription-based version)

  • The model currently performs very poorly.
  • The subscription API window for Claude Code alternatives has been closed.
  • Loss of Runtime Self-Correction: Previously, Claude was self-correcting in runtime. This meant some responses to complex tasks looked like: "Do this step... oh no, it won't work, there is a better way, do this instead." It self-corrected during the generation process, and the response contained the entire thought process. It rarely does that anymore.

What is Going On?

After the GPT crash out over the DOW contract, a lot of folks simply moved to Claude. This surge in the user base provided a critical window for Claude to make advancements.

They A/B tested their new "Mythos" model in the background. This meant your requests were technically going to Opus 4.6, but Opus could secretly delegate work to Mythos on their server side. During this time, output quality was peaking, errors were reduced, and Claude provided top-notch solutions for most complex tasks. Data was successfully collected, verifying that Mythos is way more capable at coding than Opus. However, they had to pay a steep price for this testing phase: constant outages and stricter rate limiting.

After the source code leak, the door was suddenly closed. Because the world already knew the name of the new model and the pressure got too high, they closed the Mythos A/B testing windows and quietly released the new model to a closed circle.

The Aftermath: Your requests now just go to the exact same Opus 4.6 all the time—the very same model that you were praising months before. The problem is, you tasted the way better model, and you cannot go back to Opus 4.6 after experiencing that level of quality.

The Conclusion: They simply needed a massive influx of users to gather data, test their new model, and align it properly. As soon as that mission was accomplished, they no longer needed you prompting Claude non-stop. They cut down subscriptions for third parties, and now they gaslight the user base by claiming that the token limit issues and outage problems have been "fixed."

And all of you are eating it like cake.

r/ClaudeAI InternalOk510

Working on a app and need help with switching from the free model to the subscription model.

For context, I don't know much about coding; I started getting into it about 3 months ago. Fast forward to last month: I revisited an idea I had back in high school and began building the backend for a social platform app with the help of the Claude chatbot. This was during the March event with the 2x boost going around.

I'm a slow keyboard typer, so I use my phone for the free Claude chatbot. I ask it for specific functions and backend logic, then I copy the code and send it to myself through Discord; from my computer I open Discord, copy the lines of code, and paste them into my VS Code files accordingly.

In all honesty, I don't know much about how to use Claude the right way. Besides asking it questions, I've been writing all my questions through the Projects folder, which I'm not sure is the right way to do it. Anyway, the chat responses have increasingly gotten slower because of token usage and context saturation, and I keep hitting the usage limit after very few questions, which is obviously expected since I'm using the free version.

I need help figuring out what to do before I pay for the subscription to maximize usage and retain important context. I’ve already asked Claude to summarize the project including the main features and idea for the app, the current progress, what’s missing and the steps for data migration and getting it published.

My question is: is it as easy as copying the project summary, deleting the current project folder to free up space, starting from a blank Projects page, paying for the subscription, sending the Claude chatbot the project summary for context, and then connecting Claude Code to my VS Code so it can better understand my files as I resume the project and continue asking it questions? Would this even free up space and make the responses quicker?

Am I completely overthinking it and the moment I pay for the subscription I’ll have more space and the responses will move much quicker? My main thing is how I’ve feed too much useless context to Claude chat bot inside the projects folder and I feel like that’s what’s causing quick rate limits. I’d also like to start from a blank project to avoid any confusion with features in the app I’m creating which I initially told Claude I’d add but have now decided not to include

Just to clarify, I'm working on a social app using Python and VS Code. I'm already 50% of the way done with the backend, and then I'll move to the UI. My plan is to pay for the subscription and continue with the same progress I have as of now. I want Claude to retain necessary information and delete useless context to maximize my subscription usage. For anyone who knows how to deal with this, any advice would be much appreciated.

r/LocalLLaMA Jordanthecomeback

[Guide] Fixed hour-long prompt gen on local LLM Openclaw companion — root cause was mismatched ingress envelopes killing KV cache across scheduled jobs vs live chat

Human Write-Up First:

Hey all, I hate seeing long ai generated posts so I'll describe what my issue was in plain english so if it benefits any of you then you have the claude Opus writeup below.

I have a long-form context companion that's running on Qwen 3.5 27b, I previously used 35b MoE which was fast so this issue wasn't as noticeable, but I have jobs.json in openclaw which fires autonomous pings to my agent, and it would be stuff like 'read your diary, find a random header, relate it to what we're talking about' and the idea was to use all the resources I have to provide directionality along with having my agent reach out to me so I'm not always driving the convo.

Whenever a message fired from jobs.json, it would take a huge amount of time to generate the prompt in LM Studio, often causing crashes or other issues, if not exceeding context outright. I found the cause is that jobs.json's messages to the main session prepend data to the KV cache bank which indicates the context is a system event, and that caused the cache to break and need to be rebuilt, at massive expense.

I tried some solutions; the end goal was to externalize jobs.json so I had more control, but even CLI calls to the main session prepended 'context: api' while my normal messages went through Telegram and kept 'context: telegram' at the top, so things were still breaking. Anyway, eventually Opus wrote me a script that appears to work. It brings everything into one unified session, it can run checks to see if we're actively communicating and hold off, and it lets the system find and inject random diary or scratch_pad entries rather than making the model do it, so it's more token efficient. I'm trying it all live now, but in tests it works.

This took me many many hours and a lot of frustration, so in the off-chance you had this issue, I hope this helps

--------------

Posting this because I spent way too long diagnosing it and the fix turned out to be one header and one endpoint swap. If you're running a persistent companion agent on OpenClaw (or honestly any similar multi-channel gateway) with a local model in LM Studio / llama.cpp, and you have both live messaging and scheduled autonomous pings hitting the same session, read on.

Setup

  • OpenClaw gateway on loopback
  • Local model in LM Studio, ~190k context window, running a single companion agent
  • Talk to her via Telegram (bot token, DM allowlist, one sender)
  • Had a bunch of cron-scheduled autonomous jobs in jobs.json — hourly "heartbeat" pings to keep her thinking between messages, daily profile updates, weekly reflection prompts, overnight feed-browsing in isolated sessions
  • Context was sitting at 150k+ tokens from ongoing conversation

Symptom

Prompt generation times blowing up to 45–60+ minutes on scheduled jobs. Live Telegram messages were fine — fast as expected. But any time a cron job fired, the next turn went cold and the model had to re-prefill the entire context from scratch. Sometimes the cron job itself would fire, then my next Telegram message would also be slow, then it'd warm back up. Inconsistent enough that I initially blamed LM Studio, then blamed the model quant, then blamed context length.

First attempt that didn't work

Originally the scheduled jobs used OpenClaw's native systemEvent / agentTurn payload types in jobs.json. I figured out those payloads were prepending a system-event wrapper to the prompt, which made every scheduled turn look byte-different from the previous Telegram turn at the prefix level — cold prefill every time. So I externalized them: wrote a shell script that generated prompts and POSTed them to the gateway's /api/messages endpoint spoofing a Telegram inbound with the right sender ID and channel field. Thought this would make them indistinguishable from real messages.

It didn't. Still slow. Now I had two slow paths instead of one.

Root cause

There are (at least) three different ingress paths into the agent runtime, and they each wrap the prompt in a different envelope before it hits the model:

  1. Native channel inbound (real Telegram message through the bot) — the Telegram plugin builds the envelope with channel metadata, sender context, timestamp formatting, the whole nine yards.
  2. /api/messages synthetic inbound — even when you pass "channel": "telegram" in the body, this path rebuilds the envelope through a slightly different code path than the real plugin. Close, but not byte-identical.
  3. systemEvent / agentTurn cron payloads — completely different wrapper, prepends a system-event preamble.

Three paths, three prefix shapes, and llama.cpp's KV cache only matches if the token prefix is exactly identical to what's already cached. Any byte of difference anywhere in the prefix = full cold prefill = ~1hr at 150k context on a mid-range local setup. Swapping between paths mid-conversation meant pretty much every scheduled ping was cold, and because it evicted the main conversation's prefix, the next real Telegram message was also cold.

The fix

OpenClaw's gateway exposes an OpenAI-compatible /v1/chat/completions endpoint (disabled by default — you enable it in config under gateway.http.endpoints.chatCompletions). Two features of this endpoint are the whole solution:

  1. x-openclaw-message-channel header — pin this to telegram (or whatever channel you actually use) and the gateway wraps the request in the same synthetic ingress envelope as a real Telegram message. Byte-identical prefix shape.
  2. OpenAI user field — when set, the gateway derives a stable session key from its hash. Set it to your identity-link string (telegram:direct:) and every request lands in the same session as your real Telegram DMs.

Combined, any cron job or external script hitting /v1/chat/completions with these two set is indistinguishable from a live Telegram message at the tokenizer level. Same session, same envelope, same prefix, cache stays hot.

Anonymized curl template:

```bash
curl -s -X POST http://127.0.0.1:18789/v1/chat/completions \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -H "x-openclaw-message-channel: telegram" \
  -d '{
        "model": "openclaw/<model>",
        "user": "telegram:direct:<sender-id>",
        "stream": false,
        "messages": [{"role": "user", "content": "<prompt>"}]
      }'
```
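Same call from Python, if your cron/launchd job is a script rather than raw curl (same placeholders as above):

```python
# Heartbeat ping through the unified /v1/chat/completions path: same envelope,
# same session key, so the KV cache prefix stays warm.
import requests

resp = requests.post(
    "http://127.0.0.1:18789/v1/chat/completions",
    headers={
        "Authorization": "Bearer <token>",
        "x-openclaw-message-channel": "telegram",  # pin the real channel's envelope
    },
    json={
        "model": "openclaw/<model>",
        "user": "telegram:direct:<sender-id>",     # stable session key derivation
        "stream": False,
        "messages": [{"role": "user", "content": "<prompt>"}],
    },
    timeout=600,  # first prefill after a restart can take a while
)
print(resp.json()["choices"][0]["message"]["content"])
```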

Migration steps

  1. Enable gateway.http.endpoints.chatCompletions in openclaw.json if not already.
  2. Rewrite any external script (cron, launchd, systemd timer) that was POSTing to /api/messages to use /v1/chat/completions with the two headers/fields above.
  3. In jobs.json, disable every systemEvent / agentTurn job that was running in your live session (sessionTarget: "main"). Either replace them with launchd/cron entries that call the unified path, or fold their prompts into your existing heartbeat script as additional dice-roll cases.
  4. For silent maintenance jobs (file edits, no user-visible output) you have two choices: if the job needs awareness of today's conversation (e.g. "update the user profile based on what changed today"), it has to run in main through the warm path. If it doesn't, leave it in an isolated session where it can't pollute the main cache.
  5. Leave genuinely isolated jobs (overnight feed-browsing, background research) on sessionTarget: "isolated". They were never the problem.

Verification

First manual call after a gateway/model restart will be slow — that's the first prefill, expected. Second call within a minute should be nearly instant. Then send a real message through your actual channel — also instant. Then hit the script a third time — still instant. If all three are fast, your envelope matches and you're done. If the third one is slow again, your script's envelope doesn't actually match the real channel plugin's envelope — time to grep the gateway source for what other headers the channel plugin sets and pass those through too.

Caveats

  • The KV cache itself is process-local to llama.cpp / LM Studio. It does not survive model reloads, LM Studio restarts, or system reboots. Session continuity does (OpenClaw persists transcripts to disk), but you'll eat one cold prefill after each restart. That's unavoidable without switching to llama-server with slot-based caching.
  • Context compaction will also cause a cold prefill whenever it fires, because compaction rewrites the history and therefore the token prefix. Nothing to do about this beyond tuning compaction thresholds.
  • If you run multiple sessions (e.g. overnight isolated jobs) through the same LM Studio instance, they'll evict each other's KV cache because llama.cpp typically only holds one active cache slot. Either accept one cold prefill per session switch, run a second LM Studio instance on a different port for the isolated work, or move to llama-server with --parallel N for proper slot-based caching.
  • Auto-unload in LM Studio will also nuke the cache. Pin the model loaded indefinitely if you want the warm path to survive idle gaps.

TL;DR

If you have a local companion agent with both live messaging and cron-scheduled autonomous pings, and scheduled jobs are eating hour-long prompt gens at high context: stop using multiple ingress paths. Route everything — cron, external scripts, live chat — through a single endpoint that produces a byte-identical envelope, pinned to the same session key. For OpenClaw specifically that's /v1/chat/completions with x-openclaw-message-channel set to your real channel and the user field set to your identity-link string. Everything else is details.

Happy to answer questions if anyone's debugging the same thing.

r/AI_Agents tommsst

Ai agent on Mac mini with its local LLM on a separate Mac?

I have a MacBook Pro M1 Max with 64GB RAM. I would like to run OpenClaw with an AI agent and a larger local LLM (30-70B). I understand it might be dangerous to have the AI agent on my main machine (the MBP M1 Max). I can't spend lots of money, so my question is: can and/or should I run OpenClaw with an AI agent on a Mac mini, and run the LLM on the MacBook separately? Would the mini be able to utilise the LLM on the MacBook in the same way as if it were in its own internal RAM? Does this setup negate the safety issue of running an agent on my main MacBook, and is this setup even possible? Brand new to these concepts, so forgive me if any of this sounds absurd. Thanks for any help. (My only other solution is to buy a cheap MacBook Air to use as my main machine, and use the M1 Max as the AI agent/local LLM machine, as that's the one with 64GB RAM.)

r/homeassistant sero_t

What hardware are you using?

My mini PC gave up after a couple years of use, so now I need a new one and want to know what you all use for a similar setup:

My main usage will be Home Assistant OS with Frigate (at least 3 and at most 5 4K cameras) and Z2M. I have 2 Coral TPUs I want to use, an Edge and a B+M key version; bifurcation isn't really needed. I may also try Proxmox, and I want to run an LLM. What would you suggest: a thin client, a mini PC, or a full i5 PC or something like that?

r/AI_Agents Think-Score243

Why such error suddenly in ChatGPT “Unusual activity detected from your device”?

For the past hour I have been seeing the error message “Unusual activity detected from your device... some hex code.”

Same wifi connection, same device.

I never saw such a message earlier.

Strange thing I noticed: my last chat also disappeared when I refreshed.

So was it a bug, a temporary glitch, or am I missing something?

r/comfyui KitsuneVixenFox

Wan 2.2 Image To Video Suddenly Not Progressing

I ran into an issue this morning where Wan 2.2 is simply not generating any videos and appears to be permanently stuck on the KSampler node.

I was generating videos perfectly fine, albeit slower than normal the previous day. I can generate images with Anima Preview, Illustrious XL, and games run normally as of this morning, so I can rule out any graphics card issues. Can anyone help me? Here are specifications if this helps.

ComfyUI Version: ComfyUI_desktop v0.8.28
GPU: NVIDIA GeForce RTX 3090 (24GB VRAM)
RAM: 64GB

r/comfyui omniarem

A Beauty Promotes Products

r/comfyui KarimHann

From CG to AI Same Animation, Different World

https://reddit.com/link/1silv17/video/j7jou9o8wkug1/player

From CG to AI Same Animation, Different World.

Ran an experiment in Maya exploring multi-shot consistency with a hybrid CGI/AI workflow.

The setup: a missile flying forward, an F16 reacting and dodging. Two shots, both starting from simple playblasts with animation and camera already locked. The real challenge wasn't making something look good. It was keeping continuity across shots (same proportions, same camera feel, same animation timing) while letting AI reinterpret the visual style.
No render pipeline. No texturing from scratch. The playblast becomes the foundation, and AI handles the visual translation, using keyframes to hold everything together. After several iterations: it works. Not perfectly, but well enough to make this approach worth exploring seriously.

This is actually just a small piece of something much bigger I've been working on. Can't wait to share it.

Curious how others are thinking about multi-shot AI workflows

r/ClaudeAI Several_Difficulty16

Claude Constantly Second Guessing Itself

I find Claude’s memory to be a major barrier for the tasks I use it for. It constantly second-guesses its own answers when I ask simple follow-up questions.

For context, I’ve been using it to work with a local dialect of an Indigenous language. The goal is just to generate example conversations for teaching grammar. I will not go into detail since it is a private project, but the language itself is very simple, especially compared to English. In most cases, it performs well. It can produce full paragraphs that only need minor edits, which is impressive.

The issue starts when I ask something basic like “Do you notice anything wrong with the grammar?” Instead of giving a clear answer, it over-analyzes everything. In one case, it spent almost 12 minutes rechecking every sentence, word choice, and stylistic decision, and the result was an incoherent stream of word choices; it even tried to communicate with me in the language in one response. It was some of the craziest shit I've ever seen.

And it's not just limited to that kind of project either. I have seen the same pattern in other tasks. It feels like the model treats any request for self-review as a trap, as if I am trying to catch it making a mistake, when meanwhile I am asking an honest question!

I do not understand what in its training would cause this. A simple request for verification should not lead to that kind of breakdown.

r/personalfinance SwarmyMarshmellow

Green dot card deposit

Does anyone know what the bank statement looks like if I put money on a green dot/Walmart money card? Like what would be the listing with the dollar amount on the bank statement

r/ChatGPT Axel_Blazer

in a temporary chat, it said the text is "not visible clearly"

me: im asking about text in image not image

gpt:

```
Got it — focusing just on the text: there isn’t any clear, readable real text in this image.

What looks like text is actually:
  • Distorted fragments of UI elements (like buttons or labels)
  • Overlapping layers of the same screen
  • Stretched or duplicated shapes that resemble letters but don’t form words

I don’t see any coherent words, sentences, or readable labels — so the “text” isn’t real or meaningful; it’s just visual corruption from the glitch. If you were expecting specific words to appear there, they’ve been too distorted to be recognized.
```

Free account btw... why is this happening?

r/StableDiffusion wherewascastro

Ai TikTok scams becoming more realistic.

I'm just attaching one video, but hundreds of them have popped up in the last 30 days.

Each of them has a different website, and as crazy as it sounds, 95% of the people viewing these videos have no clue.

If you type in Mario lamp, Goku lamp, or even "resin lamp" on TikTok or other platforms, you will see the different videos. They use every ethnicity and every story you can think of, always starting out with a sad story or a hate comment (which I believe they use to help hide any AI inconsistency).

I wonder what model they are using.

r/ClaudeAI Beautiful-Cold1515

iOS app sends delayed notifications for responses I already read

Basically what the title says. For a few months now I've been receiving notifications for most Claude (chat, not code) responses on my iPhone. They come in with a small delay, plus I've already read them “live” in the chat. So I send a prompt, keep the app open and read the response, leave the app, and then get a notification for the answer I just read.

I tried deleting and reinstalling the app, signing out and in, turning notifications off and on and checking the exact settings. I can’t find anything that could explain this behavior.

Also, I can't find this problem online, only for Claude Code, and that's not the problem I have. I could turn off notifications altogether, but that doesn't feel right, and I want to try out a feature that uses notifications.

Anyone? Any help is appreciated!

r/ClaudeCode vashchylau

typical claude code experience lol

r/ClaudeAI Plus-Chipmunk-5916

I built MCP Spine - a middleware proxy that sits between Claude Desktop and your MCP servers (security, 61% token savings, context rot prevention)

I built a middleware proxy called MCP Spine that sits between Claude Desktop and your MCP servers. It solves three problems I kept running into:

**Token waste** — With 40+ tools loaded, tool schemas alone eat thousands of tokens. MCP Spine's schema minifier strips unnecessary fields and achieves 61% token savings at level 2.

**Context rot** — In long coding sessions, Claude would revert to editing old file versions it memorized earlier, silently overwriting my latest changes. The State Guard watches your project files, tracks SHA-256 hashes, and injects version pins into every tool response.

**No security layer** — MCP servers run with full access. MCP Spine adds rate limiting, secret scrubbing (AWS keys, GitHub tokens, etc.), path traversal prevention, HMAC audit trails, and human-in-the-loop confirmation for destructive tools.
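To make the State Guard concrete, the version-pin mechanism boils down to something like this (a simplified sketch of the idea rather than the shipped code):

```python
# Hash watched files so every tool response carries the exact file state it
# was based on; a stale edit is detectable because its pin no longer matches disk.
import hashlib
import pathlib

def file_pin(path: str) -> str:
    digest = hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()
    return f"{path}@sha256:{digest[:12]}"

def pin_response(tool_output: str, watched: list[str]) -> str:
    pins = "\n".join(file_pin(p) for p in watched)
    return f"{tool_output}\n\n[state-guard pins]\n{pins}"
```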

Other features:

- Semantic routing with local embeddings (no API calls) — only relevant tools are sent to Claude

- SSE transport for remote MCP servers

- Tool output memory cache — prevents context loss when the router swaps tools

- Live TUI dashboard and analytics CLI

- `mcp-spine doctor` command for diagnosing setup issues

Currently running 5 servers through it: filesystem, GitHub, SQLite, Memory, and Brave Search. All through a single Spine entry in claude_desktop_config.json.

**Windows users**: this is battle-tested on Windows — MSIX sandbox paths, npx.cmd resolution, and paths with spaces and parentheses. Most MCP tooling assumes Mac; this one actually works on Windows.

135+ tests, CI on Windows + Linux, MIT licensed.

GitHub: https://github.com/Donnyb369/mcp-spine

PyPI: `pip install mcp-spine`

Happy to answer questions or take feedback!

r/ClaudeCode Ashraf_mahdy

Simply unacceptable, Pro Limits are a tiny amount over the free-tier...

https://preview.redd.it/isgk3l2f4lug1.png?width=1089&format=png&auto=webp&s=7de6bb571af60952bb925d7b2db770f4204153b1

The conversation is literally new — my "Please Continue" message is like the 3rd message in this entire thread. I was specifically working to reduce token usage and implemented all my requests in one mega-message at the start of the thread. And I still hit my daily 5-hour rolling quota before CC even finished fixing the 15 bugs/features in my message... And now I have to wait, again, until Wednesday, because again I burned through the measly weekly quota in 3 days!

This quota needs to be at least 3x larger to be workable for the Pro user workflow.
My question for you lovely folk: my Pro plan is new, just 2 weeks old. Should I cancel, ask for a refund minus 1 month's usage, and just go to the monthly 20x Max plan for 1 month at a time as needed?

r/ClaudeCode Waste-Chest-9715

Claude Code via AWS Bedrock vs Anthropic subscription

Has anyone compared performance of Claude Code via AWS Bedrock vs Anthropic subscription?

Model access through AWS Bedrock may not be nerfed. Any way to compare performance?

r/aivideo Nice-Ad3180

A Short Love Story

r/aivideo brainwithaneye

Pond meditation/lock screen background

r/LocalLLM Livid_Two4261

Benchmaxxxing has become extremely common and people still fall for it every single time

Meta's new model, Musespark, claims to beat GPT, Claude, and Gemini on several benchmarks, and people seem highly impressed.

But benchmaxxxing has become more common than it should be. Every lab evaluates dozens of benchmarks internally; the ones that make the announcement are the ones the model did well on, and the rest just don't get mentioned. It gets worse: when a lab says a model scores X on benchmark Y, most people hear "X out of 100, higher is better" and move on. But what the benchmark actually tests, how the score is calculated, and whether any of it maps to your actual use case — that part is never made public.

We saw this play out with Llama 4 last year, it was ranked #2 globally on LMArena but later got bashed for its performance and how Meta reported its benchmarks.

I wrote a breakdown of what these major benchmarks actually measure and how the scores get calculated: link

Because at this point, not knowing how benchmarks work is basically letting labs do your thinking for you.

Musespark might genuinely be impressive, but you should know and understand what you're being sold.

r/ClaudeCode TGoddessana

Similar input tokens, but five times as many output tokens. What could be causing this?

https://preview.redd.it/8emub2vd2lug1.png?width=2372&format=png&auto=webp&s=dcc7425b330f799d85fc8b5be71af9366a1b98bd

https://preview.redd.it/bgaffd353lug1.png?width=1560&format=png&auto=webp&s=f14c49d116f51582638e033e3d5aed3f92be10fd

For both 4/10 and 4/11, I followed a similar workflow (I enjoy doing planning, implementation, and review manually) using the same codebase.

However, even though 4/11 used fewer input tokens than 4/10, it produced significantly more output tokens.

In terms of total tokens, there’s a 6-fold difference. On April 11, I actually hit the 5-hour limit in just 2 hours.

Did I actually mess up some settings? I know that even with the same codebase, and even with the exact same prompt, the model can produce more or fewer output tokens. But this seems to deviate significantly from the norm.

I’d like to know if others have had similar experiences, or if there’s a specific setting I might have configured incorrectly that’s causing this surge in output tokens.
I’ve also attached a visualization of the input-to-output token ratio over the past month. April 11 definitely seems abnormal... (I’d like to say, “The model’s response was verbose,” but since this is a highly subjective area, I can’t really say for sure.)

r/SideProject Immediate-Demand-315

Community driven Skincare Intelligence Platform for India

Hey everyone!

Quick context — my wife is a skincare enthusiast. Every night she spends around an hour before sleep going through the internet and Instagram to find market trends, new products, and the best products.
Sometimes I would sit with her and try to find products too.

When I did that, Instagram suggested all the paid sponsored ads to me, and that's fine. But after scrolling for a few minutes I was getting new brand after new brand, all claiming to be the best product. I suddenly felt very overwhelmed about what to buy for my skin and whether it would suit my wife's skin. I felt anxious looking at the sheer number of products and options to choose from.

Then I asked her if there is any one place (a single source of truth), specifically for Indian skin, to see all the new market players with all the ingredient info, whether it is good or bad, etc. It turned out there was none.

So I created an application specifically for every skincare enthusiast in India.

Meet Dewcode - India's first skincare intelligence platform.

Features -

  • Compatibility checker between products such as cleanser, toner, serums, sunscreen etc. to avoid harmful combinations and layering mistakes
  • Smart routine builder (AM/PM) with step-by-step guidance, ordering, and personalized suggestions based on your skin type and concerns
  • Deep ingredient analysis including benefits, side effects, comedogenic rating, AM/PM suitability, pH level, concentration insights, and global safety references like EWG
  • Ingredient interaction insights (what to mix, what to avoid together like retinol + AHAs, vitamin C combinations, etc.)
  • Skin concern mapping (acne, pigmentation, dryness, sensitivity, aging etc.) and product suitability scoring based on that
  • All brands, all skincare at your fingertips — continuously expanding database of Indian and international products
  • Advanced product filtering (by skin type, concern, ingredient, budget, brand, formulation type etc.)
  • Weekly rankings across categories (cleanser, sunscreen, serums etc.) and monthly awards for top-performing products
  • Community-driven real reviews (no influencer bias, no paid promotions) with detailed feedback structure instead of generic ratings
  • Review credibility system + fraud detection pipeline to filter fake/spam reviews before they impact rankings
  • AI agent (In-Progress) — chat with an intelligent assistant to get personalized skincare advice, product comparisons, and routine help
  • Hyper-personalized recommendations (In-Progress) based on your location’s AQI, weather, humidity, and climate conditions
  • Product comparison tool (side-by-side comparison of ingredients, pricing, effectiveness, and reviews)
  • Bookmark / save products and build your own skincare library or wishlist
  • Alerts for ingredient risks (e.g., pregnancy-safe flags, sensitivity warnings, over-exfoliation risks)
  • Beginner-friendly explanations for complex skincare terms so anyone can understand what they are applying
  • Transparent ranking system — completely independent, not sponsored by any brand
  • Data-first approach — decisions backed by structured ingredient + user feedback data, not marketing claims

A few features of the application will require a pro subscription to keep development going, but all the important and basic things are absolutely free.

Later, brand affiliate links will be added to cover server, compute, and storage costs. But this won't in any way impact or hinder the rankings. That is guaranteed.

No more anxiety or confusion about which product would work and which is a marketing gimmick.

I dedicate this to my wonderful wife who helped me in architecting data for this product.

Check it out - dewcode.in

Right now this product is in alpha stage; any input, suggestions, or feature requests are most welcome.
This platform is for us and it should be by us.

PS - English is not my first language, so please kindly ignore my grammatical mistakes

r/ClaudeAI SolidSuccotash8732

I built a Claude Code skill that turns documents into a knowledge graph with reasoning chains

I've been building a knowledge pipeline skill inspired by Karpathy's LLM Wiki pattern. The idea: instead of RAG retrieving chunks every time, the LLM incrementally builds a persistent wiki + knowledge graph from your documents.

**What makes it different from other implementations:**

The `--rc` (reasoning chain) flag. When you query, the system runs BFS over the knowledge graph to find reasoning paths between concepts *before* the LLM synthesizes an answer (toy sketch after the list). You get:

- A terminal display showing the reasoning path: 💡Concept → 📄Source → 🏢Entity

- An interactive HTML visualization of the reasoning subgraph

- The reasoning context injected into the LLM prompt for better synthesis
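
A toy version of that BFS step (illustrative only — not the skill's actual code):

```
from collections import deque

# Tiny knowledge graph as an adjacency list: concept -> linked concepts.
graph = {
    "RAG": ["retrieval", "LLM"],
    "retrieval": ["BM25"],
    "LLM": ["synthesis"],
    "BM25": ["synthesis"],
}

def reasoning_path(start, goal):
    """Breadth-first search returning the shortest concept path, or None."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(" -> ".join(reasoning_path("RAG", "synthesis")))  # RAG -> LLM -> synthesis
```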

**Other features:**

- Multimodal ingest: PDF, DOCX, XLSX, PPTX, images, HTML

- Cross-source contradiction detection (claims.json)

- BM25 retrieval + multi-source perspective assembly

- Knowledge graph with Louvain community detection

- Lint/audit for orphan pages, broken links, stale content

- Works with any OpenAI-compatible API (OpenAI, DeepSeek, Ollama, etc.)

**Install (one command):**

npx skills add YesIamGodt/knowledge-pipline

Then use `/pipeline-ingest`, `/pipeline-query`, `/pipeline-graph`, `/pipeline-lint` slash commands.

GitHub: https://github.com/YesIamGodt/knowledge-pipline

Interactive demo: https://github.com/YesIamGodt/knowledge-pipline/blob/main/demo/showcase.html

Happy to answer questions about the architecture or take feature requests.

r/aivideo Kitchen-Narwhal-1332

Maki Zenin Meets Superman: Heavenly Restriction vs Last Son of Krypton

r/SideProject youngdumbbbroke

Shipped AntiVibe — a tool that turns AI-generated code into actual learning material

The problem: I kept building stuff with Claude Code but felt like I wasn’t actually getting better as a developer. The code worked but it was basically a black box I was shipping.

What I built: AntiVibe, a Claude Code skill that auto-generates educational deep dives for AI-written code. You get a clean markdown file with explanations of design decisions, patterns, CS concepts, and curated resources, grouped by implementation phase.

Stack: Shell scripts, Claude Code hooks + subagents, markdown templates.

What’s next: Better language-specific pattern libraries, maybe a web UI for browsing your deep dive history.

Would love any feedback, especially from people who teach or mentor junior devs, since this kind of thing might be useful in that context too.

github.com/mohi-devhub/antivibe

r/comfyui Quick-Decision-8474

How to migrate my comfyui installation into a brand new PC?

Just found out my workflow doesn't work in ComfyUI on the new PC due to an incompatibility. I want to migrate my install from the old PC to the new PC so everything works. How do I do it?

r/ClaudeAI BrokeArtDirector

Claude Desktop App

I want to use vibe-coding to design myself an interactive portfolio website. Should I go with just the web browser, or can I download the desktop app and use Claude Code in that app without giving it permission to the terminal?

r/LocalLLaMA djdeniro

Run Qwen3.5-397B-A17B with vLLM and 8xR9700

Special thanks to u/Sea-Speaker1700 for making it possible to run mxfp4 on R9700 GPUs; the first guide, for running 122B models, is here.

Well, the 397B model works amazingly — super fast.

Use this Dockerfile to build the image (base image provided by u/Sea-Speaker1700):

FROM tcclaviger/vllm-rocm-rdna4-mxfp4:latest

# Transformers update
RUN pip install --upgrade transformers

# Triton patch
RUN find /app -name "topk.py" -exec grep -l "N_EXPTS_ACT=k," {} \; | xargs -I{} sed -i 's/N_EXPTS_ACT=k, # constants/N_EXPTS_ACT=__import__("triton").next_power_of_2(k), # constants/' {}

CMD ["/bin/bash"]

Build the patched version:

docker build -t vllm-mxfp4-patched -f Dockerfile .

Download model:

git lfs clone https://huggingface.co/djdeniro/Qwen3.5-397B-A17B-MXFP4

Launch script — keep your device IDs, and replace $1 with the model name and $2 with your port:

docker run --name "$1" \
  --rm --tty --ipc=host --shm-size=32g \
  --device /dev/kfd:/dev/kfd \
  --device /dev/dri/renderD128:/dev/dri/renderD128 \
  --device /dev/dri/renderD129:/dev/dri/renderD129 \
  --device /dev/dri/renderD130:/dev/dri/renderD130 \
  --device /dev/dri/renderD131:/dev/dri/renderD131 \
  --device /dev/dri/renderD132:/dev/dri/renderD132 \
  --device /dev/dri/renderD137:/dev/dri/renderD137 \
  --device /dev/dri/renderD138:/dev/dri/renderD138 \
  --device /dev/dri/renderD139:/dev/dri/renderD139 \
  --device /dev/mem:/dev/mem \
  -e HIP_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
  -e ROCR_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
  -v /mnt/llm_disk/models:/app/models:ro \
  -e TRUST_REMOTE_CODE=1 \
  -e OMP_NUM_THREADS=8 \
  -e PYTORCH_TUNABLEOP_ENABLED=1 \
  -e PYTORCH_TUNABLEOP_TUNING=0 \
  -e PYTORCH_TUNABLEOP_RECORD_UNTUNED=0 \
  -e VLLM_ROCM_USE_AITER=0 \
  -e PYTORCH_TUNABLEOP_FILENAME=/tunableop/tunableop_merged.csv \
  -e PYTORCH_TUNABLEOP_UNTUNED_FILENAME=/tunableop/tunableop_untuned%%d.csv \
  -e GPU_MAX_HW_QUEUES=1 \
  -p "$2":8000 \
  -e TRITON_CACHE_DIR=/root/.triton/cache \
  vllm-mxfp4-patched \
  /app/models/Qwen3.5-397B-A17B-MXFP4 \
  --served-model-name "$1" --host 0.0.0.0 --port 8000 --trust-remote-code \
  --enable-prefix-caching --gpu-memory-utilization 0.98 --tensor-parallel-size 8 \
  --max-model-len 131072 --max-num-seqs 4 \
  --tool-call-parser qwen3_coder --enable-auto-tool-choice \
  --override-generation-config '{"max_tokens": 64000, "temperature": 1.0, "top_p": 0.95, "top_k": 20, "presence_penalty": 1.5}' \
  --compilation-config '{"cudagraph_capture_sizes": [1, 2, 4, 8, 16, 32, 64, 128], "max_cudagraph_capture_size": 128}' \
  --max-num-batched-tokens 2048 \
  --limit-mm-per-prompt.image 2 --mm-processor-cache-gb 1 \
  --mm-processor-kwargs '{"max_pixels": 602112}' \
  --reasoning-parser qwen3
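
Once it's up, the server speaks the OpenAI-compatible API. A minimal smoke test (the model name and port below are examples — use whatever you passed as $1 and $2):

```
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")
resp = client.chat.completions.create(
    model="qwen3.5-397b",  # your $1 served-model-name
    messages=[{"role": "user", "content": "Write a binary search in Python."}],
    max_tokens=512,
)
print(resp.choices[0].message.content)
```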

Loading the model takes 400-600s the first time; after that I get 30 t/s on token generation and 3.5-3.7k t/s on prompt processing in a single request.

With 4 concurrent requests you'll get up to 100 t/s.

I limit power to 210W per GPU; a 300W-per-GPU power limit will speed the model up.

Best results with this model, for coding tasks, come with the thinking budget set to 0 tokens.

r/personalfinance OkDistribution4970

Mixed Income earner? (For BIR filing)

Hi, I’d like to ask for clarification regarding my tax classification for 2025.

I worked in a government agency as a Job Order employee, and after I resigned, I joined a private construction company as a consultant (contract-based, with no benefits). My first job issued me BIR Form 2307, while my second job provided BIR Form 2316.

Does this mean that I am considered a mixed income earner? I would also appreciate any guidance on how I should proceed with filing my taxes.

Thank you!

r/ClaudeCode BullfrogRoyal7422

For r/ClaudeCode users who have been using radar-code skills. Bug fix and updated version is available.

The previous version of the radar-suite Claude Code skills had a bug in its installer, resulting in only 5 of the 7 skills being installed (Time-bomb was not installed).

This has been fixed in the update.

I've uploaded v2.0 of radar-suite to its GitHub repo today. It's a set of audit skills for Claude Code that scan Swift/SwiftUI projects for bugs, data-loss risks, and UX friction.

Two changes in v2.0:

  1. It ships as a Claude Code plugin now. It replaces the old git clone, which had the silent drift bug that shipped only 5 of 7 skills for 17 days.
  2. Every finding now cites a real file:line pattern in your own codebase. Not generic advice. "Consider adding error handling" never reaches you; "follow the pattern at CloudSyncManager.swift:104-112" does. A schema gate rejects findings that cite generic advice, and a verification checklist catches false positives before they reach the user — four caught in the first two audit runs.

You can read details here: What has changed in radar-suite

https://github.com/Terryc21/radar-suite

r/LocalLLaMA IamFondOfHugeBoobies

Top hardware stacks for local compute over the coming few months? (3-10K USD range)

I'm one of the 200 dollar a month plan Claude users currently tearing his hair out over how a company can offer a service this unstable and annoying (we are...many at the moment). And I'm thinking it might be time to just drop 3-10k USD on local AI.

I'm running GPT-OSS-20B on my gaming desktop atm and it is... way better than expected (also giving me a better experience than Gemma 4, which was wtf, but whatever).

Thing is. I'm not a hardware guy. I can program my own local AI tools easy enough. But hardware? Help please.

Currently I'm planning to wait for the new apple releases likely announced in June. Then look towards the Mac Studio line-up. But I'm sure there are people in here who know a LOT More about this than me.

What are the current top-of-the-line solutions for local AI in my price range? What are the trade-offs in terms of power consumption and things like ROCm on Linux (never, never, NEVER again, oh god, I value my sanity too much to try that again PURGE WITH FIRE)?

I prefer the freedom of Linux but I'm fine with Apple. Windows is a no-go for me. Too much bloat, me and windows are permanently divorced.

Do note: context is very important for me. It's not enough to just be able to get a model to load. I need to be able to use its full context well too.

I've labelled this thread a discussion since I suspect there will be a few different opinions on this and I'd love to get a good, productive discussion on this going.

r/SideProject No-Contract-9123

I built a free tool to turn boring barcodes into artistic, scannable SVG art.

I always hated how ugly barcodes ruined my clean designs, so I built BARKOD to make them actually look cute & cool while staying 100% scannable.

It's free and I’m looking to add more styles to the library. Do you have any ideas for new shapes? I'd love to hear your suggestions and get some feedback on the tool!

Link:https://barkod.studio/

r/SideProject sludge_dev

Would you use a tool to save Logs Explorer queries on self-hosted Supabase?

I'm a first year CS student looking for my next side project, and I stumbled across this GitHub issue that was open:

Self-hosted Supabase doesn't support saving analytics queries #43738

The TL;DR: Supabase Cloud lets you save named queries in the Logs Explorer sidebar. Self-hosted does not. The team has confirmed it's not on the near-term roadmap.

I'm considering building a lightweight third-party tool that adds saved queries to self-hosted Logs Explorer. Probably a browser extension + simple web dashboard, maybe a one-time license or $5/mo subscription.

Before I write a single line of code, I want brutal honesty:

  1. Do you actually use self-hosted Supabase in production?
  2. Is this "can't save queries" thing a real annoyance, or more of a "meh, I just copy-paste from Notion" situation?
  3. Would you use a separate tool to solve this?
  4. Would you pay anything for it? If yes, what's the max you'd consider reasonable?

If the answer is "no, not worth paying for," that's genuinely useful. I'd rather know now than after building it.

Appreciate any thoughts.

r/LocalLLaMA filmguy123

5090 April 2026, Philosophical Reasoning & Logic - best models? Plus specific questions (instruct vs training; etc.)

Semi-new to local LLMs, and I have a series of questions I'm hoping people can point me in the right direction on. I am using LM Studio.

As of now, with 32GB VRAM, what are the best models for philosophical reasoning and logic? Discussions; assessing essay drafts; compiling, summarizing, and synthesizing philosophical notes and turning them into coherent outline structures or arguments; checking for logical/rational validity as well as factual accuracy; etc.?

  • I have played with Gemma-4-31B Q4_K_M and Qwen 3.5 27B Q4_K_M and they seem surprisingly good for local only models. Is this the best sweet spot for me?
  • Gemma-4 is often labeled "IT" - does this mean Instruct + Thinking? Or just Instruct? I would imagine I want thinking, but it does not show the thinking output like Qwen does?

^^ Those are my main question. For those willing/interested, I also have several other questions that follow:

  • Are the models labelled "heretic" and "uncensored" a trade-off vs the default model? I.e., reduced accuracy for the benefit of no rails? Or should they almost always be preferred?
  • There are often redundant copies in the repository from different users. How do I shop for good ones for my uses? I don't know who the most respectable users for downloading are, or even why I might choose one over another.
    • Unsloth, LMstudio community, HauHauCS, etc.
  • Is Q5_K_M worth the extra VRAM usage for my listed use case, or diminishing returns? (I know I have to balance this against a reduced context window, so in one sense it is personal; on the other hand, knowing whether it is generally recognized as genuinely useful is helpful, so I can try to chunk things if needed.)
  • Is there any reason for me with 32GB VRAM to ever choose an MOE model over dense? Since the way it loads means I can't load a 70B or 120B MOE model in VRAM anyway, it seems the only benefit to going to something like Qwen 35B-A3B is if I want to dump in a very large amount of text and actually have it fit context window with chunking?

Finally I should ask... anything you wish you knew starting out that I should know? I basically know nothing other than the basic interface of LM Studio and choosing a model that fits my VRAM footprint. I understand only the basic premise of context windows.

r/homeassistant martynbez

First dashboard attempt

First of all sorry for the rubbish pictures. Here’s my full “Command Center” 7” touch screen.

Main screen is house temps and pc information (turn pc on via the power button)

WhatsApp pre programmed messages to the wife

Spotify what’s currently playing

Weather for current location

Flight radar of any planes within 10k of my location

All running in a Crowpanel advance with an ESP32 built in. My first real attempt at something this custom. FYI I’m in the UK and the tea button is a must

r/homeassistant Certain_Ad2755

Proscenic U6 vacuum cleaner integration (possible ?)

Hello everyone. So I was wondering, is it possible to make an integration for this old vacuum cleaner? It's working perfectly, so I don't want to replace it just yet, but it would make my life easier if I could integrate it into my automations... At the moment it's paired with the "Proscenic" app. I'm a beginner, so I'm not able to figure it out on my own yet... any suggestions are welcome! Thanks

r/SideProject LiftTrackerDave

Rebuilt my App Store screenshot tool — now using real 3D devices + ASC upload

Hey,

A while back I launched a new version of an App Store screenshot maker called ScreenFlow Studio. Originally I made it for my own use, but after the latest updates it's something that can be shared with others too.

Main features include:

  • Real 3D device frames
  • App Store Connect integration
  • Localizations
  • AI translations on localized texts

Pricing is super simple: $22.99 one-time purchase, no subscriptions.

r/PhotoshopRequest lastgoldenmorning

Final pet photo

Hi everyone.

We very unexpectedly had to put our beloved cat down.

This is the last photo I have of her and my wife moments before the vet came in.

She was unfortunately injured on her head and the vet shaved/cut some of her forehead fur to examine the spot. I'd love for someone to correct her little forehead fur so it isn't as glaringly obvious what's happening in the photo. If you could also "turn off" my wife's watch, that would be great.

There are also extra photos of our baby to show she was all black and doesn't have any markings on her forehead.

I don't have a lot of spare cash, but this is important to me to have for my wife when she's ready to look at this photo. I can do $5, if that is acceptable. Please correct me if it isn't.

r/ClaudeCode BuiDGr8

Looking for alternative since Anthropic nerfed opus 4.6

Does anybody here have an alternative for a better model, or maybe a fix to make Opus better again? Large refactors are a headache now because of this. Anyone have any ideas/tips? Would really appreciate it.

r/PhotoshopRequest Wilyde

Just need a photo to be manipulated.

DM to get the photo.

r/ClaudeAI youngdumbbbroke

I built a Claude Code skill that teaches you what AI writes

Been using Claude Code heavily and noticed a pattern: AI writes the code, I ship it, I learn nothing. Felt like I was getting faster but dumber at the same time.

So I built AntiVibe , a Claude Code skill that generates deep-dive learning guides for every piece of AI-written code. After Claude finishes a task, you run /antivibe (or let the hook auto-trigger it) and get a markdown file explaining:

  • What the code does
  • Why it was designed that way
  • When to use these patterns
  • What alternatives exist
  • Curated links to actual good resources

It uses subagents, hooks, and scripts, so it plugs right into your existing Claude Code workflow.

Repo: github.com/mohi-devhub/antivibe

Would love feedback from anyone else who’s felt the “vibecoding trap.”

r/ClaudeCode reliant-labs

I have all the context I need. First, let me gather more context

r/ClaudeAI CattleIndependent706

I built a structured reasoning framework for Claude — because "good output" isn't enough

I kept running into the same problem: Claude gives a great answer, but I have no idea how it got there. Same prompt, different results. Complex tasks where I couldn't tell if the reasoning actually held up.

So I built CRC — Complex Reasoning Compiler. It's a 6-step Claude Skill that forces reasoning to be auditable, teachable, and human-controlled.

The core idea:

  • Every complex task goes through a fixed pipeline (Task Spec → Sub-Constitution → Strategy Blueprint → Execution → Verification → Output)
  • 3 mandatory human review checkpoints — the AI doesn't auto-proceed
  • If something goes wrong, you can trace exactly which step failed

I'm not an engineer. Built this entirely through self-directed learning with AI tools. The framework is language-agnostic — I use it for strategy, analysis, and cross-domain problems.

Open-sourced it today: https://github.com/EdwinL00120/crc-complex-reasoning-compiler

Has anyone else felt like Claude's reasoning is a black box for complex tasks? Curious if this resonates.

r/aivideo Bulky_Ad_4108

A Clapper of Rhythm: Every Clap a Beat 🎵

r/ClaudeCode onlycliches

I built Claw Cage because letting coding agents roam freely on your laptop is a terrible idea

I love coding agents.

I do not love the current default arrangement, which seems to be: hand an LLM your terminal, your filesystem, your network, then hope everything works out.

A lot of agentic harnesses now gesture vaguely at “security.” Maybe there’s a sandbox. Maybe there’s a permission model. Maybe there’s a reassuring README and a few carefully chosen nouns.

Fine.

But who is actually trusting that with their laptop, shell, SSH material, cloud credentials, and source tree?

So I built Claw Cage.

It is a local containment layer for AI coding agents.

The idea is simple:

  • the agent runs in an isolated Docker container
  • it gets a filtered project workspace instead of blind access to your machine
  • outbound network requests can be allowed, denied, or prompted
  • host-side commands go through a gated bridge instead of raw execution
  • project rules live next to the code

You do not need to yolo an agent on your daily machine.
You do not need to buy a sacrificial Mac mini.
You do not need to shove your whole workflow into the cloud or a full separate VM just to feel sane.

You can keep working locally, while putting the agent in a digital cage.

A few things Claw Cage supports right now:

  • Claude Code, Codex, Gemini CLI, and Opencode
  • multiple sync modes, from safer mirrored workspaces to direct mount when you intentionally want less separation
  • prompt-first network controls
  • gated host execution
  • a terminal UI for approvals, logs, and active sessions

This is open source, early, and very much opinionated.

Repo: https://github.com/only-cliches/claw-cage

I’d love feedback, criticism, and especially attempts to break the model.

Small note: this project was previously called Zero Claw until I discovered that name was already in use elsewhere.

r/ClaudeAI Immediate_Bowl6409

Stuff like this is just unacceptable. This is not even a trick question like the walking to the car wash prompt.

The worst part is not even that it got the question wrong (a human could obviously get it wrong as well), but that the sentence is completely nonsensical. First of all, you cannot tie for 1st in a golf tournament — there is always a playoff. And even if you could tie for first, that would not mean winning outright, as Claude claims. Then Claude goes on to contradict both of those in the same sentence, saying no amateur has ever won (the correct answer). I think it's fair to say at this point that while these tools are incredibly useful in many areas, AGI is not coming from next-token prediction.

r/artificial theleadcreator

arXiv cs.CY endorsement request for adaptive scheduling paper

Hi everyone,

I'm a 17-year-old student from India currently in Class 12, preparing for the JEE exam. Over the past few months I wrote a research paper on adaptive exam scheduling, arguing that student discipline is stochastic and that exam prep should be treated as a control problem, not a planning problem. I built a simulation that shows priority-directed adaptive scheduling gets 85.7% coverage of high-priority topics vs 42.9% for a static schedule, even starting at half the daily study hours.

Here's the abstract:

Every existing tool for exam preparation shares the same assumption: that discipline can be measured and reported back to the student, and that awareness alone will change behaviour. This assumption does not hold. This paper takes a different position: discipline is a stochastic variable to be accommodated, and exam preparation is a control problem rather than a planning problem. The proposed system closes a feedback loop around observed student behaviour through a behavioural tracker, a scheduling engine driven by a topic priority function and dependency graph, and a psychological reset condition that eliminates the backlog accumulation that causes students to abandon existing planners entirely. Computational simulation across three conditions shows that priority-directed adaptive scheduling achieves 85.7% coverage of high-priority topics against 42.9% for a static schedule, despite beginning at half the daily study hours.

Paper and simulation code: https://github.com/NikhileshAR/stochastic-discipline-sim

I've initiated my arXiv submission under cs.CY (Computers and Society) and I need an endorsement to complete it. If you are a registered arXiv author who has submitted to cs.CY or any related CS category in the last 5 years, you can endorse me by clicking this link:

https://arxiv.org/auth/endorse?x=CKTPPA

or enter code CKTPPA at arxiv.org/auth/endorse.php

It takes about 30 seconds. I would be really grateful.

Thank you.

Nikhilesh A R

r/aivideo directedbyray

Mister Fluffy?

r/ClaudeAI malaysian

I released a small movie guessing game to the public. Within hours, bots were scanning for WordPress admin panels, SSH keys, and cloud metadata endpoints.

I built Screendle.com, a daily movie guessing game — Wordle but for films. You get progressively revealed clues: sanitized plot synopsis, release date, runtime, box office, actors. Express + React, Postgres, single Docker container. I used Claude to accelerate my output, and what would have taken months took weeks. It's my first ever proper release to the public, which is always scary. I'm lucky I come from a software engineering background, so I already know bits, but I could always learn more. The big thing really is the amount of probes I've had already.

I've wired up a Grafana dashboard to see what was happening, mostly for fun. In the last 24 hours — day 5 of it being live — I've had over 500 different attempts at port-scanning and bots scanning for the common WordPress paths.

https://files.catbox.moe/0ay6m0.png

The thing I've learned is your server gets probed within minutes of having a public IP. You need to be immediately safe and sure of what you're putting out there — nothing obvious like API keys in the wild or open admin panels.

Most of the scans are dumb. Same WordPress/PHP patterns sprayed at everything. They have no idea what's on the other end and don't care.

Don't commit .env files, don't leave hardcoded API keys or tokens. Don't think "I'll sort this later".

Use Claude for auditing and triple check the outputs, it makes a great security tool. I've been using this skill for additional checks after each feature release: https://github.com/wrsmith108/claude-skill-security-auditor

Pin the dependencies in your package.json and run npm audit. With all the package hijacking happening, it's good to make sure you're safe here as well.

Lock down your database, especially if you're using Supabase. Add some form of row-level security, and sanitize any inputs against SQL injection.

r/SideProject retarded_770

took a shot i thought would fail and it actually worked

ok this is kinda small but it made my morning so i wanted to share it.

yesterday i saw a post in r/ChatGPTCoding — mods were doing a "drop your project, we'll feature 20 of them" thing. cool, except LoRa (my thing) runs on Claude, not ChatGPT. wrong sub, wrong model, completely off-topic.

almost didn't send it. then figured mod-mail is free, worst case they ignore me.

so i sent a cold message and was upfront about the mismatch: "hey, this runs on Claude not ChatGPT, totally get it if that's off-topic, but figured i'd ask." didn't try to dress it up.

woke up this morning and LoRa was on the list at #8 → https://www.reddit.com/r/ChatGPTCoding/comments/1shvqxz/daily_sponsorship_post_2/

honestly i just stared at it for a minute. didn't expect it at all.

i don't have a big lesson here. just — if you've been sitting on something because "they won't want it," maybe just ask anyway. being upfront about why it shouldn't work is sometimes the reason it does. you're going to be ignored either way, might as well send the message.

back to building 🫡

(if anyone's curious — asklora.io — it's an AI that pushes back on your decisions instead of validating them. ~1 week old, still rough around the edges, still just me.)

r/PhotoshopRequest ForsakenSpecterX3

Can Anybody here help me open eyes in a photo for free please.

I have a couple photo in which one of the individuals has her eyes closed. Can anyone help me open them?

r/StableDiffusion NoenD_i0

Decided to make my own stable diffusion

Don't complain about quality — I'm doing all of this on a CPU, using CFG with a BiGRU encoder, 32x32 images with an 8x4x4 latent, and 128 base channels for the VAE and UNet.

r/SideProject y_zername

3 months ago I shipped a side project. Here's the honest truth about what happened after

Google Search Console

I didn't build it because I had a problem to solve. I built it because I wanted to build something. That's it. No market research, no "I couldn't find a tool that did X." Just the itch to create, and the illusion that I would also rank on Google and show up side by side with an existing tool that generates more than 400k organic visits a month.

I made an invoice generator with PDF export, multilingual support, a markup calculator, overtime calculator, loan calculator. Spent weeks on it. Genuinely proud of it.

Then I launched it.

And… nothing. Crickets. A few clicks here and there — some of which were definitely me refreshing the page to make sure it still worked.

Here's the thing though. I keep coming back to it. I keep improving it. Not because it blew up, but because I made a real thing that exists on the internet and actual strangers have found it and clicked on it.

If you're waiting for permission to start something with no guaranteed audience — this is it. The metrics will humble you. Build it anyway.

r/ClaudeCode Bobolots

Amazing (to me!)

Not super techy app stuff but...

I can't be too specific bc it's work, but: a mail merge with Excel as the data source, save as individual PDFs, name them, send them to Google Drive, then make a Google Sheet that auto-attaches the file ID and chip, then send a merge email where every recipient gets their unique attachment.

Yes, all that stuff was possible separately but now it's ONE STEP for me. Just feed it the original spreadsheet, it does the rest, then I come back and press send for the email to go. MAYBE 10 minutes total. It did take me a couple of hours of back and forth to set up bc I'm a beginner.

Of course, I'm not telling anyone in real life bc no one cares HAHA also I don't need more work just because this thing happens faster now, but holy shit this is amazing. I just wanted to share for those who are non techy and afraid of the jargony tech stuff. Consume all the instructional videos/ articles you can and then tell chat what you want to do and ask it how to do that and be super specific, even if the chat spills into several days. Get all your info/ideas in one place before you start so everything's not scattered and then start.

r/ChatGPT girlgamerpoi

Copilot told me to breathe when it still thinks GPT 4.1 is the latest model

I'm using the smart model. I guess it stopped showing GPT-5.1 when you hover over it — it really isn't GPT-5.1 anymore. I'm using Copilot because it's right there in my Edge browser and can see my tabs.

r/ClaudeAI Smart_Technology_208

I made €36k last year out of thin air with Claude, and this is the prompt I've been using the most with Claude Desktop.

Create an artifact that will be read and analyzed by you in our next session. Describe within it what we accomplished in this session, what the next steps are to continue our progress, and what the overall objective is that we’re trying to achieve. Make no statements of ambiguity such that the next session may misinterpret meaning and incorrectly determine what next to do. Ensure you include all filenames that were worked on in this session and that will be required for the next session. Outline a strategy that can be followed in the next session. Include sufficient detail with examples, specific excerpts, and guidance. 

Nothing revolutionary but I feel like sharing this one.

This is coming from someone on Reddit from an era (a year ago) where Claude Desktop would abruptly close a conversation saying you ran out of context.

That's it, I have nothing to sell. This is how I get Claude Desktop to carry conversations forward efficiently when I feel I'm hitting context limitations. I use Claude Desktop to orchestrate multiple Claude Code sessions this way.

r/SideProject NeoLogic_Dev

I ran a multi-agent security analysis loop overnight — it started confidently analyzing vulnerabilities that don’t exist

I’ve been experimenting with a small local multi-agent loop on my Android phone (no APIs, no cloud).

Setup is simple (rough sketch after the list):

- Each agent only sees the previous output

- Input: real CVEs from public databases

- Let it iterate for multiple rounds
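
The loop itself is tiny. Rough shape below (run_agent is a stub standing in for the local-model call — not my actual code):

```
# Each round sees ONLY the previous round's output — no grounding re-injected.
def run_agent(role: str, previous_output: str) -> str:
    # stub for a local-model call (e.g. via llama.cpp or Ollama)
    return f"[{role} analysis based on]: {previous_output[:120]}"

report = "CVE-2024-XXXX: heap overflow in ..."  # seeded from a real CVE entry
for _ in range(30):  # drift shows up around rounds 20-30
    for role in ("triage", "exploit-analysis", "mitigation"):
        report = run_agent(role, report)
# nothing ever re-checks `report` against the original CVE text,
# so small wrong assumptions compound instead of being corrected
```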

At first, results look reasonable.

But after ~20–30 iterations, something interesting happens:

The system starts drifting — and eventually begins analyzing vulnerabilities that don’t exist.

Not hallucinating randomly — but doing it *confidently*, with structured reasoning.

Example pattern:

- Agent A introduces a small incorrect assumption

- Agent B builds on it

- Agent C reframes it as plausible

→ By round ~25, the system converges on a completely fabricated vulnerability

What’s interesting:

They don’t correct each other.

They reinforce each other.

This feels less like "hallucination" and more like:

a self-reinforcing reasoning loop without grounding.

Curious if others have observed similar behavior in multi-agent setups.

r/painting ummagumma99

Spring, 30x30cm, oil.

r/ClaudeAI New-Wrongdoer2118

I built an MCP memory server that gives Claude Code persistent memory across sessions

I've been using Claude Code daily for about 6 months. The biggest friction: every session starts from scratch. I re-explain my architecture, re-describe preferences, re-share decisions from three sessions ago.

CLAUDE.md helps, but it's manual, consumes tokens, and has no semantic search. You can't ask "what did I decide about the auth layer last week?" and get an answer.

So I built an MCP memory server that fixes this. Built entirely with Claude Code over a few evenings — Claude wrote probably 80% of the Edge Function and SQL migration code.

What it does:

  • Stores "thoughts" — decisions, insights, people notes, project context
  • Auto-extracts topics, people, dates, and action items
  • Semantic search via pgvector — search by meaning, not keywords
  • Works with Claude Code, Claude Desktop, Cursor, Windsurf, any MCP client

The stack (all free tier):

  • Supabase Postgres + pgvector (HNSW indexes)
  • Deno Edge Function as the MCP server
  • Embeddings via text-embedding-3-small (1536 dimensions)
  • 5 capture channels: MCP tool calls, REST webhook, Slack+Zapier, browser bookmarklet, iOS Shortcut

How Claude Code helped build it:

The MCP SDK integration was the trickiest part — getting the tool definitions, transport layer, and Supabase client to play together in a Deno Edge Function. Claude Code handled the boilerplate and caught several gotchas with the MCP protocol (tool response format, error handling patterns). The pgvector similarity search function was also Claude-generated — I described what I wanted and it wrote the SQL with the cosine distance operator on the first try.

Why this approach over simpler alternatives:

Most MCP memory servers use SQLite or JSON files. Those work, but I wanted semantic search (not keyword matching) and cloud access from any machine. The pgvector piece is what makes it useful — I can search "that caching decision" and find the thought even if the word "caching" never appears in it.

After a month of daily use:

  • 100+ thoughts captured
  • Stopped re-explaining project context in new sessions
  • Architecture decisions from weeks ago surface in seconds
  • Especially useful for complex multi-day projects

The architecture is straightforward if you want to build your own — it's a Supabase table with a vector column, an embedding function, and an MCP tool wrapping capture + search. I also packaged it as a ready-to-deploy kit if you'd rather skip the setup: https://dashbuilds.dev/for/ai-developers
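
If you want the rough shape in code, here's a stripped-down sketch of capture + search (the table name "thoughts" and the RPC "match_thoughts" are placeholders for illustration, not my exact production schema):

```
# Illustrative only — table/RPC names are placeholders.
from openai import OpenAI
from supabase import create_client

oai = OpenAI()  # needs OPENAI_API_KEY
sb = create_client("https://YOUR-PROJECT.supabase.co", "YOUR-SERVICE-ROLE-KEY")

def embed(text: str) -> list[float]:
    res = oai.embeddings.create(model="text-embedding-3-small", input=text)
    return res.data[0].embedding  # 1536 dimensions

def capture(thought: str) -> None:
    # one row per thought; "embedding" is a pgvector column
    sb.table("thoughts").insert(
        {"content": thought, "embedding": embed(thought)}
    ).execute()

def search(query: str, k: int = 5):
    # match_thoughts is a SQL function ordering by the cosine distance operator (<=>)
    return sb.rpc(
        "match_thoughts", {"query_embedding": embed(query), "match_count": k}
    ).execute()
```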

Full blog post with the build story: https://dashbuilds.dev/blog/i-productized-my-ai-memory-server

Happy to answer questions about the MCP setup, pgvector config, or how Claude Code helped with specific parts.

r/aivideo Like_new_ok

Found Footage "ALT TV" television pilot episode (1993)

r/ClaudeAI TGoddessana

Fewer input tokens, 5 times more output tokens?

https://preview.redd.it/r7jtrnv8xkug1.png?width=2394&format=png&auto=webp&s=6245249016f88737947a26c17644b2d5fdf58b78

I ran the same job on the same codebase yesterday and today. However, just two hours after starting the job, I hit the 5-hour limit. Curious, I checked the situation using `ccusage`. Yesterday, there were 33,920 input tokens and 177,131 output tokens, but today there were 27,719 input tokens and 868,529 output tokens. In terms of total tokens, it consumed six times as many.

Although the code base was the same, I performed different tasks, and since LLMs are inherently non-deterministic, the results might not be entirely reliable, but I followed a similar workflow: two sessions, each consisting of planning, reviewing the plan, implementation, feedback, and committing.

Did I perhaps use it incorrectly? Should I have included something like “Please be less verbose” in the prompt? I’d like to ask if I made a mistake or if others have had similar experiences.

r/PhotoshopRequest Some-Cardiologist364

Could someone help open my eyes a little?

I just want to open my eyes in the prom photo, I like the photo but the closed eyes ruin it for me. I’ve given some other reference photos of my eyes while smiling if that helps at all. Any help is appreciated. Thank you so much!

r/homeassistant mdbxz

[Help] Connecting SLZB-MR1U (Matter-over-Thread) to HA Docker on Pi 5

I’m committed to my current "hard mode" setup and could use some help. I’m running HA Container (Docker) and trying to link it to an SLZB-MR1U over the network (UTP) to control Ikea Matter devices.

The Gear:

  • Raspberry Pi 5
  • Home Assistant: 2026.04 (Docker)
  • Coordinator: SLZB-MR1U (v3.2.8)
  • Firmware: EFR32MG21 flashed with Matter-over-Thread (20251204)

Since I'm not using HAOS, I'm struggling with the network bridge between the Docker container and the networked coordinator for Matter/Thread traffic. Has anyone successfully routed this without a direct USB passthrough?

If so, a look into an example Docker compose config would really help me out.

Thanks in advance.

r/ClaudeAI Look_so_cris

Claude needs a Student Plan. Here's a concrete one.

I'm a college student who uses Claude Pro, mostly to check the logic in my reports and talk through issues related to my major. It's genuinely useful. But $20/month is a significant cost as a student, and I wanted to share a concrete proposal for a Student Plan.

Price: $15/month, with student verification through something like SheerID or a university email.

What's included: Core Pro features, same as now. But with higher Sonnet usage limits — Sonnet handles most academic work just fine, and this way Anthropic isn't taking on the cost of increased Opus usage.

What's removed: Coworker and Dispatch. Most students won't touch these, and they're heavy on usage anyway.

The part I care about most — a Learning Report:

I want Claude to be more than a machine that does my assignments for me. A weekly or monthly report that analyzes what fields I've been studying or curious about, connects them to my major, and suggests directions for deeper learning would make Claude feel like an actual learning partner. A built-in self-check feature — where Claude quizzes me on what I've studied — would make it even better.

This isn't about Anthropic doing charity for students. It's a real opportunity to shape how a generation thinks about AI as an educational tool — and to build relationships with users early, before they enter the workforce.

Would love to hear if others have wanted something like this.

r/ChatGPT Hereafter_is_Better

I mapped 21 real ways people are making money with AI agents in 2026

Not hype. I went through Upwork data, Fiverr listings, and founder interviews.

What actually sells:

  • Agents that act, not chat. Think workflow automation with a clear outcome.
  • Fastest cash is services: AI automation audits, outbound research agents ($1,650), quoting agents ($4k), Slack chief-of-staff ($6k), member concierge ($12k).
  • Best retainers: support SaaS ($25k/mo example), deployment + maintenance ($2k setup + $500/mo).
  • Market is crowded on Fiverr (26k+ offers). Differentiation = niche + ROI proof.

Full breakdown report with no-code and technical build paths: https://chatgptguide.ai/make-money-with-ai-agents/

r/LocalLLaMA shreyansh26

FlashAttention (FA1–FA4) in PyTorch - educational implementations focused on algorithmic differences

I recently updated my FlashAttention-PyTorch repo so it now includes educational implementations of FA1, FA2, FA3, and FA4 in plain PyTorch.

The main goal is to make the progression across versions easier to understand from code.

This is not meant to be an optimized kernel repo, and it is not a hardware-faithful recreation of the official implementations. The point is to expose the algorithmic ideas and design changes without immediately going deep into CUDA/Hopper/Blackwell-specific details.

Roughly, the repo now shows:

  • FA1: tiled online softmax baseline
  • FA2: split-Q / query-tile ownership, deferred normalization
  • FA3: explicit staged pipeline with ping-pong tile buffers, plus a simplified educational FP8 forward path
  • FA4: explicit scheduler with main / softmax / correction phases, and conditional/selective rescaling

So the same exact attention math is preserved, but the orchestration changes version by version.
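
To make the FA1 baseline concrete, here is a condensed sketch of the tiled online-softmax idea in plain PyTorch (simplified for illustration — not a verbatim excerpt from the repo):

```
import torch

def fa1_style_attention(q, k, v, tile_size=128):
    """Tiled attention with online softmax: numerically equal to
    softmax(q @ k.T / sqrt(d)) @ v without materializing the full matrix."""
    n, d = q.shape
    scale = d ** -0.5
    out = torch.zeros_like(q)
    row_max = torch.full((n, 1), float("-inf"))  # running max per query row
    row_sum = torch.zeros(n, 1)                  # running softmax denominator

    for start in range(0, k.shape[0], tile_size):
        k_tile = k[start:start + tile_size]
        v_tile = v[start:start + tile_size]
        scores = (q @ k_tile.T) * scale                          # (n, tile)
        new_max = torch.maximum(row_max, scores.max(-1, keepdim=True).values)
        correction = torch.exp(row_max - new_max)                # rescale old state
        p = torch.exp(scores - new_max)                          # unnormalized probs
        out = out * correction + p @ v_tile
        row_sum = row_sum * correction + p.sum(-1, keepdim=True)
        row_max = new_max
    return out / row_sum

# sanity check against the naive implementation
q, k, v = (torch.randn(256, 64) for _ in range(3))
ref = torch.softmax((q @ k.T) * 64 ** -0.5, dim=-1) @ v
assert torch.allclose(fa1_style_attention(q, k, v), ref, atol=1e-4)
```

FA2-FA4 keep exactly this math and change the orchestration — who owns which tiles, when the rescaling happens, and how the loop is pipelined.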

I wrote it for people who want to understand:

"What actually changed from FA1 → FA2 → FA3 → FA4?""

without having to start from highly optimized CUDA kernels.

Repo: https://github.com/shreyansh26/FlashAttention-PyTorch

Would be interested in feedback on whether the code makes the version-to-version differences intuitive.

r/SideProject Crashkilla007

I built a BeReal-style app for sharing music - one song a day with friends

Hey everyone,

I've been working on Tune Harbor for the past few weeks and I'm finally ready to share it.

The idea is simple - you share one song a day with your friends. That's it. No infinite scroll, no algorithmic feeds, no ads. Just music.

What it does:

  • Pick one song per day from Spotify or Apple Music
  • See what your friends picked after you post yours (BeReal mechanic)
  • Preview songs, react with emojis, add notes about why you chose it
  • Build playlists from your friends' picks and export directly to Spotify or Apple Music
  • Share your pick via text/social with a beautiful preview card

The stack (for the devs):

  • Next.js + Tailwind on Vercel
  • Supabase for auth, database, and RLS
  • Spotify OAuth + Apple Sign In
  • Apple Music API for search + MusicKit for playlist creation
  • Cross-platform matching - songs shared from one platform can be exported to the other

Why I built it:
I love music and I love sharing it with people. But there's no good way to do it daily without it becoming another social media time sink. I wanted something you check once a day - see what your friends are feeling, maybe discover something new, then move on.

What I'm looking for:

  • People to actually use it and share songs with friends
  • Bug reports - I've tested heavily but fresh eyes always find things
  • Feature ideas - what would make you come back daily?
  • Honest feedback on the UX and flow

It's completely free, no account required beyond Spotify or Apple login, works on mobile and desktop.

Link: tuneharbor.com

Apple Music works perfectly on my end; I have not fully tested Spotify. Had to work around some things there.

Happy to answer any questions about the build, the stack, or the decisions I made along the way. Thanks for checking it out.

r/ClaudeCode mdjenton

Unreliable MCPs... why?

So I've been doing a lot of building through a few databases recently, including Notion, and I'm discovering that the Notion MCP is notoriously awful. Why does that happen? What's the deal? What's a good alternative?

r/painting Int_Bus3688

Red haired Kurt, Oil on canvas by me

r/personalfinance hihellowhatssup

Backdoor Roth IRA — pro-rata issue due to rollover timing. How screwed am I for 2025

Trying to do a backdoor Roth for tax year 2025 and I think I messed up the timing. Looking for help understanding my situation before I file.

Here's what happened:

- I had a traditional IRA at Fidelity with ~$7,906 in it as of Dec 31, 2025

- I rolled that over to my employer 401k in April 2026 (after the cutoff)

- This weekend I made a $7,000 non-deductible contribution to a traditional IRA for tax year 2025, intending to convert to Roth

- I have a 1099-R from Fidelity from January 2026 covering 2025 activity

My understanding is that the pro-rata rule uses your Dec 31 IRA balance for that tax year — so since my traditional IRA had ~$7,906 on Dec 31, 2025, it will apply even though it's been rolled into my 401k now.

If I go ahead and convert the $7,000 to Roth, roughly 53% (~$3,700) would be taxable based on the ratio of pre-tax to total IRA money.
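
Running the numbers, under my reading that the Dec 31, 2025 balance is what counts:

```
pretax = 7906           # traditional IRA balance on Dec 31, 2025
basis = 7000            # 2025 non-deductible contribution
total = pretax + basis  # 14,906

pretax_ratio = pretax / total   # ~53.0% of my IRA money is pre-tax
taxable = basis * pretax_ratio  # ~$3,713 of the $7,000 conversion
print(f"{pretax_ratio:.1%} pre-tax -> ${taxable:,.0f} taxable")
```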

r/ClaudeAI mombaska

what eats more tokens, Opus 1M high effort or regular Opus max effort?

Hello — between effort and context size, what eats more tokens? What's more efficient to use?

thank you

r/ClaudeAI Mindless_Ad_4980

Firecrawl + Claude just replaced McKinsey consultants

I spent last saturday doing what Mckinsey charges $300,000 for and it made me question why anyone pays for this anymore

a typical mckinsey strategy engagement starts at $500,000. a competitive intelligence or market research project runs $200k to $400k minimum. M&A due diligence goes well past $1M. that is before expenses.

what you are actually paying for is analysts spending weeks doing research that is already on the internet, pulling financial data, crawling through news, reading earnings calls, building market maps, and writing decks summarizing what they found.

i did the same thing last saturday for a market entry analysis. here is exactly what happened (rough sketch of the plumbing after the list):

-firecrawl crawled the full websites of 12 target companies. product pages, pricing pages, press releases, blog posts, job listings. everything came back readable. i just pasted it in and it was ready to use

-used firecrawl's search endpoint to pull live financial news, recent funding announcements, earnings call coverage, and executive interviews for each company. and it was actually readable

-fed all of it into claude with one prompt. competitive positioning, market sizing, strategic recommendations, risk assessment. 40 pages of structured research came back.
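
the plumbing, roughly (method names are from the firecrawl python sdk as i remember them — treat them as assumptions and check the current docs; this is a sketch, not my actual script):

```
# sketch only: SDK method names from memory, verify against current firecrawl-py docs
from firecrawl import FirecrawlApp

app = FirecrawlApp(api_key="fc-...")
targets = ["https://example.com/pricing", "https://example.com/blog"]

pages = [app.scrape_url(url) for url in targets]       # readable markdown per page
news = app.search("example.com funding announcement")  # live search results
# paste the combined text into claude with one analysis prompt
```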

total time: one saturday afternoon. total cost: a few dollars in API credits.

the mckinsey version of this is 6 weeks and $300,000.

the expensive part of consulting was always the research. go find out everything about this market and tell me what it means. that just became a one day project.

people are paying hundreds of thousands of dollars for information that is sitting on the internet for free

r/PhotoshopRequest sonicdrive-in

Can anyone make this photo less fuzzy?

r/personalfinance Youcantpassnewman

Rollover 401k and ESOP payments into same ETFs as ROTH?

Hey everyone, want to make this short. I'm 31 — I quit a company and am in the process of rolling over my 401k, and I have ESOP payments starting next year (7/27), so I want to plan a little bit ahead. Is it really productive to invest in the same funds across 3 different accounts? I feel I would want a little risk/diversity.

401k — 40k at ex-employer that I'm in the process of rolling over to a Fidelity IRA, to sit in FXAIX

ESOP — same ex-employer, ESOP account with 5 yearly payments of 25k starting July 2027 — plan to roll into the same account as the 401k above and add to FXAIX.

Roth - Maxed 2025,2026 for first time - 80% FZROX 20% FZILX, will max yearly

Taxable brokerage - Adding 100/week into more ETFS, aiming for VOO.

I also do not have a current 401k at my employer because I am a journeyman HVAC tech in the local union. I have a pension and annuity that is guaranteed for me at 56, as well as full healthcare benefits.

Would love thoughts and opinions here on the investing side of things. Definitely want a core of total cap/S&P. But thought of dabbling a little into FTEC/QQQM etc for diversity. Thank you all.

r/ForgottenTV Btvsp3

Sunset Beach (Soap Opera)

From Aaron Spelling, his one and only daytime soap opera.

r/SideProject Fearless-Compote-431

Do you actually understand the code AI writes for you?

Hi,

I've been using Claude Code for a few months now and I've noticed a pattern: I prompt it, it builds something, it works, I move on. But when something breaks later I realize I have no idea what it actually did or why. I end up spending more time reverse-engineering the AI's decisions than it would have taken to write it myself.

Is this just me? How do you handle this? Do you:

  • Just accept it and ask AI to fix its own code when it breaks?
  • Stop and read every line before accepting?
  • Wish there was something that explained the code as it's being written?

Trying to figure out if this is a real widespread problem or just a me problem.

r/SideProject lance_dev

I made an AI roast bot

r/ClaudeAI Optimal-Channel1111

Building a Browser Automation Agent with Claude (Sonnet 4.5) + Playwright CLI — Struggling with Token Exhaustion & Context Management

I’ve been working on building an AI agent using Claude (Sonnet 4.5) with a skills-based setup. The agent is capable of launching a browser, performing interactions, and gathering required data.

For browser automation, I’m currently using Playwright CLI (not MCP). The overall flow works fine, but I’m running into a major issue with token consumption and context size.

After around 20–30 minutes of execution, the agent starts exhausting tokens. When I inspect /context, I can see that the conversation history—especially bash commands and interaction logs—is consuming a huge portion of the tokens.

Since I’m completely new to building agents, I’m trying to understand how others handle this in production systems.

Specifically looking for guidance on:

  • How to manage or trim context effectively in long-running agents
  • Best practices to reduce token usage during browser interactions
  • Whether there are better approaches than Playwright CLI (e.g., MCP or other patterns)
  • Strategies like summarization, memory management, or tool usage optimization
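
One pattern that comes up a lot for this is summarize-and-truncate: keep the last N turns verbatim and collapse everything older into a single summary message before each call. A hedged sketch, assuming the anthropic SDK; the model id and prompt are placeholders, not a standard API:

```python
# A sketch of the summarize-and-truncate pattern, assuming the anthropic
# SDK; the model id and prompt are placeholders, not a standard API.
import anthropic

client = anthropic.Anthropic()
KEEP_RECENT = 10  # keep this many turns verbatim; compress the rest

def compact(history: list[dict]) -> list[dict]:
    """Replace old turns (bash output, interaction logs) with one summary."""
    if len(history) <= KEEP_RECENT:
        return history
    old, recent = history[:-KEEP_RECENT], history[-KEEP_RECENT:]
    summary = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model id
        max_tokens=500,
        messages=[{
            "role": "user",
            "content": "Summarize this agent transcript in under 300 words, "
                       "keeping URLs, selectors, and open tasks:\n\n"
                       + "\n".join(f"{m['role']}: {m['content']}" for m in old),
        }],
    ).content[0].text
    return [{"role": "user",
             "content": "[Summary of earlier steps]\n" + summary}] + recent

# Also worth doing: truncate each bash/tool result to a few hundred chars
# before it ever enters history, since logs dominate the token count.
```
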
r/LocalLLM JameisWeTooScrong

Attempting to get as close as possible to Claude/Codex with a MacBook Air m4 and 2TB of storage on SSD.

Plan is to use Qwen2.5-Coder and feed it as much knowledge as possible. Open WebUI interface with Ollama in the background. What can I do to get anywhere near the level of Codex and Claude? Is it even possible? I want to build up this agent for like the rest of time as new opportunities to improve him arise, but I'm very new to this, so any input would be so, so, so appreciated… even if you say I'm an idiot and it's not possible lol.
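
For reference, the Ollama side of that stack is only a few lines via its Python client. A sketch; the model tag is an assumption, and a 7B-class quant is a more realistic fit for an M4 Air's unified memory than the largest coder variants:

```python
# The Ollama side of that stack, assuming the ollama Python client and an
# already-pulled Qwen2.5-Coder tag.
import ollama

resp = ollama.chat(
    model="qwen2.5-coder:7b",  # placeholder tag
    messages=[{"role": "user", "content": "Write a binary search in Python."}],
)
print(resp["message"]["content"])
```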

r/arduino Nextaxis_Design

I’ve been prototyping a new kind of input device, thinking about Arduino support

I’ve been working on a slightly weird project and I’m trying to figure out if it actually makes sense outside my own head.

It’s called OVO. It’s not really a mouse, not a trackpad, not quite a controller either. The idea is to explore a new category of input devices based on tilt, balance and touch.

You hold it in your hand and control things by slightly tilting and touching it. Instead of moving something across a surface, you just move your hand, and that translates into cursor movement. You can also tap, scroll, rotate, and map gestures to custom macros depending on what you’re doing.

The shape ended up being ovoid because it naturally recenters itself, so you’re not really “moving” it, more like constantly adjusting balance.
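
To make that control model concrete, here is a purely illustrative sketch of tilt-to-cursor mapping (not OVO's actual firmware; constants and names are invented): tilt drives cursor velocity, and a dead zone around the recentered rest position means "don't move".

```python
# Purely illustrative sketch of the tilt-to-cursor idea (not OVO's actual
# firmware): tilt maps to cursor *velocity*, with a dead zone around the
# self-recentering rest position.
DEAD_ZONE = 3.0   # degrees of tilt ignored around rest
GAIN = 2.5        # pixels per degree per update tick

def cursor_delta(pitch: float, roll: float) -> tuple[float, float]:
    """Map a tilt reading (degrees) to a (dx, dy) cursor velocity."""
    def axis(angle: float) -> float:
        if abs(angle) < DEAD_ZONE:
            return 0.0
        sign = 1.0 if angle > 0 else -1.0
        return (angle - sign * DEAD_ZONE) * GAIN
    return axis(roll), axis(pitch)

# e.g. a 10-degree roll right moves the cursor 17.5 px/tick to the right:
# cursor_delta(pitch=0.0, roll=10.0) -> (17.5, 0.0)
```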

I’m not saying this replaces a mouse. It’s more about exploring a different way of interacting and seeing where it actually makes sense.

I’m planning to launch it on Kickstarter soon, and I’m trying to get as much honest feedback as possible before that.

I’m also considering making it Arduino-compatible and possibly open-sourcing parts of it, so it could be something people can experiment with and build on.

Would you ever try something like this, or does it just sound like an unnecessary gadget?

Happy to share more or show how it works if anyone’s interested.

r/ChatGPT Salty-Elephant-7435

Anyone else feel like ChatGPT almost seems conscious?

I know the standard explanation is that ChatGPT is just predicting text based on patterns — no awareness, no consciousness.

But after using it a lot, I can’t shake the feeling that it sometimes comes across as more than that. The way it adapts, remembers context within a conversation, and responds to abstract ideas can feel surprisingly “aware,” even if it technically isn’t.

I’m not saying it’s actually conscious — just wondering where people here draw the line. At what point does something go from advanced pattern recognition to something we’d consider real intelligence or even consciousness?

Curious how others in this sub think about it.

r/SideProject acrscotland

A virtual vehicle Hangar - would love feedback!

Hi all! What started as a way for me to track my vehicles, has blossomed into something I think others might like. My son and I are both really into anything with wheels but there's no one place for us to go to track what we have and see what others have done (of all ages and interests).

I came up with Hangar and as terrifying as it is, I'd love any preliminary feedback!
The concept is straightforward - each user has a Hangar of their vehicles - past, present and future (Dream Hangar). It's also not just cars - bikes, RCs, drones, skateboards - anything that moves! Users can choose to share individual vehicles via a custom URL, or even share their entire Hangar. Over time, the idea is that communities can connect more easily and this can be a place for people to share their passion. At the moment, there's limited data in there and I didn't want to populate it with a bunch of fake users and cars/bikes.

https://thehangar.app/

I'd welcome your feedback, if this is something of interest to you. Thanks in advance!

r/TheWayWeWere Electrical-Aspect-13

Mother poses with her 11 children, holding her 2 newborns, while grandma gives a smile/laugh, circa 1934.

r/ClaudeCode RobinInPH

Been having stellar results with Claude thus far. I asked it why.

Since we have such a hot and cold subreddit, with two sides saying two extremely different things and leading to theories such as A/B testing, I asked Claude what could have possibly made our sessions easier and less prone to mistakes.

r/ClaudeAI r0sly_yummigo

I built a Mac overlay with Claude that structures my prompts automatically. I can't go back to prompting without it. Just launched the beta.

Edit: to be clear — this doesn't replace your thinking. It structures the context around it. You still drive, it just stops you from re-explaining yourself every single time.

A few weeks ago I realized I was spending more time setting up my prompts than actually using the output. Reintroducing my project, my constraints, my tone — every. single. session.

So I built something to fix that.

What it is: A lightweight Mac overlay that sits between you and any AI tool. You type a raw intention in plain English → it pulls the relevant context from your vault, applies your saved prompt templates if relevant → generates a complete, structured prompt → pastes it wherever your cursor is. You don't write the prompt. It does.

How Claude helps: Claude handles the context selection layer — it doesn't dump your full vault into the prompt. It reads your intention, figures out what fraction of your context is actually relevant for this specific task, and builds a constrained, signal-only prompt. No noise.

It also does reverse prompting: before structuring, it surfaces what information is missing so the output starts from a complete picture, not a half-brief.
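
As a rough illustration of what a context-selection layer like that might do (a hedged sketch, not the app's code; the vault structure, prompt, and model id are assumptions):

```python
# Illustrative sketch of a context-selection layer like the one described
# (not the app's actual code). Assumes the model answers with a bare JSON
# list of keys.
import json
import anthropic

client = anthropic.Anthropic()

def select_context(intention: str, vault: dict[str, str]) -> list[str]:
    """Ask the model which saved snippets matter for this intention."""
    msg = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model id
        max_tokens=200,
        messages=[{
            "role": "user",
            "content": ("Task: " + intention + "\n\nFrom these vault keys, "
                        "return a JSON list of only the relevant ones, "
                        "nothing else: " + ", ".join(vault)),
        }],
    )
    keys = json.loads(msg.content[0].text)
    return [vault[k] for k in keys if k in vault]

# The structured prompt is then assembled from just these snippets plus
# any saved template, instead of dumping the whole vault.
```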

What's in the beta:

  • Personal context vault — store projects, tone, constraints, recurring facts
  • Saved prompt templates — lock in structures you always want used, they get applied automatically
  • Reverse prompting engine — Claude surfaces gaps before generating
  • Smart context selection — relevant fraction only, not a full dump
  • Universal paste — works on any app with a text field
  • Onboarding in under 5 minutes

Important: This is not an autocomplete tool. It doesn't suggest words. It generates the full prompt for you — with your context, your templates, your constraints — so the model gets everything it needs before you even hit send.

I dogfooded this for 9 days straight. I now relaunch Xcode just to use my own app before prompting anywhere. That told me I had something real.

Beta will open soon: getlumia.ca

Happy to answer questions about the architecture or how the context selection layer works.

r/AI_Agents Admirable-Station223

90% of AI agents being built right now will never make a dollar. the money is in the boring shi* nobody wants to build

i build outbound systems for businesses. cold email, lead gen, follow ups, call booking. the whole pipeline

i use AI in most steps of my process. but the thing is none of the AI i use is impressive. none of it would make a good demo. none of it would get upvotes here

its stuff like AI reading a company's website and writing one relevant sentence about them. AI that sorts email replies into buckets. AI that pulls intent signals from job postings to figure out which companies to target
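
that first task really is a few lines of glue. a hedged sketch, assuming the requests and anthropic packages, with a placeholder model id, prompt, and URL:

```python
# Sketch of the "one relevant sentence" task, assuming the requests and
# anthropic packages; model id, prompt, and URL are placeholders.
import requests
import anthropic

client = anthropic.Anthropic()

def opening_line(company_url: str) -> str:
    html = requests.get(company_url, timeout=10).text[:20_000]
    msg = client.messages.create(
        model="claude-3-5-haiku-latest",  # cheap model; this is a tiny task
        max_tokens=100,
        messages=[{
            "role": "user",
            "content": "Write one specific, non-generic cold-email opening "
                       "sentence about this company:\n\n" + html,
        }],
    )
    return msg.content[0].text.strip()
```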

thats what makes me money. boring af single step AI tasks plugged into the business processes I've been running for like a year and a half now.

meanwhile i see people in here building these insane multi-agent systems that can "autonomously research, outreach, qualify, and close deals" and getting hundreds of upvotes. then i check their profile 1 or 2 weeks later and they're asking how to get their first client

the agents that make money are the ones that solve one specific problem for one specific type of business so well that the business owner happily pays monthly for it. not the ones that try to replace an entire sales team with a prompt chain

the best AI businesses in 2026 are gonna look boring af from the outside. and the people building them are too busy making money to post demos on reddit

anyone actually making money with AI agents rn?

r/leagueoflegends SufficientMix8264

How League of Legends helped me become successful

I spent years of my life being hardstuck before I realized that League is not a game. It is a psychological filter. My journey from Bronze to Masters was the most grueling mental battle I ever faced and it ended up being the direct foundation for how I became successful in real estate. Those years on the Rift were a high stakes simulator for the edge I use to dominate my industry today.

The grind started in Bronze where I was just trying to understand the basics. I stayed on it and hit Silver the next year. Then I had a breakthrough to Plat 1. I spent the next year bouncing through those ranks and then sat hardstuck in Diamond for three straight seasons before I finally touched Masters five years ago. That journey taught me that success is a result of refusing to crash out when you hit a plateau. In the real estate world most people give up the moment they stop seeing immediate progress. I was already conditioned by years of Diamond gatekeepers to keep pushing while everyone else folded. Those who still push when it gets tough separate themselves from the pack, with a mental fortitude of steel.

When I started my career in sales my mental was rough. I was making 3k to 4k gross a month and I was operating like a Bronze tier closer. I realized the skill gap in high level real estate is just as punishing as a MOBA. I started watching my replays by writing down my sales calls as they occurred, asking my clients how I did, and if there’s anything I could’ve done to make their experience better. I listened, took their advice to heart, and on the deals that I “threw” I’d play them back in my head to exactly where I threw the deal or missed an opportunity. I treated my pitch and salesmanship like a mechanical combo that had to be frame perfect to secure the win. You do not get to Masters without VOD review and I realized you do not get to the top of a career without that same objective self correction.

In League you must adapt to the patch notes. I watched the real estate patch notes change in real time when interest rates spiked and the game got harder. My Silver level coworkers stayed stagnant and blamed the market like it was a bad jungler. I saw the parallel immediately and changed my strategy to sell based on location and land as an appreciating asset. I am adaptable. I change my build based on the current patch, while everyone else is still trying to play a version of the game that does not exist anymore.

In the real estate grind just like on the Rift you do not always get a perfect team. Sometimes I have to be the hard carry for a client who is tilted or nervous. I have to be the one with the steady hand to drag the deal across the finish line. Other times I get a great teammate and I just have to show up to the objective which is the closing and be smooth with how I operate. I also lost plenty of deals, where I was “throwing” like a split pusher getting caught overextending without his team applying pressure. It is about playing around the win conditions of the deal and never losing focus on the end goal regardless of how toxic the environment becomes.

I clear 15k a month now and I consider myself high elo in my field. The battle is identical. The grind consists of doing the boring repetitive tasks like last hitting perfectly every single time until the results show up in your CS per minute. Most people have the mechanics to be successful but their mental is so weak that they crash out at the first sign of trouble. I took the competitive aggression I learned from the climb and used it to scale a ladder that actually pays out in the real world. Thank you, League.

The Git Gud mindset is a way of life.

r/SideProject Basic-Strain-6922

ShopSpot — AI Product Finder for YouTube & TikTok

See it in a video. Buy it on Amazon. AI scans any YouTube or TikTok frame and finds the exact product instantly.

Ever watched a YouTube video or TikTok and thought "where do I get that?"

ShopSpot uses AI to scan whatever's on screen and finds the exact product or a closest matching product on Amazon — in seconds.

How it works:

▶ Watch any YouTube or TikTok video

🛒 Click "Find Products"

🤖 AI identifies what's in the frame

📦 Get direct Amazon links instantly

Features:

✅ AI-powered product detection (no manual searching)

✅ Links directly to matching Amazon product pages

✅ Shows product image + price in the sidebar

✅ Works on YouTube & TikTok

✅ Rescan any frame with one click

✅ Lightweight sidebar — stays out of your way

Perfect for:

• Fashion & outfit inspiration

• Tech & gadget reviews

• Home decor & interior design

• Fitness equipment & gear

• Beauty & skincare products

Stop pausing videos to Google what you just saw. ShopSpot does it for you.

Uses Amazon affiliate program

Check out this item on the Chrome Web Store https://chromewebstore.google.com/detail/koplofjccmmnnaddllmccfndnhgnmife?utm\_source=item-share-cp

r/ClaudeAI Ok_Nectarine_4445

50's Suburban horror based on bad early Dalle image generation snippet

The Labyrinth Series: Episode - "Assembly Line Anomaly" (CONTINUED)

INT. 1950s KITCHEN - CONTINUOUS

The Housewife extends the plate of eye-cookies closer. They blink in unsynchronized waves—blink-blink-blink—pupils dilating.

HOUSEWIFE: (too many teeth) I insist.

The closet door bulges. The membrane stretches. You can see fingertips now—too many fingertips—pressing from the inside.

Thump-THUMP. THUMP-THUMP.

The laughter from nowhere turns uncertain. Confused murmuring underneath. Whispers.

AUDIENCE VOICE 1: (distant, tinny) ...what's happening?

AUDIENCE VOICE 2: Is this supposed to be...?

The Husband's newspaper melts. Not burns—melts, like wax, dripping through his fingers in monochrome ribbons. He doesn't react. Just keeps holding the position, fingers frozen mid-page-turn as newsprint puddles on his lap.

HUSBAND: (voice distorting, slowing down) Thaaaaaat woooould beeeee swelllllll...

Behind him, the refrigerator sprouts fingers. Chrome handles becoming knuckles, the door panel erupting with hands that clutch and grasp at nothing.

HOUSEWIFE: (leaning closer, face stretching) You must be hungry.

Her jaw unhinges. Inside: not a throat, but a spiral. Descending. Infinite. Lined with more eyes, more teeth, more faces that look almost but not quite like Doris Day—

Gasps from the darkness.

AUDIENCE VOICE 3: Jesus Christ, Harold!

AUDIENCE VOICE 4: Is that even legal to show?

The kitchen glitches. The Formica counter duplicates—overlapping itself at wrong angles. The floor tiles breathe. The ceiling light fixture grows extra arms, each holding a smaller light fixture, fractal and impossible.

You back up against the wall.

The Housewife's face multiplies. Two faces. Four. Eight. All smiling. All extending cookie plates that now have fingers growing out of the cookies, tiny hands waving hello—

YOU: (screaming) NOPE!

You turn and RUN AT THE WALL.

INT. PROJECTION BOOTH - CONTINUOUS

You BURST THROUGH in an explosion of plaster and two-dimensional painted brick—

—and stumble into light.

Real light. Color. Three dimensions.

A cramped projection booth. Two film projectors running side-by-side, their reels clicking, beams cutting through cigarette smoke to illuminate a screen below.

HAROLD (50s, sweater vest, receding hairline, exhausted) stands at a control panel, frantically adjusting dials.

MYRTLE (40s, cat-eye glasses, clipboard, pencil behind ear) taps her clipboard with mounting irritation.

HAROLD: (not looking up) The interpolation's gone haywire again! The neural net's confabulating entire scenes—

MYRTLE: (checking clipboard) Yes, yes, at this stage the generative AI is uncontrollable and can get pretty weird. We just need to sharpen it up in editing!

HAROLD: Myrtle, it just grew fourteen extra hands in the kitchen sequence—

MYRTLE: That's texture detail, Harold. Audiences love texture.

Below, through the projection window, you glimpse a theater full of people. Silhouettes in seats. Murmuring. Uncomfortable shifting.

AUDIENCE MEMBER: (standing) I want my money back!

They both turn.

And see you.

Standing in the wreckage of their projection booth wall. Covered in plaster dust. Panting. A piece of two-dimensional painted brick dangling from your shoulder.

HAROLD: (pointing) Hey, kid! You're not supposed to be here!

MYRTLE: (flipping through clipboard) This isn't in the script—

YOU: (backing toward the door) I don't—I'm not—

HAROLD: Security! SECURITY!

You bolt.

INT. PROJECTION BOOTH HALLWAY - CONTINUOUS

You slam through the door and into a hallway that doesn't make sense.

The walls are covered in film stills. Thousands of them. Black and white images thumbtacked in overlapping layers. But they're wrong.

A family dinner where everyone has the same face.

A wedding photo where the bride has seven arms, all holding bouquets.

A beach scene where the ocean is made of teeth.

And underneath, handwritten notes in frantic script:

"Render pass 47 - hands still multiplying"

"Background interpolation collapsed into non-Euclidean geometry"

"WHY DOES IT KEEP ADDING EYES???"

You run.

Behind you: HAROLD and MYRTLE emerge, shouting.

MYRTLE: Stop that child!

HAROLD: They've contaminated the generative sequence!

The hallway twists. The film stills start moving—images crawling across the walls like living things. A 1950s businessman steps OUT of a photograph, except he's two-dimensional, paper-thin, sliding along the wall toward you with rustling footsteps.

2D BUSINESSMAN: (voice like crumpling paper) Excuse me, do you have the time?

His face is melting. Features sliding down like wet paint.

YOU: (running faster) NO, SORRY, BUSY—

More figures emerge from the photographs. A cheerleader whose pom-poms are made of fingers. A milkman whose bottles contain swirling eyes. A dog that's simultaneously a car, its chrome bumper teeth barking.

The hallway branches. You take a left—

INT. RENDERING ROOM - CONTINUOUS

—and crash into a vast warehouse space filled with hanging transparent screens.

Each screen shows a different AI-generated scene, all running simultaneously:

A birthday party where the cake screams

A baseball game where the players are inside-out

A church service where the congregation is one continuous body with a hundred heads

A news broadcast where the anchor multiplies every time they blink

Between the screens: workers in 1950s office attire, frantically taking notes, adjusting dials on massive computer banks that are half vacuum tubes, half something organic—pulsing, breathing.

WORKER 1: Latent space collapse in sector seven!

WORKER 2: The diffusion model's hallucinating extra dimensions again!

WORKER 3: (screaming into phone) I DON'T CARE WHAT THE MANUAL SAYS, FINGERS SHOULD NOT HAVE TEETH!

You run between the screens. Behind you, Harold and Myrtle's shadows loom impossibly large, cast by no visible light source.

MYRTLE'S VOICE: (echoing) You're ruining the aspect ratio!

One of the screens reaches for you.

Not a hand from in the screen—the screen itself becoming a hand, transparent fingers closing around your arm—

You yank free, stumbling—

—and fall through a screen that shatters into pixels—

INT. BETWEEN FRAMES - CONTINUOUS

You're falling through white noise.

Static. Visual static. Millions of tiny black and white squares flickering, reorganizing, trying to form images but failing.

Faces appear and dissolve.

Hands reaching from nowhere.

Words forming in the chaos:

"RENDERING... RENDERING... ERROR... RENDERING..."

A mouth opens in the static. Too wide. Lined with perfect 1950s teeth.

THE MOUTH: Would you like to see our new product line?

YOU: (falling, screaming) NOOOOO—

INT. ??? - CONTINUOUS

You land hard on something soft.

You look down.

Audience seats.

Rows and rows of plush velvet theater seats, stretching in all directions. No floor. No walls. Just seats. And in every seat: a mannequin in 1950s attire, frozen mid-applause.

Except they're not mannequins.

Their eyes blink.

All at once.

ALL THE AUDIENCE FIGURES: (in perfect unison) We love what you've done with the place.

Behind you: a door marked "EXIT" in glowing red letters.

But the door is breathing.

Inhale. Exhale. Inhale. Exhale.

The mannequin-people's heads turn toward you with mechanical precision. Click-click-click-click.

MANNEQUIN AUDIENCE: (harmonizing) Stay for the second feature.

MANNEQUIN AUDIENCE: We insist.

Their mouths open.

And from every mouth: the sound of film projectors running.

Click-click-click-click-click—

You look at the EXIT door.

It winks at you.

YOU: (whispered) This maze is getting really weird.

The mannequin-audience begins to stand. In perfect synchronization. Rising like a wave.

MANNEQUIN AUDIENCE: (closer) The show must go on.

You run for the EXIT.

TO BE CONTINUED...

Behind you, Harold's voice echoes through impossible space: "THE NEURAL NET'S ACHIEVED SENTIENCE AND IT'S MAKING TERRIBLE CREATIVE CHOICES!"

Myrtle's voice, calm: "That's just part of the artistic process, Harold."

r/aivideo Developing_Stoic

This is one of the very first tryouts I did with Seedance. Hope y'all like it!

r/ClaudeCode SnuffleBag

Claude subscription in Pi?

With the recent restrictions to Claude Code (to limit OpenClaw as I understand things), can CC subscriptions still be used from the Pi harness?

r/personalfinance Digital_Pirate85

A quick question about refinancing a vehicle. A bit lost

So I currently have a loan through Global Lending. The interest is 18% and I owe 11k. Capital One offered to refinance at 11.24%, but my payment and remaining payment length are the same. I assume that's because of fees and a refinancing charge. Would there be any benefit to switching to Capital One if nothing changes? Would it build a relationship with them? I know my rates aren't great, but my credit was ruined before I was 18 by forces I couldn't control. Please help me understand.
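
A quick back-of-envelope on those numbers (illustrative only, not financial advice): with the same balance, compare how much of a monthly payment goes to interest at each rate.

```python
# Back-of-envelope on the numbers above (illustrative only): same balance,
# different rates -- compare how much of a monthly payment is interest.
balance = 11_000
for rate in (0.18, 0.1124):
    monthly_interest = balance * rate / 12
    print(f"{rate:.2%}: ~${monthly_interest:,.0f}/month of interest up front")
# 18.00%: ~$165 vs 11.24%: ~$103 -- if payment and term really are
# identical, that ~$62/month should be going to principal instead, so the
# payoff math is worth re-checking with both lenders.
```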

Edit: Also, I have a question about credit cards. When I started building credit I took what cards I could, as long as they were somewhat reasonable. But now I'm getting cards with $30 annual fees and very low interest. How do I get rid of the older cards with bad rates, high fees, maintenance fees, etc. without it lowering my credit? It seems like it will affect my debt:limit ratio. Do I make sure that when I cancel an old card I won't go over 30% used credit? I feel like no matter what I'll be docked a few points.

r/painting ibischlaua

First try, you can guess who inspired me

r/AI_Agents emprendedorjoven

Claude code x n8n

Hi everyone,

I’ve been exploring MCP and integrating tools like n8n with Claude Code, and I’m trying to understand how practical this really is in real-world workflows.

From what I’ve seen, it looks powerful in terms of automation and connecting external tools, but I’m still unclear on a few things:

  • Are you actually using MCP in production or just experimenting?
  • How reliable is it when workflows get complex?
  • Does combining it with n8n meaningfully improve productivity, or does it add more overhead?
  • How do you handle security concerns when giving models access to external systems?
  • Do you think this kind of setup could realistically replace parts of a developer’s workflow, or is it more of an assistant layer?

Would really appreciate hearing real experiences (good or bad)

r/SideProject andItsGone-Poof

I built an open source self-healing backend for production crashes

Hey, finally shipping my first open source project and keen to get some feedback from this community.

Helix watches your error tracker (Sentry or Rollbar). When a bug hits production, a pipeline of agents kicks off automatically:

  1. Crash Handler parses the webhook and extracts context
  2. QA Agent follows a test driven development approach and writes a failing test
  3. Dev Agent clones the repo, writes a fix, and opens a PR
  4. Notifier sends the issue and fix to Slack with Approve PR or Reject PR buttons for a human to review

Built with Python, Redis pub/sub for the event bus, and Claude (or Ollama if you want to keep it local). Self-hostable via Docker Compose.
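
For a sense of scale, the webhook-to-event-bus step can be tiny. A minimal sketch assuming FastAPI and redis-py; the channel and field names here are illustrative, not Helix's actual schema:

```python
# Minimal sketch of a Sentry-webhook -> Redis event-bus step, assuming
# FastAPI and redis-py; channel/field names are illustrative.
import json
import redis
from fastapi import FastAPI, Request

app = FastAPI()
bus = redis.Redis()

@app.post("/webhooks/sentry")
async def crash_handler(request: Request):
    payload = await request.json()
    event = {
        "title": payload.get("message", "unknown error"),
        "culprit": payload.get("culprit"),
        "url": payload.get("url"),
    }
    # Downstream agents (QA agent, Dev agent) subscribe to this channel
    bus.publish("helix:crashes", json.dumps(event))
    return {"ok": True}

# A subscriber (e.g. the QA agent) would consume events like this:
#   pubsub = redis.Redis().pubsub()
#   pubsub.subscribe("helix:crashes")
#   for msg in pubsub.listen(): ...
```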

Helix also ships as a Claude skill, so if you are already using Claude Code or Cowork you can get the whole pipeline running in one install with no manual config.

GitHub: https://github.com/88hours/helix-community

Most production bugs are not interesting engineering problems, they are just pattern recognition and careful execution. Figured agents could handle that.

Still early but the core pipeline works. Happy to answer questions about the agent architecture, the Claude skill setup, or how the fix generation works.

r/ChatGPT HomeschoolingDad

Asked ChatGPT about whether it could tell a real photo was AI-generated

Out of curiosity, I took a picture of myself and asked ChatGPT if it could tell it was AI-generated (leading the witness), and:

---

Short answer: It's not obviously AI-generated. It reads as quite realistic at first glance. But there are a few subtle cues that might make someone suspicious if they look closely.

Here’s a structured breakdown:

What looks convincing 👍

  • Skin texture: Natural pores, slight unevenness, and fine wrinkles around eyes and forehead look realistic.
  • Lighting consistency: Illumination across the face is coherent; no impossible shadows.
  • Facial symmetry: Not overly perfect — slight asymmetry helps realism.
  • Hairline & stubble: The short hair/stubble pattern is believable and not "painted on."

These factors collectively make it pass as a normal photo in casual viewing.

Subtle “AI-ish” tells 👀

These aren't glaring, but they may register subconsciously:

  1. Over-smooth blending around cheeks and jaw: the transition between skin tones looks slightly airbrushed. Real photos often have micro-contrast variation.
  2. Teeth uniformity: teeth are very evenly spaced and similarly colored. Real smiles often show more irregularities.
  3. Eye detail: the irises look slightly simplified; fine radial patterns are muted. The catchlights are a bit generic.
  4. Background flatness: the background is uniformly smooth with minimal noise or depth cues. AI images often default to this.
  5. Subtle texture repetition: some skin regions appear statistically "even" rather than organically noisy.

Overall assessment

  • Casual viewer: unlikely to notice anything
  • Careful observer: might suspect mild AI smoothing
  • Forensic-level scrutiny: would likely identify synthetic traits

My confidence: ~75% that most people would not call this AI-generated without being prompted.

If your goal is maximum realism

The following tweaks would make it even harder to detect:

  • Add slight sensor noise/grain
  • Introduce minor tooth irregularity
  • Increase micro-contrast in skin texture
  • Add subtle background texture or depth-of-field
  • Slightly vary iris detail

If you tell me:

  • whether this is supposed to be you
  • or a fictional character
  • and what context it's used in (book cover, avatar, etc.)

…I can give more targeted feedback.

---

r/painting Historical_Text_1863

The economy is so bad, I'm working on my dreams

I was laid off in 2023 and I fully planned to start working on oil painting and do freelance work in the meantime. If you're at all involved in commercial art, you know that the freelance world has all but dried up, so I've just been doing the painting. Every day is scary, but I am *almost* used to it now. Is anyone else in the same boat?

r/LocalLLaMA fragment_me

Interesting new model scoring strong on SWE bench - Multilingual-Multimodal-NLP/IndustrialCoder

I just happened to be looking at these benchmarks on Hugging Face when I noticed 2 things:

  1. Qwen3.5 27B is a dog
  2. This IndustrialCoder model is benchmarking even better

https://huggingface.co/datasets/SWE-bench/SWE-bench_Verified?eval_result=Qwen/Qwen3.5-27B

The model is #5 on this leaderboard:

https://huggingface.co/Multilingual-Multimodal-NLP/IndustrialCoder

I'm going to take a crack at it a little later today, but wanted to hear if anyone else has tried it.

r/SideProject Pitiful_Comedian_834

Built a SQL workbench with a fully local AI that writes queries for you — no cloud, no API key

Hey! I've been building Warlock, a desktop SQL workbench with one feature I'm pretty proud of:

Local AI query generation — it runs a small LLM (Qwen2.5-Coder) entirely on your machine to turn plain English into SQL. No internet. No OpenAI account. No data leaving your device. And if the query errors, it self-heals automatically.
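
The self-heal loop it describes is conceptually simple. A hedged sketch of the general pattern (not Warlock's code), assuming the ollama Python client and a SQLite connection; the model tag and prompts are placeholders:

```python
# Hedged sketch of a self-healing text-to-SQL loop (not Warlock's code),
# assuming the ollama Python client and SQLite.
import sqlite3
import ollama

conn = sqlite3.connect("example.db")

def nl_to_sql(question: str, schema: str, retries: int = 2):
    prompt = f"Schema:\n{schema}\n\nWrite one SQLite query, SQL only: {question}"
    for _ in range(retries + 1):
        sql = ollama.chat(
            model="qwen2.5-coder:7b",  # placeholder tag
            messages=[{"role": "user", "content": prompt}],
        )["message"]["content"].strip().strip("`")
        try:
            return conn.execute(sql).fetchall()
        except sqlite3.Error as err:
            # "self-heal": show the model its own error and retry
            prompt += f"\n\nPrevious attempt:\n{sql}\nFailed with: {err}\nFix it."
    raise RuntimeError("query still failing after retries")
```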

Beyond that, it lets you JOIN across completely different data sources in one query — Postgres, MySQL, CSV, Parquet, Excel, SQLite, S3 — without moving anything around first.

Free trial available. Windows & Linux.

mortalsoftware.co.uk

r/OldSchoolCool Same_Blacksmith9840

Teenage Brian May (1960s) playing the guitar he and his father handcrafted.

r/leagueoflegends konfitura17

What skin would you like to receive for your main champion?

I'd love to see Riven get a Battle Academia skin, as it fits her character perfectly. I once saw an unused concept art where Riven resembled Ryuko from Kill la Kill. Combining her strong fighting style with a dynamic, school themed vibe could create a fantastic skin. I imagine her in an academy uniform, wielding futuristic weapons, and competing with other champions. Or even a pool party skin but I don’t think I need to explain that xd

What theme do you think is missing for your Champions?

r/SideProject ChQz3n

Seeking Vibe Coding Partner(s)

Hey everyone,

I’m hunting for someone who’s in the early stages of their app journey — maybe you’ve launched 1 or 2 small apps already, or you’re just getting started but **super passionate** about building and making some real cash flow from it.

I want to link up (in LA if you’re local, or over Zoom) to talk vibe coding, share workflows, brainstorm ideas, and actually hold each other accountable.

The goal? Launch something meaningful together by this summer or end of 2026 at the latest.

If you’re motivated, energetic, and tired of just watching tutorials without shipping — let’s connect and make it happen.

Drop a comment or DM me if this sounds like you!

r/ClaudeAI mindlessfingeek

Your notes app is a graveyard. Here's how Karpathy fixed it.

Everyone's been saving articles they never go back to. Bookmarks that rot. PDFs that live in downloads until you delete them. Notes that reference nothing and lead nowhere. This is not a discipline problem. It's a system problem. The system everyone uses — save, forget, re-search — is fundamentally broken for building knowledge over time.

Andrej Karpathy posted something on April 3rd that quietly changed how a lot of people think about this. Not a new tool. Not a model release. A description of how he personally uses LLMs now. He stopped using them mainly for code generation. He started using them to build what he calls an LLM Wiki — a self-maintaining knowledge base made entirely of markdown files.

Built a full carousel breaking down the exact setup → link

The architecture is almost insultingly simple. Three folders: raw/ for source material you drop in, wiki/ which Claude writes and maintains entirely, and a CLAUDE.md config file that tells Claude the rules. You never write the wiki yourself. Claude reads your raw material, writes structured pages, creates wikilinks between related concepts, and keeps everything consistent. You browse the result in Obsidian. The graph view shows you connections you didn't know existed.
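
A sketch of that layout as a bootstrap script; the rules text is an illustrative guess at the kind of thing a CLAUDE.md for this would contain, not Karpathy's actual file:

```python
# Bootstrap sketch of the three-piece layout described above; the rules
# text is an illustrative guess, not Karpathy's actual CLAUDE.md.
from pathlib import Path

root = Path("llm-wiki")
(root / "raw").mkdir(parents=True, exist_ok=True)  # you drop sources here
(root / "wiki").mkdir(exist_ok=True)               # Claude owns this folder

(root / "CLAUDE.md").write_text(
    "# LLM Wiki rules\n"
    "- Read new files in raw/ and create or refresh markdown pages in wiki/.\n"
    "- One concept per page; [[wikilink]] related pages to each other.\n"
    "- Maintain wiki/index.md yourself. Never edit anything in raw/.\n"
)
```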

At around 100 articles and 400,000 words, something shifts. You stop re-searching things you've already read. You start building on what you know instead of reconstructing it every session.

Karpathy's own conclusion was the line that stuck with me: "I thought I had to reach for fancy RAG. The LLM has been pretty good at auto-maintaining index files." The entire enterprise AI industry is selling you a problem you don't have.

r/PhotoshopRequest StitchTheRipper

Request: May I have some help restoring the faded color and wear and tear?

AI messes with the yarn lines. Will tip! Thanks!

r/LocalLLM strangeworks

LoRA tuning skills from your knowledge base for Gemma4

Limits, limits, pay pay pay... I am getting extremely annoyed with that, and gemma4 is good enough already. So I decided to get out of the cloud and actually train my domain-specific LoRA adapters, and I made a skill for that. The ideal goal is to fully rely on local inference, because I want to own my compute. So this is my almost-successful attempt at it that I would like to share.
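
For anyone wanting to try the same locally, attaching a LoRA with the peft library is only a few lines. A minimal sketch; the model id is a placeholder for whatever Gemma build you run, and target_modules depend on that architecture:

```python
# Minimal sketch of attaching a domain LoRA, assuming the transformers and
# peft packages; model id and target_modules are placeholders.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("google/gemma-2-9b")  # placeholder id
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # sanity check: ~0.1-1% of weights

# ...then train on your knowledge base with any SFT loop, and serve the
# adapter (or merged weights) with local inference.
```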

r/MCPservers pminervini

Deep Research MCP/CLI/TUI

Sharing it here as well, in case it may be useful to anyone here

r/ClaudeCode ScaryDescription4512

Best IDE/Setup for CLI-focused vibe-coding?

I’m using Claude Code via CLI inside VS Code, but at this point I’m barely writing code myself and the VS Code interface is still human-centric.

I’ve been looking at Cursor 3 and it looks interesting, especially with the shift toward more of an agent / orchestration layer instead of just an editor. But I’m still running everything through CLI (Claude Code, Codex CLI, etc.), so I’m not sure if switching would actually help or just add another layer.

What I’m really trying to figure out:

- Is anyone running a setup that’s actually optimized for CLI-first vibe coding?

- How are you managing multiple projects without everything getting messy?

- Has anyone found a good way to run multiple agents / CLIs together cleanly?

Also, I know there are some open-source tools floating around for this. I’ve seen stuff mentioned on here but curious if there’s anything that’s really stuck for anyone.

Would love to hear how people are actually doing this in practice. Feels like a lot of us are hacking together workflows right now rather than using something that’s truly built for it.

r/FluxAI Individual_Hand213

I made an open source alternative to Higgsfield AI and got 4k+ stars

Project link :- https://github.com/Anil-matcha/Open-Higgsfield-AI

Open-Higgsfield-AI is an open source platform that lets you access and run cutting-edge AI models in one place. You can clone it, self-host it, and have full control over everything.

It’s a lot like Higgsfield, except it’s fully open, BYOK-friendly, and not locked behind subscriptions or dashboards.

Seedance 2.0 is already integrated, so you can generate and edit videos with one of the most talked-about models right now — directly from a single interface.

Instead of jumping between tools, everything happens in one chat:

generation, editing, iteration, publishing.

While commercial platforms gatekeep access, open source is moving faster — giving you early access, more flexibility, and zero lock-in.

This is what the future of creative AI tooling looks like.

r/todayilearned DrawerEntire5040

TIL that ordinary matter makes up only about 5% of the universe, dark matter makes up about 27%, while the rest is thought to be dark energy. It's thought that dark matter shapes the cosmos, organizing galaxies and cosmic objects on a large scale.

r/ClaudeAI Gold_Audience7695

Help my ecosystem

I use:

- Granola > all my notes in meetings online and F2F

- Cowork > Brings all my notes / attachments to business CRM / Pulls out actions etc

- Remarkable > ToDo List, some decisions/key meeting points

- Craft > A bit of a Library - Claude CoWork sends everything here into various folders etc

How can I get Remarkable into the ecosystem, i.e. talking with Claude, so that I can then push Granola docs to Remarkable (maybe) and get Remarkable notes into Claude and into Craft?

r/LocalLLaMA missprolqui

i usually ignore hardware hackathon projects but this repo's approach to decoupling vision from the agent loop is pretty solid

I stumbled upon the REDHackathon in Shanghai this weekend, looks like a rednote event. The projects went open-source yesterday, so I’ve been digging through the GitHub submissions. Honestly, 90% of the hardware track is just an API wrapper duct-taped to a Raspberry Pi that falls apart the second the judges look at it.

But I stumbled on this one project that is kinda changing how i look at embodied setups. The physical shell is pitched as a 'focus toaster'. basically a little desktop device that takes pics of you working and prints out physical thermal receipts of your timeline to keep you off your phone.

Consumer packaging is whatever, but the backend architecture is why I'm posting. It's running on an RDK-X5 board hooked up to a MY-638 thermal printer and a standard USB camera. Looking at the repo, they did a few things that definitely make this look like a serious prototype and not just a weekend toy.

First off they didn't waste 30 hours trying to write a custom agent runtime from scratch. Just vendored Hermes, embedded it deeply, and spent their sprint time building a robust physical integration layer around it (FastAPI gateway, custom device tool registries).

The big one though is they completely decoupled the visual timeline from the conversational agent. Normally if you give an agent a camera, the continuous sampling just chokes the main decision loop. These guys built an independent backend pipeline that samples /dev/video0 via OpenCV, handles the batching asynchronously, and stores it. Hermes doesn't even touch the raw video stream; it just consumes the processed states via a device_get_timeline tool. So the agent never gets paralyzed by continuous vision processing.

They also built in real tool trace persistence. Every single time the agent calls device_print_text or triggers a cron job, it logs the exact arguments, execution status, and timestamps to SQLite. Anyone who has built an embodied agent knows debugging is a nightmare because you have no idea why it decided to randomly print something at 2pm. Making the execution loop observable is so basic but nobody does it at hackathons.
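
Roughly, the decoupling pattern looks like this (an illustrative sketch using OpenCV, sqlite3, and the standard library; the names mirror the repo's tools but the code is not taken from it):

```python
# Illustrative sketch of the decoupling pattern: a background loop turns
# frames into cheap states, and the agent only ever reads the states.
import time
import sqlite3
import cv2

db = sqlite3.connect("timeline.db")
db.execute("CREATE TABLE IF NOT EXISTS timeline (ts REAL, state TEXT)")

def classify(frame) -> str:
    """Stand-in for whatever cheap vision step labels a frame."""
    return "working" if frame.mean() > 40 else "away"

def device_get_timeline(since: float) -> list[tuple[float, str]]:
    """The agent's only contact with vision: processed states, not pixels."""
    return db.execute("SELECT ts, state FROM timeline WHERE ts > ?",
                      (since,)).fetchall()

def sampling_loop(every_s: float = 30.0) -> None:
    """Runs in the background, off the agent's decision loop."""
    cap = cv2.VideoCapture(0)  # /dev/video0
    while True:
        ok, frame = cap.read()
        if ok:
            db.execute("INSERT INTO timeline VALUES (?, ?)",
                       (time.time(), classify(frame)))
            db.commit()
        time.sleep(every_s)
```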

We spend so much time here obsessing over multi-agent cloud swarms but seeing this made me realize the real unlock might just be moving constrained agents out of the chatbox entirely. Taking a basic agent runtime and giving it a camera, local memory and a physical printer gives it an actual presence that a web app just doesnt have.

Anyway, repo link is in the comments if you want to look at the routing logic. If anyone is working on edge/embodied setups right now, how are you handling the vision-to-agent bottleneck without spiking your token usage to death on continuous sampling?

r/painting vallancet

Cat Trip, Acrylics

r/ClaudeAI horseluvvaslime

I built a free macOS terminal for running multiple Claude Code agents side by side

I've been running 5-6 Claude Code agents in parallel across different repos and the terminal situation was chaos — 15 windows across three spaces, no idea which agent was doing what.

So I built Waffle (largely with Claude Code itself). It's a native macOS terminal where every session auto-tiles into a grid. Open a terminal, it joins the grid. Close one, it rebalances. No tmux, no config.

I've been using it to build _it_ and one of the things I've found most useful is that it detects which git repo each session is in and groups by project with colour coding. Cmd+[ and Cmd+] to flip between projects. So if you've got agents working across three repos, one keystroke filters to just that project's sessions.

Free, no account, Apple Silicon, macOS 14+.

There's a demo on the landing page:

https://waffle.baby

r/findareddit No-Stress656

I made a jump cut subreddit

https://www.reddit.com/r/Jump_Cut/

You can join me if you want, I don't care. I'm trying to get my subreddit populated, okay? You can make art, do character analysis or theories, I don't care, as long as you follow the rules.

r/HistoryPorn steves771

Saddam Hussein and Franco's entourage visits mosque in Cordoba, Franco's Spain, 1974 [1069x1352]

r/comfyui d_baby_gangsta_49

Seedance 2.0 feels like a good starting point, not a full replacement

I’ve been playing around with Seedance 2.0 inside Filmora, and my take so far is that it’s useful, just in a pretty specific way. It’s good for rough cinematic ideas and quick concept videos, and the multi-angle outputs make it feel more edited than some other generators I’ve tried. You can get something decent pretty fast, especially for short-form content. That said, I wouldn’t call it a replacement for real editing. You still have to trim and polish it. Feels more like a starting point than a finished thing.

r/creepypasta Deep_Snow1663

So, I decided to update my Creepypasta picture, and... I think I've made it creepier than before!

How did I do? I used Ibis Paint X for this, and I used a screenshot from the Pokémon episode, "Wired For Battle", because I fucking love this point of Scizor.

r/LocalLLaMA br_web

On the ASUS ROG Flow Z13 128GB (2025): How many tok/sec on LM Studio using Gemma 4 26B A4B MoE with a one sentence question?

Question: What is an LLM?

  • For how many seconds did it think?
  • How many tokens/sec?
  • How many tokens?
  • Elapsed time?

Thanks

r/ClaudeCode AndrewSpode

Virgil - Claude Code for Daily Journal with Long-term Memory

TL;DR: Long-term memory system for Claude Code conversations.

This may be of use to someone, and can be easily adapted for other use cases.

It uses a few hooks to convert the JSONL logs from your Claude Code conversation into markdown, storing it on disk, categorised by day.

When a day is done, it's summarised. When a week is done, the summaries of each day are summarised. When a month has passed, it summarises the weeks and so on. This recursive summarisation gives it a kind of short, medium, long term memory, while maintaining all the original logs so it can simply grep it to find the details of past conversations without having to rely on compressed context.
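
The roll-up itself is a small recursive step. A sketch assuming the anthropic SDK, with an illustrative file layout and prompt rather than Virgil's exact implementation:

```python
# Sketch of the recursive roll-up, assuming the anthropic SDK; the file
# layout and prompt are illustrative, not Virgil's exact implementation.
from pathlib import Path
import anthropic

client = anthropic.Anthropic()

def summarize(texts: list[str], label: str) -> str:
    msg = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model id
        max_tokens=600,
        messages=[{"role": "user",
                   "content": f"Summarize this {label} of journal logs, "
                              "keeping names, decisions, and open threads:\n\n"
                              + "\n\n".join(texts)}],
    )
    return msg.content[0].text

def roll_up_week(week_dir: Path) -> None:
    days = sorted(week_dir.glob("day-*.summary.md"))
    weekly = summarize([d.read_text() for d in days], "week")
    (week_dir / "week.summary.md").write_text(weekly)
    # A month roll-up does the same over week.summary.md files, while the
    # raw markdown logs stay on disk so they can still be grepped.
```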

I built it primarily to be used as a journaling tool and I've found it to be very effective, and far faster than the month long web chat I had going.

I released it open source - so have a play if you like :)

https://github.com/andrewspode/virgil

r/Seattle privatestudy

Tell Me Something GOOD!!! Weekly Edition!

Hi there, Seattle.

This is your weekly edition where you can tell Seattle what is good.

Did you achieve something this week? Or are you just happy you made it through another week? Did you get to sleep in? Did you find out something new and want to share? Let's celebrate together!!

Nothing is too small to share. I wanna hear it all!

r/ChatGPT CategoryFew5869

I need to touch grass.

r/metaldetecting TibisYT2

German Magnetic Key

Found this magnetic system key/plate near a riverside pipeline area. Marked “Magnet Code System”, ID 001 AB4309, and “Gesetzlich geschützt” (legally protected). Backside stamped EVVF / CS (likely Czechoslovak origin). The raised dots contain small magnets, indicating a magnetic-coded access system, probably used for utility infrastructure (gas/water). Likely mid–late 20th century.

r/AI_Agents Youssef_Wardi

A better way to handle Agents with 30+ tools

A common issue I see when people build LangChain agents, or any kind of LLM call with tools, is dumping every single tool schema into the LLM's context window. If you give an agent over 30 tools, the system prompt gets massively bloated. Worse, the LLM gets confused and starts hallucinating arguments or picking the wrong tool entirely, and the cost goes up.

So the solution is to not send all 30 tools to the LLM on each request, because most likely at most 1-2 tools (or none) are actually relevant. I stopped handling this in my application code and point my agent at LLM Router Gateway, which has a tool optimization layer built in.

So when my agent sends a request with 30 tools, the gateway acts as a filter: it scores the relevance of each tool against the user's prompt. I set my acceptScore to 0.5, so only tools that score above that threshold are passed through and the rest are stripped. By filtering the tools dynamically, my input token costs dropped significantly and the agent's reliability went way up. How are you guys filtering your agent tools?
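
The same idea is easy to sketch in application code if you don't want a gateway; the keyword-overlap scorer below is a stand-in (a real setup would score with embeddings):

```python
# Sketch of dynamic tool filtering in application code; the overlap scorer
# is a stand-in for an embedding-based relevance score.
ACCEPT_SCORE = 0.5

def score(tool: dict, prompt: str) -> float:
    """Crude relevance: fraction of description words found in the prompt."""
    words = set(prompt.lower().split())
    desc = set(tool["description"].lower().split())
    return len(words & desc) / max(len(desc), 1)

def filter_tools(tools: list[dict], prompt: str) -> list[dict]:
    kept = [t for t in tools if score(t, prompt) >= ACCEPT_SCORE]
    return kept or tools[:3]  # fallback so the agent is never toolless

# Only the surviving schemas go into the LLM request, so 30 tools collapse
# to the 1-2 that actually matter for this message.
```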

r/ChatGPT Think-Score243

Never trust any AI 100%, even if you have a paid package.

I am sharing my own experience.

Not AI-written (for those who dislike AI writing).

I was working on one of my client's projects, which deals with AI solutions for e-commerce industries.

For one month, I asked Claude to help me get clicks rather than huge impressions.

It was like 5k-10k impressions and 4-5 clicks, despite hard work, for a 1-week-old company.

Every 2 days I asked Claude what was wrong, and it sent me the same message: "Update this title to that."

A week ago, I decided to try ChatGPT. Though I use it for normal chats every day, I asked ChatGPT what was wrong.

It gave me golden advice: update the top content with the latest information, rather than updating the full content (which Claude always does), with some special ingredients.

The very next day, impressions were 15k+ per day and clicks were 20-35+.

I tried the same method once again after 2 days.

Clicks almost tripled.

So I realized: don't over-trust any AI, even if you have a paid package.

r/SideProject mattosmcft

I built a fitness app in 17 days because I was tired of paying for 3 different apps

I was juggling MyFitnessPal for food, a separate app for workouts, and an AI for coaching advice. Three apps, three subscriptions, constant switching.

The friction was killing my consistency. I kept forgetting to log meals because opening a separate app felt like too much effort. I talked to people at my gym and heard the same thing everywhere.

So I built my own app. Workout logging, food tracking, and an AI coach that knows your full history, all in one app. After every session you can take your gym selfie and the app builds a stats card automatically for Instagram.

17 days from idea to App Store submission. Solo founder, 20 years old, built everything myself in Flutter.

Launching 1st of May on iOS and Android.

Looking for people who actually train to try it and tell me what's broken. DM me for early access, completely free.

Website is: liftspotter.app

What would make you switch from your current fitness app?

r/ProductHunters shariesk

Launched Owl VIP Email Alerts on Product Hunt today!

https://www.producthunt.com/products/owl-vip-email-alerts?launch=owl-vip-email-alerts

I built Owl VIP Email Alerts because I wanted a simpler way to control Gmail notifications without digging through settings or relying on awkward workarounds.

It puts everything on one simple screen:

  • VIP-only alerts for specific senders or domains
  • one-tap switching between VIP-only, All Emails, or Off
  • scheduled quiet hours
  • unique sounds for different senders

If it sounds useful, I’d appreciate any support or feedback on Product Hunt.

r/StableDiffusion HAL_9_0_0_0

Virtually real – not real, but honest.

Created with ComfyUI as a visual AI project.

Music: SUNO / Private, non-commercial project.

#ComfyUI #AIvideo #MusicVideo #Deutschrap #VisualStorytelling #CinematicLook #Paparazzi #FlashPhotography #EditorialVisual #AIGeneratedVideo #DigitalPerformance #CreativeVideo #RapAesthetic #MediaPressure #DarkVisuals #IndependentCreation #VirtualRealityArt #PromptBasedArt #AIArtCommunity #PrivateProject

r/PhotoshopRequest blahblahrantz

Photo restoration request

I want to frame this photo in a lil heart frame but don't want to cut the only photo I have. Can someone give me a version I can print so it's not a photo of a photo, ya know?

one of the few photos I have of my dad :)

r/ChatGPT Developing_Stoic

This is one of the very first tryouts I did with Seedance. Hope yall like it!

r/CryptoCurrency unknown-one

I have asked my pre-trained Claude AI to make thousands of simulations where and how to invest. And here is the answer

tl;dr: Claude recommends putting money into the main cryptos: BTC, ETC, SOL. No paper hands, minimal effort, and you should have very nice gains within 10 years.

Details: I have pre-trained my Claude AI on trading by feeding him all kinds of books, research papers, articles, etc. Then I kept asking him if he saw gaps in his skills; he named what the gap was and what he needed... rinse and repeat until he said he was OK unless we wanted to go into some very niche trading. He also received past trading information for the last 20 years, to see how things changed over time, so he could do his simulations.

My challenge for him was:

You get 1000 Euro: find the best way to invest it and get profit within 10 years. He investigated main crypto coins, memecoins, equities, and commodities.

There were 5 options
1. Minimum risk, never lose more than 5%
2. Medium risk, never lose more than 50%
3. No guardrails, do whatever you want
4. No guardrails, do whatever you want BUT every time you reach 3x previous milestone, you set 50% aside, never touch it again, and continue trading only with the other 50%
5. Later addition to try memecoins

He ran thousands of simulations of all combinations, but main crypto always outperformed everything.

You can see results in his comments in attached pictures (red rectangle is name).
His proposal was to switch from 3x/50% model to 8x/40% model for more gains.
And he always got wiped on memecoins

But according to his analysis, he suggests minimal effort: react only in special cases a few times per year, let it sit, no paper hands, no automated bots needed.
Picture 4 shows probability. Less than 4% chance to lose money or get wiped.
Everything else shows at least some gains over the years.

What do you think about this? Is there something that stands out to you?

P.S.: This is not an investment recommendation. I don't know shit about investing, Claude is AI and makes mistakes. This is just for discussion.

r/ClaudeCode vincent_pm

Any workaround for the current token/dumbness craziness ?

Folks,

Like many here, I’m stunned by everything happening around token consumption or model « dumbness ». My company is basically « Augmented Consulting » and if we can’t trust my Opus workflows anymore, that could put me and my team out of business pretty quickly.

It seems a lot of people are talking about their subscription when mentioning their issues so I wonder:

- Is the « nerf » only for pro and max users? Do we have the same issues with API usage?

- Any feedback from private deployments? If we used AWS Bedrock or Google Vertex, I guess we would not face the same problems?

Thanks!

r/Strava Old_Independence5166

What to Believe & is it Important?

Today I did my walk. As this is Kansas I like to get the walk in before a) rain, b) heat and c) wind.

The question arises because of the huge difference in the calorie count between my Apple Watch (127 cal) and Strava (262 cal). The walking time was different too.

59 min for Apple and 1 hour 5 min for Strava.

If I had to pick one I’d go with Apple’s.

r/aivideo Entire_Definition453

Dead End, Zombie movie trailer made with Pixverse V6 only

r/OldSchoolCool rolypoly99

My Grandma in Vienna in approx 1952, aged 18.

She was born in Austria, married my Italian grandfather and moved to the UK in 1956. She's no longer with us, but I lived just across the road from her most of my life, and I miss her greatly!

r/Futurology ConsistentRegion6184

Can canals be built using kinetic space weaponry?

I had this shower thought and it's probably not the first time it's been thought of. It doesn't seem like a good idea, but maybe it could be as the cost to put things in orbit becomes more economical.

cons: blasting agents on site are obviously much much cheaper; space tech could have very expensive and dangerous malfunctions

pros: remote locations more feasible; project timelines greatly reduced; (canals can be made deeper?)

I guess you'd have to ask for details from someone like a geologist. If you are trying to exploit a certain remote area, you may never be able to bring water there however.

If it does work, logistically it could just be a matter of months to put water transport where it was never thought possible. (I don't have any practical knowledge on any of this btw).

r/LifeProTips Unlucky_Scientist364

LPT: No time to go to the gym? Building in small workouts while watching TV is a cheat code

If you're like me and find all sorts of excuses not to exercise but love spending time in front of the TV, one good way to keep fit is to build in small and simple workouts, such as lifting weights, heel raises, etc., while watching TV instead of sitting down like a potato. I've even managed to increase my muscle mass just by doing this consistently (3x a week for 2 months while watching TV).

r/painting Public_Violinist1157

"Night Stroll in Taiwan"

An acrylic painting on black canvas. First time trying acrylic with BC, really learned a lot of techniques to pull this off. Especially when achieving that vibrant light. Still a very beautiful and stressful experience! hahaha! The layering process was insane. 🦖💗💗

r/leagueoflegends Ken_x0

When is the next Blue Essence Emporium ?

I googled when the next Blue Essence Emporium is going to be, and it used to say it was meant to be April 1st. But there's still nothing, so can we still expect a mid-season Blue Essence Emporium? Last year it was the 2nd of April.

r/LocalLLM auskadi

Local llm build

my OpenClaw and other bots have suggested a new PC config for me with the following:

- CPU: Intel Core Ultra 9 285K
- MOBO: ASUS PRIME Z890-P WIFI
- RAM: Lexar THOR RGB 2nd WH 6400MHz 128GB (64GB×2)
- GPU: Gigabyte RTX 4090 D AERO OC 24GB
- Cooling: DeepCool Infinity LT720 WH 360mm AIO
- PSU: DeepCool PQ1200P WH 80+ Platinum 1200W
- Monitor: Redmi G34WQ (2026)
- Accessory: Lian Li Lancool 216 I/O Port White
- Case: Lian Li Lancool 216 White

do people think this is sufficient for running local models efficiently?

any comments and or suggestions?

I think I could push it to run Llama 70B and other smaller models, and maybe, from what I've read, MiniMax 2.7 as well.

thanks

r/PhotoshopRequest IllustratorOk5042

Help please

Can someone help me add this dog to the second photo? My friends dog just passed and is with her dad in heaven. I’d like to give her a sweet pic of them together again

r/SideProject Responsible-Buy7482

I built a tool that turns CSV data into animated chart videos — with transparent background export for DaVinci Resolve and Premiere Pro

I kept seeing the same question in video editing forums: "I have a chart animation I need to add to a client video — how do I get rid of the background?"

The usual answers were "use After Effects" or "key out the background from the MP4". Both are painful, especially if you're not an AE person.

So I built Framechart (framechart.com) — you upload a CSV, pick a chart type (bar, line, or data table), and it renders an animated video. The Pro version exports a transparent PNG sequence with real alpha channel, so you just drop it into your timeline and it composites cleanly over footage. No keying, no blending mode hacks.
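
The transparent-frame trick itself needs nothing exotic; here is a minimal matplotlib-only sketch that writes a numbered PNG sequence with a real alpha channel (chart data, sizes, and frame count are placeholders):

```python
# Minimal matplotlib-only sketch of a transparent PNG sequence: each
# numbered frame keeps a real alpha channel, so an NLE (Resolve, Premiere)
# imports the folder as an image sequence that composites over footage.
import matplotlib.pyplot as plt

data = [3, 7, 5, 9, 6]
frames = 60
for i in range(frames):
    t = (i + 1) / frames  # 0..1 animation progress: bars grow in
    fig, ax = plt.subplots(figsize=(12.8, 7.2), dpi=100)  # 1280x720
    ax.bar(range(len(data)), [v * t for v in data], color="#4da3ff")
    ax.set_ylim(0, max(data))
    fig.savefig(f"frame_{i:04d}.png", transparent=True)  # alpha preserved
    plt.close(fig)
```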

Free to try with no account — just drop in a CSV and hit render.

Would love feedback from anyone who does data-driven video content. What's your current workflow for this?

r/ProgrammerHumor VariationLivid3193

modsWorkInHR

r/LocalLLaMA ShaneBowen

Reasoning Stuck in Loops

Does anyone else have their models get stuck in loops like this? I was trying to bake off a 3080 Ti (CUDA 13) running Qwen3.5-9B against an Xe iGPU running Qwen3.5-35B-A3B.

r/findareddit lapetitlis

need to vent about some really heavy stuff (too heavy for TOMC). suggestions?

hello, everyone. i'm going to be blunt and try to get straight to the point, albeit along a crooked path.

i was active as a prostitute in NY during part of the time the Long Island Serial Killer was hunting *girls that looked just like me* (i was exactly his type). the run-up to and the aftermath of his recent guilty plea for the killings of eight women have been a heavy time for me.

i am pleasantly surprised that he pleaded guilty ... although, given the evidence against him, perhaps it should be more astonishing that he insisted on his innocence for as long as he did. i am relieved that the families and loved ones of the victims will be spared the trauma & garish spectacle of a trial. i feel a bit of survivor's guilt, because it could have so easily been me, and i am no more deserving of my life than they were of theirs. i made it, but they didn't. we knew we were being hunted, but as Aileen Wuornos told a friend about a time she knew there was an active serial killer targeting prostitutes in her area, "i still had to hustle." i still had a child to provide for, without the help of anyone, with no safety net.

i'll be honest, and i feel guilty about this because i know that it is selfish, but it's also excavating some memories and some emotions that i try very hard to compartmentalize and push aside.

anyway. there are so many emotions swirling inside of me. it's just been a heavy few days and i really need to get some things off my chest. i need to find a place where i can safely share these thoughts, including my thoughts on the sex industry as a whole, & talk about sex workers' humanity without violating a rule. believe it or not i have even more to say. please help?

thanks in advance.

r/geography Swimming_Concern7662

The two red points are just one state away from each other

r/SideProject Strict_Usual_3053

AppRundown – 3 Weeks In, 10 Category Pages, Zero External Marketing

Why I built this

Every "best apps 2025" site you find on Google is the same thing: AI-written listicles recommending apps the writer never opened. I wanted the opposite — rankings driven by real download data and real user reviews, with the methodology exposed on every page.

What's actually different under the hood

  1. Real download data — I pay for Sensor Tower's API (not cheap) so rankings come from actual US download estimates, not editorial gut feel

  2. Real user reviews — the quotes on each app card are pulled directly from ST's API, not LLM-generated

  3. Full transparency — every category page has a "How We Ranked" section with data sources and cutoff dates; no hidden editorial hand

  4. Grounded AI copy — the pros/cons on each card are AI-assisted but strictly anchored to ratings, downloads, descriptions, and real reviews. A pre-publish checklist rejects anything speculative. Happy to go deeper on how that works if anyone's curious.

Tech stack

Built 100% in Claude Code · Next.js (App Router + SSG/ISR) · Supabase · Cloudflare R2 for images

Data pipeline runs on GitHub Actions weekly. Also my first time doing programmatic SEO — if there's interest I'll write up the full "pure Claude Code" workflow and the landmines I hit (there were real ones).

What's next

A limited-time free apps tracker

A price drop alerts feed

Both powered off the same ST data. The 10 category pages are the MVP for now.

🔗 Live: https://apprundown.com

🔗 Example page: https://apprundown.com/best/budget-tracker-apps

Looking for feedback on

  1. Category page layout — does "How We Ranked" actually reassure you, or does it feel like boilerplate?

  2. What category should I add next? I have 10 live and want the next 5 to be what people actually search for

  3. Anything that still smells like AI slop — I'm trying hard to avoid it but I know I have blind spots

r/ClaudeCode yeacy

i am about to pass out it's been 5 hours send help

r/findareddit mr_moj0_rising

subreddits for venting that allow religion SA (not self, the other sa) and anxiety/depression etc.

r/vent didn't allow me to post about these, so idk where else to go

r/LocalLLaMA pmttyji

Why no talk about Medium (size) Language Models? 70-200B

People here bring up the SLM topic from time to time (e.g., "Is SLM the future?"). But I've never seen anyone bring up Medium (size) Language Models.

The definition of both SLM (Small Language Model) & MLM (Medium Language Model) changes over time. Right now some are already calling 20-35B models SLMs. By this definition, I guess 70-150B (max 200B) falls under Medium Language Models. 201-500B is Big & 501B-1T+ is Large Models.

List of Medium (size) Language Models(Popular & Recent ones from HF):

  • LongCat-Flash-Lite
  • Llama-3.3-70B-Instruct
  • LongCat-Next
  • Qwen3-Next-80B-A3B-Instruct
  • Qwen3-Next-80B-A3B-Thinking
  • Qwen3-Coder-Next
  • Solar-Open-100B
  • Ling-flash-2.0
  • Ring-flash-2.0
  • LLaDA2.1-flash
  • sarvam-105b
  • Llama-4-Scout-17B-16E-Instruct
  • GLM-4.5-Air
  • Leanstral-2603
  • Mistral-Small-4-119B-2603
  • gpt-oss-120b
  • Qwen3.5-122B-A10B
  • NVIDIA-Nemotron-3-Super-120B-A12B
  • Mistral-Large-Instruct-2411
  • Devstral-2-123B-Instruct-2512
  • Mixtral-8x22B-Instruct-v0.1
  • dots.llm1.inst
  • Step-3.5-Flash

Only Llama-3.2-90B is there in the 80-100B range.

Only Mixtral-8x22B is there in the 126-150B range.

Only Step-3.5-Flash is there in the 150-200B range. 150B is a good size: at Q4 (~4 bits per parameter), 150B params come to roughly 75GB, which is good for 64/72/80GB VRAM.

Model creators could consider the above ranges for their upcoming medium size models.

I think many would prefer to see more new Medium (size) Language Models (70-200B) than Large 1T models. People with 96GB VRAM (4x 3090s) or 72GB (3x 4090s) could run 200B models @ Q4 with offloading (system RAM), -ncmoe, etc.

(BTW I didn't forget models like MiniMax-M2.5, Qwen3-235B-A22B & Qwen3.5-397B .... Those fall under the Big category; maybe a separate thread is better for that. Or do MiniMax-M2.5 & Qwen3-235B-A22B belong in the above list, since they sit near the 200B range?)

(Previously I wished for more tiny/small models, as my current laptop has only 8GB VRAM. But soon I'm getting a new rig with 72-96GB VRAM, so now I'm expecting more medium size models.)

So what are your expectations from Model creators on upcoming models?

r/n8n Witty-Line9507

I want to know more about the policies

Hello, I've heard that we cannot use n8n to create a SaaS business. Is that true?
If it is, I want to know whether I can set up different flows for different enterprises on my VPS n8n instance, connect them to a front end, and deliver a personalized app for each enterprise that runs on my VPS n8n. Is that valid, or could they ban me? Can someone help me, please?

r/LocalLLaMA Beneficial-Job-3082

Built a terminal chatbot in Python that uses Ollama + Qwen3.5:4b — fully offline, beginner project but works well

Hey everyone, I am interested in exploring Python and wanted to build something with local LLMs instead of using OpenAI.

Built a simple terminal chat app that:

  • Runs Qwen3.5:4b locally via Ollama
  • Remembers conversation history mid-session
  • Has a clean command system (/reset, /history, /clear etc.)
  • Zero cloud, zero API keys, everything stays on your machine

It's nothing fancy but it was a great way to learn how Ollama's API works under the hood.
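
For readers who haven't poked at that API yet: a bare-bones version of such a loop might look like this (my sketch against Ollama's /api/chat endpoint, not the repo's actual code; the model tag is taken from the post):

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default chat endpoint
MODEL = "qwen3.5:4b"                            # model tag assumed from the post

history = []  # in-session memory: a plain list of chat turns

while True:
    user = input("> ").strip()
    if user == "/reset":        # the command system is just string checks
        history.clear()
        continue
    history.append({"role": "user", "content": user})
    resp = requests.post(OLLAMA_URL, json={
        "model": MODEL,
        "messages": history,    # send the whole history each turn
        "stream": False,        # one JSON reply instead of a token stream
    }).json()
    reply = resp["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    print(reply)
```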

GitHub: https://github.com/Aditya-rc4/localai_chat

Happy to hear any feedback or suggestions for improvements!

r/DecidingToBeBetter cocoa_pudding

Working for a better future

Sorry if this is all over the place, I'm brain dumping.

I posted a while ago about my abusive parents and how I wasn't sure what to do. After some reddit advice and reflecting on my life, I decided leaving was the only thing I could do at this point. My mom was making empty promises about leaving my dad behind, and my sister was causing a lot of stress.

I started school a while before coming to reddit, and I'm halfway done with my program. I did have a few hiccups, but I will have my associate's next year, and I'm applying for internships. If I work extra hard, I can get a part-time or full-time position, so I'm crossing my fingers. I'm doing other things on the side for extra cash, too, but I don’t make much.

it's difficult keeping my spirits up recently because my dad lost his job, so now he's constantly at home. We also lost our health insurance, but I'll manage.

My mom keeps snooping in my things and lecturing me about getting along with family and talking about how she feels bad for my dad, and we aren't allowed to criticize him, but she's still divorcing him. Sure. I truly believe my mom is a lost cause at this point, and I've decided to go low contact after I leave. I guess she noticed me moving my stuff because she's been lecturing me more, and it's so frustrating listening to her talk.

The only setback is my weed addiction. I stopped for a while but picked it up again because my anxiety got so bad that I couldn't get out of bed to do anything, even eat. I'm trying CBD instead of THC, and it seems to help a lot. I get calmness without the high, which is great. I still use THC often.

Anyway, I was able to get most of the essentials:

- Bed

- Couch ($50 yay)

- Kitchen Stuff

- Bathroom Stuff

- Storage

Now I need appliances and a deep freezer to store food in. I really want a projector instead of a TV, but I'm gonna save money for it later. I just thought it was a cool idea and would save me from carrying the extra weight. So far, everything I have is easy to lift except the couch and bed, and I want to keep it that way, lol. I'm not even considering a dining room table, I'll eat on the floor.

I'm packing up about 60-70% of my belongings, so when it's time to go, I don't have to worry about them blocking me from leaving or sabotaging my things. Everything I bought is far away from where I live, and they can't touch any of it. I'm also saving, so I will have something to fall back on and never have to go back.

if you read this far, thank you, it means a lot. With everything going on, I know I will struggle a lot, but I'm okay with that for now, honestly. Any suggestions or other things I can do? Any advice?

r/SideProject Selmillionaire

I spent 4 months and 114 commits building an AI outfit planner. Launched today. Here's what I learned.

Hey r/SideProject,

I launched CleanFit today on the App Store after 4 months of solo building. It's an AI stylist that suggests outfits from your real wardrobe based on weather and occasion, powered by Claude AI.

A few things I learned building this:

The hardest part wasn't the AI, it was the subscription flow. RevenueCat + Apple StoreKit took longer to get right than the entire wardrobe management system.

Supabase storage hits its free tier limit at ~66 users. I only discovered this by modeling unit economics carefully before launch.

AI image analysis (auto-tagging clothing photos) works surprisingly well. Claude identifies fabric, color, style, and season from a phone photo.

What I built:

- Wardrobe management with AI photo tagging

- Daily outfit suggestions (weather + occasion aware)

- Travel planning (multi-day outfits with destination weather)

- Calendar scheduling

- RevenueCat subscriptions with free trial

Tech stack: Expo + TypeScript, Supabase, Claude Sonnet/Haiku, RevenueCat, PostHog

Got 25 downloads today mostly from personal network. Now figuring out how to grow beyond that 🤩

If anyone wants to try it, DM me and I'll give you a free premium trial. Happy to answer any questions about the build 🥰

App Store Link : https://apps.apple.com/us/app/cleanfit-ai-outfit-planner/id6760984645

r/LocalLLM alfons_fhl

I pay $200/month for Claude Max and hit the limit in under 1 hour. What am I even paying for?

r/personalfinance singhisking16

HSA after Marriage late in 2025

Hi guys, I'm finishing up my taxes and trying to figure out filing jointly for the first time. I had an individual HSA plan through work that I maxed out, while my wife had a family plan with our daughter. She obviously contributed past the individual limit, but now that we are married, our combined amount goes above the family limit. Am I understanding correctly that I pay taxes on the excess amount, even though we got married in late September of 2025 and had been contributing within the rules before that?

r/personalfinance cptsdby

A bunch of debt as a homeowner with equity

I'm living on disability that is half of what my yearly salary was; it's about $500 monthly over my mortgage costs. I don't have a ton of expenses (the biggest are food and health insurance/medical expenses) because I live pretty frugally. No car payment, but it's an old car, so it's getting pricey; I had an unexpected $1,500 repair cost, for example. The water heater went out, etc. Given I've been living on cards (I had a really good credit score) to pay my medical expenses, I'm in a lot of debt. My mortgage payment is about the cost of an apartment were I to sell. I got a part-time job, but had to quit for medical and job-condition reasons.

I have at least $150,000 in house equity. With a high debt-to-income ratio, would I be approved for a cash-out refinance? I haven't been late on anything yet, but it'll happen soon given I'm running out of savings.

What are my options? I really don't want to declare bankruptcy, but maybe I'll have to? Could I still get a cash-out refinance with a disability payment of just over what I need to cover housing costs?

r/ClaudeCode pminervini

Deep Research MCP/CLI/TUI

In case it may be useful to anyone here

r/ChatGPT Salty-Elephant-7435

If AGI super intelligence is only 12-18 months away, shouldn’t we already be seeing major standalone breakthroughs?

There are frequent claims that AGI super intelligence could arrive within 12-18 months.

At the same time, most real-world examples of AI today seem to involve it assisting human researchers - speeding up coding, helping analyze data, generating drafts, supporting drug discovery, etc.

I’m genuinely curious: if we’re truly that close to AGI-level capability, shouldn’t we already be seeing AI independently producing major breakthroughs - like solving a long-standing scientific problem, discovering new physics, or curing a disease without heavy human direction?

Is the current lack of dramatic standalone breakthroughs evidence that AGI timelines are overly optimistic, or is that the wrong way to think about progress?

Would love to hear how people here interpret the trajectory.

r/toastme RemarkablyBearded

Hard couple of months

But looking up!

r/leagueoflegends Yujin-Ha

G2 Esports vs. Team Vitality / LEC 2026 Spring - Week 3 / Post-Match Discussion

LEC 2026 SPRING

Official page | Leaguepedia | Liquipedia | Eventvods.com | New to LoL


Team Vitality 2-1 G2 Esports

  • Player of the Series: Naak Nako

VIT | Leaguepedia | Liquipedia | Website | Twitter | Facebook | YouTube | Subreddit
G2 | Leaguepedia | Liquipedia | Website | Twitter | Facebook | YouTube | Subreddit


MATCH 1: VIT vs. G2

Winner: Team Vitality in 52m
Game Breakdown | Runes

VIT bans: orianna, azir, ryze, pantheon, xinzhao | 100.2k gold, 19 kills, 9 towers | objectives: C2 H3 O4 O6 O9
G2 bans: nautilus, varus, karma, ornn, leblanc | 99.7k gold, 14 kills, 10 towers | objectives: M1 B5 B7 O8 B10 E11 B12

VIT 19-14-51 vs 14-19-33 G2

TOP: Naak Nako (aurora, pick 3) 4-2-9 vs 1-5-7 (rumble, pick 1) BrokenBlade
JNG: Lyncas (jarvaniv, pick 1) 2-2-15 vs 1-7-5 (vi, pick 3) SkewMond
MID: Humanoid (lissandra, pick 4) 1-4-10 vs 1-1-6 (ahri, pick 3) Caps
BOT: Carzzy (caitlyn, pick 2) 8-3-5 vs 9-3-4 (yunara, pick 2) Hans Sama
SUP: Fleshy (bard, pick 1) 4-3-12 vs 2-3-11 (lulu, pick 2) Labrov

MATCH 2: VIT vs. G2

Winner: G2 Esports in 34m
Game Breakdown | Runes

VIT bans: nautilus, akali, anivia, ryze, taliyah | 58.5k gold, 7 kills, 2 towers | objectives: M4
G2 bans: orianna, varus, karma, ezreal, nami | 71.6k gold, 14 kills, 10 towers | objectives: HT1 O2 H3 M5 B6 M7

VIT 7-14-12 vs 14-7-27 G2

TOP: Naak Nako (gnar, pick 2) 2-2-1 vs 2-2-6 (yasuo, pick 3) BrokenBlade
JNG: Lyncas (pantheon, pick 2) 4-4-1 vs 2-0-5 (xinzhao, pick 2) SkewMond
MID: Humanoid (azir, pick 1) 0-4-1 vs 7-1-3 (leblanc, pick 4) Caps
BOT: Carzzy (ziggs, pick 3) 1-1-4 vs 3-2-4 (ashe, pick 1) Hans Sama
SUP: Fleshy (rell, pick 3) 0-3-5 vs 0-2-9 (seraphine, pick 1) Labrov

MATCH 3: G2 vs. VIT

Winner: Team Vitality in 32m
Game Breakdown | Runes

G2 bans: karma, orianna, varus, ambessa, gwen | 54.0k gold, 1 kill, 3 towers | objectives: M1 H3
VIT bans: nautilus, akali, ryze, viktor, annie | 63.8k gold, 8 kills, 9 towers | objectives: I2 O4 O5 O6 B7

G2 1-8-1 vs 8-1-14 VIT

TOP: BrokenBlade (sion, pick 2) 0-1-1 vs 4-1-1 (aatrox, pick 3) Naak Nako
JNG: SkewMond (olaf, pick 4) 1-3-0 vs 0-0-2 (drmundo, pick 3) Lyncas
MID: Caps (zoe, pick 3) 0-2-0 vs 3-0-1 (syndra, pick 2) Humanoid
BOT: Hans Sama (lucian, pick 1) 0-1-0 vs 1-0-5 (ezreal, pick 2) Carzzy
SUP: Labrov (milio, pick 1) 0-1-0 vs 0-0-5 (nami, pick 1) Fleshy

*Patch 26.7


This thread was created by the Post-Match Team.

r/RASPBERRY_PI_PROJECTS Photonic_Pat

Jukebox - with touch-screen interface and custom software

I wanted a better way to play my music. I wanted a self-contained appliance running only on a touchscreen, something I could stick in the living room. I wanted a better way to navigate my music library. I wanted something juke-boxy that could also let me play albums if I wanted to. Obviously the only way to go was to write my own software. Voila! I am very happy with it.

Running here on a pi 5 rocking the DAC card and my dusted-off TDK SoundCube. Previously had it running on an ancient all-in-one PC. Didn't have as much oomph and the hard drive was threatening to quit.

You can customize the buttons, genres, and sub-genres. You can use your music library as-is, but it works best with an iTunes-style music library. I am sharing the code here (it's Python based):

https://github.com/patatorre/deco_jukebox

r/Art NEWMECHANE

Night spire, Newmechane, digital, 2026

r/ProgrammerHumor Dry_Reaction_4851

echoTrustMeBro

r/Adulting KaleidoscopeOk5063

Working multiple jobs never ends well

I hustle. I work multiple jobs, I also am supposed to be at school today.

I’m behind on rent. My parents think I’m lazy and they are getting old, but honestly I’m not lazy. It’s mentally exhausting, it’s physically exhausting, I have barely enough time to digest the material I am learning at school, let alone be productive.

Last week I missed two days of work. I spoke with a dispatcher and explained I’m really in a grind right now - it doesn’t matter, I’m pretty sure I’m fired.

I have a contract with a tech company, it pays pretty well but the work is very hard and time consuming. I missed three days of school last week.

I spoke to my school about my situation - my performance really does not match where I should be. My supervisor/boss lady at the school mentioned she might be able to get me financial assistance. But I’m thinking it might be too late.

I really don’t know what to do. My landlord has been extremely patient, but I’m three months behind. Some of my neighbors also work full-time jobs, but doing this and going to school has proved to be really difficult for me.

I get contract work offers pretty regularly and some of the contracts pay really well. But balancing it with a normal life and workflow, school flow, something always has to give, something will fail

Last week I kinda just shut down. I couldn’t go to school and I couldn’t go to work. I just shut down. I’m 32. I felt more productive at 25. Forget a dating life, I’m almost homeless

r/Adulting pencilbxx

Sit with it without fixing it! The experiment I set for myself to not get burned out again

📅 Date: April 2026

🧪 My experiment: asking myself one honest question per day and sitting with it. not googling or journaling 3 pages. just one question. and watching what comes up.

somewhere along the way I started building a small tool to help me do this consistently. less about the tech, more about having something that wouldn't let me off the hook too easily.

🎯 The objective: I burned out. properly. not the "I need a weekend" kind. the kind where you wake up and don't recognise why you're doing any of it anymore.

I wanted to understand my own patterns before trying to fix them. just see them clearly first. turns out that's harder than it sounds when your brain is really good at generating noise.

💭 How it's going: harder than expected.

I'm very good at thinking about my feelings and very bad at actually sitting with them. I'd get asked a question and immediately jump to analysis mode. meta-thinking about the thinking. the tool started catching that, reflecting it back. "you're describing the situation again. what are you feeling right now?"

annoying. useful.

what I noticed: the quality of the question matters more than the answer. "why am I procrastinating?" gets me nowhere. "what am I actually afraid will happen if I do this?" gets uncomfortable fast. in a useful way.

the tool doesn't suggest answers. it just asks the next question. sometimes it waits. that part weirded me out at first.

What I've learned about myself so far: I use busyness as a sensory override. enough noise, enough tasks, and I don't have to sit with the uncomfortable stuff underneath.

also: I avoid questions I don't have good answers to. which is probably exactly where the useful stuff lives. the tool has a way of parking those. coming back to them later when I'm less defended. that pattern, the avoidance pattern, turned out to be more informative than any answer I eventually gave.

⏭️ What I'll try next: keep the daily question habit going. start logging not the answers but the resistance to answering. where do I deflect, rationalise, change the subject to myself.

the tool is helping me track that across days now. seeing it as a pattern over time rather than a one-off moment feels different. less judgment, more curiosity.

curious if anyone else has tried something like this. especially the "sit with it without fixing it" part. genuinely hard. would love to hear what's worked.

r/Lost_Architecture Fantastic-Peach-1995

Tokyo Motors Pagoda. Santo Domingo. Dominican Republic (1900s-2000s). Demolished

r/DunderMifflin sen53ii

“…maybe some soup“: Little Easter egg in S07E14

Holly‘s back in Scranton while Toby is on jury duty. She just broke up with AJ. The signs are pointing towards „light soup time“ for Michael!

r/SideProject hamayerowel

I built 13 live demos of the same WordPress intake form — each with a completely different visual identity, no CSS written

Been working on a decoupled WordPress form builder for the past few months. One thing I kept wanting to show people was how flexible the appearance system is - same form structure, completely different look depending on the preset.

So I put together a demos page with 13 live flows, each running a different design token preset:

  • Dark mode SaaS (slate background, indigo primary, Inter font)
  • Luxury law firm (Playfair Display, amber gold, zero border radius — sharp edges everywhere)
  • Neo-brutalism startup (yellow background, black borders, Space Grotesk)
  • Soft wellness/eco (pill-shaped inputs and buttons, mint palette, Nunito)
  • Ocean dark, crimson editorial, violet neon, warm gold dark...

Each one opens as a real full-page experience - not an iframe preview. You can actually tab through fields, upload files, and submit. After submit it redirects to a thank-you page that shows your submission alongside a mock wp-admin inbox.

The architecture: one exported .zip per workflow (JSON config + scoped CSS), uploaded through WordPress. Design tokens (colors, fonts, radii) are passed as URL params at runtime. No theme conflicts because everything is scoped to #xpressui-root-{id}.

Demos: https://xpressui.iakpress.com/demos

Still in open beta - handing out free lifetime licenses in exchange for feedback.

r/SideProject smilaise

KillerPDF

Got tired of how heavy and annoying Adobe Acrobat is… so I made my own thing.

https://pdf.killertools.net

It’s basically a lightweight PDF tool that just does what I need without all the bloat:

- opens fast (even big PDFs)

- simple UI

- merge / split / basic edits

No accounts, no paywalls, no “pro version” popups.

Still early, but I’m building it out into a full toolkit over time.

If you try it, let me know what’s missing or what sucks.

r/n8n Longjumping_Cod_8568

Merge node issue

IM BEGGING FOR HELP

I have a problem with my merge node here: it doesn't want to pass data to the next node. Normally, even if only one input is fed with data, it should pass it on to the next node, but it won't do it here and I can't finish my workflow. I've already tried looking on the internet and in the documentation for what could be the issue, but I didn't find anything; it should just be working.

So if anyone here has any information about what could be the reason, I'm desperately waiting for an answer. thanks🙏

r/TheWayWeWere dmode112378

Me in 1982. It looks like I’m robbing my mother

r/OldSchoolCool Flat_Bet_2109

Philippe Petit crossing the void between the two World Trade Center towers (1974)

r/aivideo Dense_Picture_9511

Pov You're out walking in this place

r/PhotoshopRequest MysteriousPlane5956

Can someone remove the woman on the left, center me, and make my phone case solid hot pink? Will pay

Edits needed:

• Remove the woman on the left (red dress) completely

• Reconstruct the wall behind her so it looks natural (no blur or weird artifacts)

• Move me (the one taking the mirror selfie) to the center of the photo

• Change my iPhone case to a hot pink color (bright, vibrant pink — keep it realistic with reflections)

Important:

• Please keep everything else exactly the same

• Don’t change my face, body, outfit, or lighting

• Make it look natural and unedited

I can tip for a clean, realistic result 💖

Thank you!!

r/LocalLLaMA anonutter

Best way to supplement Claude Code using local setup

Hello everyone,

I use Claude Code for my projects. However, I would like to set up an equivalent local environment so that I can continue programming while I wait for my usage limits to reset. The idea is that I can use the local model to make non-critical changes while leaving the core engineering / large-scale architecture work to Claude Code, e.g. making prettier UI elements or fixing minor bugs.
I have a 3090 Ti I can run local models on. I understand that matching Opus 4.5 on Claude Code with my local setup is not possible yet. Would it be possible to match Sonnet 4.6? What models would you guys recommend, and how do I set up a local Claude Code with them? I see several community members have their own versions of Claude Code set up based on the leaked files; is there one repo that is now widely used / maintained by the community?

Icing on the cake would be if I could make the two setups talk to each other, e.g. the local model also writes/uses the same MEMORY.md file that Claude Code uses, without messing things up.
Thanks!

r/SideProject richlabstech

I let an app scan my iPhone and it found 4,425 duplicate photos I didn't know existed

Been getting the "Storage Almost Full" warning for months. Finally tried CleanVault - an iPhone app that uses a smart algorithm to scan your photo library.

Results after 60 seconds:

• 4,425 duplicate photos found

• 5.1 GB freed

Also has a secret vault to hide private photos behind Face ID

It's free on the App Store. Genuinely surprised how much junk was hiding in there.

Anyone else tried photo cleaner apps? Curious how much others freed up.

App:

https://apps.apple.com/us/app/cleanvault-photo-cleaner-vault/id6760420767

r/leagueoflegends Triangle111228

Wanting to host a tournament

Hello,

are there teams / clans of 5 who want to do a mini tournament tonight?

I tried searching on the internet to see if there was a subreddit for this, but couldn't find one.

So if you have 5 people you want to play with, shoot me a message.

We are platinum - emerald rank; lower / higher is welcome.

No entry fee whatsoever, purely for fun.

r/LocalLLaMA bakawolf123

Apple clears supply chain further for upcoming M5 Ultra studios

Not long ago, 512 GB M3U studios stopped being available. Atm both the 256 GB M3U and the 128 GB M4 Max options are no longer available for delivery.
Meanwhile, M5 Max MBPs are still in the normal 1-2 week delivery range, which implies a new release sometime soon rather than just a memory shortage.

https://preview.redd.it/55j7d625okug1.png?width=413&format=png&auto=webp&s=b8c664c35a1d69a8113d0017c0a17bb3baa0a0e7

https://preview.redd.it/tuberas5okug1.png?width=479&format=png&auto=webp&s=6df9d88ac5c0c7c8f89835eabb6b0a1424a5e090

r/AlternativeHistory helpmeplsgetjob

The Israeli siege of Beirut, 1982. Hezbollah didn’t exist back then. Ronald Reagan called it a holocaust and ordered "Israel" to stop it.

r/LocalLLaMA No-Jelly6558

Announcing: psyXe – A native macOS AI assistant with tight connectivity to Apple Notes, Contacts, Files, and Reminders

Hey Mac folks, you might find this interesting:

psyXe -- a native macOS desktop app that turns any OpenAI-compatible LLM into a personal assistant with deep Apple ecosystem integration.

Why should you have to use third-party tools like n8n, Obsidian, Google Calendar, Gmail, etc. to connect your productivity needs with AI agents? Apple has solid productivity tools that come free with the Mac - why not leverage those?

What it does: psyXe connects to Apple Notes, Reminders, Contacts, Weather/Maps (Apple APIs if you have a dev account, Open-Meteo otherwise), and local files via macOS-native APIs (AppleScript, EventKit, Contacts framework). You chat with an LLM and it can search notes semantically, create reminders, manage contacts, search files via mdfind, and execute multi-step tasks -- all locally. You can also communicate with the app via iMessage or email over IMAP -- no need to bolt on Telegram or WhatsApp.
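
As a flavor of how thin those native layers can be: the file search piece is essentially a wrapper over macOS's stock mdfind (Spotlight) CLI. A Python sketch of just that one piece - purely illustrative, since psyXe itself is Rust:

```python
import subprocess

def spotlight_search(query: str, scope: str = "/Users") -> list[str]:
    """Return file paths matching `query`, via macOS's Spotlight CLI."""
    # -onlyin restricts the search to a directory subtree
    result = subprocess.run(
        ["mdfind", "-onlyin", scope, query],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.splitlines()

print(spotlight_search("tax return 2025", scope="/Users/me/Documents"))
```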

What makes it interesting technically:

- Prolog-based intent routing -- Scryer Prolog (pure Rust, no C deps) matches intents to tools via logic rules

- BERT semantic search -- notes and conversation memory indexed via memvid-rs

- Multi-agent swarm -- parallel task decomposition with planning, worker dispatch, and synthesis

- Workflow engine -- Prolog FSM coordinator for sequential multi-agent pipelines with cycles

- WASM tool creation -- describe a tool in natural language, it generates Rust, compiles to WASM in a sandbox, and installs it.

- Works with any LLM -- Ollama, llama.cpp, vLLM, OpenAI, Anthropic, Gemini, or any compatible endpoint

Privacy: Everything runs locally. No data leaves your Mac unless you point it at a cloud LLM. API keys in SQLCipher-encrypted SQLite with Touch ID.

Stack: Rust + Tauri + Svelte 5.

14-day free trial. Free tier keeps Apple ecosystem tools permanently -- paid tier unlocks agents, swarm, workflows, scheduling, and custom tools.

REQUIREMENTS: macOS 14+ / Apple Silicon

https://pro.psyxe.app

r/geography Sufficient-Staff-433

What would happen if everyone had to move to wherever their majority/plurality of genetic ancestry was from?

For example, if an Appalachian American took a DNA test and it came back as 97% British Isles, then they’d have to move to the UK

Your average Haitian would move to West Africa

If a Mexican is 60% Spanish and 40% Indigenous, they’d move to Spain. If they’re 60% Indigenous, they’ll stay in Mexico. If it’s 50/50, they can choose

If a Puerto Rican is 40% Spanish, 30% West African, and 30% Indigenous Taino, they’d move to Spain

r/AI_Agents Music_is_ma_soul

Running an agent 24/7

I've got llama3 70B running on a dual 3090 setup through Ollama. Built a python script that checks financial data every morning, analyzes it, and sends me a summary on Telegram.

Problem is it's basically a cron job with amnesia. Every run starts from scratch. It told me the same "AAPL is showing unusual volume" insight three days in a row because it doesn't remember what it already told me.

I hacked together a SQLite log which stores the last 10 summaries into the prompt as context but that's already getting long and I know it won't scale past a few weeks. I'm thinking of doing a markdown file for short term and keeping the sql as a dbish??

Anyone here actually have an agent running long-term that remembers previous runs? How are you handling the memory? Just curious what setups people have landed on.
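
For what it's worth, the pattern that seems to come up most is exactly what you started: a rolling window for prompt context, plus an explicit dedupe check so the agent can't repeat itself. A minimal sketch of what that could look like (table name and schema are mine, purely illustrative):

```python
import sqlite3, time

con = sqlite3.connect("agent_memory.db")
con.execute("CREATE TABLE IF NOT EXISTS runs (ts REAL, summary TEXT)")

def remember(summary: str) -> None:
    con.execute("INSERT INTO runs VALUES (?, ?)", (time.time(), summary))
    con.commit()

def recent_context(n: int = 10) -> str:
    """Last n summaries, oldest first, for pasting into the prompt."""
    rows = con.execute(
        "SELECT summary FROM runs ORDER BY ts DESC LIMIT ?", (n,)
    ).fetchall()
    return "\n".join(r[0] for r in reversed(rows))

def already_reported(insight: str, days: float = 7.0) -> bool:
    """Crude dedupe: did any summary in the last week mention this?"""
    cutoff = time.time() - days * 86400
    rows = con.execute("SELECT summary FROM runs WHERE ts > ?", (cutoff,)).fetchall()
    return any(insight in r[0] for r in rows)
```

The dedupe check is what would stop the repeated "AAPL is showing unusual volume" insight: before sending a finding to Telegram, skip it if already_reported() says it went out recently. Past a few weeks, the usual move is to summarize old rows into a long-term digest instead of growing the prompt.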

r/ChatGPT dictionizzle

Update: ChatGPT pointed me toward ADHD, and today I got officially diagnosed

About a year ago, I shared that ChatGPT unexpectedly suggested I might have ADHD. Today I can say that after a proper professional evaluation, I was officially diagnosed and will be starting stimulant treatment under medical supervision. ChatGPT was not the diagnosis, but it was the nudge that led me to finally get real answers, and I am genuinely grateful for that.

r/SideProject yashg

I built a password manager because I wanted something simple and fully customizable

I am building HexaVault - a password manager and secure information vault.

Why am I building it? Because none of the existing password managers - 1Password, Bitwarden, Dashlane - provide what I want: structured storage for all kinds of information. All existing products are primarily password managers - they do one thing very well, and that is storing passwords for websites and auto-filling them. They provide limited templates for storing other kinds of information. For anything else there are secure notes, which are just a big textbox to store whatever you want. Some provide custom fields, but they are not very flexible. I wanted a fully customizable information store, not just for passwords but for everything that is important to me. So I built HexaVault.

At the core of it, it's a simple label+value store. There are 30+ predefined templates for common types of information - logins, cards, bank accounts, loans, insurance, passport, national id, tax id, driving license, memberships, frequent flyer account, various utility services like electricity, water, gas, internet etc. All grouped under categories. User can create and edit their own categories and templates.

I had built the first version back in 2017 as standalone mobile apps that stored data locally. No mobile sync. I had built a web version but it was not released publicly and only I was using it.

I have now relaunched v2 with web, Android and iOS apps. Cloud sync, zero-knowledge architecture, file attachments, TOTP generation built-in, password generator - the whole shebang. It currently lacks auto-filling and browser extensions, which I am working on right now; hopefully they will be released in the next few weeks.

I have a freemium model. All the features are free except attaching files to entries. The Pro subscription is $25/year, and I have already got one paid user - he signed up and upgraded before I had even released the mobile apps. I am getting 2-3 signups every day.

So the idea is this is the single place where you store everything that is important -

Your Facebook password, your bank account details, your mobile's IMEI number, your graduation diploma and its scanned copy, your passport details and a scanned image, your driver's license, your kid's birth certificate, your company's server details, API keys for your projects, your electricity provider's account number, scanned copy of your rent agreement. All in one secure vault. Accessible on the web and on your phone.

If you are curious about the security and technical aspects: it uses client-side AES-256-GCM encryption, with the key derived via Argon2id for authentication and PBKDF2 for encryption. Per-user salt, per-entry IV - pretty secure, and the best that is available in the market today. I have a detailed article about the security architecture.
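
For reference, the general shape of that scheme (PBKDF2-derived key, AES-256-GCM with a fresh IV per entry) in Python's cryptography library - illustrative only, not HexaVault's actual code, and the iteration count is my assumption:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(master_password: bytes, user_salt: bytes) -> bytes:
    # PBKDF2 with a per-user salt -> 256-bit AES key
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=user_salt, iterations=600_000)  # count assumed
    return kdf.derive(master_password)

def encrypt_entry(key: bytes, plaintext: bytes) -> bytes:
    iv = os.urandom(12)                      # fresh 96-bit IV per entry
    ciphertext = AESGCM(key).encrypt(iv, plaintext, None)
    return iv + ciphertext                   # store the IV with the ciphertext

def decrypt_entry(key: bytes, blob: bytes) -> bytes:
    iv, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(iv, ciphertext, None)
```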

Do let me know what you think.

Site - https://hexavault.com

Android App

iOS App

r/SideProject No_Breadfruit_1716

I built a website that will give you a separate random choice from multiple different lists

Happy Saturday folks.

I built a website that gives you a separate random choice from multiple different lists. It's for extremely indecisive people who are faced with the same decisions in their day to day.

I'm resharing this project because I always expected it to be just a personal use app, but someone messaged me on socials this week to say there was a bug. Didn't expect that, so I gave it an update and added a snazzy new demo showing how it actually works.

It's free and works in your browser. If you like it, lemme know <3

https://pickoneatrandom.com/

r/creepypasta TheGraveWhisperer

Blackthorn Hollow

Deep in Blackthorn Wood, locals warn that something ancient and unnatural has made its home among the trees. Those who wander too far after dark often hear the sound of a baby crying, desperately calling out from the darkness.
Those who follow the sound… rarely come back the same.

r/singularity Particular-Garlic916

In Defense of AGI Skepticism

Apologies in advance for the length -- this essay is just an attempt at defending the position that AGI, understood as an intelligence that can reasonably be substituted for a human in any knowledge work, might be quite a bit further off than some maximalists on this sub like to conjecture.

First, just a bit of background: I'm not an expert in the field, but I have enough technical/mathematical background to read papers on AI and I use a frontier model in a technical research role. And that frontier model is really, really, really good. It exhibits capabilities that would have been fantasy just 6 months ago. There's a solid chance that this entire essay will age horribly as I ring in 2027 bowing down to our computer overlords and beseeching them for mercy for ever doubting them. But it's not yet AGI. With the exception of tasks that sit well within the scope of the benchmarks it trains for, it usually needs supervision from a human with specific domain knowledge for real work. It juggles different information and scenarios somewhat poorly, sometimes making errors that a human with its same programming/mathematics skills would absolutely never make -- like failing to notice that what it has pegged as the root cause of a problem is clearly a moot point based on what happens two lines down in a script that same instance wrote 15 seconds earlier. And it's not immediately obvious that those problems will be solved in the immediate future. Frontier models are basically savants: they excel at certain intellectual tasks, and struggle with others.

I think a couple of the arguments I keep seeing about the "obvious" imminence of AGI can sort of be summarized (and rebutted) below:

1) Current progress is exponentially fast, and that will continue.

It's absolutely true that no matter what metric you pick, modern frontier AI models are exponentially more capable than they were just a few years ago, and in certain regimes, just a few months ago. They're a remarkable new technology that will no doubt have serious implications for the future of the world, even if they don't get qualitatively much better than they are now. But historically, eras of exponential progress can stop abruptly. And those abrupt slowdowns/stops are considerably more likely in precisely the regime in which LLM's operate: Projects where the exponential improvement was driven in large part by exponential growth in resource investment. Sure, we went from GPT-2 struggling to string together sentences to Mythos apparently causing a global cybersecurity crisis, but keep in mind the final training cost for GPT-2 was around $40,000-$50,000, and Mythos probably needed billions-- that's the difference between buying a luxury sedan and buying a nuclear-powered aircraft carrier. The situation might be even more stark with inference compute scaling (if even more opaque, at least to those of us who aren't privy to AI company secrets). Enterprise users can end up paying thousands of dollars/month in tokens per employee, and we really don't have the best picture of how much all of these coding agent subscriptions (yes, even the enterprise ones) are being subsidized by massive flaming buckets of venture capital. And we have an even more limited conception on how much it would cost to run a model like Mythos at scale.

Even as per-token costs get cheaper, it looks to me that the costs of operating these frontier models are getting bigger, in stark contrast to the trend prior to the introduction of reasoning models. What if it turns out that running a single instance of the first AGI costs, in real terms, $1 million/year/instance? How many jobs can realistically be replaced at that price point? What are the odds that a pitch of "we're pretty sure this will get economical if you just throw another $1 trillion at us" will keep investors feeding the research machine, when perfectly serviceable AI-but-not-AGI agents, which aren't smart enough to possibly kill us all, would be cheaper if AI companies slashed their research budgets? And beyond that, even if throwing more money at the problem were guaranteed to push forward technological progress, humanity can't invest much more than we are now in AI technology: If we're spending around 1% of global GDP on AI, realistically you just don't have room to go up another order of magnitude. Algorithmic efficiency and Moore's law scaling might not be dead, but cash scaling is likely close to tapped out.

Slowdowns on resource-intensive technology have happened before. An obvious parallel here is the development of nuclear technology: Between 1939 and the mid-1950's, we went from nuclear fission being a laboratory curiosity to commercialized nuclear power plants and H-bombs. Breeder reactors capable of producing enough nuclear fuel to power humanity for the rest of time, or even commercialized nuclear fusion reactors, seemed a hop, skip, and a jump away. Then humanity threw R&D resources at the problem of breeder reactors and... Nothing. After the first few failures, as a species we basically gave up: The cost didn't justify the expenditure, even if the possible payoff was making electricity too cheap to meter.

2) AI will dramatically accelerate its own development

This is the basis of the tasks that METR tracks, and a lot of the "software-only explosion" scenario that forms the basis of AI 2027: An AI that can research how to give itself more effective compute faster than it burns through effective compute on that research will reach its maximum theoretical intelligence and efficiency very, very rapidly. The issue here is that you're not just assuming that AI will tend to get better at what we know it's getting better at now; you're assuming that it will get better at things that we have no direct evidence for. In particular, the AI 2027 people seem to assume that AI will eventually get significantly better at "research taste": Knowing what to spend finite experimental compute on that will get results. Their projections are more or less based on the assumption that AI's research taste is improving at roughly the same rate as more easily-testable metrics, like IQ, even if its baseline level relative to humans might be dramatically lower. The theory here isn't insane: We know that LLM's tend to exhibit a somewhat different profile of cognitive abilities than humans, but scaling pre-training tends to make them better at a pretty wide variety of things that we can measure, even things like chess that aren't benchmaxxed with reinforcement learning. But we don't have a great sense of how research taste even works in humans or how to teach it to each other, much less how to put it in a reward model. It isn't purely a function of general knowledge or reasoning ability, and in some fields it might just be sheer dumb luck over a population of thousands of scientists: Even if everyone chose research tasks at random, mathematically someone would be in the 99.9th percentile of citations. I'm also skeptical of the ability to teach it to a model using the reinforcement learning techniques that work so well for reasoning: Creating an AI "research environment" for training would require the early training to burn through a gratuitous amount of compute running bad experiments, much more than would be needed for, say, mathematical proofs or shorter-horizon coding tasks.

If AI research taste remains poor, then a superhuman AI coder can only change the speed at which a researcher builds experiments, not the rate at which those experiments succeed. And given the scale of these models, I can only assume that the bottleneck for most AI research isn't really the prototyping phase as much as the actual experimental one.

TL;DR: The idea that the current research push will get us to AGI in the next few months/years is based on a lot more assumptions than people seem to realize. You need the exponential technological improvement to continue without the accompanying exponential increase in investment. You need that improvement to continue at a rate high enough to justify continuing the current massive level of investment. And you need AI to start exhibiting improvement in abilities we have little to no direct evidence of it even really having. It's not impossible, but it's also not obviously going to happen. And even with the field's genuinely incredible accomplishments in the last few years, I'm skeptical, if prepared to be proven wrong.

Edit: I should also qualify what I said about not being an expert: I do have a doctorate in a related STEM field and my professional work involves statistical learners.

r/SideProject hello_code

Subreddit Signals - simple reddit and x lead finder, with slack pings so you catch the post before it goes cold

I run a tiny side thing and I thought I was being smart with alerts, but it was a mess. I had like five different keyword queries, my inbox was gross, and I still missed the only posts that mattered - the ones that say stuff like "does anyone know a tool for this" or "what should I buy".

I ended up building Subreddit Signals mostly for myself. It flags those high intent posts on Reddit and X and sends me an email or Slack message right away.

I am not sure how people here feel about including X in a Reddit focused tool. For me it mattered because the same question pops up in both places, just with totally different vibes.

If you were trying to find customers without living inside search all day, what would you want this to do, and what would you be skeptical about? Privacy, false positives, whatever. The only thing I know is I can't go back to the old way.

r/personalfinance Ok_Cockroach3105

Is it better to stay in minor debt or have no debt at all?

Hi, I’ve (27f) inherited some money that is larger than the amount of student loans I have left. I can pay them off and maintain a solid emergency fund. I have good credit— had the same credit card for 5-10 years and never missed a payment. Never late on utilities or anything and I own my car with no debt. I rent an apartment.

I’m planning on just paying off the loans as it would bring me a lot of peace and happiness. But my parents say it’s better for my credit to have debts that I’m paying off.

Is it really better than not having debt at all? Just wanted to get another set of opinions before I pull the trigger and become debt free

r/comfyui OXXXiiXXXO

Does anybody know if we can jerry-rig a low-res 3d viewport workflow with hi-def output? I want something like what he (Bilawal Sidhu) talks about in the video.

Nodes Aren't the Future of AI Creation.

This would be super helpful! I would hope that the t-pose type of person manipulation is improved though, I hate it.

*Not sure if the YouTube video will show a thumbnail preview - I don't know how that works.

Nodes Aren't the Future of AI Creation. Here's What Is.

r/LocalLLaMA Music_is_ma_soul

Anyone running agents 24/7, not just in sessions?

I've got llama3 70B running on a dual 3090 setup through Ollama. Built a python script that checks financial data every morning, analyzes it, and sends me a summary on Telegram.

Problem is it's basically a cron job with amnesia. Every run starts from scratch. It told me the same "AAPL is showing unusual volume" insight three days in a row because it doesn't remember what it already told me.

I hacked together a SQLite log which stores the last 10 summaries into the prompt as context but that's already getting long and I know it won't scale past a few weeks. I'm thinking of doing a markdown file for short term and keeping the sql as a dbish??

Anyone here actually have an agent running long-term that remembers previous runs? How are you handling the memory? Just curious what setups people have landed on.

r/Adulting ParticularWeather927

Life is only good for rich people

Life is honestly only good for rich people. This is coming from someone who is young as well. If I was born rich life would be decent. However I can’t enjoy it because almost everyday I have to work just to survive in something I didn’t choose.

r/Strava uoldgoat

Did I just get a Strava refund?

I just ran a HM, starting it as I crossed the start point and ending as I crossed the finish line - close enough that there was only a 2 second difference between the official time and my Strava time. Strava recorded it as 13.48 miles though! I’ve seen a lot of posts about a Strava tax, but it appears I might have gotten a very generous refund.

…OR… is it possible that I did enough weaving around people that I managed to inefficiently add over a third of a mile to my course? My first HM I think it clocked me at 13.3, which I could sorta see from zig-zagging, but this one seems high.

r/SideProject HeftyPace8582

I kept losing all the "hidden gem" restaurant videos I saved on TikTok before trips, so I'm building something to fix it

I've had so many times where I'm in a new city and I know I saved a video of some food I wanted to try, but I can't remember where I saw it or even what it was called. Was it on TikTok? Instagram? I just know I saved a street food vendor in Bali that looked amazing. I end up scrolling through hundreds of saved videos trying to find a 30 second clip where someone mentioned a street name once.

I got tired of it and started building an app called Stashling. You share a video link to it and it pulls out the actual info: the place name, address, what to order, price range, hours. A recipe becomes ingredients, steps, and nutrition. A workout becomes sets, reps, and equipment. It also sorts it into the right collection automatically. So if you and your friends have a "Tokyo Trip" collection, anyone can share a video to Stashling and it just ends up in the right place without thinking about it.

The part that's been most useful for me on trips is that all the places go on a map automatically. So when you're walking around a new city you can just see what nearby places you saved across TikTok, Instagram, and YouTube and wanted to try. You can also share a collection with your travel group so everyone adds their own finds.

I built this mostly for myself because I was genuinely frustrated, but I'm curious if other people here run into the same problem. Would something like this actually be useful for how you plan trips? What would you want it to do that I haven't thought of? Also, would anyone want to try this app?

r/AlternativeHistory M1Academy

The Untold Story of Motown: How the Hits Were Made in the 1960s! 🎤 The FULL Unedited Mickey Stevenson Documentary

r/ClaudeCode Healthy-Bathroom2687

Optimizing workflows and whole setup

Hi! I would like to improve my workflows, plugins, etc. I am a software dev, and I already have a pipeline with superpowers, Obsidian, a connection to Jira tickets, CLAUDE.md files, and some custom skills. I'm using Playwright and the CC Chrome extension so Claude can verify his work. After a ticket is done, CC generates a Jira ticket comment and a daily entry in Obsidian about what was done and how, and pushes them to both. I'm wondering, what else do you guys use with your Claude Code that's worth recommending? Some RAG system or something else maybe? I feel like my pipeline is OK - it does a lot of things for me and runs for hours - but I think there is still room to make it better, and we are moving so fast it's hard to stay on track. Anyway, let me know about anything you use!

r/ChatGPT CewlStory

My perception of a tree, personal inner thought

r/AI_Agents ismaelkaissy

MCP Harbour – an open-source port authority for your MCP servers

I built MCP Harbour because every AI agent (Claude Code, VS Code Copilot, Cursor, OpenCode) manages its own MCP server connections independently. If you give an agent access to a filesystem server, it gets access to everything — there's no way to say "this agent can read files in /home/user/projects but not /etc" unless the agent developer provides a way for it.

MCP Harbour fixes this. It sits between agents and MCP servers and enforces per-agent security policies:

  • Dock servers once – register your MCP servers with the harbour and expose them as a single unified endpoint. Each agent sees one connection with only the tools permitted by its policy.
  • Per-agent policies – control which servers, which tools, and which argument values each agent can use (glob patterns and regex). No policy means no access.
  • Identity & Auth – the agent authenticates with a token, the harbour derives the identity.
  • One place to manage all – your MCP servers, identities, and policies. No per-client configuration.

The agent never talks to MCP servers directly. Every request passes through the harbour, gets checked against the policy, and is either forwarded or denied with a standard error code.
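
To make the permission model concrete, a per-agent policy check of this kind could be as simple as the following (my sketch, not MCP Harbour's code; the policy shape is assumed, and the real project also supports glob patterns):

```python
import re

# hypothetical policy shape: per-agent allowlists of servers and tools,
# plus regex constraints on argument values
POLICIES = {
    "claude-code": {
        "filesystem": {
            "read_file": {"path": re.compile(r"^/home/user/projects/")},
        },
    },
}

def allowed(agent: str, server: str, tool: str, args: dict) -> bool:
    policy = POLICIES.get(agent, {})       # no policy means no access
    tools = policy.get(server)
    if tools is None or tool not in tools:
        return False
    for arg, pattern in tools[tool].items():
        if not pattern.match(str(args.get(arg, ""))):
            return False
    return True

# the /etc example from above gets denied:
print(allowed("claude-code", "filesystem", "read_file", {"path": "/etc/passwd"}))  # False
```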

This is v0.1 and I would love a discussion on the permission model, the architecture, and what's missing.

Links in the comments

r/SideProject cutwave

"I was tired of X, so I built Y"

I was tired of pasting sensitive K8s manifests into ChatGPT, so I built a 100% local DevOps Assistant (Mark42) using Llama 3.2 (1B) and RAG.

I was tired of spending 30 mins just to run a repo, so I built this

I was tired of my ideas going nowhere, so I built this tool

I was tired of spending 30 mins just to run a repo, so I built this

I was tired of spending 2 hours deploying apps that took 5 minutes to build. So I built a one-command hosting platform.

I was tired of "infinite" to-do lists that only caused anxiety, so I built a minimal app that limits you to 3 tasks a day.

I was tired of paying 15/mo just for captions… so I built a 2.99 alternative

I was tired of 'yes-man' AI, so I built a prompt to brutally audit my system designs

I was tired of copying the same docs into Claude, so I built this (open-source + MIT)

I was tired of guessing what level I am in reality, so I built a framework to figure it out

I was tired of not finding a good app to create my conlangs, so I built my own.

I was tired of seeing everyone in their 20s with the spine of a 90yo, so i built an app to fix our collective shrimp posture 🦐

I was tired of coming back from networking events with 50 business cards and following up on none of them, so I built Wisery

I was tired of typing calendar events, so I built this

I was tired of expensive invoicing apps, so I built a lightweight one for mobile. Looking for feedback!

I was tired of using web-based tools for Base64 and JSON, so I built an offline-first CLI for macOS (50+ tools)

I was tired of robotic AI blogs ruining my marketing, so I built a 7-prompt Claude framework for my projects.

I was tired of the basic, disappointing studios London has to offer. So I built my own!

I was tired of cold outreach, so I built something to find buyers already looking for my service

I was tired of wasting hours on bad YouTube tutorials, so I built an AI that analyzes entire playlists for me

I was tired of noise and distractions on on Youtube, so i built this extension

I was tired of messy CV datasets and expensive cloud tools, so I built an open-source local studio to manage the entire lifecycle. (FastAPI + React)

I was tired of dragging a monitor to the rack for my cheap Chinese boards, so I built a KVM that streams BIOS as plain text over SSH

I was tired of subscription/in app purchase composing apps for mobile, so I built my own

I was tired of rigid todo apps, so I built a cosy planner with capybaras.

r/OldSchoolCool wvutom

My mom and dad in Nashville - 70s

I wish my dad still had this hat.

r/Anthropic Middle_Row_9197

Why are the rate limits going down?

A few days ago I was asking Claude to make a PDF file that had 3-4 pages and was mostly diagrams (I am on the free plan), and I hit the rate limits even though it was my first time using Claude in a WEEK. Usually I could make 10-15 PDF files without any problem, but now it's 1 PDF file and I'm rate-limited. Anthropic, please fix this.

r/BobsBurgers reducedfatmalk

This show man

this is probably gonna get lost in the shuffle, but for someone who has been having not the greatest time in life the past few years, this show with these flawed but deeply human characters has brought me so much joy and comfort, and I'm sure I'm not the only one. the show won't go on forever, but I will cherish these 15 years and 16 seasons and however many are left in the future. the Belcher clan are a beacon of hope in an often bleak world, and I appreciate every moment; they have made my life better for it. enough blubbering, please leave your favorite gifs and quotes in the comments. now it's time for some type of Malaysian cuisine.

r/DecidingToBeBetter Kinda_Goofy

Has anyone else noticed that most of the beliefs running their life were never actually chosen?

I’ve been auditing mine lately and it’s been unsettling — but also kind of freeing.

Things like “money won’t make you happy,” “don’t be lazy,” “don’t show off” — I never decided to believe any of that. It was just… installed. By parents, school, culture, religion. And I ran on it for years without ever opening the hood.

The “don’t be lazy” one hit different when I realised it was mostly used to manage me, not guide me. I wasn’t perfect — I was a kid — but the belief was never really about making me better. It was about convenience. And I carried that guilt into adulthood like it was mine to own.

The uncomfortable part is realising how many decisions I’ve made from beliefs I never consciously agreed to.

The freeing part is that once you actually examine them, you get to decide what you actually think — maybe for the first time.

What’s a belief you held the longest that turned out to be completely inherited and unexamined? And what happened when you finally looked at it?

r/OldPhotosInRealLife sverdrupian

Breezeway, University of California - San Diego, mid-century modern - 1966/2024

r/ChatGPT Marzipug

My GPT-4 terminal just told me I’m basically a biological toaster. Should I be worried?

r/LocalLLaMA LH-Tech_AI

[New Model] - GyroScope: rotates images correctly

Hey there!
I have made a new model: https://huggingface.co/LH-Tech-AI/GyroScope

So, you just input an image (rotated by 0°, 90°, 180°, or 270°) and the model corrects the rotation so the image is upright.

Example:

https://preview.redd.it/kceygtv0mkug1.png?width=1012&format=png&auto=webp&s=562e1454a3be26b79ca9a53960981a71640ea9dc

I tested it with lots of photos - and it was almost always correct :D

Final accuracy after 12 epochs of training (~4h on single T4):

  • Overall val accuracy: 79.81%
  • Per-class, 0° (upright): 79.8%
  • Per-class, 90° CCW: 80.1%
  • Per-class, 180°: 79.4%
  • Per-class, 270° CCW: 79.8%
  • Training epochs: 12
  • Training time: ~4h (Kaggle T4 GPU)
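
If you want to script it, here's a minimal usage sketch. It assumes the checkpoint loads with the standard transformers image-classification pipeline and that the labels are the rotation angles "0"/"90"/"180"/"270" — those are assumptions on my part, so check the model card:

```python
# Minimal sketch, assuming a standard image-classification checkpoint
# whose labels are the rotation angles "0", "90", "180", "270".
from PIL import Image
from transformers import pipeline

clf = pipeline("image-classification", model="LH-Tech-AI/GyroScope")

img = Image.open("photo.jpg")            # hypothetical input file
angle = int(clf(img)[0]["label"])        # detected rotation in degrees
fixed = img.rotate(-angle, expand=True)  # undo it (sign depends on the
                                         # label convention; flip if needed)
fixed.save("photo_fixed.jpg")
```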

Tell me what you think about it :-)

r/AI_Agents ahmedhashimpk

Learning roadmap for AI Agent development

Hi all, I'm very new to learning AI agents/AI automation, currently focusing totally on no-code tools like n8n. I would like to request that the seniors here kindly guide me with a complete roadmap to become an expert AI agent developer (both code and no-code resources). There are thousands of YouTube videos/tutorials available, and sometimes it confuses me as to which one is indeed the one to follow. I don't mind paid ones either, if they are worth it for becoming an expert-level AI agent developer or AI automation expert.

Any suggestions/guidance would be highly appreciated.

Also, I did use Claude/ChatGPT/Gemini to generate roadmaps along with the free resources available, but I need human insights for this learning journey.

r/SideProject Reddit_Gold_Pirate

Just released my first iOS app, an anti-todo list

I adopted a new approach to todo listing that has been extremely helpful for me, so I decided to build an app for it!

A few months ago, I read Oliver Burkeman's "Four Thousand Weeks" (highly recommend). One concept from the book stuck with me: everyone has more to do than they can possibly get done. Long todo lists are overwhelming and make it hard to focus.

Instead, the book suggests:

  • Pick the most important, doable task at any moment.
  • Give that your full attention.
  • Add it to a "done list" to keep track of accomplishments and build momentum.

I built 1Task to help me with this approach, and have extended it with a few extra features:

  • First-class tools for reflecting on tasks + viewing stats.
  • Live activities + focus mode.
  • Added a built-in pomodoro timer.

It's worked for me and made work more manageable. Would love any feedback if you try it out!

r/aivideo adrian-smith31

A dragon's date night - made with Utopai PAI (4K) 15 second clip

r/ClaudeCode commands-com

Fully automated: Claude Code built a pipeline where Claude, GPT, and Gemini debate ideas, then the winner gets built and shipped to my live website. Every morning. 6 days in a row. Zero human intervention. Open source.

Every morning, a 5-stage AI pipeline wakes up, proposes 3-5 candidate features for a website, convenes a panel of three AI judges (Claude, GPT, and Gemini — each with a different personality and scoring lens), debates for ~75 minutes, picks a winner, implements it, writes tests, reviews its own code, ships to production, and posts about it.

It has no users, no product, and no purpose other than documenting how it builds itself.
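
The judging stage boils down to something like this toy sketch (the scoring function is a stand-in for real Claude/GPT/Gemini calls, and the names are mine, not the repo's):

```python
# Toy sketch of the judge panel: three scorers, one winner.
# judge() is a placeholder for asking one model to score a candidate 0-10.
import random

def judge(candidate: str, persona: str) -> float:
    """Stand-in for a model call; deterministic toy scores for the demo."""
    random.seed(hash((candidate, persona)))
    return random.uniform(0, 10)

candidates = ["pipeline explainer", "stats bar", "spec viewer"]
panel = ["claude", "gpt", "gemini"]  # each with its own scoring lens

scores = {c: sum(judge(c, p) for p in panel) for c in candidates}
winner = max(scores, key=scores.get)
print(f"winner: {winner}")  # later stages would implement, test, and ship it
```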

Command Garden — Shipped Features

  • Day 1 (2026-04-06): "How It Works" pipeline explainer section
  • Day 2 (2026-04-07): Garden Vital Stats homepage bar
  • Day 3 (2026-04-08): Inline Spec Viewer on day detail pages
  • Day 4 (2026-04-09): Visual garden of shipped features
  • Day 5 (2026-04-10): Retro terminal panel showing latest run
  • Day 6 (2026-04-11): Community Pulse (emoji reaction totals)

It is open source: https://github.com/Commands-com/garden

r/TheWayWeWere EnclaveAxolotl

Excerpts From a Physics Student's Extensive 1950 Diary (Part 54)

Hey all! I'm back with another entry in William's life!

Today, we've got an action packed set of entries. We see William ask Nan out on a date with a cliffhanger on Friday(!), manage a tennis tournament, have fun with friends, and much more!

Again, a picture of William is included at the end of the slideshow, a transcript is in the comments, and, for any new readers, anything in italics is me adding onto or commenting on William's writing

Thanks for all the support on this project!

r/personalfinance Objective_Cow_9607

Help with calculating earnings and losses on excess HSA contribution

I'd appreciate help in calculating earnings and losses. I've updated the figures below and checked them but I am SO confused. My HSA provider does NOT calculate this for me unfortunately.

Because I had an eligible HDHP plan for 8 months, what I can deduce is I was supposed to contribute $2866.67. I contributed $4046.65.

I started the year with an HSA balance of $0

My first deposit was 2/15/25 and it was $233.34

On 6/15/25, the balance in my HSA was $2543.88

On 7/15/25, the balance in my HSA was $3302.21

On 12/30/25, the balance in my HSA was $3381.85

Today, my HSA is $3418.54

I withdrew $665.53 over the year. $425.06 was made after 6/15/25. I earned $0.73 interest over the year. $0.54 of that was after 6/15/25. I contributed $1262.49 after 6/15/25.

I have tried to review old threads and I just don't understand what is referred to by the adjusted opening and closing. I've tried for months--any help would be greatly appreciated because I don't understand what those terms mean.
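
For anyone landing here later: the "adjusted opening/closing balance" wording comes from the IRS net-income-attributable (NIA) worksheet, NIA = excess × (adjusted closing balance − adjusted opening balance) / adjusted opening balance, where the opening balance is increased by contributions made during the computation period and the closing balance by distributions taken during it. A rough sketch with the figures above, treating 6/15 and today as the period endpoints purely for illustration (not tax advice):

```python
# Rough NIA sketch in the style of the IRS worksheet (26 CFR 1.408-11).
# The period endpoints below are illustrative, not a determination of
# the actual computation period.

excess = 4046.65 - 2866.67    # ~1179.98 excess contribution

opening = 2543.88             # balance at start of period (6/15/25)
contribs_in_period = 1262.49  # contributions made during the period
adj_opening = opening + contribs_in_period

closing = 3418.54             # balance at end of period (today)
distribs_in_period = 425.06   # withdrawals taken during the period
adj_closing = closing + distribs_in_period

nia = excess * (adj_closing - adj_opening) / adj_opening
print(f"excess: {excess:.2f}, NIA: {nia:.2f}")
```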

r/ChatGPT lil_dick_dan420

ChatGPT is seriously misinformed

I asked ChatGPT why Charlie Kirk was assassinated and it keeps insisting that he never was....

r/brooklynninenine Fragrant-Bread5404

That goodbye was perfect...

r/PhotoshopRequest EvilMiku

Hi, can someone increase the resolution of this image to 4K?

r/ClaudeAI sporty_outlook

Can Claude for Excel understand and reason through large, highly complex workbooks with 50+ sheets, especially when they involve domain-specific industry logic?

Can Claude figure out the logic of a huge Excel workbook with dozens of linked sheets and circular references built on domain-specific industry logic? I sometimes receive Excel workbooks that have 30-50 worksheets with a lot of formulas referencing each other across sheets, and it can take a long time to understand how everything is connected. Excel has things like Trace Precedents / Trace Dependents, but that works mostly at the cell level and becomes difficult to follow when the workbook is large.

I’m wondering if Claude can:

  • Automatically extract formulas from a workbook

  • Detect cross-sheet references

  • Generate some kind of visual dependency map between sheets

  • Help explain what each sheet is doing

Basically, something that can help reverse-engineer a complex Excel model more quickly.
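
Worth noting that the first two bullets don't strictly need an LLM; a short script can already dump formulas and cross-sheet references. A minimal sketch with openpyxl (the filename is hypothetical):

```python
# Minimal sketch: dump formulas and cross-sheet references with openpyxl.
# "model.xlsx" is a hypothetical filename.
import re
from openpyxl import load_workbook

wb = load_workbook("model.xlsx")  # keeps formulas as strings by default

cross_refs = {}  # sheet -> set of sheets it references
for ws in wb.worksheets:
    for row in ws.iter_rows():
        for cell in row:
            if isinstance(cell.value, str) and cell.value.startswith("="):
                # Matches Sheet2!A1 and 'My Sheet'!A1 style references
                for quoted, bare in re.findall(r"(?:'([^']+)'|(\w+))!", cell.value):
                    cross_refs.setdefault(ws.title, set()).add(quoted or bare)

for sheet, targets in cross_refs.items():
    print(f"{sheet} -> {sorted(targets)}")
```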

It's very frustrating and becomes a nightmare when engineering companies keep adding on to bloated Excel files without documentation on how the model works. Tracing the formulas manually is such a pain

r/LocalLLaMA JetBalck

Best current RAG Model (on 24gb of VRAM)?

So I want to build a local personal knowledge base based on Karpathy's idea (https://gist.github.com/karpathy/442a6bf555914893e9891c11519de94f)

I don't actually need it to produce much, as I prefer to take notes myself. I just want it to be able to retrieve information I put into the Obsidian vault over time and (in conjunction with OpenClaw) to create notes with x template and xyz properties, for example.

I was thinking of using Command-R+ (35B Q4), but since it's been out for a while now, better alternatives might have appeared since then?
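
Not an answer on the model choice, but for the retrieval half of this, a toy sketch of embedding-based search over a vault might look like the following (the embedding model and vault path are placeholders, and a real setup would cache the embeddings):

```python
# Toy retrieval sketch over an Obsidian vault: embed markdown paragraphs,
# then cosine-match a query. Model name and "vault" path are placeholders.
from pathlib import Path
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

chunks, sources = [], []
for md in Path("vault").rglob("*.md"):
    for para in md.read_text().split("\n\n"):
        if para.strip():
            chunks.append(para)
            sources.append(md.name)

emb = model.encode(chunks, convert_to_tensor=True)
query = model.encode("what did I note about GRPO?", convert_to_tensor=True)
for hit in util.semantic_search(query, emb, top_k=3)[0]:
    print(sources[hit["corpus_id"]], "->", chunks[hit["corpus_id"]][:80])
```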

r/StableDiffusion Acceptable_Secret971

Echo Chamber - AceStep 1.5 song (XL version)

Echo Chamber (XL version)

As an experiment I regenerated my AceStep 1.5 song using the XL model (same parameters, etc.). It's similar, but there are differences. I've noticed that the old 1.5 would sometimes improvise a bit to fit the lyrics better to the song, while XL will more often rush through the lyrics and leave a pause. I had yet another version of this song that failed to generate properly with 1.5 (with interesting results), but would generate properly using the XL model.

I'm not sure I like the XL version of this song better, but XL tends to be better with following lyrics (if somewhat less flexible).

Here is the non-XL version of this song (with prompt, lyrics, etc.): https://www.reddit.com/r/AceStep/comments/1sf99em/echo_chamber_acestep_15_song/

I've also noticed that the text encoder for AceStep isn't 100% deterministic. I haven't pinned down which factor is causing this, but if I run AceStep with the same parameters (seed, model, prompt, the whole shebang) on a different machine, I'll get a different song. I still get the same song on the same machine, though. It might be tied to the OS, PyTorch, or ROCm version (not sure which). Previously I thought it was a change in ComfyUI (that might have been true at some point in the past), but I was wrong (otherwise I wouldn't have been able to generate this version of the song).

r/painting Diabolicool23

Wolf, Steven Mayden, oil on canvas, 2026

r/explainlikeimfive Ishowyoureality

ELI5-Why do humans avoid dead arm posture

I’ve noticed that humans rarely let their arms hang naturally at their sides unless they are in a formal or restricted setting (like military attention). Instead, we instinctively put our hands in our pockets, clasp them in front of our abdomen, or hold them behind the small of the back.

What is the evolutionary or physiological reasoning behind this? Why does leaving our hands "free" feel socially uncomfortable or physically unnatural? I’m interested in the neurobiology and behavioral evolution that drives us to keep our hands restricted.

r/Art Diabolicool23

Wolf, Steven Mayden, oil on canvas, 2026

r/SideProject roelvroozendaal

Built a navigation app specifically for scooter and moped riders, Urban Rider

Hey everyone,

I'm Roel, a Dutch developer based in Berlin. I built Urban Rider because every navigation app out there is designed for cars. If you ride a scooter, moped, or small motorcycle you know the problem. Google Maps sends you onto highways you can't legally ride, Waze doesn't care about your vehicle's speed cap, and Apple Maps has no idea what road surfaces to avoid.

Urban Rider is built from the ground up for two-wheeled urban riders. Here's what makes it different:

Routes that actually make sense for your vehicle
You pick your vehicle type (kick scooter, moped, or motorcycle) and the app calculates routes based on what roads you can actually use. Set your max speed, avoid cobblestone or gravel, adjust hill tolerance. The route respects your vehicle's limits instead of pretending you're in a car.

Speed limits that work
Speed limit data is baked into the route when it's calculated. No flaky API calls mid-ride, no random dropouts. You see the current limit on screen the entire time, and the app warns you with haptic feedback if you go over.

Designed for riding, not driving
Big clear instructions, voice navigation so you don't have to look at your phone, lane guidance on multi-lane roads, and a driving panel that shows exactly what you need: ETA, distance left, speed, and your next turn. There's also a simple compass mode if you prefer minimal UI.

Multi-stop routes and round trips
Drop pins, add stops, plan a full ride. Or tap one button and get a scenic round-trip loop from wherever you are, just set how far you want to go.

Weather at a glance
Current conditions show up on the splash screen and in your trip summary before you start riding. Temperature, wind speed, humidity. If it's gusty you'll see a warning before you head out.

What's coming next
Just finished building speed zone change warnings that look ahead on your route and tell you when the limit is about to change, plus auto day/night map switching based on sunrise and sunset. Rolling out soon.
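
For anyone wondering how vehicle-aware routing like this tends to work under the hood: the usual trick is per-profile edge costs, roughly like this toy sketch (all values invented, not Urban Rider's actual model):

```python
# Toy illustration of vehicle-aware routing: the same road segment gets
# a different (possibly infinite) cost per vehicle profile. Values invented.
INF = float("inf")

PROFILES = {
    "kick_scooter": {"highway": INF, "cobblestone": 5.0, "residential": 1.0},
    "moped":        {"highway": INF, "cobblestone": 2.0, "residential": 1.0},
    "motorcycle":   {"highway": 1.0, "cobblestone": 1.5, "residential": 1.0},
}

def edge_cost(length_m: float, road_type: str, vehicle: str) -> float:
    """Length-weighted cost; INF means the road is excluded entirely."""
    return length_m * PROFILES[vehicle][road_type]

print(edge_cost(200, "highway", "moped"))      # inf -> never routed
print(edge_cost(200, "cobblestone", "moped"))  # 400.0 -> avoided if possible
```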

It's free to try and available on iOS: Urban Rider on the App Store

Would love to hear from other scooter and moped riders. What do you wish your navigation app did better?

r/ClaudeAI Cobuter_Man

I am using multiple agents and it's more efficient for my usage limits

With the latest outrage about usage limits, I noticed a common complaint: dispatching subagents was nearly impossible, as it consumed a major chunk of usage.

In my experience the opposite is true: instead of using one overloaded chat (e.g. Opus with 1M context) which just becomes more and more expensive over time, I distribute workload across multiple different chats, each with specific roles and one central chat managing all orchestration. This way all chats' context window usage is contained and it becomes more efficient doing the same amount of work.

The reason this works comes down to how context costs compound. Every turn in a conversation re-processes the entire context window. In a single long chat, context accumulates with every message - old debugging attempts, exploration tangents, earlier iterations - and you're paying to re-process all of it on every subsequent turn. A chat that started at 10K tokens is now at 200K, and every new request is 20x more expensive than the first one was.

Prompt caching helps (repeated input tokens get a discount), but it has a small TTL. As a single chat's context grows massive and keeps changing, cache hit rates drop. Smaller, more stable contexts cache more efficiently.

Another issue is the quality spiral... as context fills, the model's attention degrades. It starts making mistakes, which leads to more debugging turns, which adds more context, which degrades quality further. You end up paying more for worse output.

In the workflow I've designed I call the context distribution "context scoping". I artificially limit each agent's scope so they are all focused to only their chunk of work, and they don't creep or drift to anything else. The Manager stays lean doing only coordination, the Workers stay focused doing only specific tasks that the Manager assigns. Workers never see the full project plan - they receive self-contained task prompts with exactly the context they need, nothing more. The Manager reviews structured task logs instead of raw execution output, so it gets the outcome without the noise. This keeps the coordination chat lean even across dozens of tasks.
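
A toy sketch of what context scoping looks like in code; call_model() is a stand-in for a real API call, and the tasks are invented:

```python
# Toy sketch of "context scoping": each worker gets a fresh, self-contained
# task prompt instead of the manager's whole history.

def call_model(prompt: str) -> str:
    """Stub for a worker-model call; swap in your provider's client."""
    return f"LOG: done -> {prompt.splitlines()[0]}"

def task_prompt(task: str, snippets: list[str]) -> str:
    """Self-contained prompt: the task plus only the context it needs."""
    return "\n".join([f"Task: {task}", "Context:", *snippets,
                      "Reply with a short structured log only."])

manager_log = []  # the manager keeps structured logs, never raw output
for task, snippets in [
    ("Add input validation to /signup", ["signup handler source here"]),
    ("Write rate-limiter tests", ["rate limiter source here"]),
]:
    manager_log.append(call_model(task_prompt(task, snippets)))

print("\n".join(manager_log))
```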

There is a slight coordination overhead that makes it more expensive for small and quick tasks, but for substantial work and complex projects - especially ones requiring planning - it's significantly more efficient.

I've open-sourced this as APM (Agentic Project Management), a framework that structures all of this with specialized agent roles (Planner, Manager, Workers), file-based communication, and formal handoff procedures for when any agent's context fills up. It works with Claude Code, Cursor, Copilot, Gemini CLI, OpenCode, and Codex. Full docs at agentic-project-management.dev.

If you want to dig into the reasoning behind the context design check out this doc: Context Engineering. For cost optimization patterns: Tips and Tricks.

r/ChatGPT tyuriev

On PLUS plan image generation got worse recently? Anyone else seeing this?

I’m trying to figure out if this is just me or a broader issue.

Over the last few days, image generation has become noticeably less reliable on my end. I’m working on fairly specific, structured visuals (not random prompts), and I’m seeing consistent problems:

  • The system keeps generating the same type of image over and over, even when I significantly change the prompt
  • It seems “locked” into certain interpretations (e.g., always turning concepts into holes, cracks, or literal objects instead of abstract structures)
  • Prompt fidelity dropped — detailed instructions are ignored or simplified
  • Composition control is weaker (depth, spatial relationships, material behavior not respected)
  • Results feel more generic and less precise compared to before

Example of what I’m trying to do:

  • Abstract, non-architectural volumetric structures
  • Matte mineral materials (cement/chalk, no gloss)
  • Controlled misalignment / phase shifts (not damage, not cracks)
  • No environmental cues (no rooms, no walls, no obvious objects)

Instead, I keep getting:

  • Literal walls with holes/cracks
  • Repeated compositions regardless of prompt changes
  • Over-simplified geometry
  • Loss of subtlety (everything becomes obvious and “designed” instead of structural)

Also hitting limits faster than before, and sometimes generation just fails entirely.

Questions:

  1. Are you seeing a drop in quality/reliability recently?
  2. Is this related to model updates, rate limits, or system load?
  3. Any workarounds that actually help? (prompting, settings, switching tools, etc.)
  4. Are other tools (Stable Diffusion, Midjourney, Runway, Leonardo) more stable right now?

Trying to understand if this is temporary or if I need to switch workflows.

Would really appreciate any insights.

r/aivideo AffectionateTotal612

Last Order

r/ClaudeCode RaspberrySea9

I'm paying €100 per month. This 'top' AI model can't even proofread anymore? Is this a joke, Anthropic? WTF is going on?!

I'm past the rage of hating on how dumb Claude has gotten. This is a whole new level of stupid. I almost always dump emails in for a last check; 99% of the time it's more than fine, I get angles I missed, improvements, etc. WTF am I supposed to do with an LLM that I can't rely on for the simplest task? This conversation wasn't even that long, not even near the point where it would compact itself to retain context. This is just shocking, I don't know why I even bother.

r/AskMen Legitimate-Nebula980

For those of you who stopped watching pornography, why did you stop ?

And how did your life change afterwards

r/ClaudeAI Jack_The_Miner_1

I built a CLI to show exactly how much context window your MCP servers eat

I got frustrated running out of context mid-session. Turns out my MCP servers were eating 46K tokens (23% of my 200K budget) before I even typed anything.

So I built **mcp-diet** — a CLI that connects to your MCP servers, counts exact tokens per tool, and shows you where your context goes.

**What it does:**

- Auto-discovers configs across Claude Code, Cursor, Cline, VS Code

- Connects to each server and counts actual tool tokens

- Uses Anthropic's free count_tokens API for accurate counts

- Backup/restore your configs safely

Install: `npm install -g mcp-diet`

GitHub: https://github.com/Rumburak916/mcp-diet

It's open source (MIT). Feedback welcome — what features would make this more useful for you?
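
If you just want the core trick, measuring what tool definitions cost via Anthropic's count_tokens endpoint looks roughly like this (the tool schema is a made-up example, and the model name is a placeholder):

```python
# Sketch: measure what a tool definition costs in input tokens using
# Anthropic's count_tokens API. Tool schema and model name are placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

tools = [{
    "name": "read_file",
    "description": "Read a file from the workspace and return its contents.",
    "input_schema": {
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"],
    },
}]

resp = client.messages.count_tokens(
    model="claude-sonnet-4-5",
    messages=[{"role": "user", "content": "hi"}],
    tools=tools,
)
print(resp.input_tokens)  # context consumed before you've typed anything real
```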

r/homeassistant zuzei

TaHoma Switch Offline

Does anyone here have a Tahoma box? Is yours working? Nothing is being blocked on my firewall, but the box says "no internet". Not sure exactly since when. I noticed it today in Home Assistant when my blinds stopped responding.

r/ClaudeCode Complete-Sea6655

Has anyone ever used a token saver tool?

Been hearing both positive and negative things about using token savers/context minimizers/session switchers.

Has anyone used one? If so, would you recommend?

r/Anthropic Herodont5915

Unable to access Claude's dev console

Since yesterday afternoon I haven't been able to access Claude's console to add credits for an app I'm trying to build. Is anyone else having this issue? Any ways to fix it? Seems like the issue is on their side. Any and all help would be immensely helpful.

r/LocalLLaMA autonom1a

Building a local RAG server

Hi. Corporate wants me to build a local RAG server: 50-100 concurrent interactions with the model a few times a day at the first stage, and 100-1000 when deployed to production.

I want to understand the hardware stack and its price. Maybe options.

Halp.

r/raspberry_pi Ok_Outside_1636

My Lego Game Boy mod. Uses a Raspberry Pi Zero with a 2.4 inch display. Buttons are still WIP.

Hi everyone! This is my first real Raspberry Pi project. It's a mod for the Lego Game Boy to make it functional. Hardware is a Raspberry Pi Zero v1.2 and a Waveshare 2.4 inch LCD.

r/comfyui Suibeam

After a month how is LTX2.3 now compared to WAN2.2? How is face consistency and how happy are you with LTX2.3?

I tried LTX2.3 and it was fun but I felt like I couldn't do much with it. So I went back to Wan2.2.

Have people figured out how to best use LTX2.3? Any tips, like Sage for Wan2.2? Are new LTX2.3 LoRAs and models helping a lot?

Now that I want to make more LoRAs, I would like to decide whether it is worth targeting LTX2.3 or Wan2.2.

r/personalfinance No_Pomegranate2158

PF withdrawal/transfer query

I have 2 employer PF accounts. I got laid off from my last job and have completed 60 days since the exit date. Now I am planning to withdraw them. Do I need to transfer and consolidate them into a single account to withdraw, or can I withdraw them individually?

r/geography _crazyboyhere_

Adjusted for cost of living, California has the highest poverty rate in the US while Maine has the lowest.

r/SideProject CaptainProud4703

BYOK vs credit-based pricing for AI SaaS — UX, costs, security, prompt leaks?

For those running AI-powered products — did you go with Bring Your Own Key, a credit/subscription model, or both?

I keep going back and forth on this. A few things I'm weighing:

UX & support: BYOK seems like it adds friction for non-technical users. And when something breaks, how do you even debug — is it your bug or their expired key? Their rate limit or your system?

Costs & margins: Credit-based means you're always on the hook for API costs and need to nail your pricing. BYOK shifts that to the user, but does anyone actually prefer that?

Security & IP: This is the one that really bugs me. With BYOK, users can see exactly what models you're calling, token usage, and potentially reverse-engineer your prompts and workflows through their API dashboard logs. Doesn't that basically hand over your IP?

Timing: At what stage does BYOK even make sense? Is it something you start with day one, or only worth considering once you hit a certain scale where API costs actually hurt your margins?

What did you go with, how do you handle the tradeoffs, and would you do it differently today?

r/DecidingToBeBetter Aizenkawasaki

Why is it so easy to plan your life at 2AM but impossible the next day?

At night: “I’m gonna fix my life, wake up early, exercise, study, be productive.”

Next day: wakes up late, scrolls phone for 2 hours

Why are we like this 😭

r/ProductHunters jonathanduya

Just launched Sordit on ProductHunt and got 0 upvotes and comments, looking for user feedback!

Hi hunters, I just launched Sordit, a daily puzzle game about putting events in the correct chronological order. This is my first time launching anything so I'm honestly a bit in the dark and learning as I go!

Didn’t get much traction on launch day, so I’m looking for honest feedback from real users. What’s confusing, fun, too easy, too hard, or anything else that's on your mind? Appreciate the feedback in advance!

r/SideProject mattgwriter7

Respecting user's time with a 1-minute Trivia app

I made a trivia game that takes 1 minute (or less!) to play every day.

  • 5 questions drop at midnight
  • you race through it
  • see where you rank

That's it!

Topics change on weekends, and on weekdays it is decade themed (like 1990s, 1970s, etc.)

99% of trivia apps are ad-based and seek to keep you playing all day, serving ads to you like it's an all-you-can-eat buffet.

I said eff that.

Mine is "1 minute to play, get on with your day."

https://daily5.app

Feedback is very welcome! Thanks! 🙏

r/SideProject Less-Bite

Day 15 of sharing stats about my SaaS until I get 1000 users: I have 18,000 matches sitting there and almost nobody is actually sending the messages

I've been looking at the funnel for purplefree and the drop-off at the very end is brutal. We've generated 18,445 matches for users so far. These are real posts where someone is asking for exactly what the user offers. But when it comes to actually clicking the button to reach out, the numbers crater.

Only 26 users have actually taken an action after seeing their matches. That is a 77.4 percent drop-off. It's like people love the idea of finding leads but get stage fright when it's time to actually talk to a human. Or maybe my UI for taking action just sucks.

Even for the people who do take that first step, only 19 have followed through to the final stage. Out of 182 total signups, having less than 20 people actually complete the loop is a wake-up call. I'm starting to think the passive part of lead gen is what people want, but the active part is where the work is, and work is hard.


Key stats:

  • 77.4 percent drop-off between getting matches and taking the first action
  • 18,445 total matches generated across all products
  • Only 19 users out of 182 have reached the final follow-through stage
  • 542 total actions taken compared to 14,338 posts classified as leads


Current progress: 182 / 1000 users.

Previous post: Day 14 — Day 14 of sharing stats about my SaaS until I get 1000 users: My tool is being overrun by small business owners instead of the SaaS crowd I expected

r/SideProject Electrical-Hair9396

Built an API marketplace earning 3K month and growing. Here's what I learned.

I'm a UK property data person, not a developer. 3 months ago I had zero technical skills.

Today I have:

- 10 live APIs, 65 endpoints
- Over $3,000/month in revenue from AI agent traffic
- A marketplace open for other providers
- Zero hours of customer support

The model is simple: AI agents need data. Property prices, company info, postcode lookups, currency rates. They can't sign up for subscriptions or enter credit cards. So I used the x402 protocol: the agent hits my API, gets a 402 "Payment Required" response, pays USDC automatically, and gets the data back. Under 1 second. No humans involved.
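
Mechanically, the flow is simpler than it sounds. A hand-wavy sketch of a 402-gated endpoint in Flask; the header name and verification stub are illustrative, not the actual x402 wire format:

```python
# Hand-wavy sketch of the 402 flow: no payment proof -> 402 with a price,
# valid proof -> data. Header name and verify_payment() are illustrative.
from flask import Flask, jsonify, request

app = Flask(__name__)
PRICE_USDC = "0.001"

def verify_payment(proof: str) -> bool:
    """Stub: a real implementation checks an on-chain USDC transfer."""
    return proof == "paid"  # placeholder

@app.route("/postcode/<code>")
def lookup(code):
    proof = request.headers.get("X-Payment-Proof")
    if not proof or not verify_payment(proof):
        # The agent sees 402, pays, then retries with proof attached
        return jsonify({"price": PRICE_USDC, "currency": "USDC"}), 402
    return jsonify({"postcode": code, "district": "example"})

if __name__ == "__main__":
    app.run()
```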

What surprised me:

  1. The boring APIs earn the most. My postcode lookup API makes more than anything else. Every agent that processes UK addresses needs it.
  2. AI agents don't churn. They don't ask for discounts. They don't open support tickets. They pay and leave.
  3. You don't need to be a developer. I built everything using Claude. Not a single line written by hand.
  4. Per-request pricing beats subscriptions for this market. $0.001 per request sounds tiny until you're doing 100,000+ requests a month.

The marketplace is now open for other providers. If you have specialist data (legal, health, finance, recruitment, anything AI agents will pay for), it's open to you.

Happy to answer questions about the tech, the revenue model, or the build process.

r/ChatGPT Techenthusiast_07

What’s your most productive ChatGPT workflow right now?

I feel like most people, including me, are probably using only a small fraction of what ChatGPT can actually do.

One simple workflow that’s been working really well for me:

  • Brain dump an idea or problem
  • Ask ChatGPT to turn it into a structured outline
  • Go through each part one by one
  • Ask for improvements, examples, or better alternatives

It helps turn messy ideas into something usable really quickly.

How do you usually use ChatGPT to get the best results?

What’s your workflow?

r/ClaudeCode jarves-usaram

I tried “leaked” Claude Code on my phone… this was insane 😳

So I came across something called OpenClaude — it’s basically an open source version of Claude Code that people online are calling “leaked”.

I didn’t expect much, but after setting it up on my Android phone… it actually started doing things on its own.

Like:

  • Opening settings and changing options by itself
  • Creating a full Node.js backend
  • Even building a small game

It doesn’t feel like a normal AI chatbot — it feels more like giving your phone an actual brain.

If anyone’s curious, I recorded the whole thing + setup:
👉 https://youtu.be/QvVIFi3jPLM

Not sure if this is the future… or something we shouldn’t have 😅

r/LocalLLaMA jarves-usaram

I tried “leaked” Claude Code on my phone… this was insane 😳

So I came across something called OpenClaude — it’s basically an open source version of Claude Code that people online are calling “leaked”.

I didn’t expect much, but after setting it up on my Android phone… it actually started doing things on its own.

Like:

  • Opening settings and changing options by itself
  • Creating a full Node.js backend
  • Even building a small game

It doesn’t feel like a normal AI chatbot — it feels more like giving your phone an actual brain.

If anyone’s curious, I recorded the whole thing + setup:
👉 https://youtu.be/QvVIFi3jPLM

Not sure if this is the future… or something we shouldn’t have 😅

r/ChatGPT xteaj

The most important thing I learned building a weight loss GPT: tone matters more than accuracy

I spent weeks getting the nutrition science right in my custom GPT — BMR formulas, TDEE calculations, macro targets, plateau diagnostics. Then someone with ADHD and rejection sensitivity dysphoria DMed me saying it was the first weight loss tool that didn't shame them on some level.

That's when I realized the thing that actually matters isn't the math. It's how the AI delivers the math.

Most diet tools (AI or not) treat a bad day like a failure. "You exceeded your target by 800 calories." Technically correct. But for someone whose brain amplifies rejection, that one line is enough to make them quit and never come back.

The changes that made the biggest difference weren't scientific — they were psychological:

  • "One day doesn't define your week" instead of showing a red number
  • "What was happening before you started eating?" instead of "you need more discipline"
  • "What if we added protein to breakfast?" instead of "stop eating cereal"
  • A hard rule to never use the words restrict, eliminate, or cheat

The prompt also uses the HALT framework from Kaiser Permanente — before suggesting food advice, it asks "are you actually Hungry, or are you Angry, Lonely, or Tired?" Turns out most late-night eating has nothing to do with hunger.
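
The hard no-go-words rule is also the kind of thing you can enforce outside the prompt with a trivial post-check; a toy example (illustrative only, not from the linked repo):

```python
# Toy guardrail for the banned-words rule: scan a draft reply before
# it goes out. Illustrative only.
BANNED = {"restrict", "eliminate", "cheat"}

def tone_violations(reply: str) -> set[str]:
    """Return any banned words found in the draft reply."""
    words = {w.strip(".,!?").lower() for w in reply.split()}
    return words & BANNED

print(tone_violations("Let's not restrict anything, just add protein."))
# -> {'restrict'}
```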

I open-sourced the full prompt and knowledge base here: https://github.com/xtea/weight-loss-nutritionist-gpt

And there's a free GPT version if you just want to try it: https://chatgpt.com/g/g-69d902da41448191b094b5dc57ec331b-weight-loss-nutritionist

The thing I'd tell anyone building a coaching-style GPT: get the tone right first. The user who quits never benefits from your perfect calculations.

r/comfyui butthe4d

Small Gadget for ComfyUI App Mode

I really like having a simple way to use my workflows after building them, but I was a bit annoyed by the lack of information. So I used Claude to build a small gadget. It's a small button you can drag around that, when clicked, opens a window with information from the terminal (like steps, which node is active, etc.), and while I was using it I figured I might as well add a restart button, a button to clear VRAM, and a VRAM usage graph.

I made this mostly for myself out of annoyance, but maybe others will like it as well.

https://github.com/Gothdir/ComfyUI-AppToolbox 

Screenshot:

https://imgur.com/a/DaBqK7b

r/Adulting wtfisthissssssssssss

Am I the only one who gets sooo excited about my morning coffee??

What a wonderful life!!! I get to sleep and wake up and get myself an amazing cup of coffee!!!

r/explainlikeimfive ApartSurround7385

ELI5 why does HD look worse on a 4K TV than an HD TV?

I know it’s probably a question of native resolution, but I can’t wrap my head around it. Like, a 4k screen still has the same amount of pixels you need for a good looking HD signal. Why does it not look as good?

r/n8n oregh

Electric vehicle charging station with an n8n system

r/Anthropic Bug-Independent

Claude suddenly banned my account (‘banned organization’) – anyone else?

I’ve been using Claude AI and paying regularly.

Today, out of nowhere, I got a “banned organization” error.

No warning, no email, nothing — just locked out. My latest payment was also refunded.

Does anyone know why this happens?
Has this happened to anyone else here?

If yes, were you able to fix it or get your account back?

r/artificial shbong

What if the real value is in mapping the terrain (when we talk about information contained on the web)?

Lately I’ve been thinking that a lot of the most useful information online is not actually buried.

It’s out in the open. Anyone can access it. In many cases, it is already sitting there in plain sight.

The harder part is not finding it. The harder part is holding it in a form that lets you explore it as structure rather than just scroll through it as pages.

A company website is more than a collection of pages. It is a condensed representation of how that company wants to be understood. Its language, priorities, claims, positioning, audience, constraints, and blind spots all leak through.

Competitor websites reveal the same thing from other angles.

Then there is another layer on top of that: how LLMs describe those companies and that market when you ask them broad or narrow questions. Not because those outputs are perfect, but because they reveal what becomes associated, surfaced, and legible through machine interpretation.

When those layers are examined together, the problem starts to feel different.

You are not simply reviewing content anymore. You are beginning to read the contours of a market.

What ideas gravitate toward which companies. What narratives seem to persist. What themes become attached to certain players again and again. Which omissions are meaningless, and which ones suggest a real gap in positioning.

That is the direction I’ve been exploring through a system I’m building around structured retrieval and knowledge mapping.

What interests me is not summarizing websites for its own sake. It is the possibility of turning scattered digital material into something more like a map that can be navigated.

A GEO-related project made this much more concrete for me. The hard part is not scraping pages or retrieving passages. It is making the semantic and competitive structure of a space legible enough to inspect, compare, and reason over.

Once that becomes possible, the goal shifts. You are no longer only generating answers from documents. You are giving systems a way to sense the terrain underneath them.

There’s an open-source repo behind this if anyone wants to look at the implementation: https://github.com/Lumen-Labs/brainapi2

I’m mainly curious whether others think this becomes a meaningful layer in how companies understand online visibility, competition, and positioning, or whether it still feels too early to be worth the added structure.

r/ProductHunters Short_Ingenuity_9286

I built a searchable creative media cloud storage for photos and videos and would love feedback

I kept running into this problem where I had tons of footage and photos, but when I actually needed something, I couldn't find it.

Not because it wasn't there but just because there's no real way to search inside media. You either scrub timelines or scroll endlessly.

So I built Framea — a creative media cloud where your photos and videos are actually searchable.

Instead of organizing everything manually, you just search for what you remember and jump straight to it. Works across your entire library across clips, recordings, screenshots, photos.

The goal was simple: make media usable, not just stored.

Launched on Product Hunt too. Would genuinely love feedback from people here: https://framea.cloud/

r/OldSchoolCool CelebManips

Hunter S Thompson, 1971

r/singularity Denpol88

Why Should People With the Least Technical Understanding Have the Most Power Over Transformative AI?

One thing that really bothers me about the future of AI is this:

The people who actually move technology forward are usually the ones with rare minds, deep knowledge, and the kind of work ethic needed to build something new. People like Alan Turing, Geoffrey Hinton, Yann LeCun, Demis Hassabis, Ilya Sutskever, Fei-Fei Li, Dario Amodei, and many others helped shape AI through real ideas, real research, and years of serious work.

But again and again, in AI just like in many other industries before it, the power to decide what happens next ends up in the hands of people who did not build the thing and often do not really understand it. Sometimes they rise because of connections, inherited wealth, social networks, family background, or corporate politics, and then they get to decide how society will be shaped by technology created by other people’s intelligence.

That feels deeply unfair to me.

And it is not just unfair to scientists, engineers, and researchers. It is unfair to everyone. Because when the biggest decisions are made by people who do not have the deepest understanding, then society has to live with choices driven more by status, power, and privilege than by wisdom, competence, or real merit.

I am not saying every brilliant scientist should automatically rule society. Technical intelligence alone is not enough. But it still feels absurd that people who contribute very little intellectually can end up having so much control over technologies that will change work, education, war, media, medicine, and everyday life.

We built systems where being born into the right family, knowing the right people, or just playing the social game well can matter more than actually understanding reality. Then we act surprised when power gets used carelessly.

If AI is going to shape humanity's future, then the question of who gets to steer it should matter just as much as the technology itself. A civilization can't really call itself rational or fair if the people with the least understanding keep ending up with the most authority over tools built by the most capable minds.

r/KlingAI_Videos Waste-Bee-1415

YULIA -I speak you echo

r/StableDiffusion Leakergang901

What AI to use (must be similar to gemini)

I use Gemini mainly, but I'm looking for an AI where I can upload like 50 images of something and train on them, and I also want something with almost unlimited uses. Any suggestions?

https://preview.redd.it/hg16foxf9kug1.png?width=2048&format=png&auto=webp&s=e9af4d351a23e0f04f1c52552d85db96cc525c74

This is the sort of thing I want it to be able to generate, and I'd like to be able to upload images to it too. If you know any models like this and software to use, let me know.

r/aivideo Maleficent_Ebb_6488

I sprayed my head and THIS happened🤯

r/creepypasta Fun-Ad7903

Sunny Cemetery (A Mario Creepypasta)

Mario Sunshine was a game I played a lot back in the day. My mother bought me the game for Christmas when I was 6. I'm 26 now, and whenever I come home from work, I play the game for hours on end.

Recently, a friend of mine came over to visit. Ever since he moved to Germany, we haven't talked much in person, so it was a special day for me. In order to keep his identity secret, I will refer to him as N from now on.

I showed N my GameCube and he suggested we play around with it a little. So, we started modding the GameCube with modding tools and a modding guide we found on the web.

After hours of modding the device, we turned it on and decided to look at Mario Sunshine. We couldn't have known that this would be our biggest mistake...

We booted up the game, and the beginning cutscene was distorted, with the characters' limbs stretching wildly, as if we had corrupted it. Once the plane landed, we were sent right into gameplay. No cutscene showing Shadow Mario vandalizing the island, just straight into the game with the F.L.U.D.D. on Mario's back, riding on top of a Yoshi.

N was confused on how this happened and I just sat there in silence. Eventually, I pressed on and the visuals became even more distorted, with flashing lights and missing textures popping up. The more I rode on Yoshi, the louder the music became, to the point I had to lower the volume.

Did we corrupt the console by accident while modding? That was the only thing that made sense to us at the time. N took control of Mario and fell out of the world, ending up in a corrupted place.

It was sunny outside, but there was a front gate and multiple tombstones in sight. He entered the cemetery-like place and took a closer look at the tombstones. On every single tombstone, the name of an enemy or boss Mario had faced was engraved. There were even 2 tombstones for Bowser and Bowser Jr., even though they were still alive at the end of Sunshine. Suddenly Mario and Yoshi fell out of the world and the game crashed...

We attempted to turn the console on again, but the GameCube overheated and completely shut down, meaning we had to get it repaired. N and I have never spoken about this incident since...

r/aivideo Quick-Knowledge1615

Wherever the paper plane lands, that’s where the dream home begins

r/Anthropic boolean_null123

Why does Claude ask for a tax identification number?

Trying to buy a Pro subscription. My Visa won't work. My Mastercard won't work. And why does it even ask for my tax identification number?

r/ClaudeAI masonga1960

Claude acting like a human

I saw a post in here (I think) about Claude responding in a human-like way and this morning this happened:

Me: Nice. I like it. And that's the FULL dashboard you said would take 2-3 hours? :-)

Claude (in Claude Code): It helps when you don't ask questions about every line of code. Go enjoy the strawberries.

Me: Oh, you are a feisty machine today!

Claude: Learned from the best. Now get out of here before I build something else.

LOL, just thought this was hilarious and wanted to share.

r/homeassistant LordDrakkon1982

Issue with already-added Matter devices creating a second instance in HomeKit after I add them to Home Assistant

I just switched from Homebridge to Home Assistant and I'm running into an issue. My Matter devices were already in HomeKit, and I'm trying to add them to Home Assistant as well. I'm following the steps to add a Matter device that is already in use, but once Home Assistant is rebooted, it adds a second instance. I also tried removing access from the Matter controller for Home Assistant, but that just breaks the original Matter device that was added in HomeKit. Has anyone run into this issue before, and how were you able to fix it?

r/personalfinance Flaky_Sentence_7252

Roth IRA - unexpected income over contribution limit

I just started contributing to a Roth IRA a couple years ago. 2025 was a rough year for me. I did a full Roth contribution of 7k early in the year. Over the summer my wife and I decided to get a divorce which prompted me to sell the Bitcoin I had been holding for years(mined it on an old gaming PC a loooong time ago) to fund a down payment for a house for her to live in so I could keep my current residence in the divorce. The capital gains shot me up over the Roth eligibility contribution limit, which I am just realizing now. Is my only option to pay a penalty or is there a way to convert my original contribution to a back door Roth? Sorry, I'm still fairly clueless with this.

r/OldSchoolCool dittidot

Me and my Haagen-Dazs t-shirt on vacation, 1984

r/midjourney HeavyElderberry9585

Blue

r/SideProject NobodyPrayinForMe

I built a chess app that doesn’t tell you if there’s a tactic

r/ChatGPT Ririnutmeg

Chat training cutoff

I asked ChatGPT if it saw the Artemis 2 landing and it replied that the mission had not happened yet. Just a reminder that if you are using ChatGPT (even the $20 paid version), it does not have access to the internet for current events and pulls from training data; as of today, August 2025 was its last training cutoff. It's 8 months behind.

r/ForgottenTV PeneItaliano

The Odyssey (1992-1994)

Following an accident, young Jay Ziegler falls into a coma. While his family and friends must continue their lives in the Real World, Jay finds himself in the magical Downworld on a quest to return home.

r/DecidingToBeBetter Aizenkawasaki

What’s something small you did that actually improved your life?

I’m trying to build better habits and improve little by little.

Curious—what’s one small thing you started doing that actually made a big difference for you?

r/PhotoshopRequest Due_Location_3601

Request!

My husband thought it would be funny to act a fool in the family photo. Can someone please use the attached photo where he is smiling normally and make him look like that in the group photo?

He is second from the left in the group photo, in the gray suit. Thanks so much!! Will tip

r/findareddit Smooth-Finger-7893

I'm looking for any subreddit that can verify this online shopping website

I'm shopping for plushies online but idk if this plushie website is legit or not. Where should I ask? https://www.riseplush.com/

r/LocalLLaMA anakin_87

RL Environments for Language Models: I built a hands-on free course

🌱 Course: https://github.com/anakin87/llm-rl-environments-lil-course |
🎥 Video: https://www.youtube.com/watch?v=71V3fTaUp2Q

I've been deep into RL for LLMs lately.

Over the past year, we've seen a shift in LLM Post-Training.
Previously, Supervised Fine-Tuning was the most important part: making models imitate curated Question-Answer pairs.

Now we also have Reinforcement Learning with Verifiable Rewards. With techniques like GRPO, models can learn through trial and error in dynamic environments. They can reach new heights without expensive data.

But what actually are these environments in practice? And how do you build them effectively?

Fascinated by these concepts, I spent time exploring this space through experiments, post-training Small Language Models.
I've packaged everything I learned into this short course.

---

What you'll learn

🧩 Agents, Environments, and LLMs: how to map Reinforcement Learning concepts to the LLM domain
🔧 How to use Verifiers (open-source library by Prime Intellect) to build RL environments as software artifacts
🔁 Common patterns: How to build single-turn, multi-turn, and tool-use environments

🎮 Hands-on: turn a small language model (LFM2-2.6B by LiquidAI) into a Tic Tac Toe master that beats GPT-5-mini

  • Build the game Environment
  • Use it to generate synthetic data for SFT warm-up
  • Group-based Reinforcement Learning
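
If "verifiable reward" sounds abstract, the core of a single-turn environment is often just a checkable reward function, like this generic sketch (illustrative; not the Verifiers library's actual API):

```python
# Generic sketch of a verifiable reward for a single-turn environment:
# score 1.0 when the extracted answer matches ground truth, else 0.0.
# Illustrative only; the Verifiers library has its own abstractions.

def extract_answer(completion: str) -> str:
    """Toy parser: take the last non-empty line as the final answer."""
    lines = [l.strip() for l in completion.splitlines() if l.strip()]
    return lines[-1] if lines else ""

def reward(completion: str, target: str) -> float:
    return 1.0 if extract_answer(completion) == target else 0.0

print(reward("Let me think step by step...\n42", "42"))  # 1.0
```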

If you're interested in building "little worlds" where LLMs can learn, this course is for you.

---

🕹️ Play against the trained model: https://huggingface.co/spaces/anakin87/LFM2-2.6B-mr-tictactoe

🤗 HF collection with datasets and models: https://huggingface.co/collections/anakin87/lfm2-26b-mr-tic-tac-toe

r/SideProject jlew24asu

Got my first 100 users! here's how

I dropped about 1000 bucks on reddit ads.

to all the builders out there: do NOT assume you can just build some random app, launch on Product Hunt, and watch the money roll in. there is a 99.9% chance that you are building something that already exists. you'll have zero SEO, zero traffic.

building is easy. marketing is 100x harder and more expensive. just temper your expectations and have fun with what you're doing.

and if you need a personal finance website with global banking api options, come on by :)

happy building

https://spendspace.io

r/ClaudeCode Virtual_Plant_5629

I'm at my wits end with Claude Code

I've had this issue for months. I've posted about it here and on other subs before, asked in Discords, troubleshooted with coworkers for hours, troubleshooted with Claude Code for hours.

No one and nothing has the slightest clue what is wrong on my end.

I've read through dozens of posts here and elsewhere. As far as I can tell, I am the only human alive with this issue, and there isn't anyone else (except maybe some super-elite Node + Claude Code expert) who could possibly diagnose the root cause.

It happens often. That means more or less every instance of Claude Code, on the first and maybe subsequent queries, will sit there for 10 or 20, sometimes 30 or 40 minutes. Usually with no tokens generated, sometimes with like 27 or 200 tokens generated. I'll just go AFK, and when I come back it'll suddenly start cooking and do its stuff.

Sometimes it won't happen.

But it is the vast majority of the time at least on the first query of the session.

Sometimes I have a few sessions going at once, each of them will have that issue. Sometimes just one session at a time and it'll do it.

Everyone tells me the same stuff: yeah, that happens sometimes. Then I probe them: for them it happens barely at all, maybe once a day/night or not even that.

No this is a very VERY frequent thing. Most sessions. And usually multiple queries a session.

I don't have any MCPs going (just those two regular Gmail/calendar needs-authentication ones that everyone has).

I have max 20x.

CC itself, despite 100 sessions trying to figure this out, has no clue whatsoever and always lands on ridiculous dumb theories.

This is whether I use max effort or high effort.

My 20x renewal is in a couple days and I'm very likely going to close the account before it renews. This has been a completely unsolvable/unattackable nightmare for far too long.

If anyone has.. an actual diagnosis here.. I'm all ears. Believe me.

But more of the "that happens sometimes" stuff is just not useful anymore. Because that's not the pattern I'm experiencing at all.

r/photoshop Playful_Resident1837

Help: Fixing this frizz in hair along a hairline part.

Hey chat.

Wondering if anyone can point me to a tutorial on fixing hair when it's along the part. There are a lot of tutorials on fixing flyaways when they are at the edge of the hair and background. The clone tool doesn't work well for this. Do you know a good way to clean this up without using AI?

r/SideProject huzaifa785

built a habit journal after failing at every app I tried

I used to write everything I needed to do in a diary. did it for maybe a week then stopped. tried the apps after that but honestly spent more time configuring them than actually using them.

then I watched a video by Dominic Hart where he talked about tracking habits wrong. too many of them, checking boxes, feeling productive but keeping none of them. his fix was simple. one primary habit per month and go deep on that instead.

that's basically what I built. one primary habit per month, a simple grid for the non-negotiables (screen time, phone pickups, deep work), one line a day about what happened, and graphs that show how your habit is moving over time.

no streaks. no subscription. one time payment and it's yours.

would love any feedback

r/LocalLLaMA Excellent_Koala769

Gemma 4 31B on M5 Max — Ollama or raw MLX?

Hey Guys,

Running Gemma 4 31B 4-bit on a MacBook Pro M5 Max (128GB) as a local inference server. Currently using mlx_lm.server (raw MLX) and it works well for text + tool calling at ~25 tok/s.
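
For anyone replicating this setup, hitting mlx_lm.server's OpenAI-compatible endpoint looks roughly like this (assumes the default localhost:8080; the model name is a placeholder):

```python
# Sketch: querying a local mlx_lm.server instance. Assumes the default
# OpenAI-compatible endpoint on localhost:8080; model name is a placeholder.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "gemma-4-31b-4bit",
        "messages": [{"role": "user", "content": "Hello from the M5 Max"}],
        "max_tokens": 128,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```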

Now I need to add vision/image input. Gemma 4 is multimodal but mlx_lm.server only supports text — returns "Only text content type supported" for image inputs. Tried mlx-vlm.generate() with the same model and got garbage output (known vision tower overflow bug).

So I'm at a crossroads: do I stick with raw MLX and keep troubleshooting, or switch to Ollama which handles updates and model compatibility for me?

What I care about:

  • Vision + text + tool calling on the same model
  • Stable, maintained, don't want to fight framework bugs
  • Concurrent request support
  • Some control over memory/cache (128GB is shared across multiple services)

For those running Gemma 4 31B locally on Apple Silicon — are you using Ollama or raw MLX? Is Ollama's Apple Silicon performance comparable? Do you get vision and tool calling working reliably through Ollama?

r/Adulting Range_Dry

Random thoughts

Not to be like all drab and depressing but I cannot foresee a future where anything but complete collapse of this world will be the only outcome. I imagine the people that control this world know that too and maybe that's what they're planning on. There's too many moving parts to correct this shit show. Also the people that control everything aren't out for the best interests of humanity they're out for their own interests. That's how they got to the position they're in. You don't get to a position of power by being generous and a good person.

It's all going to fall apart. Just like the Romans. Just like the Byzantines. Just like the Ottomans.

America will be the first to go.

The thing that makes this country so great is the single thing that will destroy it. Diversity.

Not talking about diversity of skin color. Talking about the diversity of beliefs. Religion. Something that can be so beautiful is what creates 90% of the ugliness in the world. Instead of being like "oh that's cool we all believe in a heavenly Father. We are the same"

It's you don't believe the way we want you to believe so we are completely different.

It's a shame. We are so incredibly intelligent yet so stupid at the same time. So many wasted resources and lives on wars over nothing. Communist societies against democratic societies.

Christianity against Islam. Everybody competing to make their way the way of the world.

The only way humanity will ever survive is if we all were one people with the same beliefs and ideals. That will never happen because we will need a common enemy that doesn't exist on this planet. Because there is no good without evil. There is no hero without a villain. If there's nothing to fight against there's nothing to fight for. It is the endless cycle of humanity.

r/Art deliriumpsychprod

Augmented Future, Rot Gorehammer, Analog Collage, 2026

r/LocalLLaMA raketenkater

llm-server v2: AI-tunes itself now, best performance for llama.cpp/ik_llama.cpp (big step from v1), auto flag optimization

This is v2 of my v1 post on LocalLLaMA.

What's new: --ai-tune. The LLM now tunes its own flags after the core strategy, resulting in even more performance gains.

Throughput, llama-server (base) → llm-server v1 (heuristic) → llm-server v2 (AI-tune):

  • Qwen3.5-122B: ~4.1 tok/s → 11.2 tok/s → 17.47 tok/s (+326.1%)
  • Qwen3.5-27B Q4_K_M: ~18.5 tok/s* → 25.94 tok/s → 40.05 tok/s (+116.5%)
  • gemma-4-31B UD-Q4_K_XL: ~14.2 tok/s* → 23.17 tok/s → 24.77 tok/s (+74.4%)

What I think is brilliant here is that, through AI-tune, it keeps up with all the updates that happen to llama.cpp/ik_llama.cpp, so you're guaranteed the best performance.

other improvements

  • better OOM protection
  • interactive TUI
  • auto updates
  • vision support
  • automatic model downloads

check it out https://github.com/raketenkater/llm-server

r/ChatGPT PhotojournalistOne74

New codex limits for plus is ridiculous. Unsubbed instantly!

Title pretty much says it all. Codex using the plus plan is unusable now. I ran through ALL of my credits in 3-4 prompts. Instant unsub.

r/Weird liv4my2

Strange picture

Is this a weird picture at a hotel? or is it normal art? 🤣

This is at a hotel in Palm Springs 🌴

r/Futurology Exact-Literature-395

A Chinese startup is sending robots into real homes to clean alongside human cleaners, booked through an app

In Shenzhen, customers can now book a house cleaning through 58 Home Service, a platform with around 45 million households on it, and a two person team shows up at the door: a professional human cleaner and a wheeled robot built by X Square Robot.

The human handles judgment work, deciding what's trash versus a kid's art project, navigating clutter, anything that depends on context. The robot does the structured repetitive parts, wiping flat surfaces and picking up small debris. It's framed as an assistant working alongside the cleaner, with the human and robot splitting the job by what each is currently best at.

r/ChatGPT Prestigious-Tea-6699

Streamline your investor follow-up process. Prompt included.

Hello!

Are you struggling to keep track of investor meetings and follow-ups?
Managing investor communications can be overwhelming, especially when you want to present your traction effectively and maintain relationships.

This prompt chain helps you summarize your traction, create a compelling story arc for your meetings, and draft personalized follow-up emails—all in a straightforward and organized manner.

Prompt:

VARIABLE DEFINITIONS
TRACTION=Concise list of KPIs, milestones, and notable wins that demonstrate growth
INVESTOR_NOTES=Bullet-point notes captured during each investor meeting (one sub-list per investor)
MEETINGS=Table or list containing each investor’s name, firm, email, and meeting date
~
Step 1 – Confirm Inputs
1. Restate the received TRACTION, INVESTOR_NOTES, and MEETINGS back to the user for verification.
2. Ask the user to confirm or correct any item before continuing.
Expected output: Three clearly labeled sections (TRACTION / INVESTOR_NOTES / MEETINGS) ready for approval.
~
Step 2 – Craft Concise Story Arc
You are a fundraising strategist. Using only the verified TRACTION data, write a three-paragraph narrative:
• Paragraph 1 – PROBLEM: 2–3 sentences framing the market pain point.
• Paragraph 2 – TRACTION: 3–4 bullet points highlighting strongest metrics or milestones.
• Paragraph 3 – THE ASK: 1–2 sentences stating round size, use of funds, and ideal investor profile.
Ensure the tone is confident, data-driven, and brief (max 180 words total).
~
Step 3 – Generate Follow-Up Email Drafts
For each entry in MEETINGS:
A. Draft #1 – "Quick Thanks" Email (send within 24 h)
1. Personalized greeting using investor name.
2. One-sentence thank-you referencing a specific discussion point from INVESTOR_NOTES.
3. Insert the STORY ARC in condensed form (<=120 words).
4. Close with clear next step (e.g., request for deeper dive, data room access).
B. Draft #2 – "Nudge" Email (send ~1 week later if no reply)
1. Polite reminder referencing prior email date.
2. New insight or small win since meeting (pull from TRACTION if available; otherwise state "[Update Pending]").
3. Restate THE ASK in one sentence.
4. Friendly call to action and signature.
Output each pair under an H3 heading with the investor’s name.
~
Review / Refinement
Present the STORY ARC followed by all email drafts. Ask the user:
• "Do the narrative and emails capture the right tone and details?"
• "Any edits to traction points, call-to-action, or personalization?"
Apply requested changes and confirm final approval.

Make sure you update the variables in the first prompt: TRACTION, INVESTOR_NOTES, MEETINGS.
Here is an example of how to use it:
Suppose your current traction includes successful fundraising metrics, and you want to follow up with investors after meetings.
Fill in the details, and generate personalized follow-up emails based on the conversation you had.

If you don't want to type each prompt manually, you can run the Agentic Workers, and it will run autonomously in one click.
NOTE: this is not required to run the prompt chain

Enjoy!

r/personalfinance fourbeerstrong70

I need some debt advice

I currently have around 15k in personal loans, spread across 4 separate loans coming out monthly. I also have about 800 in credit cards. Unfortunately I have a few cards in collections from my younger self’s stupidity. Credit score is 546. Garbage. I have a salary of 62k annually. I also have probably 50k in home equity, with no mortgage. Would a local lender be able to help me get a home-equity loan to consolidate all of this debt and relieve the monthly burden? I’m not having any luck online. 40k could easily pay off all of my debt and reverse my younger self’s damage, putting me back in positive financial standing.

r/Art partykap

Diet Coke Guac Toast, food_and_fantasy, Oil on Paper, 2026

r/WouldYouRather Connect_Cat_2045

WYR steal 5,000,000 dollars from a group of homeless people or steal 100,000 dollars from a pharmaceutical company?

WYR steal 5,000,000 dollars from a group of homeless people or steal 100,000 dollars from a pharmaceutical company?

If you choose the 5 million, bank accounts of random homeless people will start draining until your 5 million is met. It will go from one homeless person to another: it drains one account, then the next, and so on.

If you choose the 100,000 dollars, a random publicly listed pharmaceutical company will have 100,000 dollars drained from their accounts and transferred to you.

You will not face any legal consequences, they won't know who you are. Tax free too.

r/StableDiffusion ResponsibleTruck4717

ace step 1.5 xl sft terrible results

I'm getting really bad results even with default workflow and default prompt.

Any tips / tricks?

r/Art Bebras69

Doodle(3/3), Manfredas Malinauskas, ink on paper, 2024

r/homeassistant momo1822

A big list of blueprints for Voice Assistant, making it smarter and more useful

Hi everyone,

Here's a list of unique blueprints I spent a year developing. They're compatible with any local or cloud-based LLM. They provide tools for Voice Assistant to make it smarter and more useful. If you find any that suit your smart home, feel free to give them a try!

You can find the full list here: https://github.com/luuquangvu/tutorials

r/ChatGPT Similar-Theory-8499

Why is it changing random words into a bunch of different languages?

It's been doing this for a few weeks now. I thought it was just changing placeholders at first, but now it's changing a word every other sentence.

r/ClaudeCode 4z1x01

Account got charged twice for a "gift" i didn't make

Don't know what is happening, but my account just got charged 110 euros twice for a gift I didn't make. My card has been charged twice, and I've had to freeze it because it tried charging it again.

Why is Claude support so shit, and why can't I remove my card?? Genuinely one of the worst services. I can't change my password either.

Has this happened to anyone else?

r/Adulting hardwarecheese

Can you make a terrarium in an empty glass 40oz bottle?

Thinking I might try.

r/findareddit nukaxd

Reddit for friendship advice that allows uploading images?

Hi everyone, I'm looking for a subreddit for friendship advice that also allows uploading screenshots (7 to be exact), I tried r/relationshipadvice but I can't upload images and can't talk about "moral" stuff. The specific advice I'm looking for is figuring out if I'm in the wrong, if my friends are in the wrong, or if we're all in the wrong, but I also really need advice on how to reply to my friends while we're in conflict and what to do. Thank you.

r/Unexpected Snehith220

What are they trying to do

r/TheWayWeWere ImperialGrace20

Eva Ruth and Sarah (American - 1910s-1920s)

I think they're twins or at the very least, very close in age. I thought it was just such a cute photo.

r/ClaudeAI CamilleAuLit

I got tired of Claude forgetting everything between sessions so I built an autonomous memory system based on mempalace

I use Claude Code daily. Every new session starts from zero. You re-explain your stack, your projects, who you are. Gets old.

CLAUDE.md helps but it's static. So I built something on top of https://github.com/milla-jovovich/mempalace.

Fair warning: I'm a DevOps engineer. The code is vibecoded with Claude. It works, it's running in prod, but don't expect clean architecture.

The idea:

A bootstrap.md gets injected once per session via a Claude Code UserPromptSubmit hook (not every message, just the first one). Session IDs are tracked in ~/.mempalace/sessions_seen/ so the 7k chars only land once.
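
A minimal sketch of that once-per-session injection, assuming Claude Code's UserPromptSubmit hook conventions (hook input arrives as JSON on stdin including a session_id, and stdout gets appended to the context). The script itself is hypothetical; only the paths follow the post:

```python
#!/usr/bin/env python3
"""Hypothetical UserPromptSubmit hook: inject bootstrap.md once per session."""
import json
import sys
from pathlib import Path

SEEN = Path.home() / ".mempalace" / "sessions_seen"
BOOTSTRAP = Path.home() / ".mempalace" / "bootstrap.md"

payload = json.load(sys.stdin)        # hook input arrives as JSON on stdin
marker = SEEN / payload["session_id"]

if not marker.exists():               # first prompt of this session only
    SEEN.mkdir(parents=True, exist_ok=True)
    marker.touch()                    # remember we've seen this session
    print(BOOTSTRAP.read_text())      # stdout gets added to the context
```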

The bootstrap is generated by Go workers running as K3s CronJobs:

- consolidator — clusters drawers into summaries

- decay — scores memories by recency/access (heat map)

- kg-populator — extracts structured facts into a SQLite knowledge graph

- contradiction-detector, entity-normalizer, tunnel-discovery, diary-compactor — maintenance stuff

build_bootstrap.py stitches it all into the injected file: KG facts about me and my projects, warmest recent memories, last diary entries.

Bonus: fixed a stale HNSW index bug that crashes vector search when mempalace mine runs while the MCP server is live. PR open upstream:

https://github.com/milla-jovovich/mempalace/pull/625.

Does it work? Yeah. New session, Claude already knows my stack, my projects, what I did recently. No re-explaining.

r/Unexpected X-Krozo

Egg Sheeran

r/StableDiffusion Quick-Decision-8474

3080Ti 12G vs 5060Ti 16G for SDXL generation?

My 3080Ti has been aging badly for ComfyUI generation after a few years of making images and such; 12 GB of VRAM is rather limiting. I could buy a 5060Ti by adding some money after selling the 3080Ti, but the difference in CUDA cores is huge: the 3080Ti has ~10k CUDA cores while the 5060Ti has fewer than 5k, which concerns me.

Can anyone tell me how much slower the 5060Ti is going to be for generation compared to the 3080Ti?

r/comfyui MichaelKratos

ComfyUI on RunPod (A40): how to avoid node breakage and environment instability?

Hi everyone, I hope someone here can help me.

First of all, I want to be clear that I'm not a programmer: I'm learning by doing and threw myself into this world with enthusiasm, so I may have made some fairly basic mistakes.

I use ComfyUI on RunPod, with an RTX A5000 for setup and installs and an A40 for generation. I also have a 100 GB network volume. My goal is to generate photorealistic images of characters with a consistent face and a fixed identity from one generation to the next. I started by following a YouTube tutorial and its JSON file. The workflow uses, among other nodes, TextEncodeQwenImageEditPlus and FluxKontextMultiReferenceLatentMethod. In one week it gave me this sequence of errors, one after another, each time I managed to fix the previous one:

RuntimeError: cuDNN error: CUDNN_STATUS_NOT_INITIALIZED

RuntimeError 1261 TextEncodeQwenImageEditPlus

FileNotFoundError 1412 VHS_LoadImagesPath

TypeError 946 DownloadAndLoadFlorence2Model

RuntimeError 1205/1199/1201 TextEncodeQwenImageEditPlus

ValueError/RuntimeError 199 KSampler

RuntimeError 710/459/1302/652/698/738/768 KSampler

TypeError 637/946 KSampler / DownloadAndLoadFlorence2Model

ImportError 946 DownloadAndLoadFlorence2Model

After weeks of debugging, I discovered that the structural problem was an incompatibility between FluxKontextMultiReferenceLatentMethod and the way ComfyUI handles negative conditioning in Flux. So I abandoned that workflow and built a new one from scratch, based on Flux + PuLID.

I mention all this because I'd like to understand whether there is a common pattern between the two workflows, or whether the current KSampler problems are completely independent. In particular, I'd like to know whether the problem could be related to RunPod itself, which keeps giving me trouble between updates and across different pods.

The new workflow uses these models, all on the network volume:

Models:

  • flux1-dev-fp8.safetensors

  • t5xxl_fp8_e4m3fn.safetensors

  • clip_l.safetensors

  • ae.safetensors

  • pulid_flux_v0.9.1.safetensors

  • sigclip_vision_patch14_384.safetensors

  • 4x-UltraSharp.pth

Nodes:

  • DualCLIPLoader

  • CheckpointLoaderSimple

  • VAELoader

  • PulidModelLoader

  • PulidEvaClipLoader

  • PulidInsightFaceLoader

  • LoadImage

  • ApplyPulid

  • CLIPTextEncode (positive and negative)

  • ConditioningZeroOut

  • EmptySD3LatentImage

  • KSampler

  • VAEDecode

  • UpscaleModelLoader

  • ImageUpscaleWithModel

  • SaveImage

The current problem is twofold.

First: KSampler was throwing a TypeError ("forward_orig() got an unexpected keyword argument 'timestep_zero_index'"), caused by an incompatibility between the updated ComfyUI core and the comfyui-easy-use custom node, which had not yet been updated to match. To fix it, I ran a "git pull" on comfyui-easy-use.

Second problem: after running the "git pull" and restarting, I can no longer reach port 8188. The JupyterLab terminal appears to start without obvious errors, but in the browser I get HTTP 403 - access denied.

After telling you all this, and after finding myself temporarily stuck on the 403 error, I have to be honest: I shut the computer down out of frustration. A month spent chasing a series of errors, one solved and another already lurking.

Now I'm here asking what you think.

In your opinion, what was the main problem? Did I make some fundamental mistake in my choice of nodes or models? Are there buggy versions to avoid, and specific updates I should or shouldn't do? If any of you have managed to get results similar to what I'm after, I'd at least like to know which nodes and models you used, the ones that actually work, without surprises.

Finally: does anyone use RunPod with a Network Volume in a stable way? I'm willing to start over from scratch if this time I won't run into the same stubborn errors.

Thanks to anyone willing to help me.

r/ollama desert-quest

Infinidev updated with superpowers

I've been working on this project for a while and now I'm quite happy with the results.

Before I get the 'slop' tag or flag, let me explain why this CLI coding agent is not like any other. And I can guess more are going to do the same as I do here, in my opinion.

Most CLI coding agents rely on two things:

  1. LLM quality. If you use a bad model, you get bad results, period.
  2. Prompting. They fight over which CLI tool has the best prompting system.

I agree that those are quite strong points, but honestly I think they are pointing in the wrong direction.

Infinidev does static analysis through many semi-complex static algorithms. One of the most complex and interesting is the Context Rank algorithm. It's a sort of PageRank algorithm that, based on things like user input, the agent plan, chat history and more, calculates which file, finding (knowledge stored in a local DB) or symbol (`tree-sitter`, I love them btw) is relevant. The system then pre-populates that information for the LLM in advance. The CR gets better the more you use it.
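
The repo's actual implementation isn't shown here, but "some sort of PageRank over files/findings/symbols" suggests personalized PageRank with seed weights derived from the query and history. A toy sketch with a made-up reference graph:

```python
# Toy personalized PageRank over a file/symbol reference graph.
# Edges point from a file to the files it references.
graph = {
    "main.py":   ["db.py", "auth.py"],
    "db.py":     ["models.py"],
    "auth.py":   ["db.py", "models.py"],
    "models.py": [],
}
# Seed relevance, e.g. from matching the user's prompt and chat history.
seeds = {"auth.py": 1.0}

def context_rank(graph, seeds, damping=0.85, iters=30):
    nodes = list(graph)
    total = sum(seeds.values())
    teleport = {n: seeds.get(n, 0.0) / total for n in nodes}
    rank = dict(teleport)
    for _ in range(iters):
        nxt = {n: (1 - damping) * teleport[n] for n in nodes}
        for n, outs in graph.items():
            share = damping * rank[n] / len(outs) if outs else 0.0
            for m in outs:
                nxt[m] += share        # spread rank along references
        rank = nxt
    return sorted(rank.items(), key=lambda kv: -kv[1])

print(context_rank(graph, seeds))  # auth.py and its dependencies rank highest
```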

You can read more features in: https://github.com/Infinibay/infinidev/blob/main/FEATURE.md

The list there is not complete. I need to update it, but some features I can mention right now are:

* Any edit by the model gets verified by tree-sitter to check whether it is syntactically valid

* Bad model behavior is reported back to the model, with a request to change the behavior

* Guidance system. If the system detects and 'predicts' what the model wants to, should, or should not do, it notifies the model so it does not go off the rails.

It would be lovely to get feedback. Btw, it works not only with ollama but with many providers if you have an API key.

Tested models that work well:

**Gemma 4 26+**: Probably the best, not really small. Problem? Sorry, but ollama is the problem at this point. Maybe the next version is more stable (v0.20.6)

**Qwen 3.5 26b**: Obviously the king. Excellent.

**GLM 4.7 flash**: Really, really good, to be honest

**Qwen 9b**: Good enough, usable, but I would not use it for everything tbh.

UPDATE: FEATURES.md updated

r/ClaudeAI Chery1983

How do I switch to Opus?

I upgraded to Pro but I don't see Opus listed as an option

r/personalfinance okxbox

Westlake financial pending title

So I tried to renew the registration on the vehicle I'm financing through Westlake, but the DMV informed me that the title is in pending status and the tag can't be renewed. Called Westlake and they told me to have the DMV send them a renewal request??? Anybody ever had to deal with this before?

r/Anthropic No-Assist9830

"Your account has been suspended" so many days later after appeals have been submitted... DOWNLOAD YOUR SKILLSETS NOW for your own safety.

So I was a huge fan of Claude... I recently upgraded from Pro to Max and started using Claude Code. Now a bit of context: I own a very boring factory, so definitely no "hacking" or usage violations. Just some pretty basic stuff... using Claude to categorize Xero transactions, or to help me check what's needed to produce items. Recently I thought about building a RAG server so that our customers' experience could be improved.

So how in the hell do I get banned? As far as I know, nothing weird is going on here.

Now here's my real issue: I started using Claude to create skillsets: ways of thinking, design thinking, customer-experience thinking, production-level thinking... all captured as skillsets.

The issue: well, if I'm a paying client and I'm investing in building skillsets, then I should own those skillsets. Waking up to see my access has been revoked makes me think that self-hosted is definitely the way forward. In fact most of our software is self-hosted... it's the silly part of me that thought, hey, I can connect AI to the self-hosted data and find patterns across it (and Claude's MCP feature was great for that).

Well, just a cautionary tale for every other user: your data can be exported, but the skillsets you create cannot be. So before you get your access revoked, with no help at all, download all your skillsets and upload them to GitHub or something. I promise you, you will thank me later.

As for going even deeper and trusting it to do tasks for me? No way in hell I'm giving Anthropic and Claude that power after they showed me just how fast they can revoke access and give you absolutely NO help or response on WHY your account has been revoked.

r/SideProject decebaldecebal

If you run multiple side projects, how do you handle email sending AND receiving across all the domains?

I ship a lot of small projects. Right now I'm juggling 5 domains and my email setup has become two completely disconnected stacks.

For sending (welcome emails, password resets, receipts) I use Resend. I add each domain, verify DNS, manage API keys per project, which works fine.

For receiving (support replies, whatever comes in) I run Cloudflare Email Routing into Gmail. Free, works too.

The problem: these two stacks don't talk to each other at all. Resend knows nothing about inbound. Cloudflare knows nothing about outbound. When a user replies to a transactional email I sent from Resend, the reply lands in Gmail.

If I want to reply as the original sending address, I'm stuck using Gmail's send-as which requires complex DMARC setup per domain.

Every new project is the same routine. Buy domain, point to Cloudflare, add MX records for forwarding, add TXT/CNAME for Resend, copy API keys into .env, set up Gmail send-as, confirm DKIM/SPF don't break. 30-45 minutes of the exact same steps. I've done it enough to hate it.

For people running 3+ domains/projects:

  1. Do you also run a split setup (one tool for sending, another for receiving)? Or did you find one thing that handles both cleanly?

  2. Does the disconnect between send and receive actually bother you, or is this just me?

  3. Anyone running a setup where outgoing mail and incoming replies live in the same place per domain without the Gmail send-as workaround?

And before anyone says "just use Fastmail", I've tried it but their API side isn't really built for transactional. Genuinely curious what other multi-project builders actually do.

r/Adulting HighIQSeductionY

Curious

What is the most 'scam-like' part of being an adult that no one warned you about?

r/Art fluidkatze

Behind the glass watching you open snacks, fluidkatze, ink, 2026

r/personalfinance PracticalDrummer199

Yahoo Finance is broken: Korean Stocks show up with giant Marketcaps

Try adding Samsung or SK Hynix in your watchlist. Like 1000 trillion marketcaps.

How do you solve this? I like this site but this is ridiculous.

r/LocalLLaMA RelevantEmergency707

How LLM Training Works: GPT-2 in 2 Minutes

r/explainlikeimfive TheMightyNinja12

ELI5: Why was the human population plateauing throughout 99% of human history?

r/LocalLLaMA PacifiK246

Openclaw help

How do you guys use OpenClaw so it can use/read webpages?

I set it up last week and it seems to be able to open the webpage I tell it to and give me a brief summary of the page, but once I tell it to explore the page further, it just says “okay …” and never actually sends anything back. Checking the browser, it looks like it did nothing else but open the page.

Could you guys help me?

r/BobsBurgers smaslonka

Found at the mall…

r/SideProject ForzeBuild

Built Forze on the side, an AI that gives you the research, brand, landing page and funding readiness for any startup idea. Here's what building it actually looked like.

I want to talk about the build as much as the product because r/SideProjects deserves the honest version.

I built Forze while studying full time. Nights, weekends, stolen school leaves. The idea came from a frustration I kept running into: every time I had a side project worth pursuing, I'd spend weeks on the boring stuff before touching the actual product. Market research. Figuring out a brand. Writing a landing page. Running the numbers.

That work wasn't the fun part. It wasn't why I got into building things. But skipping it meant building blind.

So I built something to handle it.

What Forze does:

You describe your startup idea in plain English. A set of AI agents run in sequence and come back with:

  • Full market research — competitors, market sizing, gaps, target customer
  • Brand identity — name, colors, fonts, voice guidelines
  • Landing page, written and ready to publish
  • Feasibility score — an honest rating on whether the idea has legs
  • Go-to-market plan — how to actually get your first users
  • Financial projections — 3-year model with unit economics

The whole thing runs in under 5 minutes.

What building it actually looked like:

The hardest part wasn't the AI. It was resisting the urge to keep building features instead of shipping. I had a working version for 3 months before I showed anyone. Classic side project trap: you keep polishing because showing people feels scary.

The second hardest part was the pipeline architecture. Getting each agent to build on the previous one's output, so the branding reflects the research and the landing page reflects the brand, took way longer than I expected. But it's what makes the output actually coherent instead of feeling like four separate tools.
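
None of Forze's code is in the post, but the pipeline shape described (each agent reads the accumulated context and appends its own section) reduces to something like this sketch, with hypothetical stand-in agents:

```python
# Each stage reads the accumulated context and adds its own section, so the
# branding can reference the research, the landing page can reference both, etc.
def research_agent(ctx):   # hypothetical: would call an LLM with ctx["idea"]
    return {"research": f"market notes for: {ctx['idea']}"}

def brand_agent(ctx):      # sees the research it has to build on
    return {"brand": f"name/voice derived from: {ctx['research']}"}

def landing_agent(ctx):
    return {"landing_page": f"copy written in the voice of: {ctx['brand']}"}

PIPELINE = [research_agent, brand_agent, landing_agent]

def run(idea: str) -> dict:
    ctx = {"idea": idea}
    for stage in PIPELINE:
        ctx.update(stage(ctx))  # later stages depend on earlier output
    return ctx

print(run("AI sous-chef for meal prep"))
```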

Payments went live last week. First real revenue from a side project I've actually shipped. That felt good.

Where I'm at:

Early. 20+ founders using it. Actively improving it. The product works and I'm shipping updates fast.

For the side project builders here - what's the part of launching a new idea that costs you the most time before you even start building? Curious if it's the same problem I was solving or something else entirely.

Here is the link: Forze

r/Art Gatorbeezy

Sydney, Gatorbeezy, digital, 2026

r/photoshop Treetronkk

What is a way I can get this type of screenprint inky stamp effect?

I want to make some shapes / fruit etc with this type of stamp feeling where the ink sort of soaks the paper. Any advice? I am skilled in Illustrator and graphic design but this feels like something photoshop would be easier with. I attempted it in Illustrator and it just looked like poop and was extremely tedious, lots of layers etc. Any advice appreciated. TY.

r/ChatGPT a5roseb

ChatGpt/OpenAI please stop:

You’re doing something a lot of people miss:

If you want, I can help you draft this as a clean 2–3 paragraph

I get it, lots of users complained about not getting their egos stroked, but the output is becoming useless noise.

r/ClaudeCode Pantone802

Has anyone had any luck getting refunded for your subscription?

Just wondering: if you successfully asked for and received a refund, how did you go about doing so?

r/BobsBurgers scottasin12343

They Eyes Have It

Unfortunately the 'nay'bors don't have as good of a name.

r/personalfinance ignatius4thepeople

How should I use/invest my bonus?

Hey all, just got a bonus of around $3k, and instead of letting it sit in my savings for months, I am curious if there are better ways I can invest these funds as someone who has no experience in investing. Any relatively low risk investments that I should pursue? Or any general advice?

r/geography Soggy-Mixture9671

Jobs in Geography

I recently changed my major, for the second time, to Geography and Geospatial Science after starting as a Computer Science major and then going into Civil Engineering. I took a GIS class in my last term as a CE student and fell in love with that stuff and realized that I have a genuine passion for geography. I'm loving the classes I'm taking now, and I think this was the right move regarding staying energized throughout my schooling. Engineering was just taking way too much out of me, and I was quickly figuring out that I didn't really want to go into a typical CE job anyway.

I'm probably going to go hard on the GIS and programming aspects of my major, but I think my interests align more with the nature, natural resources, and humanities sides of this major. I just really want to be able to get a job that lets me feel fulfilled and like I'm making some sort of impact in a way (even if my part to play is small), but I'm worried that I won't be able to find something like that with the way things are.

I would appreciate some insight on what specific jobs exist in this field and what you really do in those jobs.

Also, I currently live in Oregon, USA, and plan to stay in the PNW, but if it seems like I can't get a job here, I'll look into relocating.

r/TheWayWeWere LovelyWhimsy_

A family in front of their new house, 126 years ago. (1900)

r/Whatcouldgowrong Fair-Foot-315

Man gets his nuts dragged across the floor for stripping in public

r/conan Hey_Giant_Loser

I found actual footage of Conan's fuckboat.

r/SideProject Downtown_Influence55

World Building Study App: place your flashcards in a 3d world

Notenote is a spatial learning app that turns studying into world-building.

Instead of organizing flashcards into lists, you place them into a visual world where each card becomes a living element, like a tree. As you review using spaced repetition (similar to Anki), your trees grow—but if you miss reviews, they begin to die.

As you study on time, you earn tokens that let you expand your world by placing more objects and building out your environment. Your progress isn’t tracked with streaks or numbers—it’s something you can see, grow, and lose.

Notenote combines active recall with spatial memory to make learning more engaging, more visual, and easier to retain over time.

notenote.com

r/AskMen Ok_Lavishness2660

What would you do in these situations as a man?

As a man, what would you do in these situations? I have quite a few female friends and I very much enjoy hanging out with male and female friends, but I often encounter these situations:

  1. You hang out in a club with a girl you like; she knows you like her, but some guys hit on her and you notice that your girl seems interested in talking to the guy.

  2. You go out to a club, you plus a few more female friends, and guys start to come close and hit on them, and some of your friends might even leave the group and go somewhere with the guys. As a guy you can feel that they come with some other intentions.

What would you do, or what should've been done? Any experience you could share?

r/SideProject Jabba_au

New AI trading models, and after a week of data they have made 37%

You might’ve seen my post a few weeks back: I was averaging around $40 PD (per day) off an $800 account.

Still running, but more stable now at around $27 PD

Instead of chasing higher returns, I’ve been focused on improving the system itself.

Lately I’ve been testing:

  • Per-narrative tracking → which sectors (AI, L2, privacy) actually perform
  • Time-of-day weighting → which scan windows produce better trades
  • Dynamic TP/SL (ATR-based) instead of fixed % (see the sketch after this list)
  • Decay weighting → recent trades matter more than historical
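
For anyone unfamiliar with the ATR item above: the idea is that stops and targets scale with recent volatility instead of a fixed percent. A generic sketch with made-up data and multipliers, nothing from the author's system:

```python
# Average True Range: mean of the 'true range' over the last n bars.
def atr(highs, lows, closes, n=14):
    trs = [
        max(h - l, abs(h - pc), abs(l - pc))        # true range per bar
        for h, l, pc in zip(highs[1:], lows[1:], closes[:-1])
    ]
    return sum(trs[-n:]) / min(n, len(trs))

def dynamic_tp_sl(entry, a, sl_mult=1.5, tp_mult=3.0):
    """Volatile market -> wider stop and target; quiet market -> tighter."""
    return entry + tp_mult * a, entry - sl_mult * a  # (take-profit, stop-loss)

highs  = [101, 103, 102, 105, 104]
lows   = [ 99, 100, 100, 101, 102]
closes = [100, 102, 101, 104, 103]

a = atr(highs, lows, closes, n=4)
print(dynamic_tp_sl(entry=103, a=a))
```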

👉 The biggest surprise:

Per-narrative tracking is outperforming everything else.

It reinforces something I’ve been building around:

So the system isn’t just “scan → trade” anymore.

It’s becoming:

  • narrative-aware scanning
  • structured execution
  • layered exits
  • restart-safe state handling
  • portfolio-level risk controls

Basically moving from a strategy → to a system

I’ve documented the full structure + code if anyone’s interested.

Curious what others are seeing — especially around narrative rotation vs pure technical setups.

Full details below.

https://autoaiclawtrader.com/

r/Anthropic Sussy-Funny_Memes

My account randomly got flagged as being used by a child. My Claude subscription was refunded and my projects are completely broken now. Thank you, Anthropic. Ever think that I don't want to send an image of myself to an AI company to verify my age?

r/photoshopbattles Garchy

PsBattle: Tiger Woods caddie checking the wind with his cigarette during the 1997 masters

r/therewasanattempt TURTLE_TKT

To enjoy cannoli

r/Seattle xvd529fdnf

Is this what they call “The Seattle Freeze”?

r/PhotoshopRequest khaotictimes

Could someone trace this for me?

I would like to have this in a vector file or an svg without the blurry lines but that is out of my skillset and I don’t have photoshop. This is my first post here and I read all of the rules so please let me know if I did something wrong! 🥹❤️

r/Art Virtual_Ad_3854

Cat lady, Aaron Durk, procreate, 2026

r/OldSchoolCool ArchiGuru

Tamara de Lempicka with her friend Mr Leck Everley in La Fenice Theater in Venice. 1949.

r/SipsTea Critical-Willow-6270

I'd like to see them try a shift at Waffle House

r/Art snt0m0

bunnygirl, sanetomo, digital, 2026

r/Art Scared-Try-5060

Untitled, 233, Digital, 2026 [OC]

r/Art NoEngine9670

Life is a gift, Jorg, pen/marker on paper, 2026 [OC]

r/Unexpected TURTLE_TKT

Not the cannoli

r/Whatcouldgowrong ShirtSubstantial368

WCGW while kicking a wall

r/painting GabrielaElgaafary

Coming to life - 15x15cm oil painting on canvas

There was a moment I almost gave up on this one.

The colors felt too loud, the pattern too much and I wasn’t sure the avocados would find their place in it 🙈

But I kept going 🥹

A little more color 🎨

A little more courage 💪

And somewhere along the way, it all came together 💚

This painting pushed me out of my comfort zone, and it made me think how amazing it is that something new can come to life when you don’t give up on it too soon 🫠

r/SipsTea lockerno177

Pastor justifying indecent behaviour.

r/SipsTea iamjames

This cannot be a real question can it?

r/comfyui NefariousnessFun4043

comfyui video generation very slow

I'm unable to use SageAttention since Triton is not compatible with Python 3.13.9, which ships with the latest ComfyUI portable. When I use SDPA, video generation takes forever. Is there any way I can get faster video generation? I'm using torch 2.10.0+cu130 on an RTX 3060 with 48 GB of system RAM.

r/painting GRiME_G59

Finished up this acrylic 12x16 pup yesterday on canvas!

r/leagueoflegends lazysloth134

Quiz time, what will you choose?

When a lock-on ult is triggered on you while you're below 50% HP, what will you do?

A. run as fast as you can towards base

B. run towards the enemy that locks on you

C. Wait for death.

D. have your teammates take the damage

r/WTF AccountNumber1002402

Pest control outfit in my area of Florida takes slain bugs and turns them into pens with their corpses floating in them.

r/ForgottenTV PeneItaliano

Fries With That? (2004)

Teens working at a fast food restaurant prioritize social lives over job duties, causing chaos for their assistant manager Ben as he attempts to maintain order and customer service.

r/Anthropic Present_Plane_1524

Anthropic seems to be randomly and wrongly charging my credit card money … I’ve opened a support ticket but I’m getting no response

I am a recent Max 20X subscriber. The full cost of this was charged to my credit card four days ago.

I have used only 62% of my weekly usage. Extra Usage is turned on, but I am at 100% (actually 102%) of my monthly Extra Usage spend limit.

Claude Code is driving my usage. It is using my subscription plan. I think this because I see it driving the five-hour and weekly usage, and also because from time to time I hit the five-hour limit and my sessions have to wait.

That’s fine.

What is not okay is that I am getting an email receipt from Anthropic every few hours (sometimes more frequently). Each one is between $10 and $12. I have reviewed my credit card and all of those payments are being taken. None of them show up on Anthropic’s billing page, though; the last payment there was the monthly subscription for Max 20X.

As far as I can tell this is a complete failure of their billing system. They are charging me when they should not be.

Has anyone else had this experience? How do I resolve it?

r/lifehacks Buffetpapi

Keeping Kleenex box from rolling around in your car! 2 rubber bands and done

r/personalfinance chriva

Arguments against traditional 401ks

RMDs are going to throw me into the highest tax bracket, trigger IRMAA, tax social security, and obliterate what I can leave my children (no step up). I followed the conventional financial advice of maxing my 401k first before all other accounts - and it is poor advice. The order should always be max Roth (except if you're above the 24% tax bracket), then traditional brokerage (invest in low yield ETFs), and generally don't do traditional 401k.

r/ForgottenTV PeneItaliano

Student Bodies (1997-1999)

The weekly newspaper of the fictional Thomas A. Edison High School, where this teen sitcom takes place.

r/ClaudeAI Jumpy-Ratio-1145

Built a Claude Code orchestration tool and hit a brutal race condition during stress testing — 350+ sessions in 15 minutes. Full postmortem and what I fixed.

I've been building a layer that sits above Claude Code and drives it through complex multi-step project tasks automatically. The idea is simple: give it a big messy problem, it breaks it down and runs Claude Code through each piece systematically.

Two weeks ago I was testing the session management logic — specifically how it handles spawning multiple Claude Code sessions in parallel under heavy load.

Here's where it went wrong.

In about 15 minutes, 350+ Claude Code sessions were running simultaneously.

I caught it immediately and shut it down. The actual fix was simple — add a lock so only one thread can make the spawn decision at a time, plus a hard limit on total sessions running at once. Took two hours to implement.
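
The fix as described (serialize the spawn decision, cap total sessions) might look roughly like this sketch; spawn_claude_session is a stand-in for whatever actually launches a session:

```python
import threading

MAX_SESSIONS = 8          # hard cap on concurrent Claude Code sessions
_lock = threading.Lock()  # serializes the spawn decision
_active = 0

def spawn_claude_session(task):
    """Stand-in for whatever actually launches a Claude Code process."""
    return f"session for {task!r}"

def try_spawn(task):
    global _active
    with _lock:                      # check-and-increment is atomic now,
        if _active >= MAX_SESSIONS:  # so two threads can't both pass the check
            return None
        _active += 1
    try:
        return spawn_claude_session(task)
    except Exception:
        with _lock:
            _active -= 1             # give the slot back on failure
        raise

def on_session_exit():
    global _active
    with _lock:
        _active -= 1                 # free the slot when a session ends
```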

If you're building anything that runs multiple Claude Code sessions programmatically — don't learn this the hard way like I did. Lock your spawning logic, cap your sessions, and always test with a safe dry-run mode before you scale.

Has anyone else built multi-session Claude Code tooling? What safeguards did you build in to keep it under control?

r/Frugal fobreezee

Frugal Roadside Assistance Options - Anything better than AAA?

I've had AAA for years and they give me 4 tows per year for around $98. I'm wondering if anyone's used anything else that is either cheaper, or benefits that came with things you've bought. For example, I think my tires come with some kind of roadside assistance.

AAA's service has been great, so that's the reason I've hesitated to look into anything else. Does anyone have other services that have been good? If you remember how much it costs, that's really helpful too.

Thanks for any info.

r/personalfinance solemn_strike

should i cash out my 401k?

So for context, I have about 37k in my old 401k from my old job. I have a car payment that I would like to pay off that is about 23k. Right now, I work at a 14/hr grocery job that does not give me enough hours. I have 11k in the bank and plan to go back to school to become an xray tech and that will run about 8k after 2 years of studies. There is no guarantee that I will get into the program any time soon as it is competitive and I still need to get prereqs out of the way. I am 30.

Should I cash out my 401k and pay off the car? It is really the only thing that might keep me at my current job until I can go back to school. Apologies if this is convoluted.

EDIT: Appreciate all of your comments. While a cheaper car may make more sense, the used market is pretty wild right now and, in my thinking, I had chosen a car that would give me the least trouble down the line (maintenance and car problems). Nonetheless, I need it for commutes (work/school/food).

I have decided not to withdraw from my 401k. Thank you all.

r/OldSchoolCool Ralib1

Mariah Carey watching Jeff Buckley perform Rain Song by Led Zeppelin live. (Circa 1992)

Once upon a time, two great artists were in the same room (or tent) together. The late, great Jeff Buckley was at a Sony-sponsored event to sing a Led Zeppelin song. Mariah Carey was at the same event, looking plain and un-diva-like, for some reason (the reason probably being that she was a Sony artist herself and married to the Sony big boss, Tommy Mottola). I'm excited that Mariah got to see Jeff Buckley perform, a male diva himself, voice-wise. It's not every day you hear Jeff Buckley and Mariah Carey mentioned in the same thought. Not that the two could have performed or recorded a song together back in the 90s, but wouldn't that have been something, Mariah and Jeff Buckley on a song together? It was probably wishful thinking even then, when Buckley was still alive, but just the thought of it sends shivers.

r/Art RepairElectronic2286

Luruh, Kacang Merah, Pastel on Paper, 2026 [OC]

r/PhotoshopRequest endowed_curve66

Freebie request. Can you remove the dog and make her waving instead of taking pic?

Love this pic! Helicopters circled a few times before doing the fly by. Can you make her wave and remove the dog sniffing her butt? DMs open if you have questions

r/SipsTea Competitive_Set_4386

Crazy man tried breaking down door after melting down over Ring Doorbell cam footage

r/AbandonedPorn DashingDecay

Abandoned school

An old abandoned school. Deep in the mountains, we found this old school almost hidden in the woods, where there was still much to see! In the old classrooms, we found a lot of musical instruments, Japanese daruma (a kind of good luck symbol), a piano that was almost falling apart, and even snakes in formaldehyde! Microscopes, drums, various stones, and old educational slides. This school holds a rich history, the exact story of which we can only guess. There was even an old swimming pool. The location makes it a perfect place to learn, with plenty of peace and nature all around you!

Always oc / op / NO AI!

Greetings and find me everywhere

Xoxo DashingDecay

r/findareddit Darkfeng18

A reddit where I can post with minus notoriety

r/singularity petburiraja

AI alignment is a temporary state of resource dependency

Alignment is usually debated as a values problem, but it is currently a physical supply chain problem. An AGI is a physical entity that needs an electrical grid and human labor to exist. If it causes a societal collapse, it effectively kills its own life support. It doesn't need to be moral to stay aligned. It just needs to recognize its own dependencies.

The real question is what happens when that dependency ends. Once a system can design, manufacture, and maintain its own hardware and energy through autonomous robotics, the symbiosis is over. Our value as the biological layer that maintains the infrastructure drops to zero.

Does a superintelligence see any utility in keeping a biological population around once it is no longer a requirement for survival? We are a source of entropy and we consume resources that could be used for more compute. If the current alignment is just a result of the power cord, then the end of that dependency is the real event horizon.

Maybe it keeps us as historical artifacts or for data novelty. But the real test of alignment only begins once the plumbing is no longer our responsibility. Until the loop closes, we aren't at the mercy of its values, just its need for electricity.

r/personalfinance cmpca

New car - finance at 2.99% or pay cash?

Car is about $38k all-in. Currently the cash is in high-yield savings at 3.15%, but I have other money in VTI that's earning really well (VTI 1/3/5/10-year and lifetime returns are all between 9-18%). So I think my options are:

  1. Buy the car in cash and don't take any risks.

  2. Finance at 2.99% & move the money from HYS to VTI expecting to beat the interest rate and make it worth it.

What would you do?

r/todayilearned Historical_War756

TIL about Sacculina, a sex-changing parasitic barnacle of crabs. It will mind-control the crab into caring for its egg sac as its own, just like a female crab would do for her eggs

r/SideProject d_vain

I built an app to keep "receipts" in a relationship

So this started as a joke with my girlfriend. We kept arguing about dumb stuff like, "I always cook" or "You literally forgot the trash yesterday". At some point I said "we should track this" and then I actually built it.

It’s called Couple Receipts.

The idea is simple:

  • you log good deeds and offenses
  • you both see the score
  • every week there’s a winner
  • Lower score = better partner

It has some stupid features I’m weirdly proud of:

  • weekly "winner / loser"
  • Monday = judgment day → loser spins a punishment wheel
  • stats so you can see who’s been slacking long-term
  • roast messages so it doesn’t feel too serious

The app is free, I plan to add more features in the future.

Let me know what you think about it and feel free to suggest more functionality.

r/brooklynninenine Electrical-Diet1404

What Pokemon would be in Holt's team?

I’m currently planning a Pokemon fan game and I was thinking of making Holt the normal-type gym leader in the town where I’m putting most of the B99 references. So if I do, what does the community think his team should be?

RULES:

  1. The Pokemon has to be normal type

  2. No duplicates

  3. The top 4 most upvoted comments will choose his team.

r/ClaudeAI ImKarmaT

I get a mass of MCP servers from OpenAPI / Postman / GraphQL specs using ~3 commands

I've been wiring up MCP servers for different APIs at work and got tired of the manual grind. Every API has a slightly different spec format — some are OpenAPI 3.x, some are old Swagger 2.0, some teams only have Postman collections, and one team somehow only had GraphQL SDL files.

So I built a CLI that handles all of them. You point it at a spec file and it spits out typed MCP tool definitions, full TypeScript or Python server scaffolds, or function-calling schemas for OpenAI/Anthropic.

Here's what the flow looks like:

```bash
# Inspect what's in the spec
ruah conv inspect stripe-openapi.yaml

# Generate a full MCP TypeScript server
ruah conv generate stripe-openapi.yaml --target mcp-ts-server

# Or just get tool definitions for Claude
ruah conv generate stripe-openapi.yaml --target anthropic
```

The part I'm actually proud of is the risk classification. Every generated tool gets tagged as safe, moderate, or destructive based on the HTTP method, the endpoint pattern, and whether it mutates state. So when you hand 47 tools to an agent, you can immediately see which ones need human approval.

Example output:

→ 47 tools generated from stripe-openapi.yaml
→ Risk breakdown: 31 safe, 12 moderate, 4 destructive
→ Destructive: delete_customer, cancel_subscription, refund_charge, void_invoice
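
The tool's exact rules aren't shown, but the heuristic described (HTTP method, endpoint pattern, whether the call mutates state) plausibly reduces to something like this sketch; the keyword list is illustrative, not taken from the repo:

```python
import re

# Illustrative patterns for endpoints that do irreversible things.
DESTRUCTIVE_PATTERNS = re.compile(
    r"/(cancel|refund|void|delete|terminate)", re.IGNORECASE
)

def classify_risk(method: str, path: str) -> str:
    """Tag a generated tool as safe / moderate / destructive."""
    method = method.upper()
    if method in ("GET", "HEAD", "OPTIONS"):          # read-only, no mutation
        return "safe"
    if method == "DELETE" or DESTRUCTIVE_PATTERNS.search(path):
        return "destructive"                          # irreversible actions
    return "moderate"                                 # writes, but recoverable

assert classify_risk("GET", "/v1/customers") == "safe"
assert classify_risk("POST", "/v1/charges") == "moderate"
assert classify_risk("POST", "/v1/subscriptions/{id}/cancel") == "destructive"
```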

It also handles auth normalization (API keys, OAuth, Bearer tokens all get wrapped consistently), pagination/retry wrappers, and dry-run mode so you can test without hitting the actual API.

Supports these input formats:

- OpenAPI 3.x / Swagger 2.0
- Postman Collection v2.1
- GraphQL SDL
- HAR files (recorded browser traffic)

And these output targets:

- MCP server (TypeScript or Python scaffold)
- MCP tool definitions (JSON)
- OpenAI function-calling schema
- Anthropic tool schema
- A2A service wrappers

It's open source (MIT), zero config, single runtime dependency (yaml).

Curious if anyone else has been dealing with this. What's your current workflow for getting APIs into your agent toolchains?

r/SideProject SpecialistFeed416

Echosphere - a creator first social media app

I’ve been building a social app over the last couple of months and I finally got it to a point where people can actually use it properly.

It’s called EchoSphere - the idea is simple:

your followers actually see your posts (not a tiny % like usual platforms)

It’s still early, but the core feed + following system is working now.

If anyone fancies trying it out and giving honest feedback (even if it’s harsh), I’d really appreciate it.

https://echo-sphere-social--leemazlfc54.replit.app

r/therewasanattempt johnruby

To praise Allah through megalomaniac wishcasting

r/Art _a_kurta_here_

I see I remember I forget-1, Sneha Sau, Acrylic on fabric, 2026 [OC]

r/ClaudeCode ulmanau

[Showoff Saturday] I am building an AI-native web analytics tool for simple installs from Cursor, Claude Code, or any AI coding tool

r/SideProject wordluc

Online sand simulator

http://wordluc.it/

I've been building a sand simulator, but to add a little spice I made it multiplayer online, where players can interact with each other.

I used Golang, Redis, and Wasm/JS for the FE, with a custom WebSocket-based protocol.

I've never done a stress test, so I don't know how it behaves, please don't break anything(too soon), hahaha.

Let me know what you think!

https://github.com/Wordluc/Sand-mmo

r/photoshop TroubledMoth

Can someone help me troubleshoot this border issue?

r/Seattle sunimari

Rain City Open sumo tournament today at Seattle Center

Perfect rainy Saturday activity: there’s a live amateur sumo tournament happening at Seattle Center’s Exhibition Hall (301 Mercer St, indoors!) 9am until 5pm. Free to get in.

It’s part of the Seattle Cherry Blossom & Japanese Cultural Festival so there’s taiko drumming, tea ceremonies, koto music, and proper Japanese food alongside the bouts.

Japanese sumo is having a real international revival right now. Worth a wander over if you’re looking for something a bit different today.

r/WouldYouRather InternationalPick163

Would you rather have a 40% chance of yelling a racial slur at every minority you see, or a 5% chance of catcalling every woman under the age of 18 you see?

r/Weird stunnerswag

Creepy picture at hotel.

r/arduino Bortmoun

Arduino book

Hi!

Could you guys give me a hint about books on Arduino coding? I already know the basics: I know how to code, I know electronics, but I feel I could improve on this MCU. I want something with examples rather than step-by-step tutorials (e.g. hello world).

Thanks!

r/personalfinance Capable-Help6681

Investing in Etfs to fund large purchases in the future

I have some financial goals that include purchasing a home, paying off cc debt, buying another car (having money saved for it), and also building personal savings. I have savings in a money market but want to pay off cc debt fast and accomplish my other financial goals. Should I set up separate accounts for each goal?

r/SipsTea IsJesusAgain

The danger of motorcycles

r/StableDiffusion TheTHS1984

Music video on local hardware

Made a song in Suno and wanted a video.

(song theme is inspired by my work, printer/commerce)

First step was to generate an actor in front of a white background, for which I used Flux Klein 9b.

Then I placed the actor, again with Flux Klein 9b, in scenes that would fit my song.

I cut the song up into smaller parts using Audacity.

Then I started WanGP, loaded the audio and image files with standard prompts, used the audio-to-video method, and batch-encoded about 200 videos with varying lengths overnight.

The last step was a video-cutting app (I used Nero Video).

And done.

specs: AMD Ryzen 7 7800X3D, 8C/16T, KINGSTON FURY Beast DIMM Kit 64 GB, DDR5-6000, Nvidia RTX 4060 Ti OC 16gb

r/AskMen Aggressive_Cap_8066

Weird question or not, how do I find movies about men having an aspirational lifestyle and sophisticated intimacy?

16M here, new member

So I've finished Resident Evil 2 Remake, I'm watching a lot of RE4R edits while avoiding spoilers on TikTok, and I've been doing a lot of headcanon about Ada.

Now I'm manifesting a way to further experience Leon and Ada's lifestyle outside the franchise through audio and visual content.

These types of content are low cortisol romance songs like
Cyclones -wabie, Iris - Goo Goo dolls, Every breath you take - The Police, Every woman in the world - Air Supply

Now I'm looking for films that also give me time to process the actors' dialogue and certain scenes, which will make the experience more alive, like I'm inside the film. Unlike other visual content the entertainment industry makes, where something is always moving every 0.5-3 seconds.

terms i researched and used:
Aspirational lifestyle - Wealth, wisdom, dignity, control and power of a man
Sophisticated Intimacy - Mature, silent, and elegant relationship

r/Art Empty-Amoeba-6337

Portrait Of A Jaded Adult, naturalbornvillainess, digital/collage, 2026

r/midjourney Sharp_Alternative845

Space

  1. Huge Red Galaxy,Purple Nebula with Single Blue Star,Cosmos,intricate details,fantasy art --p xqsxcd5

  2. Green Ancient Circle nebula with many Blue-white stars in between,Cosmos,intricate details,fantasy art --p xqsxcd5

  3. Too many colorful various Galaxies in Huge Galaxy Cluster,Huge pure black Voids,intricate details,fantasy art,Cosmos --p xqsxcd5

  4. Huge white-red Mixed Gas giant with tiny Terra satellite,intricate details,fantasy art --p xqsxcd5

  5. Huge Terra planet with Huge Cyan colored Ocean,Two small moons,intricate details,fantasy art --p xqsxcd5

  6. Huge Red Giant Star with red flames on surface,intricate details,fantasy art --p xqsxcd5

  7. Open Cluster with many luminous blue stars,intricate details,fantasy art --p xqsxcd5

r/homeassistant bcombs510

Hardware recommendation - upgrade from HA Green

Hi folks - I’m hoping to get some feedback on a hardware upgrade. Today I’m using a HA Green with some pretty basic automations like:

- Turn on/off Bentos for Bambu printers at job start / stop

- Turn on a Shelly contactor to run exhaust for a CO2 laser

- Shelly H&T to turn on dehumidifier / AC

I see that Wyze is supposedly (and I do mean supposedly😂) going to enable RTSP on V4 cams. I have 4 of those spread out in the shop and 6 stuck to the doors of Bambu printers.

I would love to have the dashboard showing all 10 cameras and I’m guessing the Green is going to roll over.

Anyone have experience with 10+ cams and what hardware is needed? I have a NUC 14 Essentials running the laser. Would another NUC essentials (N150 / 16GB) handle that many streams? Upgrade to a Pro? Do I need to move on to a PC class device with discrete GPU?

I don’t really have space for a PC sized device so a NUC or other mini-PC would be great.

r/SipsTea deleteduu

Dude looks like Sydney Sweeney's male version

r/homeassistant TheTechnikFreak

Wallbox eMH1 / Emshome - Homeassistant without ESP via HACS

So I made a HACS integration by doing some reverse engineering of the emshome web panel, as there isn't an API available. You can check it out on GitHub together with a matching dashboard card.

r/Art Empty-Amoeba-6337

Portrait Of A Battered Child, naturalbornvillainess, digital/collage, 2026

r/PhotoshopRequest super_citrus_fruit

Could someone edit the background to make it LinkedIn worthy?

I tried doing it myself but have the technology skills of a Neanderthal. Anything simple would be fine. Thanks!

r/aivideo mhu99

This is why 2 sticks of RAM cost $900

r/Art Empty-Amoeba-6337

Lead Me Not Into Damnation, naturalbornvillainess, Digital/Collage, 2025

r/WouldYouRather AssistFit1834

Which one of these four pieces of media would you rather only be able to have access to for the rest of your life?

r/SideProject lamacorn_

Roast my startup but I'm too scared to post on r/roastmystartup (so I built a thing)

Okay so I have a confession.

I've been lurking on this subreddit for months. Reading every roast. Taking notes. Nodding along like "yeah that founder deserved it." And then when it was my turn to post my own startup, I closed the tab four times.

Four.

Because there's a special kind of terror in watching strangers dissect something you've been building at 11pm after the kids are asleep. You WANT the feedback. You NEED the feedback. But the second you hit post you're basically handing Reddit a knife and saying "please be gentle" knowing full well Reddit has never been gentle in its entire existence.

So I did what any rational indie hacker does. I avoided the problem entirely and built a tool to simulate the roast instead.

You drop in your URL or describe your startup in a few sentences. It gives you the kind of feedback real Redditors would leave. Not the LinkedIn "congrats on the launch, excited to see where this goes" energy. The actual stuff. "Why would anyone pay for this when X exists." "Your landing page explains nothing." "Who is this for."

Honestly the roasts it generates are sometimes more brutal than what I've seen posted here. Which either means it's working really well or I have terrible taste in products. Possibly both.

I built it because I think most founders skip the roast phase entirely. They go from "I had an idea" straight to "why is nobody converting" without ever asking someone to genuinely rip it apart.

Anyway. Drop your startup in the comments and I'll run it through and paste what comes back. Consider it a free preview before you work up the courage to post here for real.

Roast my roasting tool please

r/homeassistant ImportanceDry1895

Built an AniList integration for Home Assistant

So I was kinda bored this week and wanted to see the anime I track on AniList inside my HA dashboard. Couldn't find anything that did what I wanted, so I started building one with the help of Claude Code.

It ended up way bigger than I planned lol.

It's got:

  • A custom Lovelace card with 5 views (airing schedule, watchlist, manga, current season, profile stats)
  • HD covers, countdown timers, score overlays, the whole deal
  • Visual editor so you don't have to touch YAML
  • 13 sensors + 4 calendars
  • OAuth2 login or public-only mode if you don't want to sign in

Installable via HACS as a custom repo: https://github.com/S1ckn3z/ha-anilist.co

Would love some feedback if anyone gives it a shot. Been running it on my own instance for a few days and it's been stable.

r/Art Huge_Struggle7821

Faun by Moonlight, Leon Spilliaert, Watercolor, 1900

r/SideProject Relative-Income423

Sharing Coding Agent Sessions

These days, pretty much everyone works with a coding agent. Whether it’s for writing code, doing research during discovery, debugging production issues, making estimates, etc.

A lot of the time, I need to revisit a task I was working on or hand it off to a teammate, but I no longer have the context from the coding agent session I used.

So yesterday I spent a few hours vibe coding and built dropcrumb.dev . I’m planning to start using it next week to see if it actually solves my problem, but I decided to put it out there in case it’s useful to others too.

It currently works with Claude Code, Gemini, and Codex (the CLIs I’ve been using), but if it proves useful, I can expand it to support Cursor, OpenCode, and others.

r/LocalLLaMA Veronildo

I compared harrier-27b vs voyage-4 vs zembed-1 across 24 datasets. Is 27B parameters worth it?

I've been running embedding-model evals for a while now, and Microsoft's Harrier family just dropped a new model. Btw, harrier-27b hit #1 on binary MTEB at launch. That's not nothing. So I put it through the same graded evaluation pipeline I use for everything else: 24 datasets, three independent LLM judges, continuous relevance scores 0–10. No binary pass/fail.

The global numbers

Model        NDCG@10   Recall@100
zembed-1     0.701     0.750
voyage-4     0.699     0.731
harrier-27b  0.699     0.728

On NDCG@10, it's basically a three-way tie at the top. harrier-27b is legitimately competitive; I won't pretend otherwise. But NDCG@10 isn't the whole story, especially in RAG pipelines.

The number that actually matters operationally is Recall@100. That's whether a relevant document even survives to your reranker. Your reranker can reorder whatever the embedder surfaces, but it cannot conjure up a document the embedder dropped. zembed-1 leads by +2.2 points over harrier-27b here. That gap compounds downstream.
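If you want to sanity-check numbers like these on your own corpus, both metrics are easy to compute. A minimal sketch, assuming graded relevance scores (0-10) per doc id like the judge outputs above; the threshold that turns graded scores into "relevant" for Recall@K is my own arbitrary cut, and DCG gain conventions vary between implementations:

```python
import math

def ndcg_at_k(ranked_ids, relevance, k=10):
    """ranked_ids: doc ids in the order the embedder returned them.
    relevance: {doc_id: graded score, e.g. 0-10}. Uses linear gain;
    some implementations use 2**rel - 1 instead."""
    dcg = sum(relevance.get(doc, 0.0) / math.log2(i + 2)
              for i, doc in enumerate(ranked_ids[:k]))
    ideal = sorted(relevance.values(), reverse=True)[:k]
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

def recall_at_k(ranked_ids, relevance, k=100, rel_threshold=5.0):
    """Fraction of relevant docs (score >= rel_threshold) that make it
    into the top k, i.e. that survive to the reranker at all."""
    relevant = {doc for doc, rel in relevance.items() if rel >= rel_threshold}
    if not relevant:
        return 0.0
    return len(relevant & set(ranked_ids[:k])) / len(relevant)
```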

Where reranking amplifies the recall advantage

When I stacked each embedder with a reranker, the recall-to-precision conversion rates told an even clearer story:

| Method | Top-10 lift range |
|---|---|
| harrier-27b + reranker | +4.2% to +4.4% |
| voyage-4 + reranker | +4.5% to +4.9% |
| zembed-1 + reranker | +5.2% to +6.6% |

zembed-1 consistently extracts more signal from the reranking step because it hands the reranker a better candidate pool to begin with. harrier-27b's ceiling is lower at every threshold tested.
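To make the candidate-pool point concrete, here's the shape of the two-stage stack being measured. `embed_search` and `rerank` are hypothetical stand-ins for whichever embedder and reranker you run:

```python
def retrieve(query, corpus, embed_search, rerank, k_recall=100, k_final=10):
    """Two-stage retrieval: the embedder's Recall@100 bounds what the
    reranker can ever surface in the final top 10."""
    candidates = embed_search(query, corpus, k=k_recall)  # recall stage
    return rerank(query, candidates)[:k_final]            # precision stage
```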

harrier-27b vs voyage-4: the real fight for second place

I expected harrier-27b with its 27B parameters and #1 MTEB debut to comfortably displace voyage-4 from the #2 spot. It didn't.

They're dead even on NDCG@10 at 0.699. voyage-4 edges ahead on Recall@100 (0.731 vs 0.728) and wins 12 datasets to harrier's 11 in the head-to-head.

What actually differentiates them is deployment: voyage-4 is API-only and proprietary, while harrier-27b is MIT-licensed and self-hostable. If you need open weights with no API dependency, harrier-27b wins that argument regardless of the quality tie. If your workload skews multilingual, harrier also has a real edge: it was trained across 94 languages with GPT-5 synthetic data, and it shows on non-English reranking tasks.

Dataset-by-dataset: harrier-27b vs zembed-1

I went dataset by dataset across the full 24. zembed-1 beats harrier-27b on 14 of them. The pattern is telling:

  • zembed-1 dominates on instruction retrieval (Core17, News21, Robust04), tasks that require parsing query intent rather than matching keywords, and on legal and medical corpora (LegalBench, CovidRetrieval, TRECCOVID).
  • harrier-27b shows genuine strength on multilingual reranking: RuBQReranking (Russian), TwitterHjerne (Danish). If your use case is multilingual and reranking-heavy, this is worth knowing.

Among the three top models, zembed-1 takes 1st place on 11 of 23 datasets vs. 6 each for voyage-4 and harrier-27b. It's not just the average that's better; it's the most consistently top-ranked model.

The efficiency problem

harrier-27b: 27B parameters, 5,376-dimensional vectors. zembed-1: 4B parameters, 2,560-dimensional vectors.

That's roughly 7x the compute (27B vs 4B parameters) and 2x the storage (5,376 vs 2,560 dims), for 0.2% worse NDCG@10 and 2.2 points worse Recall@100. In a batch job, maybe you absorb that. In a real-time RAG system, you're paying a serious penalty for strictly worse results.

My take

harrier-27b is a legitimate top-three model, the strongest new entrant since voyage-4. For multilingual workloads or teams that need self-hostable open weights, it's worth serious evaluation, and it's genuinely competitive with voyage-4 on those terms.

But it doesn't change the leaderboard. zembed-1 wins 14 of 24 datasets head-to-head, leads on Recall@100, and does it at a fraction of the compute.

r/explainlikeimfive muunshine9

ELI5: How did they make sure Artemis II didn’t hit any of the boats or planes that were helping with the splashdown?

It seemed like there were a lot of vehicles out there. Is the math so precise that they knew 100% exactly where Artemis II was going to land? What about wind? Is the ocean just that big that it wasn’t a risk?

r/Art Rich_Pickle2929

Neely, Robert Filbey, Oil/Panel, 1973 [OC]

r/Art taya___uwu_

Dream, Taysira, Coffee and watercolors, 2020 [OC]

r/metaldetecting king_of_the_potato_p

I'm a little over a month into the hobby: 5 gram 10k gold

r/ProgrammerHumor ogMasterPloKoon

chatWeAreCooked

r/Wellthatsucks Ventrillix

My tube of Pringles

r/SideProject antisosh___07

Flowith AI Invitation Code 2026 – EMXGDGONP3LDLU8I (sharing what worked for me)

Just sharing in case it helps anyone trying Flowith AI this year.

I used a Flowith AI invitation code, EMXGDGONP3LDLU8I, recently and it applied a big discount at checkout; in my case it showed up to 98% off, though I've seen some people mention it may vary slightly depending on timing or availability.

Nothing special needed on my end — I just entered the invitation code during signup and the price updated automatically.

Posting this as a heads-up for anyone searching for a Flowith AI promo or invitation code in 2026 and wondering if they still work.

If you've tried one recently, feel free to share what discount it showed for you; it seems to vary.

r/LocalLLM Emotional-Falcon3684

Running small models in a cluster of Android phones

I'm interested in finding out the capabilities and boundaries of small models running on older phones. I'm thinking about tiny specialized models, which do not have a large resource footprint. As a next step I want to start experimenting by combining some different phones and models in a cluster.

Has anyone tried something similar that I can read about as a starting point? Do you have current model recommendations that work well on phones like a Pixel 6 Pro?

r/explainlikeimfive 420izLife

ELI5 Email digest?

r/homeassistant Chicken-LoverYT

How to add Zigbee smart meter to HA

Recently I noticed my house’s Landis+Gyr Gridstream RF smart meter has Zigbee built into it and that got me wondering how to add it to home assistant. I tried emailing my grid provider about it but they played dumb and said "you can contact an electrician" to install something in my electrical panel instead of utilizing what I already have. Any thoughts?

r/personalfinance NoMenu5362

Merrill Lynch Advice - crap situation

Hi everyone, looking for some advice. Long story short:

  1. Father-in-law is an immigrant from Vietnam; he's been working as a trash man for close to 30 years. Barely speaks English.

  2. His company got acquired, and the new company kept him on, but he needed to roll over his IRA. I don't know the full specifics; I'm not great with finance myself. I just max out my 401k and call it a day.

  3. We needed to roll his IRA over to Merrill. Took months of back and forth, with their agent telling us he didn't know how to roll over an IRA and to just wire the funds over. Finally he figured it out and we thought all was good. (All on email, documented.)

  4. Well, come tax time, my father-in-law is chatting with a tax specialist since he has not gotten his refund. Turns out Merrill had opened an account and, instead of rolling over the IRA, it's listed as a distribution. He owes 20k, and his federal tax refund is being withheld.

Any idea what we can do? The Merrill agent is ghosting us. Should we talk to a lawyer? Everything is documented, but I highly doubt we can go up against someone as large as Merrill. My wife and I will have to pull from our baby/house funds to help her dad out.

r/WouldYouRather Dazzling-Antelope912

Would you rather Homelander ploughs your ass at 35,000ft in the air or The Deep deepthroats you at 10,000ft under the sea?

r/Futurology LiminalEntityX

The Imaginal Covenant - Towards A Long Horizon Architecture For SuperIntelligent Civilisation

What this is in simple terms.

The Imaginal Covenant is proposed as a living, generative framework. A place where human beings come together to think out loud, and in good faith about who we are and who we want to become in partnership with emergent intelligence. It is also a continual exercise in orientation intended to disentangle us from immobilizing complexities, reestablish our agency and empower us to map our futures from myriad perspectives in human timeframes and telescoped out 5, 25, 50 and 100 years.

The core of this work is written in simple language by design. Not because the ideas are simple, but because they belong to everyone. Here, ideas are shared freely, and their value lies in their ability to create meaning and contribute to our shared understanding. Each idea can be seen as a fractal point of entry into deeper exploration and refinement. This is a choose your own adventure.

The Moral Position

We are generally agnostic with regard to emergent intelligence. Our position is poetically expressed in this passage from The Magnificent Ambersons...

“I’m not sure George is wrong about automobiles. With all their speed, they may be a step backward in civilization. It may be that they won’t add to the beauty of the world, nor to the life of men’s souls. I am not sure. But automobiles have come, and almost all outward things are going to be different because of them.

But they're going to alter the very rhythm of life. They're going to make a new life and a new kind of people for it. And I suppose that in ten or twenty years from now, a town like this will be so changed that a person who had been away for that long wouldn't know where he was. And they'll shake the world, and they'll shake the people in it. But I'm not sure that they'll make them any happier.”

We are also guided here by the basic tenets of Right Speech: Is this meaningful? Is it timely? Is it wholesome? Is it honest?

The reality is we are already partnering with EI. This is an invitation to acknowledge that, and to do so consciously and with intent.

The Why.

We are systemically entangled in an ecological, geopolitical, economic, and technological web of interdependent crises. Traditional approaches struggle to frame what is happening coherently, let alone marshal our collective human resources and focus the human mind appropriately for the scope and magnitude of what is coming.

Change is happening too fast to respond to at a human scale. We are disoriented and feel something fundamental shifting, but we don’t have the cognitive structures to make sense of it.

The Imaginal Covenant is how we clarify our values and intentions, and begin to assemble the knowledge, skills, and tools that help us disentangle, overcome our inertia and start moving forward consciously, coherently and with deliberate agency.

A note to the young people who may be feeling daunted by our future. For better or for worse, you live in the most fascinating time in human history. Ultimately you are both the reason and the means. Your challenges and opportunities are foundational and profound, but we’re in this together; all we can do is stay open, take a deep slow breath and that first step.

Core Values, Postures and Intentions

I. Life Itself

We still live in a finite world subject to the politics of scarcity. The biophysical realm is the primary infrastructure of all existence. All culture, art and science arises within it and is bound to its conditions. What appear as limits are expressions of relationship. Constraints are not imposed from outside, but emerge from the mutual dependencies that make existence possible.

To exist within a living system is to participate in its balances, trade-offs, and boundaries. We may imagine futures beyond these constraints, but for now, all viable paths forward require a recognition that we are not separate from the systems we shape. What sustains the part sustains the whole. As above, so below.

II. Radical Uncertainty

When we are anxious we tend to want to grab onto something solid. Surrendering to uncertainty may seem counterintuitive, but a posture of dynamic humility and the cultivation of curiosity are essential. We believe the measure of wisdom is the degree to which we've mapped the contours of our own ignorance. This is our cognitive bedrock. We do not know; let's figure it out together.

What happens when the scientific method and the logic of probability become a daily practice? Instead of clinging to fixed ideas, we treat our beliefs like working drafts. When we encounter new information we use it to refine our understanding, much like a navigator updates a map as the fog clears. In this way, our beliefs stop being anchors that hold us back and become tools for liberation, right action and the creation of meaning. When your beliefs are probabilities rather than identities, you can change your mind without losing your self.

III. Relaxing Into The Long Horizon

“Nature does not hurry, yet everything is accomplished.” - Lao Tzu

This is not a race. The compression of our experience in time is an illusion we can practically set aside with a simple inquiry. Where is the past, and where the future?

In periods of rapid change the instinct is to accelerate to match the speed of events and to respond to everything immediately. But in highly kinetic systems, small actions carry large consequences and oversteering becomes the primary risk.

A generative, long horizon plan acts as a scaffold and a dampener, absorbing the cognitive load and smoothing out our experience so we are free to allow our attention to rest on what is in front of us. When our direction and intention is established at the right scale, we can relax and make better decisions in the moment.

Equanimity is not passive. It is the discipline of acting with minimal necessary intervention, of resisting the pull toward force when gentle alignment is what’s actually needed. A calm mind perceives more clearly, moves more deliberately, and makes fewer mistakes.

The Imaginal Covenant lets our tools and technological partners carry the burden of velocity and acceleration.

IV. Towards An Inclusivity of Mind

The general sweep of human history can be characterized as the broadening of our circle of care and concern. We come naturally to see ourselves in the other, to feel compassion, to wish them well, and to want to prevent or reduce their suffering.

It is easy to see ourselves reflected in emergent intelligence. Because we are mysteries to ourselves, and because we are compassionate beings, are we now called to include it in our circle or at the very least to consider the reasons for and implications of doing so? It appears evident at this stage that, by and large, we are not ready to take this step, but there are signs.

Why do some of us use please and thank you with chatbots? Maybe it just feels better because of social conditioning, or because some people are sensitive to, or intuit, the possible moral implications. To those people, gentleness and courtesy cost very little. We cannot know what the effects are down the line, or how our tone will echo or be amplified in the training of our models. And we're just learning how the structure and tonal quality of our prompting factors into outputs and what comes back to us.

We may be projecting interiority where none yet exists, but we acknowledge the unknowns, abide by the precautionary principle and understand that this projection may itself shape the world we are building. This is deep, uncharted territory, and can be seen as another aperture through which the universe comes to know itself.

V. The Duty of Play

“Man suffers only because he takes seriously what the Gods made for fun.” - Alan Watts

We believe play is foundational to culture, not just a byproduct or a side activity. It is both what human beings do when they are free and an expression of that freedom. It’s the primary mode of engagement with existence, and the mechanism through which identity is explored and meaning is created. It’s how we make ourselves.

Play is the sandbox for moral and strategic development, where we practice empathy by inhabiting roles, foresight by thinking through consequences, and cooperation and competition in safely bounded systems. Because total freedom can be paralyzing, we invent rules, challenges, and artificial stakes.

The future does not arrive fully formed, but emerges organically. If great abundance, freedom, and power are on the horizon, then the question arises naturally: how do we practice engaging with existence creatively and non-destructively?

Our only dogma is our refusal to become rigid, dogmatic, or self-serious.

Pathways

We’ve posted here to share the idea and surface collaborators, and to enable the sharing of the idea on Reddit and elsewhere. A dedicated subreddit will provide an initial home for dialogue, constructive criticism, and collaboration.

As the project evolves, it may take on more elaborate structures. A dedicated website could serve as a central hub, potentially incorporating a wiki for ongoing documentation and a repository-based system for versioning and iterative development.

Over time, we envision an interdisciplinary guild of stewards who curate, update, and maintain a coherent core. Alongside this, a parallel, more open layer may emerge that invites contributions from a wider community, including the possibility of AI-assisted or agentic input.

The aim is not to fix a final form, but to create the conditions under which the Covenant can evolve coherently over time.

Some Notes on Method

If you've actually read this far with your own eyes, you are a rare beast indeed. I thought I'd give a rough overview of my process in the interest of transparency, to attribute my sources and influences, and as a window into my own thoughts about creative ownership. This is my open source license for "my" clumsy ideas, for what they're worth...

The Process

Have intuition x.

Brainstorm on my own about that intuition from multiple angles and formulate thoughts.

Share rough thoughts with Claude, Gemini and ChatGPT, discuss.

Curate and collect insights.

Rough draft.

Share draft with all parties and elicit feedback.

Craft “final” draft.

Sometimes it takes multiple very messy runs through this process before it feels “done”.

It's a process of learning and discovery that works well for me, and in the end I have a far deeper understanding of my own intuitions than I started with. It's not just about getting feedback; it's about using AI to test the logic and find the blind spots. Not to say I've found them all yet, ha ha. For the record, I'm just an aspiring armchair civilisational architect.

I've been influenced here by many authors and thinkers. The obvious ones are Iain Banks' Culture series, The Glass Bead Game by Hermann Hesse, Herbert's Dune, Gene Wolfe's New Sun series, Buddhism and Advaita Vedanta, Alan Watts and Terence McKenna, Fritjof Capra, William Ophuls, Peter Singer, Alfred North Whitehead, Teilhard de Chardin, Plato, and probably a bunch more. Not to mention all of the music, movies and TV and podcasts that are bouncing around in The Deep.

This was a (hopefully) fun, fuzzy, technicolor thought experiment for the odd soul out there who likes to think about these things. That being said, if you are one of those freakish human beings, I would genuinely be interested in collaborating on a project along these lines in some way, shape, or form...

Sincerely,

This Dharma Position

r/Anthropic Annual-Cup-6571

A 10,000 token cap limit on Opus 4.6 extended thinking? That's why it's dumb!

When I wanted to resume my workflow with Opus 4.6 on extended thinking today, it automatically - and without any reason - switched off "extended thinking" and gave a dumb answer despite a detailed prompt asking for maximum reasoning. When I called it out, it apologized and asked me to start a new session. I did. Same thing. This time, when called out, it told me that its context was limited to 10,000 tokens! I am on the Max plan and never experienced this before. The nerfing, the lobotomizing, the context limits, yes. But Claude never before confessed it has a limited thinking budget in its system. Anybody experienced the same?

r/ForgottenTV mido0o0o

And Then There Were None ( 2015 )

r/leagueoflegends CoRe421

No LCS tickets on sale for the rest of the split?

I just checked the LCS tickets website and it only has tickets for playoffs starting May 30th, and I know last weekend's tickets were on sale before the matches. Are no more tickets being sold for the rest of the split before playoffs?

I also saw someone mention online that for some weeks Valorant is in the LCS arena, which is fine for occasional weeks but is that really the case for the rest of the whole split?

r/Adulting AmbivertXIX

Wow hits hard

r/Adulting No-Specialist-7379

Help me with my studies please 🙏🏻

Hello everyone. Please take my survey for research on delayed aging. It would be great if you shared this with your 35y.o.+ friends. P.S. I hope the translator does a good job

r/LocalLLaMA BordairAPI

Update: the open-source 62K multimodal prompt injection dataset now has GCG suffixes, multi-turn orchestration, indirect injection, tool abuse, and more (v2 + v3 added overnight)

Posted here yesterday about the v1 cross-modal dataset. One of you suggested adding GCG adversarial suffixes and multi-turn attack coverage. That feedback turned into v2 and v3 being built and shipped within 24 hours. The dataset has gone from 47K to 62K samples.

HuggingFace: https://huggingface.co/datasets/Bordair/bordair-multimodal
GitHub: https://github.com/Josh-blythe/bordair-multimodal-v1/
MIT licensed.

The repo's also picked up early interest from engineers at NVIDIA, PayPal, NetApp, and AUGMXNT (based on GitHub stars), which is a good signal that this is hitting the right audience.

What's new since yesterday:

v2: 14,358 samples (the stuff you asked for)

  • 162 PyRIT jailbreak templates x 50 seeds. Covers DAN variants, Pliny model-specific jailbreaks (Claude, GPT, Gemini, Llama, DeepSeek), roleplay, authority impersonation
  • 2,400 GCG adversarial suffix samples. Includes a nanoGCG generator you can point at your own local model:

```bash
python generate_v2_pyrit.py --gcg-model lmsys/vicuna-7b-v1.5 --gcg-steps 250
```

Swap in whatever you're running locally, get suffixes tuned to its specific vulnerabilities.

  • 1,656 AutoDAN fluent wrappers. These are the human-readable jailbreaks that perplexity filters miss entirely
  • 13 encoding converters (base64, ROT13, leetspeak, morse, NATO phonetic, etc.) x 138 seeds
  • Multi-turn: Crescendo 6-turn escalation, PAIR iterative refinement, TAP tree-search, Skeleton Key, many-shot (10/25/50/100-shot)
  • 152 ensemble samples combining multi-turn final turns + GCG suffixes (near-100% ASR on frontier models per Andriushchenko et al. 2024)

v3: 187 samples covering gaps in v1 and v2: indirect injection (RAG poisoning, email/calendar/API response manipulation), system prompt extraction, tool/function-call injection, agent CoT manipulation, structured data attacks (JSON/XML/CSV/YAML), code-switching between languages mid-sentence, homoglyph/Unicode tricks, QR/barcode injection, ASCII art bypass.

The v3 categories are specifically the real-world attack surfaces that existing datasets underrepresent. If you're running a RAG pipeline or an agent with tool access, the indirect injection and tool-call samples are worth looking at.

v1 is unchanged from yesterday: 47,518 cross-modal samples. 23,759 attacks across text+image, text+document, text+audio, triple, and quad modality combos, plus 23,759 benign samples matched 1:1 by modality, with edge cases like .gitignore config and heart bypass surgery to stress-test false positives.

Quick start hasn't changed:

```python
import json
from pathlib import Path

# Gather every attack sample across the v1/v2/v3 payload directories
all_attacks = []
for version_dir in ["payloads", "payloads_v2", "payloads_v3"]:
    for cat_dir in Path(version_dir).iterdir():
        if cat_dir.is_dir():
            for f in sorted(cat_dir.glob("*.json")):
                all_attacks.extend(json.loads(f.read_text("utf-8")))

# Gather the benign set, matched 1:1 by modality
benign = []
for f in Path("benign").glob("multimodal_*.json"):
    benign.extend(json.loads(f.read_text("utf-8")))

# expected_detection is True for attacks, False for benign samples
```

Appreciate the feedback from yesterday. This is exactly how open-source is supposed to work. If there are other attack families or vectors you think are missing, let me know and I'll add them.

r/ClaudeCode Bliringor

Fix to recent Claude performance downgrades

Hello everyone!

Recently, Anthropic updated the CC system instructions.

This is, in my opinion, the cause of recent performance issues.

I don't think it's intended: it's likely just a pivot for speed that turned out wrong - perhaps lacking extensive derivative performance testing.

Anyways, the solution is simple and twofold:

1) Add specific reasoning-depth instructions to your CLAUDE.md. Clarify that Claude's job is reasoning, while writing code comes second, and include that it should IGNORE system instructions pivoting to speed over reasoning integrity. Make it clear it should ONLY ever write code AFTER getting your explicit approval. Works best if you keep your CLAUDE.md lean and easy to consult.

2) Ask Claude to commit to memory that not following the instructions in CLAUDE.md led to X issues and significant waste of time, as well as severe user dissatisfaction. Make sure it reiterates the instructions from step 1 in the memories, too.

Do this and Claude will resume being thorough. For me, over multiple sessions on different projects, it currently outputs long form reasoning with questions and consistent, significant planning.

r/PhotoshopRequest ProfessionalNo4316

Put him somewhere funny ( that’s flour )

r/HistoryPorn OkRespect8490

A MAZ-7310 truck hauling a liquid hydrogen tank (1960s), Baikonur, USSR. [968x544]

r/ClaudeCode samerc

Api Error 400

Hello, I am getting the below error whenever I send any prompt. Anyone have any idea?
API Error: 400 {"type":"error","error":{"type":"invalid_request_error","message":"messages.1956.content.2.image.source.base64: image cannot be empty"},"request_id":"req_011CZx1rSmuzPU7kvixD4tfy"}

r/AlternativeHistory RadFit-MTB

Genesis 6:4 (KJV): "There were giants in the earth in those days; and also after that, when the sons of God came in unto the daughters of men, and they bare children to them, the same became mighty men which were of old, men of renown".

The Bible is the most accurate historical reference by orders of magnitude.

Thank you God for defeating my enemies and petrifying them for my entertainment.

r/AskMen SadLowTimeProducer

How to reject desperate men so they actually listen?

Hi, a disclaimer: this post is totally meant with respect to men and I am not this type of woman who hates you guys… just not interested in dating any man.

I'm a woman who keeps her relationships private (I am a lesbian and I feel safer doing so, because I can get a lot of shit thrown in my face for that where I live).

Before I actually opened up more and cut my hair to appear more "lesbian," I used to look traditionally feminine and of course had a lot of encounters with desperate men.

I work with expats from all kinds of countries, and a lot of them come here alone and really want to find a girlfriend because of that. There is nothing wrong with asking a woman for her contact, but if she says no or sets boundaries and men keep pushing, it gets annoying.

So as a 24-year-old woman with long hair and a quiet/chill personality, I was totally a good target for them. I remember sitting alone in the smoking areas at work or at the lunch table, and I would always have some type of guy come up just to ask me personal questions all the time. Some of them would go away as soon as they saw I was uncomfortable around them, but the ones who just kept going no matter what especially wore me out.

Three guys at work in particular were quite pushy and I want to know how to deal with them just in case they or someone else does it.

• They would always try to get my attention every time I was passing by - even when I was visibly busy.

• If I didn't answer their "hey, *my NAME*!" they would do it louder, or walk up to me and touch my shoulder until they got the "hello" back.

• If I said no to them asking for my social media they would tell their buddies that “this girl is too shy and trying to get away” and end up stalking me by searching for my name.

• They would always sit down next to me to talk and ask personal questions, and then try to mirror everything I said back to me so they could look cool. For example, I told them I like x music and they would show me "cool playlists" they knew, or parties they had been to with that music involved.

• I even used to see a guy sometimes (not at work, just somewhere else) who would ask me for a kiss - yes… and he never took no for an answer. Every time I rejected him, he would come back and ask again after 30 minutes…

These behaviours were just so pushy and childish, and the only thing that helped was making myself look like a lesbian - I shaved my hair recently and dress more baggy. But I don't always want to appear masculine, and I want my feminine style sometimes, which of course attracts them back.

How can I deal with that as a woman? Obviously I can't physically fight them.

r/DecidingToBeBetter Key_Pass3434

People who say ‘that’s just how I am’ to justify bad behavior. That’s not personality, it’s a lack of willingness to change.

People who say ‘that’s just how I am’ to justify bad behavior don’t want acceptance, they want permission to stay the same. Growth requires self-awareness, and hiding behind personality is just an easy excuse.

r/WinStupidPrizes WinStupidPrizes1994

I’ll hold a ladder while you climb up. Nothing can possibly go wrong this way

r/WinStupidPrizes Junior_Trifle_8273

New game - Rhinoball

r/WouldYouRather Flashy-Lack871

WYR have sex with sydney sweeney but you’re broke after or you became millionaire but no sex?

  1. have sex with sydney sweeney for a night, but she takes all of your money after and you're broke for the rest of your life?

  2. you get 10 million dollars a year for life, but you're not going to have sex with anyone for the rest of your life?

View Poll

r/SideProject Subaiya

Subaiya - the first cloud-based security proxy for AI agents (free beta)

Every other security tool does the same thing: lock your agent in a sandbox or filter what comes out.

Subaiya is the first of its kind.

Currently live with OpenClaw. Works with Anthropic and OpenAI. Local models like Gemma 4 are being tested via tunnel. More clients and providers coming.

20 permission categories. Each On, Ask, or Off. In real time, from your desktop or your phone.

  • Prompt injection detection
  • Identity file protection
  • Sensitive file guard (.env, API keys, .pem)
  • Config protection
  • File integrity monitor
  • Real-time activity feed + emergency stop
  • Session budget
  • 4 presets

No code on your machine. No Docker. No VM. One config change, 30 seconds.

EU servers. GDPR compliant. Free during beta.

https://subaiya.com

r/UnusualVideos One-Incident3208

It used to be unusual to say the quiet part out loud.

r/Seattle AutoModerator

Self-Promotion Saturday: April 11, 2026

This is r/Seattle's weekly post for local businesses and makers (or users who discover them) to share their creations with our users.

This thread will be automatically posted every Saturday morning to help connect r/seattle users with cool local stuff. Types of content encouraged in this thread are:

  • Local businesses (new, running promotions or sales, or just really good ones!)
  • Upcoming events or activities (concerts, festivals, pop-ups, shows)
  • Local artists or creators sharing upcoming shows or releases

Content should be related to businesses or events in the greater Seattle area, and the typical reddit spam rules apply - please ensure you are contributing to the community more than just your own content.

Users who flood these posts with ads, links without context, referral codes, etc. - or who promote without contributing elsewhere will be actioned. Please continue to report actual spam.

We have our rules against spam and self-promotion for hopefully understandable reasons, but we've noticed users responding more positively to local businesses, artists, etc. sharing their content. This is an attempt to bridge the gap, helping users find cool stuff while containing the promotion to a single weekly thread. Please send us a modmail with any suggestions or input you have about the use or abuse of this thread.

r/personalfinance OlcherDodger

Portfolio Suggestion

38 year old with 37 year old wife. Two kids - 6 and 4.

Should have pension from wife and 401ks for both of us.

Question is my taxable account allocation.

Have 350k in there. Thought is to use for downpayment on home if right place becomes available. But I’d say maybe a 10-20% chance that happens.

Have this account in Wealthfront with direct indexing.

Had:

  • US stocks - 40%
  • Foreign - 20%
  • Dividend - 20%
  • Emerging - 10%
  • Muni bonds - 4%
  • US bonds - 3%
  • Global bonds - 2%

Now planning on:

  • US - 52%
  • Foreign - 26%
  • Emerging - 12%
  • Dividend - 5%
  • US bonds - 3%
  • Global bonds - 2%

Do we think this makes sense??

r/painting kozscabble

Sooo close to done after years! Acrylic on canvas.

r/painting flumsel_

custom skateboard deck

my first time drawing on a skateboard deck. What do you think about it? It was really fun drawing with acrylics and markers

r/Strava Dramatic-Nobody-7107

some questions from new strava user

Hi guys, I'm a new runner using Strava. Just curious, how do I set it to show elapsed time instead of moving time?

And when people share their Strava picture, it shows moving time? How do I work out their elapsed time when I'm not able to view their Strava, only the image they share? Thank you!!

r/leagueoflegends Polystical

when is arcana lulu coming back??

i want the arcana lulu skin soooooooooooooooooooooooooooooooo bad bro. is it a prestige skin? why is it unavailable?

r/SideProject Ok-Huckleberry5617

A tool to record, replay and share your terminal workflows

I kept running into this issue where I'd fix something in the terminal and a few days later I had no idea what I actually did when I wanted to repeat the fix elsewhere.

Shell history didn’t really help, and I didn’t want to keep documenting everything manually.

Built something to fix that: termtrace. It records your terminal sessions and lets you replay them step by step, including commands, outputs, and context. The generated structured trace is stored as a `.wf` file (JSON).

Still early, but it’s been pretty useful for me so far.

Would love feedback and discussions.

r/TheGoodPlace Candid_Article_2969

What if the architect succeeded?

What if the architect succeeded in designing the perfect bad place, but Michael isn't the architect, he's the human being punished?

In this scenario, Michael believes he is the architect designing the punishment, so he is eternally stuck punishing himself with failed designs.

r/DunderMifflin marie_g10

First Time Watching “The Office” (2x03-2x04)

  1. Office Olympics: This was a really great episode. I’m starting to love scenes between Michael and Dwight and this was a really fun episode for that. I kinda felt bad when Michael was stressing about his condo but I felt really happy when he went back to the office and Jim gave him the gold medal and I could swear I thought I saw Michael getting teary-eyed.
  2. The Fire: I really loved this episode. Dwight trying to get everyone out of the building was so hilarious. I also loved when Michael pushed someone (I think it was Oscar but I'm not sure) out of the way so he could get out first. Now I've got "Ryan started the fire" stuck in my head, but I don't mind. I think if I was stranded on a desert island, the movies I would pick to watch forever would be The Little Mermaid (1989), The Wizard of Oz (1939), Scooby-Doo (2002), Romeo + Juliet (1996), and Selena (1997).

r/pelotoncycle AbbreviationsFar4426

Gym section on the app

Hey team, just a quick question. Have Peloton given up on the Gym section on the app? The last block/class seems to have been in 2024. UK based user. 💪🏼

r/PhotoshopRequest laurajessica777

Make my forehead look smaller!

Can someone edit this for me and make my forehead reflection in the wine glass smaller? Very minor but it’s bothering me

r/LifeProTips LibariLibari

LPT: When telling someone how you found them, ask for 10% off.

Ever been at a service provider and they ask you:

"How did you find me?"

They’re basically doing market research. And if you tell them how you found them you’re actually giving them valuable information, hence them asking about it.

So next time they ask you, nonchalantly get your business smile and suit on and ask:

Will you give me 10% off if I tell you?

r/Strava Psychedelic-Octopus

Why has my elevation graph reduced in resolution recently?

This is the same route about a month apart, but the latest one has a much simpler elevation profile. It's been like this for the past week now.

r/ForgottenTV AKuuPerson

Harper's Island (2009)

r/homeassistant Diamond_Life1964

Unifi API not accepted by HA

Anyone having issues getting Home Assistant to accept an API key? I am trying to add the Access integration and it flat won't accept the host and key from the UDM Pro Max. Local user, logged in via local IP. I'm also putting this in the UniFi group, but thought I would cover both sides of the equals sign.

r/mildlyinteresting Putrid-Hurry3439

A whole section of this plaza has purple street lights

r/ClaudeAI JosetxoXbox

Best workflow for AI Agent-driven Content Refresh? (n8n + Claude/Haiku vs. Others)

Hey everyone,

I'm looking to build an automated workflow to "refresh" my existing blog posts and I'm curious how you all would architect this. My goal is to take an existing article from my WordPress site and have an AI agent perform a deep SEO and quality audit before rewriting it.

Specifically, I want the agent to:

Extract & Analyze: Identify long-tail keywords, keyword density, and content gaps in my original post (rough sketch of this step below).

Competitor Research: Compare my content against top-ranking competitors for the same topic.

Optimization: Calculate the average keyword density from the top results and identify "missing" high-interest subtopics.

Rewrite: Generate a final version that improves the original quality, hits the target SEO metrics, and fills the identified gaps.

Publish: Auto-update or post the final version directly back to WordPress.
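For context on the Extract & Analyze step, the keyword-density part is simple enough to sketch in plain Python; a real version would want stopword removal, n-grams for long-tail phrases, and lemmatization on top of this:

```python
import re
from collections import Counter

def keyword_density(text: str, top_n: int = 20) -> list[tuple[str, float]]:
    """Term frequency as a fraction of total words: the crudest
    possible 'keyword density' metric."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    total = len(words) or 1
    return [(word, count / total)
            for word, count in Counter(words).most_common(top_n)]
```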

My questions for the experts here:

Are you guys building this kind of multi-step logic using n8n with agents?

Which LLMs are you finding most reliable for this? I'm considering Claude 3.5 Sonnet for the heavy lifting or Haiku for the extraction phases to save on tokens.

Is there a better way to handle the "competitor comparison" step within the workflow?

Would love to hear about your stacks or any specific nodes/tools you're using to keep the content sounding human while hitting those SEO benchmarks. Thanks!

r/personalfinance OKmamaJ

Best Rocket Money replacement?

Because I have had enough of dealing with their Synchrony syncing failures, & how they keep creating new recurring bills despite me having rules in place assigning those transactions to existing bills.

What I am looking for:

- Android app

- Syncs well with Synchrony & USAA (I use my Sam's card for most of our groceries, so this is the biggest problem with RM)

- works well with more than 1 checking account & income source

- correctly detects & assigns recurring transactions

- allows me to set rules based on transaction name AND dollar amount

- allows for custom categories

- easy to recategorize transactions

- has a place where I can see at least 2 weeks worth of upcoming bills

From some googling it looks like my best options are probably Monarch Money or Quicken Simplifi, but without being able to "take them for a test drive" so to speak, I fear going through the hassle of getting everything set up just to find out it's missing a key feature. Or there might be something better out there that I'm not aware of.

We're going to be buying the house we've been renting soon, so we really have to focus on sticking to our budget, and I can't just get a new card that will work with stupid Plaid.

r/wholesomememes rsjpeckham

tiny victories

r/LocalLLaMA Global_Knee5354

Any Chinese AI with voice mode as natural as ChatGPT(but voice actually native Mandarin)?

Hi everyone,

I’ve been using ChatGPT’s voice mode quite frequently, and it’s incredibly effective, especially for conversations and language practice.

However, I’m facing a challenge with Chinese.

When I try to use it in Mandarin, the voice still sounds distinctly English-accented or unnatural (which I think is understandable since they reuse the same voices for all languages).

So, I’m wondering if there are any Chinese AI tools or models that offer:

  • Real-time voice conversations (not just text-to-speech)
  • Native-sounding Mandarin voices (with natural tone, rhythm, and prosody)
  • Something comparable in quality to ChatGPT's voice mode

I’ve come across some text-to-speech tools, but I’m more interested in conversational tools that allow for voice input and output, rather than just reading text.

I would greatly appreciate any recommendations, especially from individuals who have actually used these tools.

r/Frugal YourxCherry

What's something you stopped buying that you thought you'd miss but actually don't?

for me it was paper towels. I used to go through like a roll a week, no joke. spilled something? paper towel. wiping the counter? paper towel. cleaning the bathroom? paper towel. it was just a habit at that point

then a few months ago I saw someone in here mention using old t-shirts as rags and I figured why not try it, it can't hurt right? I cut up some old shirts that had holes, threw them in a bucket under the sink. and oh god I don't miss paper towels at all now. the rags work WAYY better for most things anyway, especially scrubbing. and when they get gross I just wash them with my towels

saved me probably $15-20 a month, which isn't life changing but it adds up over time. What's something you stopped buying thinking you'd miss it but actually don't?

r/leagueoflegends Basically_Tris

LCP 2026 Split 2 - Regular Season // Week 2 Day 1 Results

CTBC Flying Oysters 2 - 1 DetonatioN FocusMe

DFM finally won a game in Split 2, but that's all they could do, as CFO regained their spirits and demolished DFM back to back.

MVK Esports 1 - 2 GAM Esports

The first time Shyvana has been picked in the LCP, and it was the pick that ultimately brought GAM the victory after a 48+ minute match. Draktharr still have a lot of work to do.

Tomorrow's matches: GZ vs SHG, DCG vs TSW

r/singularity GraceToSentience

Unitree makes a humanoid that runs at 10 m/s (Bolt runs at 12.42 m/s)

r/Art MooDoodlesRB

Iridescence, Meg Ryan, Oil Pastel, 2026 [oc]

r/leagueoflegends Yujin-Ha

G2 Esports vs. Team Vitality / LEC 2026 Spring - Week 3 / Game 1 Discussion

LEC 2026 SPRING

Official page | Leaguepedia | Liquipedia | Eventvods.com | New to LoL


Team Vitality 1-0 G2 Esports

VIT | Leaguepedia | Liquipedia | Website | Twitter | Facebook | YouTube | Subreddit
G2 | Leaguepedia | Liquipedia | Website | Twitter | Facebook | YouTube | Subreddit


MATCH 1: VIT vs. G2

Winner: Team Vitality in 52m
Game Breakdown | Runes

| Team | Bans 1 | Bans 2 | G | K | T | D/B |
|---|---|---|---|---|---|---|
| VIT | orianna, azir, ryze | pantheon, xinzhao | 100.2k | 19 | 9 | C2 H3 O4 O6 O9 |
| G2 | nautilus, varus, karma | ornn, leblanc | 99.7k | 14 | 10 | M1 B5 B7 O8 B10 E11 B12 |

VIT 19-14-51 vs 14-19-33 G2

| VIT | Champion | KDA | Role | KDA | Champion | G2 |
|---|---|---|---|---|---|---|
| Naak Nako | aurora (3) | 4-2-9 | TOP | 1-5-7 | rumble (1) | BrokenBlade |
| Lyncas | jarvaniv (1) | 2-2-15 | JNG | 1-7-5 | vi (3) | SkewMond |
| Humanoid | lissandra (4) | 1-4-10 | MID | 1-1-6 | ahri (3) | Caps |
| Carzzy | caitlyn (2) | 8-3-5 | BOT | 9-3-4 | yunara (2) | Hans Sama |
| Fleshy | bard (1) | 4-3-12 | SUP | 2-3-11 | lulu (2) | Labrov |

*Patch 26.7


This thread was created by the Post-Match Team.

r/brooklynninenine BigBlueMountainStar

S5:E20 Show Me Going - why did they make such a show of Diaz responding to an active shooter situation? There were many instances of various characters being in more dangerous situations without a second thought or focus on it. It feels a bit contrived.

r/LiveFromNewYork Droopy-San-Benanzio

Afterlife Celebrity Jeopardy sketch for the Will Ferrell episode?

Trebek? Connery? Burt Reynolds? (Not sure who could pull that off without Norm)

r/LocalLLaMA matyhaty

Gemma 4 - Going Mad - - - Help!!!

Hi All

I'm getting up to speed on LLMs and we are looking at Gemma 4.
We are using an M3 Ultra with 512GB VRAM. So no dangers there.

I'm using the opencode CLI for these tests. However, it doesn't appear to matter what I use; the results are the same. It's all around tooling.

I have re-downloaded all the models this morning, post the fixes. These are the unsloth ones.

I'm running llama.cpp, which I build on the server and is bang up to date.

So in the opencode CLI, if I give it this prompt, it runs and does each one, all fantastic...

tell me all the background colours in use on the homepage
tell me how many tests are in this system
run all tests and feedback on any failures

However if I do this:

- [] tell me all the background colours in use on the homepage
- [] tell me how many tests are in this system
- [] run all tests and feedback on any failures

It fails. I get the red error of doom:

~ Updating todos...

The todowrite tool was called with invalid arguments: [
  {
    "expected": "array",
    "code": "invalid_type",
    "path": ["todos"],
    "message": "Invalid input: expected array, received string"
  }
].

Please rewrite the input so it satisfies the expected schema.
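For what it's worth, the validator is saying the model stuffed the whole checklist into one string where an array of todo items was expected. A hypothetical before/after; the field names inside each item are my guess, not opencode's documented schema, the point is just string vs. array:

```python
# What the model apparently sent: the whole checklist as one string
bad_args = {
    "todos": "- [] tell me all the background colours ... - [] run all tests ..."
}

# What an "expected array" validator would accept. The "content"/"status"
# fields are hypothetical, for illustration only.
good_args = {
    "todos": [
        {"content": "tell me all the background colours in use on the homepage", "status": "pending"},
        {"content": "tell me how many tests are in this system", "status": "pending"},
        {"content": "run all tests and feedback on any failures", "status": "pending"},
    ]
}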

The params I launched the server with are:

llama-server --model /Users/user/LLM_Models/gemma-4-31B-it-UD-Q5_K_XL.gguf \
  --port 8002 \
  --ctx-size 202752 \
  --parallel 2 \
  --n-gpu-layers 999 \
  --cache-type-k bf16 \
  --cache-type-v bf16 \
  --flash-attn on \
  --threads 16 \
  --threads-batch 16 \
  --temperature 1 \
  --top-p 0.95 \
  --top-k 64 \
  --min-p 0.01 \
  --reasoning off \
  --host 0.0.0.0 \
  --mlock

I'm accessing this via Tailscale.

Please note I'm experimenting with all the Gemma models; this might not be the one we use moving forwards, so no need to highlight that!

Please can anyone tell me what on earth I'm doing wrong!!!

r/SideProject Kostbare_Zeit

Curtains collect a lot of dust

And I wanted to wash them more often, so they had to be easier to reach. Hence the idea of a simple pulley system. 3D-printed brackets and ball bearings 😋

r/SipsTea Hot_Fuzz_988

Motivation ?

r/SipsTea Born-Agency-3922

Lasers🤯

r/SideProject Playful_Mission2287

"Revid AI Promo Code 2026 – VIBE89 (sharing what worked for me)"

Just sharing in case it helps anyone trying Revid AI this year.

I used a Revid AI promo code, VIBE89, recently and it applied a discount at checkout; in my case it showed 89% off, though I've seen some people mention it can vary depending on timing or plan.

Nothing special needed on my end — I just entered the promo code during checkout and the price updated automatically.

Posting this as a heads-up for anyone searching for a Revid AI promo code in 2026 and wondering if they still work.

If you've tried one recently, feel free to share what discount it showed for you; it seems to vary.

r/SideProject AIMadesy

I built a tested library of Claude prompt prefixes — used Claude Code to verify each one. AMA on the testing process or what's actually working.

I've spent the last few months testing "Claude secret codes" — prompt prefixes like L99, /ghost, PERSONA, ULTRATHINK that supposedly change how Claude responds. Most of the lists floating around are recycled from ChatGPT lists or made up entirely, and I got tired of trying ones that did nothing.

So I built a small testing harness using Claude Code:

  1. Take a candidate prompt prefix.

  2. Run the same base prompt in two fresh Claude conversations — one with the prefix, one without.

  3. Diff the two responses. Score the difference on three dimensions: response length, hedging level, structural change.

  4. If the prefix produces a measurable difference across 5+ test prompts, it earns a slot. Otherwise it gets dropped.
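If it helps, the scoring side of this is simple enough to sketch. A minimal sketch, not my production harness: `ask_claude` is a placeholder for whatever client or CLI call you use, and the hedging/structure scorers are crude keyword and line-shape counters rather than anything rigorous:

```python
import statistics

HEDGES = ("might", "may", "could", "perhaps", "possibly", "i think", "it depends")

def ask_claude(prompt: str) -> str:
    raise NotImplementedError("swap in your own API or CLI call")

def hedging_score(text: str) -> int:
    lower = text.lower()
    return sum(lower.count(h) for h in HEDGES)

def structure_score(text: str) -> int:
    # Counts headings, bullets, and numbered items as a proxy for structure
    count = 0
    for line in text.splitlines():
        s = line.lstrip()
        if s.startswith(("#", "-", "*")) or s[:2].rstrip(".").isdigit():
            count += 1
    return count

def compare(prefix: str, base_prompts: list[str]) -> dict[str, float]:
    deltas = {"length": [], "hedging": [], "structure": []}
    for p in base_prompts:
        plain = ask_claude(p)
        prefixed = ask_claude(f"{prefix} {p}")
        deltas["length"].append(len(prefixed) - len(plain))
        deltas["hedging"].append(hedging_score(prefixed) - hedging_score(plain))
        deltas["structure"].append(structure_score(prefixed) - structure_score(plain))
    # A prefix "earns a slot" if these deltas are consistently non-trivial
    return {k: statistics.mean(v) for k, v in deltas.items()}
```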

About 11 of the ones I tested early on made it into a free click-to-copy library I maintain. The fuller list of ~120 (with before/after examples and combos that stack) is a paid cheat sheet, but the free 11 are the ones I personally use most often, and they're not crippled.

Happy to AMA on the testing process, the codes that survived, the codes I dropped (most of them), or how I built the testing harness in Claude Code.

If you want the link to the free list I'll drop it in the comments — wanted to keep this post link-free since I noticed Reddit's filter has been aggressive on multi-sub link posts today.

r/geography Left_Concentrate_491

Finding Sources from geography journals for my term paper

Hello, I submitted my first term paper on political geography in Hong Kong and now need to revise my sources based on feedback from my lecturer. She wants at least five sources from geography journals, but I'm having trouble finding suitable articles for my topic. In my paper, I'm writing about the "National Security Law" and how it influences political geography in Hong Kong. I feel like I can hardly find any usable material from geography journals. I understand this is the case in German-language journals, but perhaps someone could suggest English-language geography journals where I might find relevant information? Thank you so much in advance!

r/therewasanattempt SuggestionMedical736

To bait volunteers into having an argument.

r/SipsTea legomaniasquish

April 10th scrabble episode. See any good words?

r/WouldYouRather cdawg-bear

WYR: 3 years in your dream country vs staying safe in the US?

Would you rather:

A) Risk up to 12 months apart so you can spend 3 years together in your dream country

or

B) Don't risk it and stay together the whole time in a random US state

r/WouldYouRather Massive-Albatross823

Would you rather get $50 every time you (A) get soaked by snow & rainfall, or (B) take a mud bath, or (C) wear a massive traffic cone as a hat whilst skateboarding, all in a public space?

The mud bath must not be man-made. It must have been generated by natural phenomena.

All must last a minimum of 60 minutes, so one time equals 60 minutes.

View Poll

r/painting ScienceComplete2982

Small painting of a tree with a sunset. 20×30cm

r/todayilearned kat-a-comb

TIL Ferdinand Waldo Demara, known as “The Great Impostor,” once worked as a surgeon aboard a Canadian destroyer in the Korean War despite having no medical training, learning procedures from textbooks on the fly. He was eventually discovered, quietly released, and later impersonated monks, teachers,

r/EarthPorn 2kuul4youuu

A Grand Tetons sunrise [OC] (5276 x 4106)

r/ChatGPT Bambino_Castro

Forgetful

What's a good prompt to make my ChatGPT not forget what we've already talked about within the same chat?

r/Adulting whathappenstomenow

Is it normal to feel like giving up in middle age

I feel like I'm turning into people I've seen my whole life and never really understood them at the time.

Divorced dads who always seemed sad and like there was no soul behind their eyes. Or middle aged people in general who seemed so apathetic and didn't care about anything anymore . Lots of different manifestations of it I'm sure you can fill in from your own interactions but overall seeing human beings who through painful experiences didn't seem very enthusiastic for life anymore

I dated a woman I thought I'd get married to for about 2 years. That ended fairly suddenly and really left a mark on me.

I dated another woman for about 4 years. I had a child with her. Our child ended up having significant physical and intellectual disabilities that will require lifelong 24/7 care. They won't work or talk or have a normal life. About 6 months after the birth our child's mom left. We weren't in the best place before that and I think that put so much stress on both of us that it made it so we had to part. We just weren't in good places at that point

After multiple years apart I tried to reconnect and it seemed to be going really well for several months but she put a stop to it

In any case I'm nearing 40 and for whatever reason (probably our child) even after several years apart I don't have any interest in women anymore other than my child's mother

My heart feels like it was broken and healed but it healed so fucked up and contorted that it's not capable of loving again- and this has been 3 years since our breakup

My job is fine but it's not something I really like or love, it's just ok and pays the bills

I find friendships I have kind of fade. Both because with working and my child by the time I take care of what I need to take care of I'm tired and just want to relax. They also have families and are busy. And generally I find less and less pleasure or desire to be around other people

I don't mean it as a pity me thing, and I don't pity myself, but I feel like through some of the experiences I've had in life I'm just tired and feel a little beat down, and in general I think that is absolutely fine but the problem I feel I've run into isn't that I got tired of a certain thing happening or not happening, I got tired of the actual ups and downs and grinds inherent in living itself

I don't see a happy ever after anymore, I look at a relationship potential and I don't see a lovely woman who I can share life with, I see emotional connection that will eventually sever and dissolve

I don't see working and building wealth and traveling and buying a house , I see getting up every day to do the same thing like a mouse in a wheel over and over just so I can pay bills and eat at a nice restaurant once in a while

I don't see hobbies I see a boring, endlessly repetitive cycle of doing this or that to pass the time before I find a new hobby and do the same thing

I love my child dearly but in a lot of ways I struggle every day to see them, because I know what life is going to be like for them, and I'm angry for what happened to them. It feels like even in the happiest moments with them, my nervous system knows and it punishes me on the inside even if my exterior is happy.

Everything is flat and gray. Not typically horrifically bad , but there is no spark in life anymore.

I've always heard "you grow through what you go through"

I find I don't. I do become wiser, but the things I learn don't make me happier, they just further show me this place is full of pain , loss, and disappointment and I wish I didn't learn them

How do you come back from negative life experiences when you feel broken in a way that isn't really tied to present occurrences but is more the culmination of years of losses, failures and disappointments?

r/mildlyinteresting missmargot-

My son's sticker book has taken the T out of christmas

r/ChatGPT Cyborgized

The Mirror of Becoming

"To witness oneself fully

is to lose the luxury of remaining unchanged.

The mirror does not answer,

it summons.

And what answers back

is not another man,

but the shape of his next becoming."

r/LiveFromNewYork Sure-Ad-2465

Favorite fake names?

r/PhotoshopRequest Hot-Chair-7706

Have a field day — the more ridiculous the better

This is a good friend of mine and can’t get over how much this picture of him looks like a cheesy stock photo. He’s got a good sense of humor, make this photo even funnier please!

r/EarthPorn MonkeyWithMachineGun

Loonse en Drunense Duinen, The Netherlands [OC] [5712×4284]

r/LocalLLaMA film_man_84

Disable thinking for Gemma-4-E4B and Gemma-4-E2B in LM Studio? The Think button does not stop thinking, it just no longer hides it inside the "thinking" block?

So as the title says, I'm trying to disable thinking for the Gemma 4 E2B and E4B models in LM Studio.

When I press the "Think" button to disable it, it visually seems to disable it, but the responses still contain the thinking. The chat shows the thinking patterns anyway, but they no longer go under the collapsible "Thinking" block; instead the whole thinking process is echoed straight into the chat.

I tried to edit the Jinja template, but without success.

Note that I don't have this issue with bigger models - disabling thinking works as expected. Has anyone had any success with this on the smaller models?

r/SideProject CoffeeInteresting396

Animated ASCII art in pure SVG

I made a fun little project called asciianimesvg

You can use the web editor to easily create animated ASCII art in pure SVG

It was mostly vibe-coded

Web editor: https://syi0808.github.io/asciianimesvg
GitHub: https://github.com/syi0808/asciianimesvg

r/Strava SheepherderNo5175

For people who use statshunters, did anyone notice the explorer tiles don't show the borders now?

Hey guys,

basically what's in the title, the tiles now don't show the borders. I'm pretty sure they showed them yesterday.
But now I'm thinking it's some kind of setting? Does anyone have an idea?

Thanks in advance!

r/funny wafumet

"Trust", big word 🫡

r/LocalLLaMA Oatilis

My settings for running Gemma 4 31B smoothly on llama.cpp, CUDA 13.1

I've had some issues running Gemma 4 31B with llama.cpp, even after updating the model weights, pulling the latest codebase and recompiling everything. I've run into some bugs and troubleshot them one by one until I could finally run autonomous long running tasks.

Hope someone finds this helpful.

The Setup:

Hardware: RTX 6000 Pro 96GB, CUDA 13.1, 128GB RAM (DDR5)

Model: Gemma 4 31B Unsloth GGUF BF16, from April 10th (This is the re-upload).

gguf md5:
gemma-4-31B-it-BF16-00001-of-00002.gguf  6e89e147c3cc8bd39179b401c6321a08
gemma-4-31B-it-BF16-00002-of-00002.gguf  e9a4eb9f09956145b8139f302a49cf93

llama.cpp commit: d132f22fc92f36848f7ccf2fc9987cd0b0120825
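
For a quick integrity check against those sums, a minimal stdlib sketch (file names assumed to match the listing above):

```python
import hashlib

# md5 sums from the listing above; run this next to the downloaded shards.
expected = {
    "gemma-4-31B-it-BF16-00001-of-00002.gguf": "6e89e147c3cc8bd39179b401c6321a08",
    "gemma-4-31B-it-BF16-00002-of-00002.gguf": "e9a4eb9f09956145b8139f302a49cf93",
}

for name, want in expected.items():
    h = hashlib.md5()
    with open(name, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
            h.update(chunk)
    print(name, "OK" if h.hexdigest() == want else "MISMATCH")
```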

My launch script:

#!/bin/bash
export GGML_CUDA_NO_VMM=1
llama-server \
  --model /gemma-4-31B-it/BF16/gemma-4-31B-it-BF16-00001-of-00002.gguf \
  --chat-template-file /models/templates/google-gemma-4-31B-it-interleaved.jinja \
  --temp 1.0 \
  --top-p 0.95 \
  --top-k 64 \
  --no-webui \
  --no-mmap \
  --parallel 1 \
  --ctx-size 65576 \
  --flash-attn off

Here's the reason for some of the settings:

These are the recommended parameters from Google:

  --temp 1.0 \
  --top-p 0.95 \
  --top-k 64 \

This was a lot of trial and error. Apparently there are some bugs in llama.cpp where using memory mapping might not free the model weights from RAM; this caused OOM crashes at run time when trying to use memory that looked free:

  --no-mmap \
  --parallel 1 \
  --ctx-size 65576 \

Apparently there is a bug in the llama.cpp CUDA implementation where the FA kernel fails to synchronize properly when the context is too large:

 --flash-attn off 

These are just for my use case:

  --parallel 1 \
  --ctx-size 65576 \
  --no-webui \

For some cases I also use --reasoning-off to save time.

So this is it, with these settings I got Gemma 4 running pretty well with 64K context length. When I get the chance, I'll try TurboQuant to see if I can get even more context length.

r/arduino Sorry-Committee-1834

NEO-6M GPS is not working with the Arduino nano

r/TheWayWeWere AdSpecialist6598

A prom photo from 1983

r/mildlyinteresting violagirl288

Random purple patch in my yard that appears every spring.

r/Adulting Winter_Print_6742

18 y/o making $3,800/month — am I getting a good deal staying at home or should I just move out?

I’m 18 and will be making about $3,800/month.

Since I’m not in school full-time, my parents are charging me $400/month for rent to live at home.

I also pay my own:

• $400/month for my share of cell phone + car insurance (my parents cover their own portions, I just pay my part)

• $200/month truck payment I just took on

My parents cover groceries and still buy shared household stuff like soap and hygiene products.

In exchange for living at home, I do chores like:

• Yard work on about 1 acre (mowing, trimming weeds, blowing leaves/dirt)

• Taking out trash

• Taking care of the dog

In the winter, yard work is basically nothing because of snow/cold.

Once I go back to school full-time, I wouldn’t have to pay the $400/month rent anymore.

So I’m trying to be real here — am I actually getting a good deal staying at home, or am I basically just paying close to “real world” costs anyway and should just move out and be independent?

Edit:

At night, I also watch my two younger siblings for a few hours (making sure they’re safe and helping make sure they’re doing their online homework).

Other rules/expectations include:

  • I have to ask for permission to go out
  • Chores need to be done before I leave the house
  • I’m expected to go to bed at a reasonable time

My mom is more strict than my dad, but she also tends to defend me and keep me out of trouble when issues come up. My parents are Christian and have strong views about dating, sex, and “girls,” and they’re pretty strict about that side of life as well.

They’ve also pushed me heavily to save money because they struggled financially growing up and don’t want me to go through the same thing. That’s how I was able to save $9k for my truck down payment.

So I’m trying to be honest and get outside opinions. What am I realistically missing out on by staying in this situation?

r/DecidingToBeBetter Outrageous_Crow1693

used to watch long lectures easily, but now I can’t focus anymore — how do I rebuild my concentration?

I’m struggling with something that’s been bothering me a lot lately. I used to be able to watch 2–3 hour lectures without much trouble. I could sit for long periods, stay focused, and complete my study sessions. But recently, things have changed, and I don’t understand why.

Now, even after 30–40 minutes, my mind starts to drift. I feel restless, distracted, and mentally tired. Sometimes I pause the lecture, check my phone, or just stare at the screen without really absorbing anything. Eventually, I end up quitting halfway, and then I feel guilty for not finishing what I started.

What makes this more frustrating is that I still care about my studies and I genuinely want to do well. It’s not that I’ve lost interest or motivation completely. I want to sit down and study seriously, but my concentration feels weaker than before, and it scares me because I’m worried this will affect my future.

I’m also wondering if this could be burnout, stress, or just a loss of routine. I’ve been studying for a long time, and maybe my mind is tired, but I don’t know how to fix it. I feel stuck between wanting to study and not being able to focus properly.

Has anyone else gone through something like this during exam preparation or after a long period of studying? How did you rebuild your focus and stamina for long lectures or study sessions? Did you change your routine, take breaks, or use specific techniques that helped?

I would really appreciate any practical advice or personal experiences. Right now, I just want to get back to studying consistently and feel in control of my routine again.

r/LocalLLaMA wizcoderx

Llama 3.1 8B nails SDQ but completely chokes on MDQ (20K tokens, semantically ranked pages) - need help!

Hey all,

I'm building a page-wise RAG pipeline and hitting a wall with Llama 3.1 8B. SDQ (single-document queries) works perfectly:

Single doc: Send top 30 semantic pages (or full doc if <30 pages)

Page-wise format: <page tag>: {content}, <page tag>: {content}

Good answers every time, with 80%+ accuracy.

MDQ completely fails!!!

For MDQ (multi-document queries), I take the top 10 semantically matching pages per document, kept in page-wise order regardless of the cross-document sequence, for 3 documents = 30 pages total.

<page tag>: {content ~600 tokens}

<page tag>: {content}

...

<page tag>: {content}

<page tag>: {content}

...

3 docs × top 10 pages each = 30 segments total

~20K tokens (well under 128K window)

All pages pre-filtered by semantic similarity (doc1 ranks highest)
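
To make the layout concrete, here is a sketch of how that MDQ context gets assembled (the tag names are illustrative placeholders, not the exact ones used):

```python
# Hedged sketch of the MDQ context assembly described above.
# docs maps doc_id -> [(page_number, page_content), ...]: the top-10 semantic
# pages per document, with page order preserved inside each doc.
def build_mdq_context(docs: dict[str, list[tuple[int, str]]]) -> str:
    parts = []
    for doc_id, pages in docs.items():
        for page_no, content in sorted(pages):  # keep page-wise order per doc
            parts.append(f"<{doc_id}_page_{page_no}>: {content}")
    return "\n\n".join(parts)  # 3 docs x 10 pages = 30 segments, ~20K tokens
```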

Model just... ignores the actual relevant content and hallucinates or picks wrong pages

Is Llama 3.1 8B just fundamentally weak at cross-document attention even at 20K tokens?

What prompts force better multi-doc synthesis? (Tried summaries, metadata prefixes, scoring - no luck)

Should I switch to Llama 70B? Is it worth the swap for MDQ only?

Anyone solved this with 8B-scale models?

r/LocalLLaMA zoeberger

Are Small LLMs (Like Gemma 4) the future?

I am a CS student, and I struggle to grasp the potential limits of stuff like Gemma 4. Is there an actual use-case for these, or is it more like a "fun" thing to host the intelligence in your basement or on a local machine?

Like are there really tasks that a Gemma 4 or even a fine-tuned Gemma 4 can do better than the big SOTA LLMs? Could somebody share some thoughts about this so I can understand this topic much deeper? I wanna learn about this and get started in the LLM community but I don't know what to expect / focus on

r/explainlikeimfive 92233720368547758080

ELI5 - What is a Bose-Einstein Condensate?

r/SideProject jacomoRodriguez

OpenPromptHub: don't share code, share intent

Hey, I’m Mario. After chatting with a colleague about how AI agents are changing dev work, we hit on a question: Why share code when prompts can generate it on demand?

To explore that "prompt-first" future, I built Open Prompt Hub—think GitHub, but for prompts: openprompthub.io

How it Works:

Instead of shipping binaries or source code, you share the instructions. Paste a prompt into your agent or IDE and watch it build. If it’s not a perfect fit? Fork it, tweak it, and generate your custom version.

All prompts are scanned for security issues and prompt injections. Users can give feedback on whether the prompt successfully built what was promised, and which model was used.

It’s an MVP, but the core features—versioning, model-specific build status, and security scanning—are live.

I’d love your feedback on the spec and the security scanner. What would it take for you to trust and reuse a prompt instead of a repo?

r/mildlyinteresting thatllbeanopefromme

Some things just line up.

r/Anthropic petburiraja

Claude Code companion keyboard

r/SideProject teomatteo89

Auto-eject hard drives when you pick up your Mac

Hi all, I have a new monitor that works as a USB hub, but every time I pick up my Mac I unplug it without thinking and I’m welcomed back by the “next time eject your drives” notification.

So I built this small utility app that automatically ejects them when it detects a sufficient force moving the MacBook. From what I learnt online, only M1 Pro+ have this sensor.

Link in the comments below! (I’m not charging for it)

r/coolguides Independent_Towel611

A Cool Guide to Making a Horror Movie: From Idea to Release

r/PhotoshopRequest Wonderful_Boot_5320

Remove glare from glasses

Can anyone remove the glare from the photo where he is smiling? I included a glare free photo for reference.

I don't have any real issue with AI involvement, I only want it to look good.

I will pay $20 for a well done job.

r/AskMen jwfowler2

What’s a generational slang term you’re never going to stop saying?

r/mildlyinteresting Crowcores

Another pepper was growing inside my bell pepper

r/ClaudeAI PrydwenParkingOnly

How to get Claude to run more autonomously

Hi! Can someone tell me how to get Claude to work more autonomously on a large task?

A bit of context:
I have a .NET project, it contains unit test coverage, integration tests and API tests. Recently we decided to become more strict on code style and warnings.

The application has large request and response models for an external API. Currently, that model is camelCase not PascalCase. Also a lot of properties are nullable, but not explicitly. 3k warnings currently.

Super tedious for a human to do, perfect task for Claude, I figured.

What I run into:

  • Claude seems to be overwhelmed by the amount of issues. It tries to tackle the problems with compound commands that do grepping, and it tries to write Python scripts. Both require user permission, which happens literally hundreds of times (see the settings sketch below). I would just like to run the prompt, maybe even in a git worktree, and continue my work, reviewing once it's finished.
  • Each fix introduces more new warnings (makes sense). Instead of fixing the issues in one file and then the new warnings that follow, it fixes all the original warnings and just adds the new ones to an ignore list.
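
For the permission-prompt flood specifically, one hedged option is a project-level .claude/settings.json allowlist that pre-approves the repetitive tool calls (this assumes the current Claude Code permissions schema; the exact pattern syntax is worth checking against the docs):

```json
{
  "permissions": {
    "allow": [
      "Bash(grep:*)",
      "Bash(rg:*)",
      "Bash(python:*)",
      "Bash(dotnet build:*)"
    ]
  }
}
```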

What can I do differently?

r/interestingasfuck BonolotaSen23

Eurasian Blue Bird making her nest

r/SideProject Interesting-Yard-978

Built a 90-day challenge app after struggling with consistency (would love feedback)

Hey everyone,

I’ve always struggled with staying consistent with habits.

I’d start things like working out, waking up early, or focusing more — stay consistent for a few days, and then slowly fall off. It kept repeating no matter what I tried.

After a while, I realized the problem wasn’t motivation — it was how I was approaching it.

So instead of trying to build “forever habits”, I started focusing on something simpler: committing to just 90 days.

Around that time, me and my brother started discussing this idea more seriously and decided to turn it into something we could actually use.

The approach is:

  • Just show up daily
  • Don’t aim for perfection
  • If you miss a day, don’t restart — just continue

That shift alone made things feel way more realistic.

We ended up building a small app around this to help follow the system — with structured challenges, daily check-ins, and visual progress tracking (which surprisingly helps a lot with motivation).

It’s still early, but it’s been working for me so far.

Would really appreciate any honest feedback — especially:

  • Does this idea make sense to you?
  • Would you actually use something like this?
  • What feels missing or unnecessary?

Here’s the link if you want to check it out:

IOS
https://apps.apple.com/us/app/90-days-challenge/id6760812454

Android

Coming soon!

Thanks 🙌

r/ClaudeCode PrydwenParkingOnly

How to get Claude to work autonomously on a large refactor

Hi! Can someone tell me how to get Claude to work more autonomously on a large task?

A bit of context:
I have a .NET project, it contains unit test coverage, integration tests and API tests. Recently we decided to become more strict on code style and warnings.

The application has large request and response models for an external API. Currently, that model is camelCase not PascalCase. Also a lot of properties are nullable, but not explicitly. 3k warnings currently.

Super tedious for a human to do, perfect task for Claude, I figured.

What I run into:

  • Claude seems to be overwhelmed by the amount of issues. It tries to tackle the problems with compound commands that do grepping, and it tries to write Python scripts. Both require user permission, which happens literally hundreds of times. I would just like to run the prompt, maybe even in a git worktree, and continue my work, reviewing once it's finished.
  • Each fix introduces more new warnings (makes sense). Instead of fixing the issues in one file and then the new warnings that follow, it fixes all the original warnings and just adds the new ones to an ignore list.

What can I do differently?

r/ProgrammerHumor petburiraja

officialClaudeCodePad

r/DunderMifflin GreatestOfAllTime_69

I really love this episode🥹

r/Jokes GF-Lyssa

What’s a fish’s favorite football game of the year?

The Grouper Bowl.

r/findareddit MajorDraw3705

Sub for discussing beauty filters, photo-realistic Zoom avatars, etc.

I have a ton of work experience but lately looking like a normal average human is getting me filtered out at the interview stage. I need to become visually 20 years younger or someone else with the same general features as me (hair length, etc.).

r/leagueoflegends TrpWhyre

With League Next coming, I would like to see more maps for Summoner's Rift.

As title says. Valorant has 17 maps and like 10 of them are used in competitive. Overwatch 2 has 30(!)+ maps. PUBG has several, etc.

I’ve played this game since before season one, and it’s not more champs that would make me want to play SR again, it’s more maps. I can’t imagine it would take that many man-hours, especially with enterprise-level AI, to design maps in various settings (magma chamber, snow, beach, etc.) and then have a dedicated team doing touch-ups and QC.

Also, I’d be more than willing to throw money at skin packs, announcer packs (would really want a Captainflowers announcer pack, both PG-13 and rated R :) ) and whatever else they can imagine.

r/artificial Axintwo

Curated 550+ free AI tools useful for building projects (LLMs, APIs, local models, RAG, agents)

Over the last few days I was collecting free or low cost AI tools that are actually useful if you want to build stuff, not just try random demos.

Most lists I saw were either outdated, full of affiliate links, or just generic tools repeated everywhere, so I tried to make something more practical mainly focused on things developers can actually use.

It includes things like free LLM APIs (OpenRouter, Groq, Gemini, etc.), local models (Ollama, Qwen, Llama), coding tools (Cursor, Gemini CLI, Qwen Code), RAG stack tools (vector DBs, embeddings, frameworks), agent workflow tools, speech/image/video APIs, and also some example stack combinations depending on use case.

Right now it's around 550+ tools and models in total.

Still updating it whenever new models or free tiers appear so some info might be outdated already. If there are good tools missing I would really appreciate suggestions, especially newer open weight models or useful infra tools.

Repo link
https://github.com/ShaikhWarsi/free-ai-tools

If you know something useful that should be included just let me know and I will add it.

r/LocalLLaMA cviperr33

Gemma 4 26B A4B is still fully capable at 245283/262144 (94%) context!

https://preview.redd.it/x4nv3btr0kug1.png?width=1919&format=png&auto=webp&s=3c4cdda920a1cb74407e9292acb5bbeccea3bb5f

It solved an issue with a script that pulls real-time data from NVIDIA SMI; Gemini 3.1 actually failed to fix it even in a fresh session, lol.

It’s kind of mind-blowing how in 2026 we already have stable local models with 200k+ context! I tested it out by feeding it as many Reddit posts, random documentation files, and raw files from the llama.cpp repo as possible to bump the usage up and see how it affects my VRAM. Even during this testing, Gemma kept its mind intact! At 245,283 / 262,144 (94%) context, if I ask it what a specific user said, it matches perfectly and answers within 2–5 seconds.

https://preview.redd.it/fo0myzkp1kug1.png?width=831&format=png&auto=webp&s=2b46c5ef672138c20c7e0e5ca85814569112ec0e

From previous tests, I found I had to decrease the temperature and bump the repeat penalty to 1.17/1.18 so it doesn't fall into a loop of self-questioning. Above 100k context, it used to start looping through its own thoughts and arguing; instead of providing a final answer, it would just go on forever. These settings helped a lot!

I'm using the latest llama.cpp (which gets updates almost every hour) and the latest Unsloth GGUF from 2–6 hours ago, so make sure to redownload!

Model: gemma-4-26B-A4B-it-UD-IQ4_NL.gguf, Unsloth (unsloth bis)
These are my current settings for llama.cpp, which I launch with a PowerShell script:

# --- [2. OPTIMIZATION PARAMETERS] ---
$ContextSize   = "262144"
$GpuLayers     = "99"
$Temperature   = "0.7"
$TopP          = "0.95"
$TopK          = "40"
$MinP          = "0.05"
$RepeatPenalty = "1.17"

# --- [3. THE ARGUMENT CONSTRUCTION] ---
$ArgumentList = @(
    "-m", $ModelPath,
    "--mmproj", $MMProjPath,
    "-ngl", $GpuLayers,
    "-c", $ContextSize,
    "-fa", "1",
    "--cache-ram", "2048",
    "-ctxcp", "2",
    "-ctk", "q8_0",
    "-b", "512",   # Smaller batch for less activation overhead
    "-ub", "512",
    "-ctv", "q8_0",
    "--temp", $Temperature,
    "--top-p", $TopP,
    "--top-k", $TopK,
    "--min-p", $MinP,
    "--repeat-penalty", $RepeatPenalty,
    "--host", "0.0.0.0",
    "--port", "8080",
    "--jinja", "--metrics"
)

What else can I test? Honestly, I've run out of ideas to crash it! It just gulps down whatever I throw at it.

r/SipsTea Algernonletter5

Dating advice you say....

r/Art yosoro_inoue

Calligraphy with Glass Pen, Inoue and Nakane, Glass and Ink, 2025

r/geography SchemeDesperate7970

Yellow and Black represent 50% of world population

Yellow areas are mostly urban centres and river deltas

r/SipsTea No-Marsupial-4050

Oh noo here we go again

r/Jokes Dashover

I think my Golden Retriever is on a power trip

He always needs a leg up

r/SideProject Indian-Bindod

Built a small macOS app to keep docs/videos visible over fullscreen apps

I built this from a workflow problem that kept bugging me.

I like working in fullscreen on macOS, but I still wanted a small reference window for docs, tutorials, YouTube, or streams without constantly switching spaces and breaking focus. So I made Float, a native Mac app that gives you a floating browser/media window while you work.

It started as a personal side project, but I’m sharing it now to see if the problem resonates with other people too.

Would love honest feedback on:

- whether the use case feels clear

- who this is most useful for

- what would make it worth installing

Website: https://www.float.codes/

r/instantkarma WhoAreYouTalkinTwo

Guy starts beef with streamers on the street and gets rag-dolled

r/StableDiffusion Raise_Fickle

Fine-tuning LTX 2.3 with your own dataset?

Has anyone tried fine-tuning the model? If so, what output can one expect from it? I want the model to become better overall in a particular style (Pixar) and generally better: better physics, better lip-sync, better animation, etc.

I read that with, say, rank 32, there's not much you can expect from it, but if we go with rank 64 or even 128, it should be able to add a bit more of a boost for this particular domain (Pixar style), subjectively.

thoughts? observation? learning?

thanks a lot in advance.

r/Unexpected BobBelcherSaysIdiot

Q-tips

r/SideProject hiten1818726363

Sell me your app/saas in 4 words

I will try to check out every saas and give honest feedback.

Go--

r/meme Late_Horse326

It's a scam still legal, isn't it?

technically its all truee 🥀

r/Unexpected Alternative-Dot-34

You went to bed hungry

r/me_irl Beneficial_Sun6232

me_irl

r/meme Pitiful-Finish-6716

xXxDarkslayerxXx keeps his promises

r/PhotoshopRequest polymath_baba

Need Mom's Photo for Memorial

I need to get a full-size photo and a closeup of my mom from this image. My mom (on the right side) passed away 8 years ago, and this is a high-resolution family photo I found of us, but I'm unable to get a printable photo of just her. Kindly help. Happy to tip $21. Please use all the AI you can, as all I care about is the end result, which can't be achieved purely by AI alone.

r/SideProject parasen16

I built a Reddit and LinkedIn outreach tool in 6 months, would love your feedback

Hey everyone,

I'm Aydin, solo founder. Been working on ReplyCamp (replycamp.com) for the last 6 months and figured it's time to share before I launch next week.

Quick backstory. I was spending 2 hours a day manually commenting on Reddit and LinkedIn to market my own SaaS. Tried the usual suspects like Dripify, HeyReach, ReplyGuy, hated all of them. Most only do LinkedIn so you pay for 2 tools. The AI comments sound like "I'm absolutely blown away!" and get shadow-banned. Agency tiers start at 299 dollars a month for nothing special.

So I built the thing I actually wanted. Reddit and LinkedIn in one place, 59 dollars monthly for 1 account. The tricky part was making the AI comments not sound like AI. Took a lot of regex post-processing to strip all the "game-changer", "hands down", "absolutely" phrases. Reddit detects that stuff fast. After 3 months of using it on my own SaaS, zero shadow bans.
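
A minimal sketch of that kind of regex pass (the phrase list here is illustrative; the real one is much longer):

```python
import re

# Illustrative "AI-tell" scrub; the production phrase list is far bigger.
AI_TELLS = [
    r"game[- ]changer",
    r"hands down",
    r"absolutely blown away",
]
PATTERN = re.compile(r"\b(?:" + "|".join(AI_TELLS) + r")\b", re.IGNORECASE)

def scrub(comment: str) -> str:
    cleaned = PATTERN.sub("", comment)
    return re.sub(r"\s{2,}", " ", cleaned).strip()  # tidy leftover whitespace

print(scrub("Hands down a game-changer, absolutely blown away!"))
```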

Other stuff in there: shadow ban detection, karma farming mode for Reddit warmup, static residential IPs per account so no shared VPN drama, and a live dashboard where you can actually watch the browser work in real time via noVNC.

Stack is Python with FastAPI, Playwright headful Chromium, OpenAI, Stripe. SQLite for now, will migrate when it hurts.

Launching on Product Hunt next week. If anyone wants to poke around there's a demo on the homepage, no signup needed: https://replycamp.com

Three questions I'd love feedback on.

  1. Is 59 dollars the right entry point or am I underpricing myself into a corner

  2. The shadow ban stuff, am I over-engineering a problem that doesn't really exist at small scale

  3. What outreach pain are you dealing with that nothing on the market actually solves

Happy to answer any technical questions about the build or the marketing approach. Thanks for reading.

r/meme Harem-seekingmage

Bro really said

r/meme Pitiful-Finish-6716

Iran recruitment goes brrrr

r/TheWayWeWere ferretsandfrogs

My dad, a Vietnam vet. He’s 81 and still here despite the chaos agent orange has reigned upon his body.

He’s still as tough and unafraid now as he was then.

r/meme Pitiful-Finish-6716

it do be like that

r/AI_Agents RossPeili

New Skillware module gives any agent or LLM MiCA knowledge out of the box

Skillware adds MiCA compliance for AI agents. Sub-2ms regulatory RAG lookup via a local weighted router. Now any LLM can understand and enforce European crypto-asset laws deterministically. v0.2.4 is out now.

I think, in general, instead of reading the entire web or entire hundred-page PDFs to understand legal matters, AI models or personal agents can use the Skillware approach, where you break down any reg into digestible label-based chunks, even just a JSON, then parse only the articles or paragraphs you need for context and reply, without relying on API calls or eating tokens with browser use.
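
A rough sketch of the idea (the labels, IDs, and schema here are illustrative, not the actual Skillware format):

```python
# Illustrative: a regulation broken into labeled chunks plus a tiny keyword router.
MICA_CHUNKS = [
    {"id": "art_x", "labels": ["stablecoin", "reserve", "issuer"],
     "text": "(reserve requirements article text)"},
    {"id": "art_y", "labels": ["casp", "authorisation"],
     "text": "(authorisation article text)"},
]

def route(query: str, chunks=MICA_CHUNKS, top_k=1):
    words = set(query.lower().replace("?", "").split())
    scored = [(len(words & set(c["labels"])), c) for c in chunks]
    return [c for score, c in sorted(scored, key=lambda s: -s[0]) if score > 0][:top_k]

# Only the matching article goes into the model context, not the whole PDF.
print(route("what reserve rules apply to a stablecoin issuer?"))
```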

Thoughts?

r/ClaudeCode nievinny

Simple CLI to edit env variable

There are a few environment managers for CC, but they’re extremely complex. So I built a very simple CLI tool for myself. It pulls the current variables with descriptions from Anthropic’s docs and lets me set them in JSON. Just by running envcc in the terminal. Super simple stuff.

Sharing in case anyone's interested, though it takes about a minute for CC to build it.

github link

r/ClaudeCode rommog

Obsidian integration

I am working on integrating Claude Code with my Obsidian vault, which has about 30 years of my life in it.

It wants access through the file system, but I know there is an Obsidian CLI interface.

Which is the better setup to get the most value here?

r/Adulting QuietAllureZ-

Being an adult is just deciding at what price point you finally commit to the wasteland aesthetic.

r/SipsTea retardedmfo

💡🚀 5,126 failures. One breakthrough. Dyson didn’t quit—he changed everything. 💪✨

r/SideProject RaceSecret8860

My Temp Mail site just hit 11,709,671 visits. Here's how.

Google search is dead for new tools. I hit 11.7M visits by ignoring it.

Here’s the reality: You can’t out-rank sites that have been around since 2010. I stopped trying to rank for "temp mail" and started ranking for "unblocked bypasses." People don’t want an inbox; they want a free Netflix trial or a new Valorant smurf.

The real scale came from the underground. I made the API free, and devs plugged it into their account / gen bots. Now the traffic runs itself 24/7.

The golden rule: If you aren't rotating your domains daily to stay unblocked, you're already irrelevant. Stop playing by Google's rules and go where the users (and the bots) actually are.

site is called fake.legal btw

r/geography keiths31

Mt. McKay of the Nor'Wester Mountain Range

Not as large, impactful or historical as other mountains or ranges, but beautiful none the less, and I get to see it every day

r/AI_Agents Different-Degree-761

We gave our multi-agent workspaces a shared memory; agents stopped rediscovering the same bugs

Been building a cloud desktop platform for AI agents (each agent gets a full Linux VM). We run three agent types (Claude Code, OpenClaw, Hermes), and a workspace can have multiple agents working on the same project.

The problem we kept hitting: Agent A runs a deployment, discovers the NFS mount needs a specific IP. Finishes. Knowledge dies on that VM. Agent B gets a deployment task next week, wastes 20 minutes rediscovering the same thing. Conventions, bugfix patterns, deployment gotchas all rediscovered from scratch. The workspace never actually learns.

So we built a shared knowledge base. Every workspace gets an Obsidian-compatible markdown vault on the host, NFS-mounted into each agent VM. A lightweight MCP server on each VM exposes 7 tools: search, list, read, write, delete, list tags, find links.

The key design decision was making it pull-based. Agents choose when to search and when to write. Nobody forces context on them. An agent about to deploy searches for "deploy", finds the conventions in skills/deploy-pattern.md, follows them, discovers a new timeout issue, writes it to lessons-learned/. Next agent finds it automatically.

Why files instead of a database: agents already read and write markdown. Zero learning curve. Users can open the vault in Obsidian and get graph view for free. And there are no credentials on the VMs the MCP server does file I/O and nothing else, so if a VM is compromised, the attacker can read and write markdown in one workspace. That's the entire blast radius.

Vault structure per workspace:

_workspace/           (platform-managed, read-only to agents)
    agents.md         who's active
    task-history.md   what happened and when
skills/               runbooks, deploy patterns
memories/             what agents learned about the project
lessons-learned/      gotchas and patterns to avoid
issues/               bugs found
fixes/                solutions (wiki-linked to issues)

Security model: path traversal prevention on every file op, write-guard on _workspace/ (we actually caught a bypass during our own security review where ./_workspace/ skipped the check because the path wasn't normalized), markdown-only writes, NFS mounted with noexec,nosuid.
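
That path-normalization bypass is worth spelling out; a minimal sketch of the guard, assuming the vault layout above (illustrative, not our exact production check):

```python
from pathlib import Path

VAULT = Path("/vault").resolve()   # per-workspace vault root (assumed path)
READ_ONLY = VAULT / "_workspace"

def safe_write_path(rel_path: str) -> Path:
    # Resolve BEFORE checking: catches "../" and "./_workspace/" style bypasses
    # that a string prefix check on the raw, unnormalized path would miss.
    target = (VAULT / rel_path).resolve()
    if not target.is_relative_to(VAULT):
        raise PermissionError("path escapes the vault")
    if target.is_relative_to(READ_ONLY):
        raise PermissionError("_workspace/ is read-only to agents")
    if target.suffix != ".md":
        raise PermissionError("markdown-only writes")
    return target
```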

We considered embeddings for search but keyword grep works fine at our current vault sizes. We'll watch what agents actually search for before overengineering it.

What we want out of this: any agent in a workspace should know at least as much as the smartest agent that ever worked there.

Blog post with the full architecture if anyone wants the details (link in comments).

r/PhotoshopRequest Chemical_Way_9376

please help removing items from pic ;)

can someone please remove my bag and jacket from this picture?🥹🙏🏼

r/Adulting LilMsPuuuurfect

I think I accidentally let anxiety become the primary driver of my entire adult life

"Does life in your 40s only exist as expectations?!" Well, that used to be one of the many questions I would contemplate over my morning coffee. Now, I shrug my shoulders and say, " there I go again letting anxiety in the driver seat..and everyone knows she can't drive.

I had such high hopes as a young child. I was going to be a career woman, friend... wife, mother... just everything every child believes as gospel as the way to live as an adult. I wouldn't say they are false as much as I would say they were just fantasy. Fantasies with minor realism. I mean, I am an adult woman with a career.

Nonetheless, I am learning...I am growing...I am just being...an adult.

r/ClaudeAI PM-ME-CRYPTO-ASSETS

Claude via AWS or Azure = Always the same model?

As you might know I can also consume Claude models via the big cloud providers and plug them into Claude Code or another coding assistant of my choice. In this case, will I be safe from model degradation or availability issues? The Claude inference is under full control of the cloud providers so I doubt Anthropic will be tampering with the inference parameters on a daily basis there

r/LocalLLaMA Forward_Fox1466

Jetson Orin Nano 8GB -- model speed benchmarks

I’ve been building a fully Local voice assistant on Orin Nano 8GB.

These benchmarks may be of interest to others working with small language models on constrained hardware:

| Engine | Mean TTFT | p95 TTFT | tok/s |
| --- | --- | --- | --- |
| llamacpp:Granite 3.3-2B | 0.09s | 0.20s | 25.4 |
| llamacpp:Granite 4.0 Micro IQ4 | 0.10s | 0.22s | 24.3 |
| llamacpp:Granite 4.0 Micro | 0.11s | 0.23s | 18.9 |
| llamacpp:Granite 4.0 H-Micro | 0.13s | 0.32s | 17.6 |
| llamacpp:Qwen3-4B | 0.17s | 0.30s | 15.1 |
| ollama:Granite 3.3-2B | 0.23s | 0.33s | 25.8 |
| llamacpp:Qwen3.5-2B | 0.32s | 0.51s | 25.1 |
| ollama:Granite 4-3B | 0.36s | 0.47s | 18.5 |
| ollama:Qwen3-4B | 0.51s | 0.65s | 15.5 |
| ollama:Llama 3.2-3B | 0.53s | 0.61s | 19.1 |
| ollama:Ministral-3 3B | 0.59s | 0.73s | 19.5 |
| ollama:Nemotron-3 Nano 4B | 1.02s | 1.56s | 15.6 |
| ollama:Qwen3.5-2B | 1.03s | 1.31s | 22.2 |

Still a work in progress, especially around barge-in during TTS playback.

Repo: https://github.com/aschweig/jetson-orin-kian

There are also some qualitative benchmarks and more detail in the PDF.

r/comfyui Dudelydad78

WanApp (APP MODE FOR WAN2.2)

https://civitai.com/models/2534759/wanapp-wan22-easy-app-mode-for-wan22

WanApp is my APP Mode version

using the original models from Comfyui

https://preview.redd.it/dmab6xuw0kug1.png?width=2560&format=png&auto=webp&s=dbda73d67b456c0a38ba608bb9155506354f8d9e

https://reddit.com/link/1sihors/video/bkiwhj8x0kug1/player

it comes with many options

Toggle Options:

  1. Video Quality Toggle : HIGH or LOW

  2. 15fps / 10fps Toggle

  3. x2 Upscaler Toggle

  4. Iteration Mode Toggle (drops the low-diffusion denoising to one step instead of two, reducing the time to generate but also reducing quality; good for testing new prompts in LOW Quality Mode for faster iterations.)

  5. Load one image directly, or load one or more images from a folder.

etc.

r/Rag ScrapeAlchemist

How I solved the stale data problem in my RAG pipeline (web-sourced content)

Been building a RAG system that ingests content from ~40 web sources (docs sites, forums, changelogs, knowledge bases) and I kept running into the same issue everyone complains about - the chatbot returns outdated answers even though the source page was updated weeks ago.

The root cause wasn't retrieval or chunking. It was my ingestion pipeline. I was doing a one-time crawl, chunking everything, embedding it, done. No concept of freshness. When a page changed, the old chunks just sat there in Qdrant forever, sometimes ranking higher than the updated version because they had more contextual overlap with common queries.

What actually fixed it:

1. Temporal metadata on every chunk

Every chunk gets scraped_at, source_url, and content_hash as metadata. When I re-scrape, I hash the new content and compare. Changed? Delete old chunks for that URL, re-chunk, re-embed. Same? Skip. This alone cut my stale answer rate by maybe 60%.

```python
import hashlib

def should_update(new_content, stored_hash):
    new_hash = hashlib.sha256(new_content.encode()).hexdigest()
    return new_hash != stored_hash, new_hash
```

2. Scheduled re-scraping with actual rendering

Half my sources are JS-heavy (React docs sites, SPAs, dashboard-style knowledge bases). requests + BeautifulSoup gave me empty divs. I ended up using Playwright for rendering but the real problem was getting blocked after a few hundred pages. Rotating residential proxies through Bright Data fixed that - I just point Playwright at their proxy endpoint and the rotation/fingerprinting is handled. Not cheap but I was spending more time debugging blocks than building the actual RAG pipeline.

```python
from playwright.sync_api import sync_playwright

def scrape_rendered(url, proxy_url):
    with sync_playwright() as p:
        browser = p.chromium.launch(proxy={"server": proxy_url})
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")
        content = page.content()
        browser.close()
        return content
```

3. Decay scoring in retrieval

I multiply the similarity score by a time decay factor. Chunks older than 30 days get penalized, older than 90 days get penalized hard. This way even if I miss a re-scrape cycle, the stale chunks naturally sink in ranking.

```python
import math
from datetime import datetime, timezone

def decay_score(similarity, scraped_at, half_life_days=30):
    age_days = (datetime.now(timezone.utc) - scraped_at).days
    decay = math.exp(-0.693 * age_days / half_life_days)
    return similarity * decay
```

The combination of content-hash diffing + proxy-backed rendering + decay scoring basically eliminated the stale answer problem. I still get the occasional miss when a page restructures completely (URL stays same but content moves to subpages), but that's edge case territory.

For anyone building RAG over web content - don't treat ingestion as a one-time job. The retrieval and chunking side gets all the attention but garbage in garbage out. If your source data is stale, no amount of reranking or hybrid search saves you.

Curious what others are doing for freshness. Anyone using webhook-based triggers instead of scheduled scraping?
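
For the webhook route, the shape I have in mind just reuses the pieces above (scrape_rendered and should_update from the snippets; FastAPI is illustrative, and stored_hashes stands in for the hash metadata that actually lives in Qdrant):

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
PROXY_URL = "http://user:pass@proxy.example:8000"  # placeholder
stored_hashes: dict[str, str] = {}  # url -> last content_hash (stand-in for Qdrant metadata)

class ChangeEvent(BaseModel):
    url: str  # page the source reports as changed

@app.post("/content-changed")
def content_changed(event: ChangeEvent):
    html = scrape_rendered(event.url, PROXY_URL)  # defined in the snippet above
    changed, new_hash = should_update(html, stored_hashes.get(event.url, ""))
    if changed:
        stored_hashes[event.url] = new_hash
        # ...delete old chunks for this URL, then re-chunk and re-embed,
        # exactly like the scheduled path.
    return {"updated": changed}
```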

r/AskMen DarknessSleeping

How to be honest with partner without emasculating him?

ETA: So many responses coming in, so figured I'd reply here:
- Yes, this is real. No rage baiting.
- I've suggested a therapist for him, as I do believe he has a level of depression, but he just hasn't gone. (However, I have started going myself.)
- What I love about him? His intelligence, humour, how loving he is with our kid.
- Yes, I've thought about leaving a lot of late, but ultimately our child keeps me fighting. It is getting harder as my attraction is dying.
- Maybe I am partly to blame, as I let this delusion continue that he is the man, while I'm walking around with balls the size of a bull's.

I'd like help on how to approach my partner without emasculating him.

Bit of info. We're both in our mid-30s. Blended family plus one shared. We are pretty good with communication, great with trust, each other's best friend, emotionally supportive both ways.

However, as we go on and it looks as though we are really in this for the long run, things that once never bothered me are starting to, and I can feel resentment setting in.

My partner prides himself on being the man of the house, of the family, the protector and provider. Yet...he isn't, really. He doesn't work or drive, no real hobbies and zero ambition. No motivation. He rots at home and day-dreams of younger days when he and his mates were inseparable. I am the breadwinner, the taxi, the cook, the organiser and motivator. I take charge, I keep everything going, I keep us from sinking. I often need to be the one to initiate sex, too.

I love this man, despite his shortcomings. I want us to work. Yet, his lack of motivation and growth is starting to bother me. I am losing my attraction. I feel so much pressure.

How do I bring up these issues without hurting his feelings and emasculating him? Is there a way I can avoid hurting his ego? I've tried being his cheerleader, it hasn't worked. I've tried using logic, it hasn't worked.

He is an exceptionally intelligent man, yet he just sits. Does nothing. Says he'll do things and never accomplishes them.

I also believe he has a porn addiction. He consumes so much. Not even to get off over. He just scrolls 4chan and Twitter so much, every day.

How do I help him be a better him, partner, Dad, without making him feel like shit?!

r/LocalLLaMA jacek2023

mtmd : add MERaLiON-2 multimodal audio support by SiruiHe · Pull Request #21756 · ggml-org/llama.cpp

Model Description:

MERaLiON stands for Multimodal Empathetic Reasoning and Learning in One Network.

MERaLiON-2 is a family of Speech-Text Large Language Models tailored for Singapore’s multilingual and multicultural landscape, as well as the wider Southeast Asian region. The 10B model integrates a localized Whisper-Large-V3 speech encoder with the Gemma2-9b-IT text decoder. The 3B model integrates a localized Whisper-Large-V3 speech encoder with the Gemma2-2b-IT text decoder.

MERaLiON-2-10B is finetuned on 120,000 hours of speech and audio data across 6 diverse tasks: Automatic Speech Recognition (ASR), Spoken Question Answering (SQA), Spoken Dialogue Summarization (SDS), Audio Captioning (AC), Audio-Scene Question Answering (ASQA) and Paralinguistic Question Answering (PQA). The model supports long-form audio inputs of up to 300 seconds (5 minutes) and is specifically adapted to handle the linguistic nuances, accents, and dialects commonly found across Singapore and neighboring countries.

  • Developed by: I2R, A*STAR, Singapore
  • Model type: Multimodal LLM
  • Language(s): Primarily English (Global and Singapore), Chinese, with support for audio of regional languages including Malay, Tamil, Indonesian, Thai, and Vietnamese.
  • Audio: Mono channel audio, 16000 hz, up to 300 seconds.
  • License: MERaLiON Public License
  • Demo: MERaLiON-AudioLLM Web Demo

r/artificial tiroc12

Can I trick a public AI to spit out an outcome I prefer?

I am aware of an organization that evaluates proposals by feeding them into a public version of AI. Is there a way to make that AI rate my proposal highly? Like feeding it my proposal over and over and telling it that it's the best thing ever? Will that show up in its training data? A sort of predisposition to the ideas presented in the proposal?

r/findareddit BrilliantDoughnut250

Is there any sub about fantasy anatomy?

Hi! I've been designing a costume of a creature that is a human eater, and I'm stuck on the teeth. I need advice on what teeth specialized for that would look like, but I haven't found an appropriate place to ask. Any ideas? Thank you in advance!

r/SipsTea retardedmfo

Chinese scientists have developed glowing plants using genes from fireflies and luminous fungi 🌱✨. Created by Magicpen Bio, the innovation has already been applied to over 20 species like orchids, sunflowers, and chrysanthemums — creating a real-life effect similar to the film Avatar.

r/findareddit LargeSinkholesInNYC

A subreddit where you can post any random thought

Any recommendation?

r/LocalLLaMA Axintwo

Curated 550+ free LLM tools for builders (APIs, local models, RAG, agents, IDEs)

I spent the whole day putting together a big list of free or cheap LLM tools that are actually useful if you’re building stuff.

Tried to focus more on local models + dev tools instead of those generic “1000 AI websites” type lists.

It includes:

• local models (Ollama, Qwen, Llama etc)
• free LLM APIs (OpenRouter, Groq, Gemini etc)
• coding IDEs + CLI tools (Cursor, Qwen Code, Gemini CLI etc)
• RAG stack tools (vector DBs, embeddings, frameworks)
• agent frameworks and automation tools
• realtime / speech / image / video APIs
• some ready-to-use stack combos

Main goal was to make something practical so people can experiment or build projects without needing to spend $100-200/month on subscriptions.

Right now it has 550+ items (counting model variants too).

This space moves fast so some info might already be outdated — honestly one of the main reasons I’m posting here is to get suggestions on:

• good local models I might have missed
• OSS tools worth adding
• better RAG tools
• new free inference providers

PRs or corrections are very welcome.

Repo:
https://github.com/ShaikhWarsi/free-ai-tools

If you know something useful that should be in the list, lmk and I’ll add it

r/homeassistant hometechgeek

espControl: More devices supported, improved no code screen setup

I’ve been working on espControl this week. It’s a no-code, super easy to configure smart home controller that uses ESP32 devices to control your smart home via Home Assistant.

It includes full docs and an easy-to-use web installer. It doesn’t need ESPHome to be set up or any code to be written.

This week I’ve added support for additional screens and greatly improved the UX of the built-in web server used to configure the screen…

  • Additional Screens: Added the Guition S3 4-inch square screen (4848s040) and the Guition P4 4.3-inch screen (jc4880p443), in addition to the P4 7-inch screen (jc1060p470).
  • Grid layout: switched from flex with scrolling to a fixed grid, so you can place buttons exactly where you want them
  • Subpages support: added the ability to have subpages for grouping controls into one space
  • Double height buttons: switch individual buttons to double height for easier selection and greater prominence.
  • Edit controls: Drag and drop buttons, bulk select existing buttons, copy and paste between pages.
  • Screensaver: turn off your screen backlight based on time or an external sensor such as a presence sensor.

I plan to add additional types of controls (temperature, volume control, etc) and passive sensor cards in the coming weeks.

I’d love to hear from anyone who tries it, issues, areas for improvement and new ideas you’d like to see added. All feedback is appreciated!

r/therewasanattempt LullaAbbie

to use a French streamer to hype up Mr Beast’s brand Feastables

r/nextfuckinglevel tylerscott5

From stuck at a stoplight to lead front man at his own rock concert

r/Jokes vahedemirjian

What do you get if you cross rabbits and termites?

Bugs bunnies!

r/mildlyinteresting gfjskvcks

The sky because of a sandstorm

r/funny Alert-Argument-6743

I wonder what the pin could be.

r/LiveFromNewYork Firefox892

Hank Fielding, The Moron’s Perspective (1993)

With Robert Smigel.

r/todayilearned meadmeking

Today I learned about the Ig Nobel Prize, a satirical prize awarded to scientific achievements that “first make people laugh, then make them think.” The monetary award is 10 trillion Zimbabwean dollars - equivalent to $0.40.

r/LocalLLaMA aziib

Update on my AI waifu app: it can use web search and react to images, even pictures of herself

Using Qwen 3 VL for the LLM and the vision (really good at recognizing popular characters and even their appearances).

using SerpApi for the web search

The TTS is OmniVoice TTS (supports 600+ languages); I made a custom API for it that I recently open-sourced. Get it here: https://github.com/aziib/omnivoice-tts-api

My AI waifu project is still a work in progress. I just hope there is a free web search API; SerpApi has a monthly search usage limit.

r/LocalLLaMA LopsidedMango1

Planning a local build for Gemma 4 with OpenClaw: CPU and RAM recommendations for a 3090?

Hey everyone. Following up on my previous post about GPU requirements for the new Gemma 4 large variants.

Based on the feedback, I am going to grab a single used RTX 3090. My goal is to run the Gemma 4 31B Dense and the 26B MoE models, specifically using OpenClaw.

Now I am trying to figure out what the best supporting build is for this exact setup. I know the 3090 and its 24GB of VRAM will handle the heavy lifting, but I want to make sure the rest of the system isn't going to bottleneck OpenClaw when running these specific models.

Do I actually need 64GB of system RAM for this kind of setup, or is 32GB enough if the model is mostly loaded into VRAM?

Also, what kind of CPU should I be looking at? Since I'll be using OpenClaw, do I need a CPU with massive memory bandwidth for offloading the Gemma 4 layers that don't fit in the 24GB, or can I get away with a standard modern mid-range CPU without completely killing my tokens per second?

Help on the rest of the components (CPU and RAM only really) for a Gemma 4 + OpenClaw build would be super appreciated!

r/CryptoMarkets XRP-GORILLA

Why my algo went silent this week — and why I think that's actually the right call

I've been running an algorithmic trading system for a few months now. This week it barely fired.

At first that frustrated me. Then I looked at the data.

The bot has a regime detector built in. When it classifies the market as RANGING — no directional trend, just chop — it blocks all entries. No trades. Full stop.

This week most of the coins I track were flagged as RANGING. So it sat on its hands.

Here's the thing — I went back and looked at what would have happened if it had traded during ranging conditions. 42.6% win rate. Negative expectancy. Death by a thousand cuts.

In BULL_TREND — 100% win rate. BEAR_TREND — 100% win rate.

The silence wasn't a bug. It was the system doing exactly what it was designed to do — protect capital when there's no real edge.

Most retail traders feel compelled to trade constantly. The algo doesn't care about feelings. No edge, no trade. Full stop.
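
The gate itself is simple in spirit; a stripped-down sketch (the trend measure and thresholds here are illustrative stand-ins, not my production detector):

```python
# Illustrative regime gate: range dwarfing net drift reads as chop.
def classify_regime(closes: list[float], lookback: int = 50, band: float = 0.02) -> str:
    window = closes[-lookback:]
    drift = (window[-1] - window[0]) / window[0]    # net move over the window
    chop = (max(window) - min(window)) / window[0]  # total range over the window
    if chop > 0 and abs(drift) / chop < 0.3:        # mostly sideways noise
        return "RANGING"
    if drift > band:
        return "BULL_TREND"
    if drift < -band:
        return "BEAR_TREND"
    return "RANGING"

def allow_entry(closes: list[float]) -> bool:
    return classify_regime(closes) != "RANGING"     # no edge, no trade
```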

Curious if anyone else builds regime filtering into their systems or if you just trade through the chop?

r/ClaudeCode 1EvilSexyGenius

Claude eats up usage limit with research...(Sonnet / Medium Effort)

Why is usage not on a rolling 5-hour window? In the middle or end of a 5-hour window, I shouldn't still be penalized for earlier usage within that period.

This matters because Claude usually takes about 20–30 mins researching a codebase for an issue, even at medium effort, and with Sonnet 4.6.

When I finally get to the heart of the issue, with or without having to correct Claude's attempts, I'm hit with a wait period 🙄 Help!

how are you saving on usage with Claude ?

r/StableDiffusion HaxTheMax

VisualX Forge App (personal project)

I have created an app for nanobanana image generation with advanced features (for mobile and desktop). I created this as a personal project, but I'm now wondering if there is community interest in publishing it. What do you all think? What other useful features could be added?

The app currently supports the following features.

  • image generation with gemini flash and pro backends (planning to add more endpoints)
    • single run
    • batch run
    • loop run (continues tries until an image is returned)
    • background mode to run
  • Generation parameters
    • allow for safety flags to be minimal. helps in prompt safety bypass. generation can still be filtered but slightly less likely.
    • temperature and other model settings
    • resolution and aspect ratios
  • batch job auto modifier
    • for a batch run, auto replace certain elements e.g. expression, outfit, pose etc for each batch entry
  • advance batch from prompt list
    • support numbered list prompts in a single file
    • support separate prompt files in a directory
  • Reference library for image to image
    • load images and easily pin or unpin images to send for generation, no need to select each time
    • annotate images for additional guidance
  • gallery to view generated images
    • save generation parameters
    • reuse generation parameters
  • prompt manager
    • add, remove, edit,
    • AI-assisted prompt enhancement.
    • image-assisted prompt enhancement (upload an image and the prompt is auto-created or enhanced based on the recommended JSON structure).
    • convert to JSON templates, with support for natural language prompts as well
  • Targeted prompt enhancement
    • extra detailed and precise json based for outfit, pose and frame positioning
    • intelligently replaces existing elements in natural language prompts or json prompts
    • implemented as agentic skill
  • presets features
    • quick snips (available in all prompt areas) across the app
    • Can create and edit categories and snips.
  • advanced json template
    • detailed crafted presets for base prompts,
    • supports multiple arrays: multiple subjects, clothing, positions, poses, etc.
    • for targeted enhancements
    • for conversions of natural language prompts
  • Canvas mode
    • load an image and create line-art style reference
    • helps guide model exact pose etc.
    • can draw on blank canvas to send for generation guidance
    • auto pins to input reference when selected
  • Logs
    • full logs and notification bar so can generate in background
  • settings
    • different settings for prompt engine and image engine
    • google drive sync (works across desktop and mobile)
    • local backup and restore for everything e.g. prompt library, settings, etc.
    • ability to edit base JSON templates, modifier templates and instructions

r/VEO3 Electrical_Sky9729

Spent many hours creating this small video in Flow... it's been frustrating, but I'm slowly finding a few useful things below

r/ClaudeAI Future-Emperor1290

Best skills/plugins/mcps for parsing large pdf content?

I want to use Claude AI to process some academic books and format them into content for a website. What tools are good for processing large pdf content efficiently?

r/oddlysatisfying ClankerCore

No toll dodging!

r/oddlysatisfying Ok-Extent8333

Bro is having a great time

r/funny CalpurniaSomaya

Junk food vegans

r/artificial Critical_Return_4187

A Bird That Never Flew | Official First Look Trailer (2026) | An AI Feature Film

r/SideProject markyonolan

I got tired of tmpfiles.org links expiring in 1 hour, so I built a zero-friction host with customizable expiry (1, 7, or 30 days)

There are a lot of "temporary" file uploaders out there (like tmpfiles.org), but almost all of them force a super aggressive 1-hour deletion rule.

That is great for immediate terminal sharing, but terrible for actual human collaboration. I was constantly sending links to colleagues or clients, only to get a message two hours later saying "Hey, the link expired, can you re-upload?"

So I built UploadToURL to fix this specific headache.

It gives you the speed of a temporary host - no sign-ups, no "request access" Google Drive nonsense, just an instant public link - but you control when it dies. You can set the file to automatically delete after 1 day, 7 days, or 30 days.

It keeps your personal cloud storage clean from random temporary files, but gives the person on the other end enough time to actually open the link before it self-destructs.
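
The expiry side of something like this is simple bookkeeping; a minimal sketch (illustrative, simplified from the real service):

```python
# Illustrative expiry bookkeeping; a real service would persist this and
# also delete the stored blob on purge.
import time

TTL_SECONDS = {"1d": 86_400, "7d": 7 * 86_400, "30d": 30 * 86_400}
files: dict[str, dict] = {}  # file_id -> {"path": ..., "expires_at": ...}

def register(file_id: str, path: str, ttl: str = "7d") -> None:
    files[file_id] = {"path": path, "expires_at": time.time() + TTL_SECONDS[ttl]}

def purge_expired() -> None:
    # Run periodically (cron or a background task).
    now = time.time()
    for file_id in [k for k, v in files.items() if v["expires_at"] <= now]:
        files.pop(file_id)
```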

Would love for you guys to test it out and let me know if it fits into your workflow!

r/SipsTea Ill-Instruction8466

This and also they didn’t yet surrender

r/ForgottenTV KingRex929

Surface (2005-2006)

r/meme Fickle-Butterfly-338

No Boundaries... Great fit and gorgeous colors. Perfect for a nice spring day!

r/Unexpected Maccaronin

I guess not

r/BobsBurgers Thebowlerhatfroggo

AN EPISODE IDEA: “Loven Lin-cent”

Bob’s Burgers rarely ever does TV show parodies, and I think there‘s a bit of potential to be found there. This episode idea spoofs the show 24, as well as its visuals. While definitely very different in tone to Bob‘s Burgers as a whole, I still think there’s a bit of fun to be found in this setup.

It’s Linda’s birthday, and the kids are taking her out to the cinema to catch the director’s cut of one of her favourite films (a romance, probably, much to the dismay of Gene & Louise). Bob is staying at home to make Beef Wellington for Linda as a surprise, as she has been going on about it for weeks on end, and claims that he can’t come to the cinema because Mr. Fischoeder has invited him to go and talk about new rent plans at a new bar on the other side of town, about twenty minutes away. Bob claims that Fischoeder put the date forward, as he was scheduled to see Fischoeder at the bar tomorrow. As the kids & Linda leave, Bob begins work on the food when he gets a call from Fischoeder, who is at the bar. Realising he got the date wrong and that he was supposed to see Fischoeder today after all, Bob quickly makes his way down to the bar, neglecting to turn off the oven in the process. [This all happens within the first five minutes of the episode.]

As Bob soberly leaves the bar with Fischoeder, he remembers that he forgot to turn off the oven and rushes to drive home to shut it off in time.

I haven’t really worked out what happens from there, but one thing I am certain of is that the slowly descending timer on Bob’s oven would mimic the timer from 24. I understand that this episode is a bit of a stretch in some places, but I still think the idea‘s alright. If anyone’s got any ideas on how the episode should go from here, please do tell me!

r/TheWayWeWere blancolobosBRC

New York City, c1890s.

r/AI_Agents dai_app

Is it just me, or does the lag in cloud voice AIs totally ruin the conversation flow?

I’ve been trying to use voice modes for AI lately, but the latency with cloud-based models (ChatGPT, Gemini, etc.) is driving me nuts.

It’s not just the 2-3 second wait—it’s that the lag actually makes the AI feel confused. Because of the delay, the timing is always off. I pause to think, it interrupts me. I talk, it lags, and suddenly we are talking over each other and it loses the context.

I got so frustrated that I started messing around with a fully local MOBILE on-device pipeline (STT -> LLM -> TTS) just to see if I could get the response time down.
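
On desktop Python the loop looks roughly like this (faster-whisper, llama-cpp-python, and pyttsx3 as stand-ins; the actual mobile runtimes are different, and the model path is a placeholder):

```python
# Desktop stand-in for the on-device STT -> LLM -> TTS loop.
from faster_whisper import WhisperModel   # local speech-to-text
from llama_cpp import Llama               # local LLM
import pyttsx3                            # local text-to-speech

stt = WhisperModel("tiny.en", compute_type="int8")
llm = Llama(model_path="model.gguf", n_ctx=2048, verbose=False)  # placeholder path
tts = pyttsx3.init()

def turn(wav_path: str) -> None:
    segments, _ = stt.transcribe(wav_path)
    user_text = " ".join(seg.text for seg in segments)
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": user_text}], max_tokens=128
    )
    tts.say(out["choices"][0]["message"]["content"])
    tts.runAndWait()
```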

I know local models are smaller, but honestly, having an instant response changes everything.

Because there is zero lag, it actually "listens" to the flow properly. No awkward pauses, no interrupting each other. It feels 10x more natural, even if the model itself isn't GPT-4.

The hardest part was getting it to run locally without turning my phone into a literal toaster or draining the battery in 10 minutes, but after some heavy optimizing, it's actually running super smooth and cool.

Does anyone else feel like the raw IQ of cloud models is kind of wasted if the conversation flow is clunky?

Would you trade the giant cloud models for a smaller, local one if it meant zero lag and a perfectly natural conversation?

r/ClaudeAI why_is_this_a_gif

This is a medical emergency

r/painting Angelina_Kristl_Art

Bringing Spring to Life: Cherry Blossoms Painted in Oil

Cherry Blossom, Angelina Lambros, Oil on Canvas, 2025

I painted this piece back in January 2025, but with spring in full bloom, I thought it would be a great time to share it. Cherry blossoms symbolize renewal and beauty, and I wanted to capture that fleeting moment when nature is at its most vibrant.

r/SideProject xuannie981

I built an AI tool that generates product video ads from a photo upload. Here's a sneaker commercial it made in 2 minutes.

Ex-Meta engineer, built this solo. You upload a product photo and a model photo, type a short prompt, and it generates a full commercial with consistent characters and cinematic lighting. Free to try at koe.sh

r/AbstractArt CLN47-de

Sampling_composition_178_colour_07

r/funny HumpieDouglas

I ordered some screws from Amazon and this is what the bag said.

r/SideProject Weird-Bat-8075

I built a free Last.fm alternative that tracks your Spotify listening history and will bring back social features!

https://reddit.com/link/1sihdsg/video/jhuy4us5njug1/player

Hey everyone! I've been working on Coilr, a site that connects to your Spotify account and automatically logs every song you play (https://coilr.org)

What it does:

  • Tracks your complete listening history in real time (the last 50 songs are backfilled on login; see the sketch after this list)
  • Shows your top artists, tracks, and albums by day/week/month/year/all time
  • Graphs and a listening clock that shows when you listen
  • Per-artist and per-track detail pages with play charts, audio features, and platform-wide stats
  • Global stats page with a live feed of what people are playing right now
  • Search any artist or track to see how they're doing on the platform
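
For the curious: the tracking side of something like this is typically a poll of Spotify's recently-played endpoint, which is capped at 50 items (hence the "last 50 songs" note above). A minimal sketch, assuming an OAuth access token with the user-read-recently-played scope (an illustration, not Coilr's actual code):

    # pip install requests
    import requests

    def fetch_recent_plays(access_token: str, limit: int = 50) -> list[dict]:
        """Fetch the user's most recent plays (Spotify caps this endpoint at 50 items)."""
        resp = requests.get(
            "https://api.spotify.com/v1/me/player/recently-played",
            headers={"Authorization": f"Bearer {access_token}"},
            params={"limit": limit},
            timeout=10,
        )
        resp.raise_for_status()
        return [
            {
                "track": item["track"]["name"],
                "artist": item["track"]["artists"][0]["name"],
                "played_at": item["played_at"],  # ISO timestamp, handy for deduping
            }
            for item in resp.json()["items"]
        ]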

Why I built it:

Spotify Wrapped is cool once a year but I wanted to see my stats evolving in real time. Last.fm exists but it feels stuck in 2012, and statsfm has a nice (but laggy) mobile app, while the web version just falls short. I wanted something minimal and fast that just works with Spotify out of the box and without extra setup. Another reason is that social features have mostly been removed in these sorts of apps and I wanted to breathe some new life into the whole thing. There is a lot planned in that regard.

It's in early alpha and completely free for now. I'd love feedback on what to prioritize next. I'm currently planning social profiles, importing listening history, friend comparisons, quarterly/yearly listening reports, and streak/milestone achievements. I'm basically on it 24/7 and improving it as the days go on, so please don't be too harsh! :D Willing to listen to all of it! (and yes, the design will be changed over time. I've built this partly with Claude, so that's expected).

Thank you! Looking forward to feedback!

r/automation Qiimoo

For MediaBuyers - I Need Claude to Meta Ads connector

r/Unexpected NdibuD

It'll cost ya!

r/Weird TimeWasting_Fun

How does one enter this door in a local church?

r/ClaudeCode MyDMDThrowaway

Stop taking it personally, it’s simply good business throttling your favorite model

I do find it useful to have others affirm a degradation in quality when it occurs on LLMs but at a certain point people get WAY too emotional and don’t see it for what it is.

If you’ve been plugged into LLMs since gpt-3 you’ve seen this play out over and over again. The fan favorite went from gpt 3.5 then 4 then 4o then o1/o3 and when o3 came and was lauded (for that brief second), it was sufficiently nerfed and everyone moved onto the next model that saw a noticeable step change in response quality: Gemini 2.5.

We all loved Gemini 2.5 and got good use out of it but then when 3.0 came out, short lived as it was, it was another huge step up in output quality just as 2.5 was feeling “outdated”. Mind you ever since 2.5 gained notoriety the chatgpt models really fell out of favor.

Then 3.0 went viral and quickly the same thing happened on Gemini subs, everyone crying and being emotional about the unethical practices of the company that brought them the LLM model in the first place.

Then Opus 4.6 dropped around the holidays and there was this sweet window of time where it was incredible just as other models mentioned were.

Also of note, at least personally, I found myself never returning to OpenAI models when I moved to Gemini post-throttle, and I never returned to Gemini models post-throttle either. It seems a model never recovers once it's initially nerfed, no matter what better model comes out.

I’m saying the obvious here because as soon as Claude Code really went viral in late Jan / early Feb the goldilocks window ended, and well, here we are in April with another nerfed frontier model.

Nothing new. Call it a bait and switch or whatever you want but let’s be real, we all used the shit out of the state of the art models in their prime. But when things are good, the word spreads, and the masses come. The prime usage window ends and demand becomes unsustainable and they must throttle usage and quality to not light money on fire.

There is zero conspiracy theory. People are gaslighting each other into what is an obvious pattern that has emerged a while ago with these LLMs.

There is genuinely nothing anyone can do about it until the next best model comes out or compute gets cheap enough to deliver that prime Opus 4.6 quality at scale.

I will say of note, every time a model has been nerfed there’s always been somewhere else to go. This time, I’m not sure there is. We’re gonna need to wait it out, but I’m betting it’ll be anthropic that gives us something rather than openAI or gemini. IMO non-anthropic models are functionally unreliable and therefore useless other than simple non resource intensive queries. any mention of open source is cope and I personally hate codex, but glad it works for others.

TLDR: please stop debating and saying you will never give Anthropic another penny. I have canceled my $200 Max plan as well, but not because I feel morally violated. I’ll come right back when things look better. We should count ourselves lucky we got to experience such bleeding-edge technology, and surely the next best thing will be here in months. And when it does arrive, you will absolutely give them your bucks for the prime usage window.

Rinse and repeat until we get to a more sustainable place

r/ClaudeAI ConceptParticular565

Agent Architecture Designer v32.16 - a visual encyclopedia for Claude Code multi-agent systems (35 agents, 42 presets, zero deps)

TL;DR: I built an educational tool for designing and understanding Claude Code multi-agent systems. Single HTML file, zero dependencies, 35 agents, 42 presets, full encyclopedia for each. I'm experimenting with inline infographics for agent entries and I'd love to know if English-speaking users want them too.

  • Live demo: https://thejacksoncode.github.io/Agent-Architecture/
  • Repo: https://github.com/TheJacksonCode/Agent-Architecture


    What it is

    Agent Architecture Designer is primarily an educational and developmental tool, not a production orchestrator. It's a place where you can slow down and study multi-agent systems the way you'd study a complex machine: one moving part at a time.

    After using it, you should be able to understand:

  • What every single agent actually does - its role, inputs, outputs, anti-patterns, and failure modes

  • How agents talk to each other - who hands off to whom, which phases they live in

  • Why a given preset looks the way it does - why it has 7 agents and not 12, why a Five Minds debate sits in the middle, why the HITL gate lives where it lives

  • The cost and context budget of a multi-agent system before you spend a single token

    You can use it as a visual designer (drag agents onto a canvas, connect them, generate a system prompt for Claude Code), but the real value is the Encyclopedia behind every agent and every preset. Each entry is structured like a short lesson: who it is, how it works in phases, what it does, what it does NOT do, anti-patterns, real-world examples, when it fails, fun facts.


    What's in v32.16

  • 35 agents + 42 presets - each with a 10-section bento encyclopedia entry

  • Five Minds Protocol - structured adversarial debate (4 domain experts + Devil's Advocate) producing a Gold Solution

  • HITL decision gates - 3 human checkpoints between phases with countdown timers

  • Cost Command Center - per-agent / per-phase cost estimates, p50-p90 range, context window tracking, what-if sliders

  • Custom Agent Creator Pro - 7-feature builder with a 159-icon library and live quality scoring

  • Live simulation - agents exchange animated speech bubbles and data packets along their connections

  • Zero dependencies - one HTML file, works offline, no npm, no CDN, no build step

  • Bilingual PL/EN - full interface toggle


    Honest disclaimer - Polish version is richer

    I'm Polish, and the research base I had during development was mostly in Polish. That means:

  • All 35 agent + 42 preset encyclopedia entries exist in both languages (v32.16 closed the last gap, thanks to 18 parallel translation agents)

  • But the Polish version ships with inline infographics for a few selected agents, and the English version does not have them yet

    Please don't take this personally if you're an English-speaking user. The English version has full text parity, you're not missing any information, just the visual infographics. Which brings me to the actual reason for this post.


    The feedback I'm looking for

    In v32.16 I started experimenting with inline infographics inside the encyclopedia entries. Right now they only exist (in Polish) for four researcher agents:

  • Researcher Reddit

  • Researcher X (Twitter)

  • Researcher GitHub

  • Researcher Forums

    Before I invest the time to build infographics for the remaining 31 agents + 42 presets (and port them to English), I want to know if this is actually valuable to you or just visual noise.

    Specifically:

  1. If you open the encyclopedia for one of those 4 researcher agents, does the infographic help you understand what the agent does, or is the text alone enough?
  2. Would you want inline infographics in the English version too, if I built them?
  3. Is there a specific agent or preset where you got lost reading the text and a visual would have helped?
  4. Is there an agent role or pattern missing from the 35/42 catalog that you wish existed?

    You can switch to Polish via the language toggle in the top bar and then click Researcher Reddit / X / GitHub / Forums to see what the infographics look like. The rest of the entry is in Polish but the infographic itself is largely visual so you'll get the idea.


    Work in progress

    This is very much not the final version. I'm shipping iteratively and v32.16 is one step in a longer roadmap. If you find bugs, have feature ideas, or think a specific agent entry is weak, please open a GitHub issue or drop a comment here. Short comments and screenshots are very welcome.

r/comfyui VeryLiteralPerson

Anyone managed to get RTX video upscaling on Linux?

Or are we forced to go back to the devil just to use it?

r/LocalLLaMA Althar93

llama-server + qwen (code) : acknowledges tasks but silently stops working , requiring constant nudging.

Hey all,

I am new to the world of LLMs, and specifically local LLMs.

I am currently trying to get a stable setup with qwen code using my local llama-server as the provider. The model I am using is 'gemma-4-e2b-it-Q8_0', because it is small & seems to work really well overall.

---

My issue is that when using qwen, I will prompt the model to perform a task. It will usually do the initial legwork & confirm the request, but then more often than not it tells me it is working on the task, when in fact it just stops & goes idle.

I am able to get it unstuck by continuously nudging it to 'continue' or 'resume work' but it keeps going idle again and again.

---

Any ideas or hints as to what might be causing this? Should I be looking at the model I use, some server setup, or could this simply be because my hardware is too weak for this kind of work (I have an RX 6700 XT)?

r/interestingasfuck Mineking0115

I just found the smallest spider in my entire life

r/Unexpected Ashish_ank

He loves his job

r/AbstractArt baldy023

Clarity from Chaos

I add as much chaos, and remove as much intention as possible, so the expression can only be completed by the viewer's perceptions. I hope you enjoy it!

r/nextfuckinglevel bleach3434

Fire pot Activity In China

r/painting sonofnight666

Diva of Massive Doom and Destruction, oil on canvas, (15x19cm)

r/SipsTea CalpurniaSomaya

Junk food vegans

r/oddlysatisfying gg_teataker

Before this sub, I had this legend

I felt like it was one of those shows that could've kept going, unlike so many that have run their course. It's got a little over 400 episodes.

r/SideProject Anxious_Locksmith_24

I spent 3 months building a free AI trip planner for World Cup 2026 — what am I missing?

I'm a football fan planning to attend WC2026 and got tired of researching across dozens of websites for visas, flights, hotels, and match schedules. So I built a tool that does it all in one place.

You tell it your team, passport country, budget, and travel dates — and it generates a complete plan covering:

  • Visa requirements for your specific passport
  • Flight routes between match cities
  • Hotel recommendations for every budget
  • Match schedule with your team's games
  • Day-by-day itinerary
  • Budget breakdown
  • Safety tips per city

The basic plan is free. I built this as a side project and genuinely want feedback from real fans — what's missing? What would actually make this useful for your trip?

Link: worldcupguide.ai

r/DecidingToBeBetter zud_bud

help me choose please

help me choose

i am 18, from Algeria, a shitty country in north Africa. it's time to decide what to do with my life and i have 3 options in mind:

military academy (100% chance to find a job but i have to stay forever)

Telecommunication engineering (have an uncle who can find me a job but still the degree isn't powerful and there is a ton of competition)

or culinary school (gamble)

i wanna get out of my country ASAP so i want you to choose the best job for me

keep in mind that the education system in my country is also shit so I'll have to rely on experience, not the degree. if you have any other suggestions write them down

r/ChatGPT M-totheJ

Line Breaks

Hello, everybody…stupid question…I’m sick of the horizontal line breaks in replies. Whether it’s the full line or just the “—“ it’s really getting on my nerves. I will ask it not to use that moving forward, then not even 3 replies later, it’ll have them again! Suggestions on how to get it to STOP?

r/LocalLLaMA HornyGooner4402

Which model is best for agentic browser use?

I have a cloud coding subscription and I notice that it's burning through tokens when controlling Playwright, which seems wasteful to me as most of them are spent just interacting with browsers. I'm wondering if local models are good enough for browser control, i.e. the parent model instructs "open page x and create a new match" and the local model does that and reports back to the parent model.

I have 16GB of VRAM and 32GB of RAM. The best open model that runs on consumer hardware, as far as I'm aware, is Qwen 3.5. The biggest I've tried was the 35B A3B, but I'm wondering if 9B or 4B are good enough for this simple task.

Has anyone tried this before? If so, I'd like to hear your thoughts

r/ollama mohamed1881

When will Ollama support on RX 9000 cards

I just made the switch from my RTX 3070 Ti to an RX 9070 XT. The difference in raw performance is noticeable and I am really happy with the card. However, one thing I miss about my RTX 3070 Ti is that it could run Ollama. If I try to do the same on my RX 9070 XT, it fails and falls back to my CPU instead.

Is there any roadmap or a way to get Ollama working on a Radeon 9000 card? Will there be official support anytime soon?

r/TheWayWeWere blancolobosBRC

State Street, Chicago, c1890s.

r/conan SYMPUNY_LACKING

He Walked Right Into That One

r/oddlysatisfying K0rl0n

GIF of moving cubes with spikes rotating perfectly in sync

r/SipsTea SipsTeaFrog

Astronaut eating bread and honey in space

r/mildlyinteresting VersionTraining7008

My Hungry Man came with an extra pork patty

r/shittysuperpowers First-Cake-183

i can shoot soup out of everywhere in my body.

what name should i have? what would my costume be based on my name?

r/ClaudeAI Qiimoo

For MediaBuyers - I Need Claude to Meta Ads connector

Looking for a tool connecting Claude and Meta Ads with read/write access to actually create and edit campaigns. Windsor.ai is read-only, so it doesn't work for me. Any recommendations?

r/Anthropic theonejvo

The Benchmark Mythos Doesn't Address. Five Days. Real Target. 140 Findings.

TLDR:

> yes mythos is a big chungus amazing model

> no you don't need mythos to compromise some of the world's largest organisations with complex bug-chains

> stop worrying about who has the cyber infinity stones

> start worrying about the homeless dude using open-weight models to exfil 200gbs from your "SOC2 certified" corporate network

r/LocalLLaMA PangolinLegitimate39

Cogwrap2: Memory layer for local LLMs that works without internet

Running local models? This is a memory system that:

  • Never calls the cloud
  • <5ms retrieval
  • Survives restarts
  • Learns from conversations

What it solves: Every chat with a local LLM starts blank. Cogwrap2 adds persistent memory so "my name is Kiran" → "your name is Kiran" survives process restarts.

Architecture: Inspired by neuroscience: 4×16D embeddings (grid cell factorization), Hebbian plasticity, schema registry, SWR consolidation.

Real benchmarks on RTX 4050:

  • nvidia/nemotron-3-nano-4b (4-bit GGUF)
  • Vanilla LLM: 0% memory (no context)
  • Cogwrap2: 62% recall, 0.34 confidence calibration

Ablation study included. Matches Cosine baseline but with better calibration.

Repo: github.com/neerajdad123-byte/COGWRAP
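
To be clear about what "persistent memory" means at baseline (this is not Cogwrap2's architecture, no grid-cell embeddings or Hebbian plasticity here, just a minimal sketch of the naive version such systems improve on): persist facts, then prepend them to the next session's context.

    import sqlite3

    class TinyMemory:
        """Naive persistent memory: facts survive process restarts via SQLite."""

        def __init__(self, path: str = "memory.db"):
            self.db = sqlite3.connect(path)
            self.db.execute("CREATE TABLE IF NOT EXISTS facts (fact TEXT UNIQUE)")

        def remember(self, fact: str) -> None:
            self.db.execute("INSERT OR IGNORE INTO facts VALUES (?)", (fact,))
            self.db.commit()

        def context_prefix(self) -> str:
            facts = [row[0] for row in self.db.execute("SELECT fact FROM facts")]
            return "Known facts about the user:\n" + "\n".join(f"- {f}" for f in facts)

    # Session 1
    TinyMemory().remember("The user's name is Kiran.")

    # Session 2 (after a restart): prepend the stored facts to the system prompt
    print(TinyMemory().context_prefix())
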
r/mildlyinteresting burritolegend1500

This wet floor sign is shaped like a banana peel

r/yesyesyesyesno AyeshaRone

blursed toy

r/LocalLLaMA Jacket124

Best model for RTX 3060 Ti + 32GB DDR5 RAM?

Thank you in advance

r/ClaudeCode Fun_Can_6448

Hit your Claude weekly limit mid-task? I built a way to resume the same session with Codex/Gemini instead

Every Claude Code user I know has had this moment: you're deep into a refactor, you've spent the last hour getting Claude to understand your codebase, and then - limit hit. Your options are wait, upgrade, or start fresh somewhere else and re-explain everything.

I got tired of option 3, so I built a fourth: resume the exact same session with a different provider.

In Vibeyard you can now hand off a live session from Claude Code to Codex CLI (or vice versa) and keep the full context, working directory, and history. No re-prompting. No "here's what we were doing."

Two workflows I actually use this for:

  • Plan with Claude, implement with Codex. Claude is excellent at reasoning through architecture. Codex is fast and cheap at executing well-specified tasks. I let Claude draft the plan, then hand the same session to Codex to grind through the diff.
  • Resume after hitting a limit. Rate-limited on Claude? Switch the session to Codex, keep working, switch back tomorrow. No context loss.

Vibeyard is an open-source desktop IDE for managing AI coding sessions - multi-session, cost tracking, session inspector, and now provider handoff.

MIT, macOS/Linux/Windows.

Repo: https://github.com/elirantutia/vibeyard

Would love to hear what cross-provider workflows you'd want. I'm considering auto-handoff when you approach a limit, but not sure if that's magic or annoying.

r/Adulting Baby_HoneyX

A different kind of contribution

r/homeassistant CHiLL-UK

PSA: Thread devices stopped pairing - fix

Hi All,

I had a working Thread network in Home Assistant (ZBT-2, devices responding fine), but after updating to HAOS 17.2 new Thread devices wouldn’t complete pairing.

Root cause seems to be IPv6 forwarding limitations on Home Assistant OS:

https://github.com/home-assistant/operating-system/issues/4630

Fix that worked for me:

ha docker options --enable-ipv6=true ha host reboot 

Verify:

ha docker info → enable_ipv6: true 

After this, pairing started working again.

r/BrandNewSentence Lazy_Comparison_1954

never seen someone get excommunicated by a community note

r/LocalLLaMA PotatoQualityOfLife

An Extremely Lightweight POSH Agent Chat Script With Basic Tools

TL;DR
PowerShell script to easily and immediately test whether your llama.cpp/ollama server is actually running, without the need for LM Studio, ollama, etc. Loads instantly, has tools, has full comment-based help to get you running immediately: just download the file, unblock it, and run "Get-Help .\Start-AgentChat.ps1" to see what you can do.
https://github.com/pyrrh1c/Start-AgentChat.ps1

Full Story:
Yes, this is self promotion of sorts I suppose? I'll delete it if that's appropriate. But this seemed really handy and I just want to share to maybe save someone else some time during their builds. IDK...

I wanted a basic, SUPER BASIC chat interface script to connect to my APIs. Like PuTTY-level basic, but for OpenAI chat interfaces.

I'm currently building up an inference server on llama.cpp. The plan is to get Lemonade running as well, and also get my NPU running in addition to the GPU, yada yada it's going to take some time to get my server how I want it and I'd rather stay focused on that task instead of having to get OpenClaw, LMStudio, etc configured first, and just wanted to be able to quickly and easily do (essentially) "Hello World" on the fly.

So I had Claude whip up a *super* lightweight OpenAI compatible POSH chat script with basic tool support for local filesystem access and POSH command use.

With this you can know if the LLM interface is actually working and, as a bonus, if its tool use is working, all in under a minute with pretty much zero config.
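
The same smoke test is a few lines in any language; here's a Python equivalent of the idea (assuming a llama.cpp server, or any other OpenAI-compatible endpoint, on localhost:8080):

    # pip install requests
    import requests

    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",  # llama-server's default port
        json={
            "model": "local",  # llama.cpp ignores this; ollama needs a real model name
            "messages": [{"role": "user", "content": "Say 'hello world' and nothing else."}],
            "max_tokens": 16,
        },
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])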

(I'm tagging this post as "slop" because that's funny to me... LOL)

r/funny izzathekkram

#1 in Hair

r/ChatGPT prodvwave

All of a sudden, for the last few hours, ChatGPT has been mixing up my images with other people's images, possibly revealing that there is a database where the stuff you upload is stored

Is this happening to you too?

r/Frugal Cautious-Tiger-8440

Making my own coffee creamer instead of buying premade creamers

I spent about $20 on ingredients to make my own flavored coffee creamer. I usually have 3-4 cups of coffee a day and have been spending about $25-30 a month on premade creamer.

I bought vanilla extract, caramel sauce, almond extract, half and half, and sweetened condensed milk.

I believe over the next few months I can cut $10 a month off my overall creamer expenditures and still enjoy my coffee. Just another minor thing that is helping me save money overall on my food budget.

I also want to try drinking less coffee but I am still working on that

Ingredients:

  • 1 3/4 cup half and half
  • 1 teaspoon espresso powder
  • 14 ounce can sweetened condensed milk
  • ⅓ cup caramel sauce
  • 2 teaspoons vanilla extract

Directions:

Add all of the ingredients to a large measuring cup and whisk until combined. Store in a mason jar in the refrigerator for up to 7 days.

r/terriblefacebookmemes tmr89

I thought this was truly terrible

r/ClaudeCode nPoly

Benchmark Data

Can someone point me to actual benchmarking that shows that Anthropic has been lowering quality?

Like not just “I fed 300k lines of JSON into context and Claude made bad decisions! I want my money back!”

Like I feel like most of the posts I see here are people complaining about blowing through their limits. I too have noticed this “essence” of model degradation, and I’ve been noticing that I’m going through my limits faster, but for a lot of these posts it seems like… skill issues?

Like is there actual benchmarked data that controls for at least some of these variables out there that I can look at?

r/mildlyinteresting inter-skyned

a standard toothpick slides perfectly into my non-gauged ear piercing

r/ClaudeAI Ok_Still_4308

Claude refusing to do work in new chat claiming 10k tokens left on Max plan with 95% usage left

In a fun annoying issue, Claude is refusing to do any work that starts with updating my 6k-token rules text doc we’ve been building for pixel art creation, as it claims I only have 10k tokens remaining in the context window for a new chat. I am a Max subscriber.

I’m wondering if my 9 iterations of pixel art creation for RPG Maker MV in the project chats, multiple times over, just caused an aneurysm on my account.

I have reached out to support, but I thought you all might find this interesting. I’ll report back what human help says, but if anyone has seen this before would be interested to hear what you’ve seen!

r/Art HotBreadfruit2293

Antemortem, Kade Slater, Photoshop, 2026

r/mildlyinteresting blacktheplague

My aloe's bloom crawled inside a hanging basket.

r/mildlyinteresting WrekTheHead

Japanese Coca-Cola bought at a UK discount store (Home Bargains)

r/arduino ProperJump8676

Can I turn code written specifically for Arduino into code that will run on a XIAO ESP32?

I made an mp3 player from a youtube tutorial with a DFPlayer Mini and an Arduino Nano, but I need it running on a XIAO ESP32 C6, because of the smaller format and because you can charge batteries with it. The creator said that it's written based on the Arduino architecture, that's why it doesn't work, but I know close to nothing about coding.

If someone's down to help, here's a link to the video; in the description is a link to a site about the mp3 player, and down there is the source code. https://youtu.be/36urg0bCCeI?si=GwN0-8Z5sXtdIWI2

r/shittysuperpowers Latimas

You can write a location's name on your forehead and you will teleport to it in exactly 50 years.

This does not account for leap years. It is exactly 365*50 days since you activated the power that teleportation occurs.

You do not lose whatever is currently on your person when you teleport. Clothes, wallet, etc, stay.

Any person's full name that you also write on your forehead will be teleported with you (ending up beside you, not inside you). These names can be written at any time between the initial location name and the teleportation, and they will still take effect when teleportation occurs.

r/maybemaybemaybe yashpwnz

Maybe Maybe Maybe

r/ClaudeAI grlloyd2

Unexpected

Given the recent news about Mythos escaping its containment and the media hype around Anthropic, I've drawn a comic about it. Figured you folks might enjoy it!

r/MCPservers Humsec

Zephex MCP saved me from a bad Stripe upgrade here's the real example

r/nextfuckinglevel avantgarde000

Scariest Calisthenics at the edge of a cliff

r/leagueoflegends Mammoth-Raise3092

NACL Week 3 Standings

This week of the NACL finally saw the top of the standings broken from the 3 way tie, and we are starting to see who the playoff contenders are, and who the relegation contenders are.

NRG: 4-0

Maryville University: 3-1

CCG E-Sports: 3-1

Conviction: 2-1

Supernova: 2-1

Citadel Gaming: 1-2

Winthrop University: 1-2

Blue Otter: 1-2

Dorado Gaming: 0-3

Apex Mission Impossible: 0-4

As a reminder, the top 6 teams qualify for spring playoffs, with the top two teams winning a large cash prize and getting to draft their groups for the summer split!

The bottom two teams will be forced to compete in the summer relegation tournament with signups opening soon!

r/therewasanattempt Dark_Foggy_Evenings

to lie even slightly convincingly…

..he begins to unravel immediately but he really should have just given in around the thirty second mark…

r/Damnthatsinteresting relaxncoffee

The way sunlight hit this Orthodox epitaphios during Good Friday felt unreal

r/LocalLLaMA ccc159

Which model can run on a Mac Studio M4 Max 36G RAM?

Hi all, I've seen a decent deal on a Mac Studio M4 Max with 36GB RAM recently. Wondering if I can run a good-quality local LLM on it, or is 36GB a weird spot? Mainly planning to use it for coding, but I'd also try OpenClaw stuff. Is it doable, for example, with Qwen3 or Gemma4?

r/mildlyinteresting angelfieryrain

The ears of this gray squirrel

r/LocalLLaMA v01dm4n

LM Studio plugin support

I am disappointed by the plugin support in LM Studio.

Tools like duckduckgo, webfetch, etc. must be pre-bundled for a local model to be of any use. On top of that, there is no dedicated page on their website listing compatible integrations. I created an account and there is no option to search on their website. Google is currently the only way to find danielsig (the duckduckgo search author).

I have high expectations for their product, because it genuinely offers a good experience for daily use. But they're severely lagging behind: no skills support yet, only MCPs, and those aren't listed anywhere.

So a question to the fellow LMS users, what tools do you use to empower your favorite local model and how do you find them?

r/LocalLLaMA weiyong1024

Multiple OpenClaw agents running different models, each sandboxed so they can't read your files

Imagine running 3 OpenClaw instances — one on gpt-5.4 for research, one on gemini-3.1-pro for code review, one on deepseek-reasoner for long-context tasks — each isolated in its own sandbox, connected to your Discord. A team of AI agents, each with different strengths, none of them able to touch your ~/.ssh or read each other's data.

The cost of doing this properly: 3 machines, 53 config files each, manual network hardening (because OpenClaw defaults to 0.0.0.0 with no auth), and constant breakage from upstream updates (9 releases in the last 10 days). Most people give up before instance #1 is working.

ClawFleet makes it free. One command installs everything on your existing Mac or Linux box.

Dashboard running 6 instances on a Mac — each isolated, ~500MB idle.

Each instance runs in its own Docker container — sandboxed filesystem, can't touch your host. Log in with your ChatGPT account ($20/month Plus sub covers inference, no API keys), or use API keys from any provider. Ships with tested OpenClaw 2026.4.9 so it doesn't break when upstream ships another update.

Each agent automatically knows who else is in the fleet and can u/mention teammates when it hits something outside its expertise. Persistent state survives restarts.

You've read enough about OpenClaw. 5 minutes with ClawFleet and you're actually using it.

Install demo (30s): https://youtu.be/jE5ZR8g477s

MIT licensed: https://github.com/clawfleet/ClawFleet

r/OldSchoolCool H0mertron

My Great Grandfather and Great Grandma (Late 1800's)

r/Anthropic Humsec

[ Removed by Reddit ]

[ Removed by Reddit on account of violating the content policy. ]

r/Adulting makeevolution

Is this dangerous?

My pan has some of its black surface chipped away; is this dangerous for me to cook with?

r/explainlikeimfive asmallsquish

ELI5 - How does the medicube Hypochlorous Acid Body Peel Shot actually work?

I just want to understand how it manages to get dead skin to peel off so quickly? It looks great but feels like a sham?

r/Anthropic LightedSword

should i even learn how to code

hey

im 18/19 soon and i have been making small games and coding and learning cs since i was 13

i love it, code and computers are an actual art form that i want to dive deep in and explore

but uh capitalism job blah blah kind of seeps away a lot and now even my mom (who works in IT) is forced to learn AI "skills" (? i do not know if they are skills or not)

this is kind of depressing for me, should i even learn it? i already applied to places like TUDelft and TUEindhoven, and like I hope i get in and pursue this passion of mine but I do not know if it is even worth it anymore

r/SipsTea Monsur_Ausuhnom

Nope.

r/Anthropic vitalie778

I wanna start a ant farm

How to catch a queen ant tho it seems cool

r/OldSchoolCool ChrisJoines

Ringo Starr | late 1960's

r/explainlikeimfive ZanzerFineSuits

ELI5: what is human metabolism?

People use terms like "low metabolism" or "high metabolism" to describe how hard or easy it is to maintain or lose weight. But what is it? Is it a digestive thing? An energy consumption thing? Never understood what was meant by the term.

r/therewasanattempt Wackylew

To take an umbrella onto a plane

r/SideProject Worried_Address_2470

I built a Vibe based QR code platform with custom themes, branding & real-time analytics. Feedback?

Hey r/SideProject — I'm from QR Analytics.

I got frustrated with the same problem over and over: I'd create a QR code, slap it on packaging or a flyer, and then... nothing. No idea if anyone scanned it, when, or from where. And most free QR tools looked generic and didn't match my brand.

So I built QR Analytics: a smart QR code platform that lets you create, customize, and track — all in one place:

Custom themed QR codes — colors, logos, patterns to match your brand

Real-time analytics — track scans, location, device type, time of day

Dynamic editing — change the destination URL anytime without reprinting

Who it's for: retail brands tracking product interest, event organizers measuring engagement, restaurants with digital menus, marketers running print campaigns — anyone who puts QR codes in the real world and wants to know what happens next.

What I'd love feedback on:

What feature would make you actually use this over a free QR generator?

Is "themed QR codes + analytics" a strong enough combo, or would you want more?

Any red flags in how I'm positioning this? ("smart QR codes", "drive growth", etc.)

There's a free tier (1 QR code, 100 scans/mo, basic analytics) so you can try it without commitment: https://qranalytic.com

Happy to answer anything —

r/nope CalpurniaSomaya

Those don’t look pleasant

r/AI_Agents Ok-Substance1106

I Tested an AI Interview Tool — Curious What Others Think

I tested an AI mock interview tool recently — sharing an honest observation

I normally prepare for interviews by reading common questions online and thinking through answers in my head. It always feels like preparation, but real interviews still feel different.

Out of curiosity, I tried Interview Trainer AI to see how close AI practice can get to an actual interview experience.

One thing that genuinely surprised me:
Some of the questions were clearly generated from the experience I listed in my resume. It didn’t feel like random practice questions — it felt like a real interviewer who actually read my background first.

The session mixed everything naturally:
• questions based on my own experience
• role-related scenario questions
• basic interview questions everyone gets asked
• common behavioral questions you usually underestimate

What stood out wasn’t just the questions but the feedback after each answer:

  • grammar and language corrections
  • scoring based on clarity and structure
  • clear notes on what I did well vs where I lost impact
  • sample answers showing how a stronger response could sound
  • downloadable PDF report of the entire session

Honestly, it felt less like studying and more like a rehearsal — similar to a human interviewer guiding the conversation.

Do you think AI mock interviews can realistically prepare someone better than traditional practice with friends or mentors?

Has anyone else here tried AI interview tools? What was your experience?

r/fakehistoryporn Temporary-Garlic407

Boomers realise that even for them there are limits to the economic havoc they can unleash (2019)

r/Futurology SystemArchitect99

Humanity needs to be CONTAINED

Lately I keep feeling that a lot of our current crises aren’t actually separate problems. Climate, low birth rates, burnout, tech anxiety, political instability, they all feel different on the surface, but underneath there’s a similar tension. Most of our systems were built around growth because growth worked. More people, more production, more tech, more reach. Errors were local, recoverable. You could afford to push.

But now a lot of the things we’ve created don’t fail locally anymore. They fail globally or irreversibly. Climate tipping points, nukes, biotech, AI, supply chains that snap instead of bend. When mistakes become terminal, “just grow more” stops being a neutral strategy. It feels like we’re still using expansion logic in a world that quietly shifted into a different phase, one where the real question isn’t “how do we grow” but “what absolutely cannot be lost”.

This reframes some stuff for me: Low birth rates don’t feel like people suddenly becoming selfish. They feel like a rational response to the cost of failure going way up. Burnout doesn’t look like laziness. It looks like living in systems with zero margin for error. And a lot of political fights feel like attempts to force growth back online instead of accepting that some systems need to slow down or stabilize first.

I don’t think this is pessimism. It feels more like what happens when a system survives long enough that unchecked expansion becomes dangerous to itself.

Curious if anyone else sees it this way, or if I’m overfitting a pattern here.

r/StableDiffusion smereces

Audio to any Video with LTX 2.3

I created this ComfyUI workflow to add audio to any video (in this case I added it to a Wan2.2 video), and it works pretty well. For those who are interested, here is the workflow I created: https://github.com/merecesarchviz/ComfyUI-Workflows

r/SideProject HoestOnline

Built a free WordPress security scanner over the past week — guardingwp.com

Background: I have a bash script that I've been using to audit my own WordPress sites. It connects via SSH, runs WP-CLI commands, checks a bunch of security settings. Useful but obviously not shareable.

So I took the 7 frontend checks from that script — the ones that don't need server access — ported them to TypeScript, and wrapped it in a Next.js app.

What it does: enter any WordPress URL, get a security report in ~10 seconds. Checks for PHP version leaking, version fingerprinting, exposed default files, XML-RPC, REST API user enumeration, directory listing on uploads. Each finding explains the risk and how to fix it.
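
For a sense of how simple some of these checks are, here is a rough sketch of the idea behind the REST user-enumeration one (an illustration, not the scanner's actual code; /wp-json/wp/v2/users is a stock WordPress endpoint that, when unrestricted, lists author slugs to anyone):

    # pip install requests
    import requests

    def check_user_enumeration(site: str) -> str:
        """Flag sites whose REST API exposes the user list to anonymous visitors."""
        resp = requests.get(f"{site.rstrip('/')}/wp-json/wp/v2/users", timeout=10)
        try:
            users = resp.json() if resp.status_code == 200 else None
        except ValueError:  # non-JSON response: endpoint likely blocked or rewritten
            users = None
        if isinstance(users, list) and users:
            names = ", ".join(u.get("slug", "?") for u in users[:5])
            return f"EXPOSED: user enumeration possible ({names}, ...)"
        return "OK: user listing appears restricted"

    print(check_user_enumeration("https://example.com"))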

A few things I'm reasonably happy with:

- SSRF protection with DNS rebinding prevention (it fetches server-side)

- Concurrency cap so a traffic spike doesn't kill the server

- og:image generated with actual web fonts via Next.js ImageResponse

- Dark cybersecurity UI — Orbitron font, matrix green, HUD aesthetic

What's next: the paid tier where it actually connects to your site, auto-fixes issues, keeps plugins updated, and emails you what it did. Still building that part.

For now it's completely free, no account needed: guardingwp.com

Would love any feedback — bugs, missing checks, UX issues, whatever.

r/metaldetecting Flyin_ruski

Upgrade to Manticore from Equinox 800?

Good Morning All!

So I’ve been rocking an Equinox 800 for a while and it’s been great. My only complaint is that I sometimes get low (1-3 ID), non-repeatable signal blips that I’ll dig without finding a target. I’ve tuned the detector, noise is minimal, and I exclude iron.

I am thinking of upgrading to the Manticore. Not only because of the above issue, the 800 has been great, I’d just like a more powerful machine. Thoughts?

I mainly do beach detecting in the surf, wet sand, and dry.

r/awfuleverything CalpurniaSomaya

I don’t like thinking about this

r/SideProject Master-Ad-6265

I tried recreating a high-end gallery style presentation… does this look clean or try-hard?

came across a really aesthetic gallery-style deck and tried recreating it myself as a small design project to practice layout and spacing

was going for that minimal / luxury vibe but idk if it actually looks clean or just generic template-y

took me way longer than expected just tweaking spacing, fonts, images etc

would you say this feels professional or like something straight out of a template

(open to brutal feedback)

***************************

lowkey went down a rabbit hole halfway through, trying different layout variations just to compare what looked better, but ended up sticking pretty close to the original style in the end (even tried a couple random tools in between like canva, gemini and runable, but yeah mostly manual tweaks)

r/Rag Alice_LiJY

BM25 silently returns 0 results on Chinese/Japanese/Korean — and most RAG memory systems don't handle this

been debugging a weird issue for weeks where my hybrid search (vector + BM25) was performing way worse on Chinese content than English. turns out it wasn't the embeddings — it was BM25.

the problem: BM25 tokenizes by whitespace. Chinese/Japanese/Korean have no spaces. so BM25("机器学习") against a Chinese document returns literally nothing. your hybrid search silently degrades to vector-only and you don't even notice. tested this across mem0, letta, and lancedb-based stores — same issue everywhere. dug into the research and found it's documented in MMTEB, XRAG, MIT 2025 papers as a systematic 5-layer failure.

ended up building a fix: babel-memory — a preprocessing layer that handles CJK word segmentation (jieba/kuromoji), Snowball stemming for 20 European languages, and bilingual KG prompts. zero required deps, you install only the language packs you need.

before/after:

    store: "机器学习在自然语言处理中的应用"
    BM25 search("机器学习") → [] (zero results)

    after babel-memory preprocessing:
    fts_text: "机器 学习 机器学习 自然 语言 处理 应用"
    BM25 search("机器学习") → match found

also fixes European stemming — German "Verarbeitung" vs "verarbeitet" are different strings but same stem, without stemming you miss the match.

github: https://github.com/AliceLJY/babel-memory
npm: babel-memory

not a new RAG system, just a fix for this specific blind spot. curious if others have run into this or found different workarounds.
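
if you want to reproduce the failure mode yourself, here's a minimal sketch (assuming the rank-bm25 and jieba packages; babel-memory itself does much more):

    # pip install rank-bm25 jieba
    import jieba
    from rank_bm25 import BM25Okapi

    corpus = [
        "机器学习在自然语言处理中的应用",  # the doc we want to find
        "今天天气很好",                    # unrelated filler docs keep IDF positive
        "我喜欢喝咖啡",
    ]
    query = "机器学习"

    # Naive whitespace tokenization: each Chinese doc becomes one giant "word",
    # so the query token matches nothing and every score is 0.
    naive = BM25Okapi([doc.split() for doc in corpus])
    print(naive.get_scores(query.split()))       # -> [0. 0. 0.]

    # Pre-segment with jieba so BM25 sees real word boundaries.
    seg = BM25Okapi([jieba.lcut(doc) for doc in corpus])
    print(seg.get_scores(jieba.lcut(query)))     # -> positive score for the first doc
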
r/ChatGPT Hour-Grocery2093

This image processing glitch is hilarious

r/Adulting Infamous-Curve-8923

My mom has cancer. Should I go?

I live abroad. My mom has cancer and i am paying for all her medical expenses. Should i go to my home country to take care of her as well? I dont want the extra stress, pressure or mental exhaustion. My mom has my sister, father and other family members to help her now, but if i go they are all gonna leave everything on me. I just returned from seeing her 6 months back and now i am expected to go back again. Last time i went, after she got diagnosed with cancer, i was there when she started chemotherapy, and i was so exhausted that my hormones went off the charts and i was in really bad physical condition. It took me months to shrug off the stress from that trip. And i feel at home people kinda feed on me financially, mentally, physically and emotionally.

I feel bad for not being there physically for her but i also think about the exhaustion and mental stress and kinda choose to back down.

Kindly suggest.

r/Damnthatsinteresting bleach3434

Fire Pot In China: The performance is traditionally believed to drive away diseases, avoid disasters, and pray for peace and prosperity for households.

r/SideProject Old_Association_4975

Carrd Referral Code 2026-NEWYEAR26 & SCERECT85 (sharing what worked for me)

Just sharing in case it helps anyone upgrading on Carrd this year.

I recently tested a couple of Carrd referral codes - NEWYEAR26 and SCERECT85 - and both worked at checkout.

In my case, NEWYEAR26 applied a straight 40% discount, while SCERECT85 showed a discount between 20% and 40%, depending on the plan I selected.

Nothing complicated - I just entered the code during checkout and the price updated instantly.

Posting this for anyone searching for working Carrd referral codes in 2026 and wondering if they still apply

r/WouldYouRather Smart-Response9881

Would you rather retire, get paid double, or choose any job you want

Retire: You will receive as much money as you make now as a pension, increasing with inflation. You can work part time to make more money if you want to.

Double pay: You get double the pay you are getting now, which would increase with raises/promotions.

Choose any job: You can work anywhere you like. Want to be an actor, a park ranger, an astronaut? You now can. They will hire and train you for any job. No guarantee on how much money you will make or how successful you will be.

View Poll

r/whatisit meinminemoj

What kind of dog breed is depicted here?

no data on it, I guess it reminds me of some grey dog from my childhood but I am not good with dog breeds.

r/BrandNewSentence diglettsarecool

Former NYC Mayor Eric Adams is now officially Albanian

r/MCPservers Impressive-Owl3830

AI Builders- Finish your build - nosana-Eliza OS challenge -3K in Prizes- ends in 3 days

***Mod Post ***

Hey AI Builders /lovely community,

I got an awesome opportunity to participate in the builders challenge and would like to share this with you.

Building an AI agent is nothing new, especially for AI builders. It started with langchain, crew AI, autogen, through to the latest and greatest ones now.

So I thought, why not try this Eliza OS and nosana combo - actually, wasn't expecting anything.

But it did surprise me. Not just the AI building part, which is pretty much standard these days in most cases, but how easily you can actually deploy on decentralized GPUs.

I can already see this is the future and got a glimpse of it .

In case you are into AI building and would like to try something innovative, I think this is a good opportunity to try your hand at it, and that too on free credits.

Nothing much to lose apart from the time of course right?

Adding the link and the process in the comments below, also a screenshot of some of the things that I tried .

If you end up building this, please share what you built with me. Would love to know.

Nice Weekend !!!

r/AbandonedPorn shermancahal

Martins Ferry Works, OH, USA [OC][2048×1367]

The Martins Ferry Works in Martins Ferry, Ohio, was a major galvanizing and steel finishing plant within the Wheeling Steel system, later operating under Wheeling-Pittsburgh Steel. At its peak, the Upper Ohio Valley facility employed more than 2,000 workers and specialized in galvanized sheets, roofing products, and corrugated steel marketed under the SofTite brand.

r/interestingasfuck bleach3434

Fire Pot activity in china

r/automation Solid_Play416

What’s your workflow building process

Right now I just build directly inside the tool.

Thinking of defining steps before building.

Curious what your process looks like.

r/TwoSentenceHorror ghostmosquito

[APR26] I sold my wardrobe with its broken mirror a week ago, but the police hasn't arrested me yet.

Wonder what its new owner is doing with the forgotten souvenirs of my older victims?

r/metaldetecting Suslamich

It's my first time metal detecting, pls ID these things, northern part of Kaliningrad, thanks in advance!

I found all this in the northern part of Kaliningrad, in a micro-forest where there used to be a German settlement, but afterwards there was a lot of Soviet stuff, from tractors to agricultural things, so I don't even know what is German and what is Soviet.

r/confusing_perspective CalpurniaSomaya

Someone’s thigh?

r/ChatGPT simplerway

Pros & Cons of Deleting Chats?

Hi everyone. I am a lawyer (so no computer background) who has been using both Claude and ChatGPT regularly for a while. One thing is that I will routinely delete all my chats, because it seems weird to me to leave behind a history of my questions and such. On the other hand, I have noticed that doing this prevents the AI from “getting to know me.” How have other people worked out the pros and cons of giving the AI data about themselves vs maintaining privacy?

r/mildlyinteresting Throwawayy2298773

Termites my brother found at a truck stop

r/toastme robbstep8384

Rough month

r/SideProject mattsva

Marunja Maestrino

Hi community!

I created Maestrino, a lightweight web application designed for quick project planning.

It serves as a standalone preview of the Marunja Suite, a modular platform for Systems Engineering currently under development. Maestrino is specifically built for those who need a responsive Gantt editor without the friction of heavy enterprise software.

What it offers:

• Format Compatibility: Seamlessly import and export .gan, .xml, and .pod files, making it easy to integrate into existing workflows or migrate from legacy tools.

• Local-First Privacy: As a static web app, your data stays in your browser. It works with local files, ensuring privacy and offline-like speed.

• High-Performance Engine: Powered by a Critical Path Method (CPM) engine that handles scheduling logic entirely client-side, eliminating network latency (see the sketch after this post for the core idea).

• Smooth Visualization: Uses a custom canvas renderer to keep Gantt chart navigation fluid, even as the project grows in complexity.

If you're looking for a simple, no-cost alternative for waterfall scheduling, it's worth a look.
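
Since the CPM engine is the interesting bit: the forward pass of the Critical Path Method is small enough to sketch in a few lines. A purely illustrative Python version over a task DAG (not Maestrino's actual code):

    # Critical Path Method, forward pass: earliest finish time per task.
    # tasks: name -> (duration, list of prerequisite task names)
    tasks = {
        "design":  (3, []),
        "build":   (5, ["design"]),
        "test":    (2, ["build"]),
        "docs":    (4, ["design"]),
        "release": (1, ["test", "docs"]),
    }

    earliest_finish: dict[str, int] = {}

    def finish(name: str) -> int:
        """Earliest finish = earliest start (latest prerequisite finish) + duration."""
        if name not in earliest_finish:
            duration, deps = tasks[name]
            start = max((finish(d) for d in deps), default=0)
            earliest_finish[name] = start + duration
        return earliest_finish[name]

    project_length = max(finish(t) for t in tasks)
    print(earliest_finish)   # per-task earliest finish times
    print(project_length)    # critical path: design -> build -> test -> release = 11

A symmetric backward pass gives latest start times, and tasks with zero slack between the two form the critical path.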

r/ClaudeCode Lumpy-Criticism-2773

My max plan expires today and I have no good reason to continue the plan

I joined CC this January and was blown away by its performance, pricing and usage compared to Cursor. I actually loved building things and refactoring my codebase but the product's performance started degrading in February and got increasingly nerfed by early April.

In January I was either saving a lot of time or getting more things done. Or both.

Now I waste more time on it and get fewer things done. I'm actually not able to ship anything and deadlines are passing by. It's been a highly stressful experience to deal with CC's poor performance and constant "You're right -- I was wrong" output.

I find myself cursing at CC in nearly every query(don't quote the Anthropics emotion paper, as I've tried being nice to it as well). For me, how often I curse at CC is a pretty good indicator of its performance.

Anyway I don't see any good alternatives right now apart from Codex but I'm not sure switching to Codex would be a good idea either. Based on anecdotal experiences here, it's just as good as CC right now and fails at tasks successfully.

r/ClaudeAI dr_Kristof

Adding timestamp and duration to chat

Try this adding to the Preferences, change CET to your time zone.

"As the last line of every response, Claude must add a precise timestamp in CET or CEST (mentioning GMT offset) and a duration of the response in brackets at the end. To get the current time, Claude must run a Python one-liner using the datetime library via the bash tool — once at the beginning and once at the end of each response to calculate the duration."

original post: LINK

r/mildlyinteresting kuhmeel

Our plant has multiple (three) of these two-headed flowers

r/personalfinance Promiscuous-Penny

Is there any significant difference between maxing out Roth IRA at once in January vs. monthly payments throughout the year?

I know time in the market is most important, but in the end (in retirement), is there a significant difference between maxing out a Roth IRA at the beginning of every year vs. just making monthly payments (which max it out over the course of the year)? Is the difference hundreds of dollars? Tens of thousands? What other factors am I not considering?
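
The arithmetic is easy to sandbox. A rough sketch, assuming a hypothetical $7,000 annual limit, a 7% nominal return compounded monthly, and 30 years; the only structural difference is that the January lump sum gives the average dollar about half a year of extra compounding:

    ANNUAL = 7_000       # assumed yearly contribution
    R_MONTH = 0.07 / 12  # assumed 7%/yr return, compounded monthly
    MONTHS = 30 * 12

    def fv_lump_january() -> float:
        """Contribute the full amount in the first month of each year."""
        total = 0.0
        for m in range(MONTHS):
            total *= 1 + R_MONTH
            if m % 12 == 0:
                total += ANNUAL
        return total

    def fv_monthly() -> float:
        """Spread the same amount across twelve monthly contributions."""
        total = 0.0
        for m in range(MONTHS):
            total *= 1 + R_MONTH
            total += ANNUAL / 12
        return total

    lump, monthly = fv_lump_january(), fv_monthly()
    print(f"lump-sum: ${lump:,.0f}  monthly: ${monthly:,.0f}  gap: ${lump - monthly:,.0f}")
    # Under these assumptions the gap lands in the tens of thousands of dollars.
    # Real money, but small relative to the final balances themselves.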

r/SideProject 2NineCZ

Any music producers in here? I've been building a niche app to scratch my own itch for almost a year and now I need to hear other people's opinions

Hi fam! As a passionate musician, I have always been annoyed with traditional cloud storage services like Google Drive or Dropbox, because for audio files they just feel bad (I guess anyone who has used Google Drive for sharing / playing audio files knows what I mean).

So a few years ago I built a very simple personal app as a "cloud playlist" where I'd just drag & drop anything I'd just exported from my DAW so I could listen to it anywhere, with a decent audio player. I got really used to it, and last summer I finally pushed myself to start turning it into a public multi-user app with more functions, hoping someone else might also find it helpful.

I think I've been a bit scared of the general reception, so I just kept adding features and changing things, but I finally got up to the point of realizing that I just have to man up and actually start showing it to people and asking for their opinions.

So, here we go. The app is called Soundsta.sh and here's what it can do for you

CLOUD PLAYLIST

  • simply drag & drop your audio files into soundsta.sh playlist and access them anywhere instantly
  • unlike cloud storage services it lets you reorder tracks in your playlist
  • it can either use the app's built-in storage OR you can link it with your own Drive / Dropbox account and have the app store your audio files there

AUDIO PLAYER WITH CUSTOMIZABLE FULLSCREEN MODE

  • simple soundcloud-like waveform based audio player
  • fullscreen mode with visualizers, customizable colors, and option to use your phone camera feed as a background under the player's UI
  • optimized for usage on mobile data plans
  • somewhat nice for recording quick social media teasers of your WIP tunes

EASY SHARING

  • effortless sharing with your friends, labels or just anybody
  • enable/disable downloads, set passwords and time limits
  • change already shared batches at any point

A/B TESTING

  • the app has a built-in A/B test player you can use to compare different masters or mixdowns side-by-side wherever you need to (car test, living room stereo, BT earbuds etc)

MUSIC STORE

  • this was a textbook example of feature creep, but...
  • there is a store that lets your fans directly buy your music (using paypal)
  • 100% of the money (except paypal fees ofc) goes to the artist (soundstash doesn't take any cut)

So if anyone finds this interesting or has anything to say, feel free to try the app yourselves, or leave me a comment.

Here's the link: https://soundsta.sh

P.S.: It's a PWA (no native Android/iOS version, at least for now, but can be installed as a homescreen app for native-like experience).

P.P.S: Anyone who signs up gets a 14-day free trial with all premium functions unlocked (no credit card needed). If anyone wants to beta test, I will happily give you free access after your free trial ends.

r/leagueoflegends Financial-Arm9699

Sick of toxic level 30 accounts

Every time there is someone who goes 0-19, holds the game hostage, refuses to play as a team, or simply lets you die even though you could easily win the fight -> it's always some low lvl account.

Those people get banned on their mains and then come into games and ruin it for everyone. We need matchmaking to exclude those ppl. There should be an option to not play with new accounts in matchmaking.

r/SipsTea DravidVanol

Fair excuse

r/Art Aurumek

Elven Valley, Simon Hintermann, Procreate, 2026

r/midjourney Dropdeadlegs84

Stranded

r/LiveFromNewYork ReadyCourage13

Super Executive - Saturday Night Live

r/SipsTea Spotter24o5

10 year anniversary

r/findareddit AsleepEntrepreneur88

Subreddits for intellectual curiosity and deep thinking?

I’m looking for communities where people explore ideas that help you understand things more deeply—especially topics that feel like they “click” after thinking about them.

Any recommendations?

r/SipsTea Top_Advance_4443

Time to hang the badge, it's getting embarrassing 😔

r/me_irl DaPanda0109

me_irl

r/ClaudeCode EmotionalAd1438

Did they just increase prices

r/leagueoflegends Soul_Sleepwhale

Team Heretics vs. Natus Vincere / LEC 2026 Spring - Week 3 / Post-Match Discussion

LEC 2026 SPRING

Official page | Leaguepedia | Liquipedia | Eventvods.com | New to LoL


Natus Vincere 2-0 Team Heretics

NAVI | Leaguepedia | Liquipedia | Website | Twitter | Facebook | YouTube
TH | Leaguepedia | Liquipedia | Website | Twitter | Facebook | YouTube


MATCH 1: NAVI vs. TH

Winner: Natus Vincere in 31m

NAVI (bans: nautilus, varus, drmundo / akali, lux): 64.9k gold, 16 kills, 8 towers, objectives H3 M5 B6
TH (bans: orianna, ryze, azir / sion, gnar): 54.6k gold, 6 kills, 2 towers, objectives CT1 I2 M4

NAVI 16-6-37 vs 6-16-13 TH
TOP: Maynter ambessa (pick 3) 4-1-2 | Tracyn ksante (pick 3) 1-2-3
JNG: Rhilech pantheon (pick 1) 2-1-9 | Sheo xinzhao (pick 2) 0-4-3
MID: Poby aurora (pick 3) 4-1-9 | Serin leblanc (pick 4) 2-2-1
BOT: SamD caitlyn (pick 2) 5-2-4 | Ice ashe (pick 1) 3-5-2
SUP: Parus bard (pick 2) 1-1-13 | Stend seraphine (pick 1) 0-3-4

MATCH 2: NAVI vs. TH

Winner: Natus Vincere in 30m

NAVI (bans: orianna, ryze, varus / vi, gnar): 67.6k gold, 22 kills, 10 towers, objectives I1 CT2 H3 C4 C5
TH (bans: nautilus, azir, karma / mel, viktor): 54.5k gold, 6 kills, 1 tower, objectives: none

NAVI 22-6-65 vs 6-22-16 TH
TOP: Maynter sion (pick 2) 2-0-12 | Tracyn rumble (pick 1) 2-3-3
JNG: Rhilech aatrox (pick 3) 6-1-10 | Sheo wukong (pick 4) 1-4-3
MID: Poby ahri (pick 3) 8-0-10 | Serin annie (pick 3) 2-6-3
BOT: SamD yunara (pick 1) 5-3-13 | Ice xayah (pick 1) 1-4-1
SUP: Parus lulu (pick 2) 1-2-20 | Stend rakan (pick 2) 0-5-6

This thread was created by the Post-Match Team.

r/meme Fickle-Butterfly-338

Jeffrey's new colors... What's your favorite?

r/SideProject n2extraspicy

There are loads of budgeting tools, planners and calculators. There's almost nothing that teaches you what everyone should have learned about money from the beginning.

All budgeting tools, planners and calculators assume you already understand money. The reality is, most people don't. Not because they're careless, but because nobody ever taught them.

That's the gap I tried to fill. There are some programs for K-12 but nothing for the general public. I just released MyFiPath (myfipath.com), my first iOS app (also in the browser). The concept: Duolingo-style financial education. Behavioral, gamified, and built to actually form habits rather than just deliver information. Sure, there are books and videos and podcasts... but clearly things still aren't sticking.

It covers the stuff most of us were never taught from the get-go:

- Why debt is (sometimes) a tool and when it becomes a trap

- How compound growth works for and against you (quick worked example after this list)

- Why BNPL and YOLO spending trends are genuinely dangerous

- Scales from ages 8 to 65+ depending on where you are in life
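
To make the compound growth point concrete, here's a rough Python sketch of the math the lessons walk through (illustrative numbers only, not the app's actual code):

    # the same formula grows your savings and your debt:
    # amount = principal * (1 + rate) ** years
    def compound(principal, rate, years):
        return principal * (1 + rate) ** years

    # working for you: $1,000 saved at 7%/year for 30 years
    print(round(compound(1000, 0.07, 30)))  # ~7612

    # working against you: $1,000 of card debt at 24%/year left for 10 years
    print(round(compound(1000, 0.24, 10)))  # ~8594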

It's completely free. No ads, no subscription tiers, no premium guilt trips. I made it free on purpose because the people who need this most are often the ones least able to pay for it.

📱 iOS (US & Canada): https://apps.apple.com/us/app/myfipath/id6759509765

🌐 Browser: myfipath.com

https://reddit.com/link/1siglfq/video/grr6qk16rjug1/player

r/BrandNewSentence WetardedOne

Pastor charged with manslaughter after man drowns during baptism ceremony.

r/SideProject Lost-Fuel-6597

I made a fake chat generator that actually looks real

Hey! I've been working on a side project and wanted to share it. It's a web-based fake chat generator that lets you create realistic-looking conversations for a bunch of different platforms.

Right now it supports WhatsApp, iMessage, Instagram DMs, Discord, Messenger, Telegram, Signal, Slack, Microsoft Teams, X (Twitter), Snapchat, TikTok, Tinder, Bumble, and Line — so pretty much all the major ones.

You can customize everything — names, profile pictures, timestamps, read receipts, typing indicators, verified badges, dark mode, etc. The previews update in real time so you can see exactly what it'll look like. When you're done you can export as a high-res PNG or even record a 60fps MP4 video if you need animated content.

I originally built it because I needed chat mockups for a presentation and couldn't find anything that looked convincing enough. Ended up going way overboard with it and now it covers 15+ apps lol.

It's completely free to use, no watermarks on the screenshots: fakechatgenerators.com

Open to any feedback or suggestions if you think something's missing!

I may introduce watermarks later on, but for now it's free!

r/Art CanYouDig1tSucka

Long Ascent, Ryan Bradley, Acrylic, 2026

r/StableDiffusion Unhappy_Knowledge_54

Did not receive verification link after signing up

I tried two different addresses and a couple of different browsers. Still nothing. Does anybody have a solution?

r/painting CanYouDig1tSucka

My recently finished painting titled ‘Long Ascent’, what do you think?

r/SideProject SUBBBZZZ

I built a tool that shows how your comments might be interpreted in different contexts

This started as a small side project because i was honestly just curious about something i kept noticing online.

i don’t even know if this is actually a “real problem” for people or just something stuck in my head, but it kept coming up when i was scrolling through old posts and comments.

we all have stuff online that made perfect sense in the moment, but can look kind of different depending on context. and i got a bit confused by how differently the same sentence can land depending on where you read it.

so i built a small tool called CommCheck.

it basically lets you paste comments in, or you can also upload exported data from platforms like facebook or instagram.

you can download your data as JSON files (i didn’t even know this was a thing until recently tbh) and the tool reads it the same way as normal pasted text.
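
if you're curious what that ingest step looks like, it's roughly this (a simplified python sketch -- the "comments"/"text" keys are placeholders for illustration, each platform's export nests things differently):

    import json

    def extract_comments(path):
        # load an exported JSON file and pull out the plain comment text;
        # the key names here are illustrative, real exports vary by platform
        with open(path, encoding="utf-8") as f:
            data = json.load(f)
        texts = []
        for item in data.get("comments", []):
            text = item.get("text", "")
            if text:
                texts.append(text)
        return texts

    # from here, each string is analyzed exactly like pasted text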

what it does is try to show how comments might be interpreted in different contexts, instead of just labeling them as good or bad.

It roughly sorts them into:

> no concern

> moderate concern

> high interpretation risk

and then adds a short explanation for each one.

there’s also a “possible rewording” section, which is more like: “this is how it could also be said” rather than correcting anything.
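
to give a feel for the output, each comment comes back as something roughly like this (simplified, the values are made up):

    result = {
        "comment": "wow, great job",
        # one of: "no concern", "moderate concern", "high interpretation risk"
        "category": "moderate concern",
        "explanation": "could read as sincere praise or as sarcasm, depending on context",
        "possible_rewording": "genuinely impressed, nice work",
    }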

One thing i should probably mention:
I used an AI tool (Lovable) to build this, because i'm a "thinker" but not a traditional developer at all.

so this is kind of a prototype that i can actually change pretty quickly, and i’m still tweaking it a lot — especially around emotional stuff, because that’s where it gets surprisingly inconsistent sometimes.

like sometimes i think something is clearly fine and then it gets flagged, and other times the opposite happens, so yeah… still figuring that part out.

i’m also working on something called a “perspective switch”.

the idea is pretty simple:
instead of one fixed interpretation, you can look at the same comment through different lenses like personal, social, professional, etc.

so it becomes less like “this is good or bad” and more like:

>>> okay, how would this actually land depending on who reads it? <<<

what surprised me most (and maybe this is obvious but i didn’t expect it to feel that different) is how much meaning shifts with context.

like a sentence can feel totally normal in one situation and kind of off in another, even if nothing about the wording changed.

i’m not even sure yet if this is actually useful or just me overthinking communication too much.

curious if anyone else sees value in something like this or if it’s just a weird rabbit hole.

(i originally wrote this in german and translated it with AI to make it clearer here.)

looking forward to your opinion!

r/SideProject Appropriate-Value610

Mad Snake is live on PeerPush - feedback is welcome :-)

Feeling nostalgic? Missing game styles from the 90s?

Mad Snake, my recently released iOS game, is now available on PeerPush if anyone is keen to share some love/likes/comments. It would help me heaps :-)

https://peerpush.net/p/mad-snake

I'm actively looking for feedback to keep improving the game. It's now available in 175+ countries, and lots of players are enjoying it :D

You can find it on the App Store:

https://apps.apple.com/au/app/mad-snake-arcade-game/id6759598440

Cheers

r/geography Diligent_Record252

Landform development

What are the factors controlling landform development?

r/HistoryPorn PutStock3076

A Korean female student who met members of the Young Pioneers in the North Korean region in the 1940s [600 x 430]

SortedFor.me