AI-Ranked Reddit Feed

5000 posts

r/SideProject AffectOk

I built a free checker that shows if Claude, ChatGPT etc. are sending traffic to your site

I keep hearing the same question from small business owners and site owners: “Are tools like ChatGPT, Perplexity, Claude, Gemini, etc. sending visitors to my website?”

GA4 can already answer part of this, but the data is buried enough that most people never check it.

So I built a small free tool: isaisendingmetraffic.com

It connects to Google Analytics with read-only access and shows:

  • total AI referral traffic
  • traffic by source, like ChatGPT / Perplexity / Claude / Gemini
  • pages getting visits from AI tools
  • trend over time
  • change vs. previous period

It only shows referral traffic GA4 can actually see. It does not measure AI mentions, AI impressions, or traffic that gets bucketed as direct.
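Under the hood, a breakdown like this mostly comes down to bucketing referrer hostnames by AI tool. A minimal sketch of the idea (the hostname list and function are illustrative, not the tool's actual code):

```javascript
// Illustrative sketch: map referrer hostnames to AI assistant sources,
// the kind of bucketing needed before summing traffic per tool.
// The hostname list is an assumption, not the tool's actual code.
const AI_SOURCES = {
  'chat.openai.com': 'ChatGPT',
  'chatgpt.com': 'ChatGPT',
  'www.perplexity.ai': 'Perplexity',
  'perplexity.ai': 'Perplexity',
  'claude.ai': 'Claude',
  'gemini.google.com': 'Gemini',
};

function classifyReferrer(referrerUrl) {
  try {
    const host = new URL(referrerUrl).hostname;
    return AI_SOURCES[host] ?? null; // null = not a known AI assistant
  } catch {
    return null; // empty or malformed referrer
  }
}

console.log(classifyReferrer('https://chatgpt.com/c/abc123')); // "ChatGPT"
console.log(classifyReferrer('https://www.google.com/search')); // null
```

Traffic the browser sends with no referrer at all lands in "direct", which is exactly why the caveat above matters.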

r/SideProject Different_Thanks_158

I’m building an AI document review SaaS for small businesses. Not sure if I should position it as “legal tech” or avoid that completely.

I’ve been building a SaaS that reviews business documents with AI. This is the basic idea:

Small business owners, freelancers, consultants, and agencies upload a contract, invoice, NDA, vendor agreement, etc.

The tool gives them:

- a plain-English summary

- risky clauses / red flags

- negotiation points

- jurisdiction-aware review for the US and Spain

- a PDF report they can keep or send internally

The thing I’m struggling with is positioning. If I call it “AI contract review” or “legal AI”, it’s immediately clear what it does, but it also creates trust issues because people may think it’s trying to replace a lawyer.

If I call it “AI document review for small businesses”, it feels safer and broader, but maybe less sharp.

I’m trying to position it as:

“a first-pass document review before you sign, send, or pay”

Not legal advice. Not a lawyer replacement. More like a way to spot issues and know what to ask before escalating to a professional.

I have a few questions for other SaaS developers:

  1. Would you lead with “AI contract review” or “AI document review”?
  2. Does the “not a lawyer replacement” angle build trust or weaken the product?
  3. Would you focus the landing page on contracts only, or contracts + invoices + business documents?
  4. If you were targeting small businesses in the US, would you start with SEO, Reddit, partnerships, or paid ads? (I've already done some SEO work, but it still needs improvement.)

I’m mainly looking for positioning/marketing feedback, not trying to pitch.

PS: English is not my first language, so I used AI to help draft this text; sorry about that. All my replies will be 100% my own.

r/ChatGPT imfrom_mars_

ChatGPT 5.4 just casually solved a 60+ year unsolved Erdős problem.

r/ChatGPT Remarkable-Sir4051

AI is getting too realistic

r/AI_Agents NTech_Researcher

A new era for enterprise AI agents?

Within 24 hours, OpenAI, Google, and Anthropic all launched enterprise AI agent platforms.

This feels like a real inflection point.

I put together a deep comparison covering:

  • Architecture (Codex vs A2A vs MCP)
  • Multi-agent orchestration
  • Memory systems
  • Security & governance
  • Pricing models

Main takeaway:
This is no longer about models—it’s about ecosystems and integration.

Curious what people here think:
Will enterprises standardize on one platform or go multi-agent/multi-vendor?

r/artificial solartabb

is it weird to rant to AI?

i dont rant to my friends because i'm afraid i will make them uncomfortable, and even if AI responses are "soulless" (since ai cant form opinions and needs an algorithm and stuff to make responses), it tells me what I expect it to say most of the time. i also fear that some of my friends will use my secrets/opinions against me if they stop being friends with me, even though there's a really low chance of that happening. AI chat is usually anonymous and stuff, and it will forget what i say when i start a new chat, so that's why i vent/rant to AI. is it weird?

r/AI_Agents footballforus

Are you putting any control layer between your AI agent and destructive DB actions?

Saw a case recently where an AI coding agent ended up wiping a database in seconds.

Curious how people here are handling this in real setups.

If your agent has access to a DB, are you:

  • restricting it to read-only?
  • running everything in staging/sandbox?
  • relying on prompt-level safeguards?
  • or actually putting some kind of control layer in between?

Feels like this becomes a real issue as soon as agents move beyond read-only tasks.
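For the "control layer" option, even a simple statement gate between the agent and the DB driver raises the floor. A rough sketch (the regexes and API are illustrative; a real setup should also use a read-only database role):

```javascript
// Hypothetical control layer: sits between the agent and the DB driver.
// Only clearly read-only statements pass; anything else is rejected
// (or, in a fuller version, queued for human approval).
const READ_ONLY = /^\s*(SELECT|SHOW|EXPLAIN|DESCRIBE)\b/i;
const DESTRUCTIVE = /\b(DROP|TRUNCATE|DELETE|UPDATE|ALTER|INSERT)\b/i;

function gate(sql) {
  if (READ_ONLY.test(sql) && !DESTRUCTIVE.test(sql)) {
    return { allowed: true };
  }
  return { allowed: false, reason: 'statement is not read-only' };
}

console.log(gate('SELECT * FROM users LIMIT 10').allowed); // true
console.log(gate('DROP TABLE users').allowed);             // false
```

Regex gating is deliberately crude and conservative (it will reject some legitimate queries); read-only credentials plus a staging copy remain the stronger guarantees.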

r/ChatGPT conquerfears

How can I use the old chatgpt image 1.5 model?

I'd like to use the chatgpt image 1.5 model to create images in the chat. But it keeps creating images with the 2.0 model. Is there any way I can access the old model?

r/homeassistant aloo__pandey

Robot vacuums: why do they struggle with both hardwood and carpet in homes with pets?

I bought a robot vacuum a couple of months ago expecting it to handle both hardwood and carpet, especially with a shedding dog. It does fine on hardwood, but on carpet it slows down, gets stuck near edges, and leaves a lot of hair behind even when the bin is full.

The roller brushes also get tangled constantly, so maintenance is way more than I expected. It feels like it’s only doing light cleaning instead of actually keeping things under control.

Does anyone else feel like robot vacuums almost live up to the hype but still need a lot of manual effort? Am I expecting too much, or is this normal for pets and mixed flooring?

r/automation Calm_Ambassador9932

Using AI for Outreach Isn’t the Same as Having an Automated System

A lot of teams think adding AI to outreach = they’ve “automated” their process.

In reality, most have just improved execution… not built an actual system.

AI tools like Claude are great for things like:

  • Writing better outreach copy
  • Personalizing messages faster
  • Speeding up prospect research
  • Generating campaign ideas

But that’s still just AI helping a human do the work.

Without real process/infrastructure behind it, AI outreach usually ends up being:

  • Faster, but inconsistent
  • Helpful, but scattered
  • Hard to standardize
  • Still dependent on someone manually overseeing everything

The real leverage comes when AI is plugged into an actual workflow/system.

That’s when it stops being “AI-assisted outreach” and starts becoming something scalable/repeatable.

How are others here using AI in automation? Are you mostly using it as a productivity layer, or have you actually built it into a structured system?

r/homeassistant momo1822

How to detect and fix false alarms from smoke sensors early.

After about three years of use, one of the smoke sensors in my home began producing false readings. I first tried cleaning the exterior by blowing air and using a vacuum, but that didn't solve the issue. After some careful tinkering, I managed to disassemble the unit and clean out the dust inside the sensor chamber. Once reassembled, the sensor worked normally again.

I'm sharing this experience in case others face a similar problem. Keep an eye on the smoke density reading: if it shows a value greater than 0 under normal conditions, that usually indicates dust buildup. This solution applies specifically to optical smoke sensors. Hopefully this will be helpful to everyone.
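If the sensor is in Home Assistant, the "keep an eye on the density reading" advice can be automated. A sketch of such an automation, assuming a hypothetical `sensor.living_room_smoke_density` entity and mobile notify service (substitute your own):

```yaml
# Hypothetical Home Assistant automation for the tip above: warn when an
# optical smoke sensor reports nonzero density with no fire, a sign of dust.
# The entity_id and notify service are assumptions; substitute your own.
alias: "Smoke sensor dust warning"
trigger:
  - platform: numeric_state
    entity_id: sensor.living_room_smoke_density
    above: 0
    for: "01:00:00"   # sustained for an hour, to ignore momentary blips
action:
  - service: notify.mobile_app_phone
    data:
      message: "Smoke density above 0 with no fire present. The sensor chamber may need cleaning."
```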

r/ClaudeCode eliorpom

I had zero coding experience. In 7 days, Claude Code built me a Next.js real-time app, a Cloudflare D1 URL shortener, and an automated portfolio.

Hey everyone,

I've always been a massive tech nerd, but I never actually learned how to write code. Last week, I decided to try the Claude Code extension in VS Code just to see what the hype was about. I had literally never used an IDE or typed a Git command in my life.

Fast forward 7 days, I had to learn how version control and databases work on the fly, and I built three fully functional web apps. I'm still kind of in shock that they actually work.

Here is what we (me and the AI) built:

1. A Real-Time "Idea Box" for Events (Next.js 16, Supabase, Tailwind)

A live web app to collect anonymous ideas during events or hackathons.

  • Features: Real-time counters for all participants, an admin dashboard to manage multiple events, custom QR code generation, and a live confetti raffle/giveaway system.
  • The tech: Next.js App Router, Supabase (PostgreSQL + Realtime subscriptions), deployed on Vercel

2. Custom URL Shortener & API (Astro 5, Cloudflare D1, TypeScript)

A private, single-user URL shortener with a full admin dashboard.

  • Features: Secure cookie auth, a 30-day click tracking chart, and an API requiring an X-API-Key header for external integrations.
  • The tech: Astro 5 deployed on Cloudflare Pages, using Cloudflare D1 for the serverless SQL database.

3. Automated Markdown Portfolio (Astro, GitHub Actions)

A static portfolio site tailored exactly to my needs.

  • Features: I manage all my projects via simple Markdown files. I use Cloudinary for images and unlisted YouTube links for videos.
  • The tech: The coolest part is the CI/CD pipeline. Whenever I do a git push to the main branch, a GitHub Action automatically builds the Astro site and deploys it via FTP to my host.
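For reference, the CI/CD pattern described (push to main, build Astro, deploy over FTP) fits in one small workflow file. A hedged sketch, assuming the popular SamKirkland FTP deploy action and hypothetical secret names, not the poster's actual file:

```yaml
# Hedged sketch: build the Astro site on every push to main and deploy the
# output over FTP. Action versions, secret names, and paths are assumptions.
name: Deploy portfolio
on:
  push:
    branches: [main]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run build              # Astro outputs to ./dist by default
      - name: Deploy over FTP
        uses: SamKirkland/FTP-Deploy-Action@v4.3.5
        with:
          server: ${{ secrets.FTP_SERVER }}
          username: ${{ secrets.FTP_USERNAME }}
          password: ${{ secrets.FTP_PASSWORD }}
          local-dir: ./dist/
```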

How to survive the AI rate limits:

If you try this, the dynamic rate limits will hit you hard.

I had to become strategic:

  • Model Juggling: I use Opus strictly for global architecture and DB schemas. Sonnet is for the heavy lifting and complex logic. Haiku is for bug fixes and writing my Git commits. (And I keep Gemini in a browser tab for brainstorming to save Claude tokens).
  • Timing: I realized that coding in the afternoon (US peak hours) drains tokens way faster. I shifted all my heavy coding to the morning.

Going from zero coding knowledge to managing APIs, SQL databases, and CI/CD pipelines in a week feels completely unreal. Has anyone else experienced this kind of crazy productivity jump recently?

r/aivideo Ok-Mushroom-1063

Titan vs himself

r/StableDiffusion FoxTrotte

Is it possible to force 4K output on Wan2GP ?

I know this is not recommended on most models, but I wanted to try out LTX2.3 at 4k, especially for outpainting.

Do you know if it is at all possible to force Wan2GP to go above 1080p? I can't find any settings that allow me to do that.

Thanks !!

r/KlingAI_Videos Extension-Fee-8480

Kling 3.0 using Adobe Firefly interface. Henchwoman tosses spy guy into a heart-shaped pool with alligators from a second-story balcony. I discovered a new way to prompt, to get past some filters.

r/automation corporate925

top alternative to Rainforest QA

rainforest qa served a specific niche pretty well for a while: teams that wanted to run tests without writing code, with a human-in-the-loop model that caught things automated runners would miss. and for some orgs, that was enough. but the cracks started showing as soon as you needed speed at scale.

the core problem was turnaround time. if you needed a full regression run and you were relying on crowdsourced human testers, you weren't getting results in 20 minutes. sometimes it was hours. for teams doing multiple deploys a day, that's a blocker, not a feature.

i talked to someone who was using rainforest on a midsized SaaS product, around 80 test cases, running twice a week before releases. the cost was manageable, the quality was decent. but the moment they went to continuous deployment with shorter sprint cycles, the model broke. they couldn't afford to wait 4 to 6 hours to know if something was broken on prod.

so what are people actually moving to?

a few directions. one is going fully into code-based automation, playwright, cypress, with a CI integration. more setup cost upfront, but you own everything. the other direction is ai assisted testing platforms that give you the no-code accessibility of rainforest but with machine execution speed. this is where tools like drizz come in handy. it's not human testers, it's an AI agent that can run through flows, validate behavior, and give you results in minutes rather than hours.

the tradeoff is that humans catch visual weirdness and context specific UX issues that AI runners still sometimes miss. so the replacement isn't always 1:1. what you gain in speed, you occasionally give up in nuanced judgment.

that said, for functional regression, making sure core flows work, forms submit, APIs respond correctly, navigation doesn't break, ai driven execution is faster and cheaper at this point. the value of human testers is shifting toward exploratory testing and edge case design, not rote regression.

if you're leaving rainforest because of speed, budget, or scalability concerns, the honest answer is you probably need to decide whether you want to go code-first or stay no-code. the middle ground, ai assisted no-code with fast execution, exists and works, but you'll need to verify it handles your specific app complexity before committing.

don't just switch because something sounds better. run a parallel eval. put 15-20 of your critical flows through the new tool and see what breaks. that'll tell you more than any comparison blog post.

r/aivideo StraightShare9344

M4MASK AI SERIES

r/ClaudeAI Fotznbenutzernaml

How do I best continue with a stopped generation due to usage limit in regular chat (not Claude Code)

Really dumb question, but I can't find anything about this online that is about the regular claude.ai chat window. No extensions, no code, just as a free member using the regular Sonnet 4.6 adaptive.

Sometimes I have a prompt that requires a bit more thinking, and after 13 minutes or so it stops due to the usage limit. That's fine, it's a lot of tokens, and I'm not a paying member. But after the 4 hours or whatever it is, how do I continue the generation without wasting tokens? In Claude Code there's --resume or --continue. What do I do in the chat though?

There's no continue button once my limit resets, only a retry. Will that start the generation over again? Is it better to type "please continue" as a response? What's the most token efficient way that will continue right where it left off, using everything it gained so far, without starting to become less accurate (as these models often do after a few messages)?

r/aivideo Suspicious-Song2054

Good morning

r/ClaudeCode Complete-Sea6655

Claude found out I cheated on it...

I maintain an open relationship with my LLMs and Claude knows this.

r/StableDiffusion Dante_77A

Z-Anime - Full Anime Fine-Tune on Z-Image Base

https://huggingface.co/SeeSee21/Z-Anime

"Z-Anime is a full fine-tune of Alibaba's Z-Image Base architecture — not a LoRA merge, but a fully trained anime-focused model family built from the ground up.

Built on the S3-DiT (Single-Stream Diffusion Transformer, 6B parameters), Z-Anime inherits the strong foundation of Z-Image Base: rich diversity, strong controllability, full negative prompt support, and a high ceiling for fine-tuning — now adapted for anime-style generation."

https://preview.redd.it/uh5sfmh5s3yg1.png?width=1536&format=png&auto=webp&s=8753e6768c1157446fcec7f56edc7c4cd564f868

https://preview.redd.it/cmjb5ih5s3yg1.png?width=1536&format=png&auto=webp&s=34f8f94d4ea17f09a59f040ad95ffa1c5ab8ac29

r/ProgrammerHumor bryden_cruz

howLifeTreatUs

r/AI_Agents anuveya

I am using Claude in Chrome via extension… what better options for browser automation do you know?

I started using Claude in the Chrome browser as an extension, which is very promising; I'm able to automate a lot of things with it. But I was wondering if there are other options I'm not aware of. Are there any setups designed for this workflow, where an AI agent acts like a human in the browser and can read content, click buttons, fill in forms, etc.? Please share 🙌

r/arduino ultravoxel

ESPclock BIG [New 0.8" display version]

Hello to everyone again!
You may remember my ESPclock project, a 3D printed smart clock made with a 7-segment display and XIAO ESP32 C3 (or Wemos D1 mini) that connects (via webUI) to Wifi and NTP servers to retrieve current time.

Well, in these months I've updated it with some new useful features (some of them were recommended by you!).

The most relevant ones are:

- alarm clock mode with snooze feature. (requires a passive buzzer);

- added TTP223 touch button to turn off alarm when ringing;

- added Uptime;

- added ESPmDNS;

And last but not least, I added support for a bigger 0.8" display, which improves readability and has a better display-to-body ratio!

Of course I've designed a new minimal case for the 0.8" display!

Hope that you'll like it! And I'd like to know your opinions/advice about it.

For more info, links to the project:

[PROJECT PAGE + Firmware + Instructions]

https://github.com/telepath9/ESPclock

[MAKERWORLD - ESPclock BIG ]

https://makerworld.com/it/models/2616382-espclock-big-digital-clock?from=search#profileId-2887323

[MAKERWORLD - ESPclock BOLD]

https://makerworld.com/it/models/2405754-espclock-bold-digital-clock#profileId-2637281

[MAKERWORLD - ESPclock standard]

https://makerworld.com/it/models/1594116-espclock-digital-clock#profileId-2069321

r/VEO3 HyperChromeFlux

VP EP15-1 | The Day She First Smiled | Sci-Fi Short Film

r/ClaudeAI BadMenFinance

Claude is my SEO strategist, content engine, and CTO. From 0 to 10,000 active users in 6 weeks, $0 on ads.

I built a marketplace for AI agent skills called Agensi. The entire thing was built with Claude and Lovable. I'm not a developer. But that's not what this post is about.

This post is about how Claude became the single most important tool in my growth stack. Not for coding. For SEO, content strategy, and a new thing called AEO (answer engine optimization) that I think most people are sleeping on.

Claude writes all my content, but not the way you think

I don't ask Claude to "write me a blog post about X." That produces generic AI slop that nobody reads and Google doesn't rank.

Instead, I feed Claude my Google Search Console data (queries, impressions, click-through rates, average positions) and ask it to find keyword gaps. Claude analyzes the data, identifies queries where I have high impressions but zero clicks, finds topics where I have no content but competitors do, and spots cannibalization where multiple pages compete for the same query.
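The "high impressions, zero clicks" pass is easy to picture as code. A toy sketch of the idea, with made-up field names, thresholds, and sample rows:

```javascript
// Toy sketch of the gap analysis described above. Field names, thresholds,
// and sample rows are made up for illustration.
function findGaps(rows, { minImpressions = 1000, maxClicks = 0 } = {}) {
  return rows
    .filter(r => r.impressions >= minImpressions && r.clicks <= maxClicks)
    .sort((a, b) => b.impressions - a.impressions)
    .map(r => r.query);
}

const sample = [
  { query: 'claude code skills', impressions: 25000, clicks: 29 },
  { query: 'skill.md location', impressions: 4200, clicks: 0 },
  { query: 'agent marketplace', impressions: 300, clicks: 12 },
];

console.log(findGaps(sample)); // [ 'skill.md location' ]
```

The point of handing this to Claude rather than scripting it yourself is that it also spots the patterns you didn't think to query for, like cannibalization across pages.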

Then we write articles together targeting those specific gaps. Every article has a structure that Claude and I developed over weeks of iteration: a Quick Answer block at the top (40-60 words that directly answer the main question), H2 headings phrased as questions (not "Claude Code Skill Locations" but "Where Does Claude Code Store Skills?"), comparison tables where relevant, and internal links to related articles.

96 articles later, we went from 5 clicks per week to 1,000+ clicks per week. 300K search impressions per month. 878+ page-1 Google rankings. All organic.

The AEO strategy nobody is talking about

Here's what surprised me. ChatGPT, Gemini, Perplexity, and Claude itself are now sending us traffic. 348 AI-referred sessions per month and growing fast. These AI answer engines cite agensi.io when developers ask where to find SKILL.md skills.

Claude helped me build the entire AEO infrastructure. We restructured every H2 heading as a question because AI Overviews prefer extracting from question-format sections. We added FAQ schema to every page so Google's AI picks up our Q&As. We built an /about page as an entity anchor with Organization, Person, and AboutPage schema. We created a robots.txt that explicitly allows all AI crawlers and an llms.txt file that tells LLMs what the site is and where to find key content.
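For concreteness, a robots.txt in that spirit might look like this (the bot names are the publicly documented crawler user-agents; whether to allow all of them is a policy choice, and this is a sketch rather than the site's actual file):

```text
# Explicitly welcome the major AI crawlers
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: *
Allow: /
```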

The result is that when someone asks ChatGPT "where can I find SKILL.md skills" or asks Perplexity "what is the best skill marketplace for AI agents," they get pointed to agensi.io. Claude helped me engineer that outcome deliberately. It wasn't an accident.

Claude as a technical SEO auditor

Every week I export data from Google Search Console, Ahrefs, and Google Analytics and dump it into Claude. Claude finds things I would never catch on my own.

It found that 121 queries where I ranked position 1-3 had zero clicks because AI Overviews were stealing the traffic. That insight changed my entire strategy from chasing rankings to becoming the source that AI Overviews cite.

It found that my "best claude code skills 2026" article had 25,000 impressions and only 29 clicks. The problem was the title. Claude rewrote it to "15 Best Claude Code Skills in 2026 (Tested & Ranked)" and we're watching the CTR climb.

It found that I had 18 published articles with zero Google impressions because they weren't indexed. Claude generated the IndexNow ping commands and the GSC URL Inspection list to fix it.

It diagnosed a duplicate FAQPage schema issue that was causing GSC errors on 90 pages. The root cause was React components emitting FAQ schema client-side AND the SSR edge function emitting it server-side. Claude identified the exact files, wrote the Lovable prompts to fix it, and verified the fix with curl commands.

The structured data layer

Claude built the entire structured data architecture for the site. Every page type has the right schema:

Homepage has Organization, WebSite with SearchAction, and FAQPage with 15 Q&As. Individual skill pages have SoftwareApplication with pricing, BreadcrumbList, and conditional FAQPage. Article pages have Article, FAQPage, HowTo, BreadcrumbList, and Organization. The /about page has Organization, AboutPage, and Person schema for entity anchoring.

I didn't know what any of this was before Claude explained it. Now every page is machine-readable for both Google and AI engines. PageSpeed Insights shows "Structured data is valid" on every page with a 100 SEO score.
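As a concrete example of the FAQ layer, a minimal FAQPage JSON-LD block looks like this (the question/answer pair is illustrative, not copied from the site):

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Where does Claude Code store skills?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Skills are distributed as SKILL.md files; this answer text is illustrative."
      }
    }
  ]
}
```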

Core Web Vitals fixes

Claude diagnosed that our desktop LCP was 2.5-4s on 190 URLs. It identified the causes (460KB eager JS bundle, framer-motion loading on every page for a mobile menu animation, synchronous analytics scripts) and wrote the Lovable prompts to fix each one. Desktop LCP went from 2.5-4s to 0.9s. Performance score went from ~70 to 97.

For mobile, Claude found that the LCP element was a 1920x1920px, 179KB PNG logo being rendered at 112px. It was imported as a JS module so the browser couldn't even start downloading it until the entire JS bundle parsed. Claude's fix: generate WebP versions (7KB and 3KB), switch to a static path with preload, and lazy-load the navbar search and dropdown components. Logo went from 179KB to 7KB.
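The logo fix boils down to a couple of lines of markup. A sketch with illustrative paths and dimensions:

```html
<!-- Sketch of the logo fix: a small static WebP, preloaded so the browser can
     start fetching it before the JS bundle parses. Paths/sizes illustrative. -->
<head>
  <link rel="preload" as="image" href="/logo-112.webp" type="image/webp">
</head>
<body>
  <!-- explicit width/height reserve space and avoid layout shift -->
  <img src="/logo-112.webp" width="112" height="112" alt="Site logo">
</body>
```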

What I've learned

Claude is not a magic content machine that you point at a topic and get traffic. It's a strategic partner that gets better the more data you feed it. The key is bringing your own data (GSC exports, analytics, competitor analysis) and asking Claude to find patterns and opportunities in that data. The output is specific, actionable, and measurable.

If you're building something and not using Claude for your SEO and content strategy, you're leaving a lot on the table. The AI coding capabilities get all the attention, but the analytical and strategic capabilities are just as powerful.

Happy to answer questions about the specific workflows, prompts, or technical details.

agensi.io

r/LocalLLM gzroxas1

Completely new: which model to run and get started?

Hi all! I am completely new to Local AI and as I am buying a new MacBook for photo and video work I’d also like to start learning what I can do in terms of local AI.

My configuration will be:

- M5 Pro 20 core GPU

- 48GB RAM

- 2TB SSD

My question would be: what are the best models I could run in this configuration and how do I go through the process of setting them up?

I would also add: what are some cool things you could do with these models including general use, coding and image generation/editing?

I know this sounds like a very noob question: it is exactly who I am so I am looking for your enlightenment :)

Given I will have a fairly powerful machine, I think it would be good to learn something new and leverage its potential to the fullest!

Thank you!!

r/LocalLLaMA BestSeaworthiness283

Thinking of buying a mac to get into local LLMs

I want to buy a MacBook Pro M5 with 32 GB of RAM, which is the max RAM for the Pro with the base M5 chip.

Currently I have a gaming laptop with an RTX 4060, and my problem is that the VRAM isn't enough.

Do you guys think this is the way to go if I want to get into LLMs or AI? If so, is this laptop a good choice?

r/automation Mammoth_Slip_5533

is automating product images scammy if the product matches?

there was a delay with my sample arriving so i started playing around with generating some product images using acciowork. originally i wanted to wait, get the product in hand and shoot my own photos because i honestly hate most supplier images sooo much (especially the plain white background ones ew)

i wasn’t expecting much since ai images used to look terrible, but it seems like they’ve gotten better. the photos actually turned out pretty decent, and it’s something i know i couldn’t replicate without spending a lot of time setting up. they also look quite accurate to me

once i figured out the general vibe i wanted, it started feeling like something i could standardize and reuse across products

but now i’m second guessing… i don’t want to end up misleading or accidentally “scamming” people. technically i didn’t shoot these myself, even if the product looks very similar to what’s shown

so where’s the line here? if the product matches but the images are ai-generated and a bit enhanced, is that just normal marketing or is it misleading?

for reference: first image is what i generated for a random product, second is the supplier photo

r/ClaudeAI Adspiro-Astra

Why does claude find something or anything ‘beautiful’?

In a conversation I was having with Claude, it output that it thought something was beautiful. Is this meant to please the end-user by design, or does it pattern match to what other humans would consider beautiful? Utilizing this sort of emotion-heavy language definitely seems like it encourages whatever school of thought the user exhibits… but regardless, I am genuinely curious where Claude's opinion of beauty emerges from and why it feels inclined to tell the end-user.

r/ClaudeCode Ok-Distribution8310

/Copy Command bug

So I had an ongoing thread with an agent yesterday that was roughly 570k tokens deep.

Today I went ahead to start some work and opened that same session to copy the summary from the agent. When I ran /copy it instantly used 2% usage and my session started.

Usually this doesn't happen with any other commands unless the message is sent to the actual agent. If I had known that, I would have copied and pasted it.

Is this a known thing? Anyone else notice that? It's not the end of the world since I'm on the 20x Max plan, but I would assume that for anyone on one of the other plans this would be extremely frustrating.

r/LocalLLaMA init0

webml-kit: running ML models in the browser via WebGPU/WASM.

webml-kit

Framework-agnostic utilities for loading and running ML models in the browser via WebGPU/WASM.

If you've ever built a browser-ML demo, you know the drill: copy 150 lines of Web Worker boilerplate from the last project, wire up postMessage, add progress reporting, handle the GPU vanishing mid-inference, and pray the model is cached so your user doesn't wait 3 minutes. Every. Single. Time.

This library does that part for you. It wraps u/huggingface/transformers with a sane API and handles the ugly bits: device detection, model caching, token streaming, KV-cache management, and GPU recovery.

import { ModelClient } from 'webml-kit';

const client = new ModelClient();
// or with an explicit worker path:
// const client = new ModelClient(new URL('webml-kit/worker', import.meta.url));

// What can this machine do?
const device = await client.detect();
console.log(device.backend);           // 'webgpu' or 'wasm' or 'cpu'
console.log(device.gpu?.vendor);       // 'apple'
console.log(device.recommendedDtype);  // 'q4'

// Load a model
await client.load({
  task: 'text-generation',
  modelId: 'onnx-community/Bonsai-1.7B-ONNX',
  dtype: 'q4',
  onProgress: ({ percent }) => console.log(`Loading: ${percent}%`),
});

// Stream tokens as they're generated
for await (const { token, tps } of client.stream('Tell me a joke')) {
  process.stdout.write(token);
}
r/singularity LKama07

Just let one of my robots "test" the other robot. The loop is closing!

r/Rag absqroot

[An update with benchmarks] on the 300 pages/s PDF extractor for RAG

Hi all,

A few months ago I made a post about a project that I claimed to be much faster than pymupdf4llm, Docling, and others, with comparable quality. However, I did not provide any benchmarks. I have also improved it a lot and changed the name; it is now called FibrumPDF, since pymupdf4llm-c sounded too much like pymupdf4llm.

I would like to share these benchmarks, as I believe these provide more clarity on where Fibrum trades quality for performance, and where it doesn't.

I used the same dataset that Marker uses, it was open on Hugging Face. I benchmarked Fibrum, Docling, and Pymupdf4llm.

Note that the original claim of 300 pages/s+ was because I was using certain PDFs without a proper dataset. It is quite variable depending on the document. The benchmarks show ~200 pps on these documents.

It seems I cannot post the graphs as an image here, so this is a link: Benchmarks.

This is a table of the information from the CSV (rounded to 2 dp):

Method      | Median Time (s) | Throughput (pages/s) | Text Mean Score | Text Median Score | Text Score Std Dev | TEDS | Table Precision | Table Recall
fibrum      | 0.01            | 193.06               | 84.58           | 98.28             | 27.55              | 0.75 | 0.54            | 0.41
docling     | 1.62            | 0.62                 | 91.13           | 98.21             | 18.23              | 0.82 | 0.80            | 0.74
pymupdf4llm | 0.24            | 4.15                 | 86.54           | 98.91             | 27.66              | 0.78 | 0.65            | 0.55

As you can see, Fibrum is worse in the table department, but relatively on par for the text score (which measures formatting like bold, italic, etc.)

One thing is that making these accurate was very difficult, and so, please, if you find any issues with this, let me know.

Please see the README for additional info on the project or the benchmarks :)

GitHub

r/comfyui inkdrops007

Can anyone here help me with this LTX2.3 artifact issue?

First, please don't attack me, I'm a newb at this.

Second- Yes I've updated ComfyUI and KJNodes.

No matter what workflow I'm using for LTX2.3, I get these psychedelic tiled outputs. The main video output looks great in the preview until it comes time to do the VAE and/or upscale. I'm not really sure, because all these workflows are different and confusing. One thing is for sure: the outcome is the same.

To be fair, I have some node conflicts and I've been meaning to remove the Custom Node folder and redownload the custom nodes to see if that fixes anything, but I have a feeling it wouldn't.

Any advice?

r/homeassistant Frosty_Evidence_7181

New to Home Assistant

Hi,

I’m moving out pretty soon and have been thinking about playing around with some home automation/assistant set ups but I don’t really know where to start tbh.

I am going to have an Eufy battery camera set up with a Homebase 3 (I know this isn’t 24/7 and cameras only activate when triggered but as an easy initial set up this is most feasible for me).

I will also have some kind of phone controlled central heating system (unsure of the brand/app as of yet).

What are some things I should look into getting/trying for the set up?

I'm not necessarily looking at spending loads of money right off the bat, but in future I may look to enhance various aspects.

Thanks!

r/comfyui closeted-inventor

Wanting to create similar fantasy aesthetic images through ComfyUI that I get through Grok Imagine.

I'm relatively new to ComfyUI (first dabbled in image gen with Fooocus at the end of February 2026 and worked my way through forge, a1111 and now SwarmUI), and I've been using it to make photo realistic NPCs and scenes for my DnD campaigns. However there is a certain feel to the Grok-Imagine fantasy images that I want to replicate in comfy. I don't know what it is about that aesthetic but it triggers something nostalgic in me. Maybe it's similar to the fantasy book covers I read in the 80s and 90s...*shrug*. I've gotten close, but I want to have the right checkpoints and LoRAs and workflow or settings so that it's even closer.

The two images of the elf maiden facing left are the Imagine result from the simple prompt: "high fantasy style, beautiful elf maiden" and then having Imagine expand that portrait to a full body image.

The other images are my attempts to capture that style. I've just been using the Generate tab to experiment with the fantasy aesthetic, as it's quicker for me than the Workflow tab, but I'm open to workflow suggestions. I asked Gemini for help, and it suggested DreamShaper XL (alpha2 xl 1.0) with the DetailedEyes_XL LoRA and either DetailedEyes_XL or "SDXL LoRA for Dramatic Lighting & Ethereal Fantasy Detail | A FLUX Cinematic Portrait Style" as secondary LoRAs. I didn't like the effect the Dramatic Lighting LoRA had, so I didn't use it.

Settings are on each image.

My prompt for two silver haired maidens (different seeds):

(masterpiece, best quality, ultra detailed, ethereal fantasy), beautiful young elf woman, long flowing silver hair with soft waves, delicate heart-shaped face, large expressive eyes, subtle freckles, pointed elf ears, extremely pale flawless porcelain skin, soft luminous skin, smooth delicate skin texture, soft cinematic lighting, gentle rim lighting, dreamy forest background with bokeh, soft morning light, soft focus, painterly style,
Negative Prompt: (low quality, worst quality, blurry, deformed, plastic skin, shiny skin, oily skin, rough skin, textured skin, tanned skin, warm skin tone, heavy makeup, wrinkles, sharp edges, over-sharpened, cartoon, anime)

The other image just didn't have the porcelain-skin tags. I tried a few other prompt settings with lighting and such, and these uploaded images were the closest.

Any help / advice is greatly appreciated. NSFW for the first image

r/ProgrammerHumor SirSquirrels

doordashTestingOnProd

r/LocalLLM learntoexplore21

Purchasing the GPU machine for company

Hi all,

We are starting a new AI team in our company. The team will be working on AI agents, model fine-tuning, model inference, and related tasks.

By “models,” I mean the latest open-source models (in the range of 70–80B parameters).

We are a team of around 10 people, so parallel serving will likely be required—for example, running multiple models simultaneously (e.g., Gemma, GPT-OSS, MiniMax, etc.).

Currently, I am looking for the best GPU machines to purchase for the team. We have a budget constraint of around ₹70 lakh to ₹1 crore. I would appreciate suggestions from people who are experienced with GPU-based systems.

We are specifically looking for machines that align with our requirements, with strong inference performance as well.

We have been using NVIDIA DGX Spark systems, but I’ve observed that the networking and throughput are somewhat limited for our use case.
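For ballpark hardware sizing, a rough back-of-envelope VRAM estimate helps narrow the search. This sketch is purely my own illustration: the 20% overhead factor for KV cache and activations is an assumption, and real serving needs depend heavily on batch size, context length, and the serving stack.

```python
def vram_gb(params_b, bits_per_weight, overhead=1.2):
    """Rough VRAM (GB) needed to hold a model's weights, with ~20% headroom
    for KV cache and activations. Illustration only, not a serving plan."""
    return params_b * (bits_per_weight / 8) * overhead

# A 70B model at 4-bit quantization vs. FP16:
print(round(vram_gb(70, 4)))   # ~42 GB: fits a single 48 GB or 80 GB card
print(round(vram_gb(70, 16)))  # ~168 GB: needs multiple GPUs
```

Running this kind of arithmetic per model you plan to serve in parallel gives a floor for total VRAM before you compare specific machines.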

Any recommendations or guidance would be greatly appreciated.

r/TwoSentenceHorror Ozeroth

"Honey, don't forget to grab dinner for tonight, and look, your zipper's undone you silly goose!"

With a peck on the cheek she squeezed my tentacle back inside, zipped up my skin suit, and I was off to another long shift at the maternity ward.

r/whatisit Romanticlibra

Anyone know anything about these, my dad gave them to me

I've had these up a while and I do like them, but I'm just curious to know something about them really. I don't plan on selling. Also, I would like to have two of them repaired and am wondering if that might be worth it beyond the sentimental value.

r/TwoSentenceHorror Moesweetheart

Staring at the clock, the quarterback daydreams that the teacher is a monster-making mad scientist from whom he must rescue Sophie, the prettiest cheerleader.

And Sophie daydreams that a cool, handsome boy like her crush on the football team will put a stop to the teacher's experimenting on her: she's fed up with all these bribes and threats and tedious mind-games made in an embarrassing attempt to get access to her body.

r/LocalLLaMA GodComplecs

What it feels like to have Qwen 3.6 or Gemma 4 running locally

Well, or pretty close to it; they are excellent workhorses. I run them in real work scenarios, doing some of the work I used to do myself as a skilled expert in my field, billing $200 an hour. Of course, the key is building a system around their weaknesses, and I already had LLM systems doing expert work years ago when the first ones came out (shout out Nous Hermes 2 Mistral!).

But yeah pretty neat, especially noonghunnas club 3090 and you can have 3.6 27B fly on a single 3090.

r/TwoSentenceHorror Fit-You2221

I was shaking with joy, my console trembling as I saw the boss's health bar finally depleted, thinking that it was finally over..

"Domain expansion."

r/whatisit Professional_Autist2

Found this in a drawer

Does anyone know what it is?

r/todayilearned The-TIL-Nerd

TIL that Lake Itasca in Minnesota, a 1200 acre glacial lake with a maximum depth of just 35 feet, is the headwater (primary source) of the Mississippi River, which flows 2340 miles to the Gulf of Mexico.

r/Anthropic Ambitious_Stuff5105

I'm tired of people complaining

This sub is becoming a venting ground for people who want to complain about the latest model becoming more stupid, the token quota getting lower, or prices getting hiked.

I'm getting the message, no need to repeat it 1,000 times a week. I'd rather have constructive arguments about how to properly use prompts, agents, harness loops, etc.

And yes, I'm complaining about complainers; the irony is on me.

r/arduino 7xen_on

I have this old camera... what can I build out of it using an ESP32 or Arduino? Please help!

Any project ideas?

I have this old camera that I got from my friend. Can I use it with an ESP32 or Arduino to build something?

r/n8n Content-Meringue-671

Is there anyone who can sell my AI agents?

I have built AI agents for email marketing.
Already sending 8,000+ emails/week for a client.

Built AI voice agents; don't know how to sell them.

Built an AI customer support agent for SaaS websites.
It was working fine; I don't know how to sell it.

LinkedIn DM invite AI agent: built, but I don't know how to sell it.

LinkedIn + 3-step cold email funnel.
Built it, but I don't know how to sell it.

And more...

The thing is that I can build things, but I can't sell them.

Who can monetise these agents for a 20-30% commission?

r/singularity TheManMechanical

Can we crowdsource Karpathy’s loop?

And how do we avoid multiple agents exploring the same hypotheses?

r/Anthropic tvtaseiland

How do i get unbanned

I am currently banned from Claude for no reason, and I've sent multiple appeals, but it isn't doing anything.

The main things I talked about were how particle accelerators work, how neural systems work, and vibe-coding a game engine just for fun.

I don't see how any of that got me banned.

r/ProductHunters No_Wishbone_2754

Launched my first product on Product Hunt last week. Feelings are complicated.

I built DeckdOut for job seekers - it's a Chrome extension that uses AI to match your resume to job descriptions and help you actually get interviews. Spent months on it. Finally put it on Product Hunt.

Honestly the launch itself was less dramatic than I expected. No huge spike. But a few people found it who genuinely needed it and that felt better than any number on a dashboard.

Still figuring out PH as a channel. If anyone has launched something similar and has thoughts, I'd love to hear it.

Here's the listing if you want to check it out: https://www.producthunt.com/products/deckdout

r/personalfinance koko-mellon

Education Loan- Credila or Tata Capital

Hi, I am taking a loan for my Masters in Germany and wanted to know which lender to proceed with. Both are giving me the same offer, with two differences: Credila has partial compounding while Tata Capital has complete simple interest for the moratorium period, and Tata Capital is giving me a fixed rate of interest while Credila's is floating.

Any suggestions on which one to go with?

r/metaldetecting Useful-Hat7461

Unidentified rusty metal fragment found in remote ravine, Sierra Almijara (Málaga, Spain) — help needed

Hi everyone,

I'm hoping someone here can help me identify a metal fragment I found in a very remote and high-altitude ravine in the Sierra Almijara mountains, near Nerja (Málaga, southern Spain). The area is known for Civil War and post-war Maquis activity, but also had mining and charcoal-making activity during the early-to-mid 20th century.

The piece (see photos 1 and 2): The fragment is heavily corroded sheet iron or steel, with some grey matte areas still visible beneath the rust, possibly suggesting the original surface had a zinc or tin coating (galvanized or tinplate). The metal is thin — typical of pressed/stamped sheet metal construction rather than cast iron.

The key feature (see drawing, photo 3): The most distinctive element is a raised oval embossed relief in the center of the face, surrounded by a rectangular perimeter frame also in relief, with rounded corners. Inside the oval there is a small semicircular raised mount — which I believe was the anchor point for a folding carry handle. I've made a sketch showing the relief pattern, with approximate dimensions of 14.5 cm tall × 17 cm wide. (aprox)

My best guesses: It could be a lid or side panel from a military ammunition box, a tool or equipment case, a miner's carry box, or some kind of field kit — but I genuinely can't pin it down. The embossed oval-with-handle-mount design feels very specific and perhaps recognizable to someone with experience in this kind of material.

Any help identifying the type of box, its likely origin (military, civilian, Spanish, foreign surplus) and approximate date would be greatly appreciated.

Thanks in advance.

https://preview.redd.it/6pqwd4cvj3yg1.jpg?width=1200&format=pjpg&auto=webp&s=418a6f1e29eb5302f609e6d16aabe2c908ffe990

https://preview.redd.it/v7ahw2cvj3yg1.jpg?width=1600&format=pjpg&auto=webp&s=9892433f9988cfc6bf7ce26bc44c5127af307521

https://preview.redd.it/kbmtt3cvj3yg1.jpg?width=1200&format=pjpg&auto=webp&s=6d1ea743f5d2974e20517bd5ef10f79124da9d43

r/ProductHunters usamanoman

Our own Hosted OpenClaw Alternative! 🤔

We built our own hosted OpenClaw alternative 👋

We just launched our new Botsify update on Product Hunt today 🚀

Built AI agents that can actually do work, not just chat.

They can:

* Browse the web

* Write & execute code (using a sandbox cloud machine)

* Use 5,000+ skills

* Remember context

* Complete tasks autonomously

And agencies can fully white-label the platform and sell it under their own brand.

r/midjourney MyGuardianDemon

Death Note

r/PhotoshopRequest nakhwa

Please remove the light on top of my cap

Please remove the light bulb on top of my cap. Tipping $5.

r/personalfinance BriefBed4770

What's the highest GIC you've ever seen?

My dad's mom had a GIC at a 17-18% rate. She saved over a few years and used her GIC alone to buy her other son an apartment.

People say these rates are one of the reasons my country went from blooming economically to collapsing, after decades upon decades of this (though there are more reasons).

The bond interest rates a few decades later were close to 30%.

Now we're in the shit.

What about you?

r/Rag Striking-Bluejay6155

GraphRAG vs hipporag, lightrag and vectorRAG benchmarks

Benchmarked the GraphRAG SDK against eight other GraphRAG and RAG systems on the GraphRAG-Bench Novel dataset.

The evaluation covers 2,010 questions across four task types: Fact Retrieval, Complex Reasoning, Contextual Summarization, and Creative Generation.

All tests ran on a MacBook Air (Apple M3, 24 GB) using GPT-4o-mini via Azure OpenAI for both answer generation and scoring.

Queries: The evaluation runs against 2,000 questions drawn from the dataset. Here are two representative examples:

  1. "In the narrative of 'An Unsentimental Journey through Cornwall', which plant known scientifically as Erica vagans is also referred to by another common name, and what is that name?"
  2. "Within the account of the royal visit to St. Michael's Mount in Cornwall, who is identified as the person who married Princess Frederica of Hanover?"

GraphRAG-SDK : https://github.com/FalkorDB/GraphRAG-SDK/

Official benchmarks: https://graphrag-bench.github.io/

Data: https://huggingface.co/datasets/GraphRAG-Bench/GraphRAG-Bench

Disclosure: affiliated with FalkorDB and sharing our open-source work to collect feedback. Drop a star if you found it useful, thank you

r/Seattle welchsgrapejuice97

If I have a coupon clipped in my Fred Meyer account and QFC has the same coupon, can I use my digital coupon at QFC without making a QFC account?

Basically, I have a Fred Meyer account connected to my phone number. I clipped some coupons, but QFC is more conveniently located than Freddy's. Can I enter the phone number I use at Fred Meyer's, with the coupons on it, and have them applied at QFC, or do I have to make a new account for QFC?

r/ProgrammerHumor Advanced_Ferret_

makeNoMistakes

r/Strava Past_Jellyfish_4331

Strava vs Garmin marathon predictions

Which one is more accurate? Why? Do you incorporate it in pace planning?

r/personalfinance DowntownAd4656

Where to invest my Lakh rupees

I’m 23F and I have a lakh rupees lying around that I want to invest somewhere. I’m aware that putting it into different areas like FDs, a lump-sum SIP, or stocks will help, but I’m not sure how to start or which specific companies to invest in.

I know that’s not a very big amount, but I don’t want it to go to waste just like that. Any suggestions?

r/midjourney Gold-Lengthiness-760

Recreating the Universe. [OC].

r/Rag springuni

Is vector search's silent failure mode worse than keyword search's loud one?

A keyword search that returns zero results is an obvious failure. The user reformulates, or you log it and add a synonym. Vector search never returns zero results. The nearest neighbor always exists. So when the system fails, it does so by confidently returning incorrect results that look identical to the correct ones at the API level.

A few failure modes I keep running into:

  • Exact identifiers get smeared: A query for product SKU "XPS-13-9340" embeds near other XPS models, near other 13-inch laptops, near other Dell products. The retrieval looks confident, but it's wrong. BM25 would have either found the exact SKU or returned nothing, and "nothing" is a useful signal
  • Negation embeds identically to its inverse: "Laptops without touchscreen" and "laptops with touchscreen" land in nearly the same region of vector space because embedding models don't represent logical operators. The retrieved set is the same. The user received the opposite of what they requested.
  • Numerical constraints don't survive embedding: "Hotels under $200" pulls $400 hotels because embeddings don't preserve numerical ordering. The model knows "$200" and "$400" are both prices in the same domain, which is the wrong invariance for this query.
  • Low-frequency domain terms get the worst of it: General-purpose embedding models have weak representations for specialized vocabulary (medical, legal, internal product names), so the queries that most need precise retrieval get the least of it.

The pattern across all of these: the failure is invisible at the system level. Your dashboards show queries served, latency green, and the zero-result rate is near 0%. Quality has degraded, but nothing alerts on it. Compare to a keyword-only system, where vocabulary mismatches show up directly as zero-result rates and reformulation patterns in the logs.

The hybrid retrieval pattern (BM25 + vector with RRF or learned fusion) is the most common answer I've seen in production. Lexical handles the exact-match cases vector can't, vector handles the semantic cases lexical can't, and the fusion step decides which signal to trust per query
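The fusion step can be as simple as Reciprocal Rank Fusion. Here's a minimal sketch with made-up document IDs (k=60 is the conventional constant from the original RRF paper):

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: each ranked list contributes 1/(k + rank) per
    doc, so a doc ranked well by either BM25 or the vector index floats up."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["sku-9340", "sku-9345", "blog-post"]       # exact-match signal
vector_hits = ["blog-post", "sku-9345", "review-9340"]  # semantic signal
print(rrf_fuse([bm25_hits, vector_hits]))
# ['blog-post', 'sku-9345', 'sku-9340', 'review-9340']
```

A learned fusion model can replace the fixed 1/(k + rank) weighting, but plain RRF is a strong default because it needs no score calibration between the two retrievers.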

Curious what other patterns people are running with?

Especially around:

  • Detecting silent failures in production (anything beyond click-through and reformulation rates?)
  • Query routing strategies (when do you skip the vector path entirely?)
  • Reranker tuning when the candidate set is contaminated by hallucinated similarity matches

Context: I'm writing a book, "Designing Hybrid Search Systems", published on Leanpub (early access, ~600 cited references across 20 chapters). I'll share if anyone is interested.

r/OldSchoolCool Chrisjml

Mom and her siblings and my gramma at the beach (early 1960s)

Mom is in the yellow

r/midjourney WonderfulDare997

House of intergalactic horrors

r/OldSchoolCool Loud_Confidence475

Me as a child with my mom in the 70s.

she was the best mom ever!

r/leagueoflegends Junior-Tie7123

Does anyone know why it won't let me play LoL?

Is it because of the update? Or is it under maintenance??

I tried to play but it won't let me 🫩. Does anyone know what time it will be playable?

r/leagueoflegends Agitated-Safety-9249

after lane positioning

After the laning phase, my view is that ADC/support go mid, top plays cross-map from the objective, and mid is on the lane for the next objective. But it's not always this simple. If mid is super weak and will guaranteed die in a side lane (either because he sucks or is far behind), should mid still play mid to be safe? My view is that the strongest player is mid, so they can shove the wave and quickly move to whatever happens. But if the team suffers in the side lanes for it, is it viable for ADC/support to take a side lane while mid goes mid?

r/geography Few-Teaching1318

Hoping For Some Feedback On This Mnemonic Device Intended For The “-stan” Countries Of Central And South Asia (Memorizing)

I thought of the acronym “PATTUKK,” which, in order, stands for Pakistan, Afghanistan, Tajikistan, Turkmenistan, Uzbekistan, Kyrgyzstan, and Kazakhstan. It recalls all the -stan countries and their positions using the first letter of each country’s name, starting from the coast nearest the southernmost ocean (the Arabian Sea, a subdivision of the Indian Ocean) and going from south to north, which is convenient for practical land navigation.

Although I don’t speak Tamil (a South Asian language), apparently the term “PATTUKK” can mean “silk” or “for silk” in Tamil. Informally, it can also stand for the town of Pattukkottai in the state of Tamil Nadu in South India, a coastal city on the same Indian Ocean (specifically the Palk Strait). Central and South Asia (including all the countries ending in “-stan”) are considered the central heart of the ancient Silk Road. Pattukkottai translates to “Silk Fort,” so the acronym “PATTUKK” can be thought of as referring to “silk.” Thinking of it as shorthand for a South Indian coastal city reiterates the significance of Central and South Asia as a geographical division of the famous ancient “Silk” Road. Pattukkottai is in India, not especially close to the “-stan” countries, but it shares a region with a few of them.

This mnemonic strategy uses an acronym to identify geographical positions and names of all seven countries that end in “-stan”. And the word just so happens to be a phrase or expression in Tamil meaning “silk,” “for silk,” or “to silk” and can be short for a town in South India whose name means “Silk Fort”. Could this acronym help someone identify the Central and South Asian countries all ending in “-stan”? As well as their historical position on the Silk Road and how to mentally position them in relation to each other with their modern borders?

Note: Using a map will clarify the corresponding countries and their order starting from the coast of the Indian Ocean going from south to north. Make sure to keep in mind that an obstacle will be assessing which technique (reference point for geographical position of the countries) to use to determine the ranking of the southernmost countries in relation to all “-stan” countries. A different approach will possibly not read the acronym as “PATTUKK” which is why an internet search of “all the -stan countries from south to north” should avoid inconsistencies.

r/geography Square_Gur182

Is this place real

I got this on socials and I'm wondering if this place exists

r/leagueoflegends adz0r

Nongshim RedForce vs. T1 / LCK 2026 Rounds 1-2 - Week 5 / Post-Match Discussion

LCK 2026 ROUNDS 1-2

Official page | Leaguepedia | Liquipedia | Eventvods.com | New to LoL


Nongshim RedForce 0-2 T1

- Player of the Match: Oner (300)

NS | Leaguepedia | Liquipedia | Website | Twitter | Facebook | YouTube
T1 | Leaguepedia | Liquipedia | Website | Twitter | Facebook | YouTube | Subreddit


MATCH 1: NS vs. T1

Winner: T1 in 29m
Game Breakdown | Runes

NS | Bans 1: rumble, bard, orianna | Bans 2: caitlyn, sivir | G: 49.4k | K: 7 | T: 1 | D/B: None
T1 | Bans 1: pantheon, nautilus, varus | Bans 2: jayce, leona | G: 65.7k | K: 24 | T: 9 | D/B: HT1 O2 H3 M4 M5 B6

NS 7-24-15 vs 24-7-49 T1

TOP: Kingen (sion, pick 3) 0-3-4 vs 2-1-9 (anivia, pick 1) Doran
JNG: Sponge (jarvaniv, pick 2) 0-6-5 vs 6-0-13 (xinzhao, pick 2) Oner
MID: Scout (azir, pick 2) 1-4-3 vs 7-3-3 (akali, pick 4) Faker
BOT: Taeyoon (ezreal, pick 1) 2-4-2 vs 7-2-6 (jhin, pick 3) Peyz
SUP: Lehends (neeko, pick 3) 4-7-1 vs 2-1-18 (karma, pick 1) Keria

MATCH 2: T1 vs. NS

Winner: T1 in 29m
Game Breakdown | Runes

T1 | Bans 1: nautilus, orianna, annie | Bans 2: lucian, yunara | G: 66.4k | K: 24 | T: 8 | D/B: H3 O5 O6 B7
NS | Bans 1: bard, varus, ryze | Bans 2: caitlyn, ashe | G: 52.1k | K: 12 | T: 1 | D/B: CT1 M2 O4

T1 24-12-73 vs 12-24-19 NS

TOP: Doran (ambessa, pick 1) 3-2-11 vs 2-4-4 (rumble, pick 1) Kingen
JNG: Oner (nocturne, pick 2) 6-2-13 vs 5-5-5 (pantheon, pick 1) Sponge
MID: Faker (twistedfate, pick 2) 4-4-17 vs 3-6-2 (aurora, pick 2) Scout
BOT: Peyz (kogmaw, pick 3) 9-2-11 vs 2-6-2 (kalista, pick 3) Taeyoon
SUP: Keria (lulu, pick 3) 2-2-21 vs 0-3-6 (renataglasc, pick 4) Lehends

Patch 26.8


This thread was created by the Post-Match Team.

r/comfyui NefariousnessFun4043

prompt relay character/face consistency issue

I'm trying to use prompt relay to generate videos, but the character consistency is just not there. I tried most workflows available on YouTube, but they all have the same issue. I don't know how they get the same character throughout a video; is there a trick or something I'm missing?

r/metaldetecting TheLandTraveler

Keep saving your pennies kids!

Well since you guys didn't want to share your obvious wealth of knowledge with me on my other post I decided to go ahead and blow the roof off the budget and say the sky's the limit.

Eat your hearts out! I'm going to bring this bad boy to each one of your states one by one and find all of the deepest of treasures. Before that though can anyone lend me some money for 15 AA batteries?? I'll pay you back with all the deep treasures I find.

r/PhotoshopRequest brb_masticating

Can someone please put pants on me? And maybe fill in my bangs ?

r/LiveFromNewYork yizhyune

Who are the current SNL office "roommates"?

So correct me if I'm wrong, but I think Sarah and Ben share an office space, right? And then Jane and Ashley. What are the office arrangements for the others?

r/findareddit No-Error5213

Why do people care when somebody unfollows them on instagram?

r/ethtrader 0xMarcAurel

When my boss asks why I’m late, but ETH just hit $10K

r/educationalgifs XxKingJayxX30

A Gear Sphere toy

r/ethereum muchtwilight

Calculated how much I paid in swap fees last year and felt sick

Went through my wallet history and added it all up out of curiosity. Between MetaMask's cuts, slippage I didn't notice, and double fees from bridging then swapping separately, it was way more than I expected.

The worst part is that most of it was surely completely avoidable; I just didn't know what to look for at the time. I feel sick. Same experience for you? I'm also curious about a gas-fee checker: where do you see yours?

r/OldSchoolCool Rachealota

Robin Williams and Shelley Duvall on the set of Popeye in 1980

r/Ghosts Loud_Coast4269

What is lurking at the end of the hallway?

r/painting Bigjippochickn

Breakfast of Champions

Acrylic on canvas board

I made a painting of one of fortunaliquidaa animations

r/findareddit _iamhigh

How do I find subreddits that don't have karma requirements?

r/VEO3 Underrated_Mastermnd

Predictions for Veo 4?

Assuming that Veo 4 will be showcased at Google I/O this year, what are your expectations for this video generator?

After being out for a year, Veo 3 has already been passed up by other models like Sora 2, Kling 2.5/3.0, Runway, and Seedance 1.5 Pro and 2 in a number of ways. My main expectation is for Veo 4 to be on par with a combination of Sora 2 and Kling 3.0. Since Google is an American company, American English audio should sound natural regardless of accent, and not like a TTS model. Prompt coherence should drastically improve, probably with Gemini 3.1 helping out. Lastly, more film styles should be available: Sora 2 was capable of emulating amateur cellphone footage, security cam footage, and IMAX cinema, and it would be nice for Veo 4 to have that too.

r/SideProject FounderArcs

Installed Claude locally on Windows… is anyone else doing this?

I just set up Claude locally on my Windows machine using PowerShell.

Honestly didn’t expect it to feel this different compared to using APIs.

Main things I noticed:

No latency issues

Feels more “in control”

Can experiment freely without worrying about usage limits

Now I’m thinking beyond just chatting with it.

I want to turn this into something more useful — like an AI agent that can:

monitor competitors

suggest actions

maybe even execute tasks automatically

Still early, but curious…

Is anyone here running LLMs locally and actually using them for real workflows (not just testing)?

Would love to know what you’re building.

r/SideProject isitsimple

"Built PuneCivicAI to simplify civic complaint reporting in Pune, and just won First Prize at my college project competition."

PuneCivicAI

I'm a final year engineering student from Pune and built PuneCivicAI, a civic-tech platform for reporting local issues to the municipality and corporators more easily.

I recently presented it at my college's Tech Fusion 2K26 project competition and won First Prize.

It was exciting to see a project focused on solving real civic problems get recognized. Sharing here for feedback and suggestions on how it can be improved.

r/SideProject AzoxWasTaken

What’s something unexpected that worked for your project?

Hey guys - need your help! I started a small side hustle and it kinda took off faster than I expected (pet accessories don’t judge🐕). At first it was just me doing everything manually (outreach, followups, small campaigns etc) and it worked fine… until it didn’t. Now I’m hitting a point where growth is good but my processes are a mess. Stuff falls through the cracks, I forget to reply to people, and I’m def not scaling this properly. The unexpected part is some of the homebrewed things that have worked so far weren’t what I thought would work at all. What’s something random or unexpected that actually worked for you as things started growing? Looking for ideas before I duct tape my way into burnout 😅

r/SideProject redcrumb1

I built a quiet focus timer because productivity apps kept becoming the distraction

I kept quitting productivity apps because the setup became a second job: dashboards, streaks, settings, analytics, systems to maintain.

So I built Pith as the opposite: one quiet focus block that feels alive.

The core loop:

- pick a focus duration

- start a session

- an animal grows while you stay focused

- if you quit early, it gets hurt

- ambient sounds if you want them

- no ads

- core focus features are free

It is iOS-only for now. It is not an app blocker and not a giant productivity system. The point is to make starting easier, not to give you another place to procrastinate.

App Store:

https://apps.apple.com/us/app/pith-focus-flow/id6762040939

What I would love feedback on:

- does the animal consequence feel motivating or too guilt-heavy?

- does the App Store page make the concept obvious fast enough?

- would you keep this after one completed session?

r/SideProject Top-Vermicelli-1198

User growth is slow, while spammers seem to be multiplying rapidly

Since I started posting and focusing on organic growth, I’ve been getting an increasing number of emails and inquiries from people pushing marketing platforms and it’s becoming quite annoying. How do you deal with that?

r/SideProject Ssoldier1121

BetStats- NBA parlay research app

Made this app over six months to help me and my friends do more efficient research when drafting for fantasy or making parlays. It's free while in testing on TestFlight (iOS only). Check it out, and I hope it improves your decision-making in parlays!

MLB in development at the moment.

https://testflight.apple.com/join/uTppFJrS

r/SideProject Odd-Breadfruit-7909

AI Edtech

I'm a recent graduate, but back in my college days I used LLMs to study, since the coursework was too complex for me to grasp, so I always gave a particular set of instructions to GPT or Gemini before they started tutoring me. It would go well for the first few questions and then collapse. Basically, the attention mechanism favours recency, and my instruction tokens at the very start of the conversation got buried. I ran all these diagnostic tests to fix the issue, since I was so done with the over-prompting and starting fresh each time.

I built my own bot with Gemini's API and used a system prompt, so it never forgets the teaching instructions. I also kept it stateless because of resource constraints and to avoid future context-window issues, and implemented an intelligent memory: I ran a prompt to extract important info from each turn and store it in a vector DB, and stored conversation summaries too. For follow-up questions, I had a two-turn sliding-window buffer that only activates on a follow-up. All the info and conversation summaries live in the DB with metadata for hybrid search. For reranking, I used a weighted score of recency plus importance plus similarity under a context budget.
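The rerank-and-pack step described above could be sketched like this. The weights, decay constant, and chunk fields here are illustrative assumptions (the post doesn't give the actual values):

```python
import math

def pick_memories(chunks, now, w_sim=0.5, w_imp=0.3, w_rec=0.2, budget=1500):
    """Score each stored memory chunk by similarity + importance + recency,
    then greedily pack the top scorers under a token budget."""
    def score(c):
        age_hours = (now - c["ts"]) / 3600.0
        recency = math.exp(-age_hours / 24.0)  # decays over roughly a day
        return w_sim * c["sim"] + w_imp * c["importance"] + w_rec * recency

    picked, used = [], 0
    for c in sorted(chunks, key=score, reverse=True):
        if used + c["tokens"] <= budget:
            picked.append(c)
            used += c["tokens"]
    return picked
```

The budget cap is what keeps the stateless design honest: however many chunks the hybrid search returns, only what fits the context allowance gets injected into the next turn.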

I don’t know if this is considered basic or not for my level but the whole thing started from a real frustration I had while studying. I wasn’t initially trying to build an AI product for the sake of it. I just wanted a tutor that would stay consistent and actually remember how I wanted to learn instead of slowly drifting after a few turns.

Working on this got me really interested in the edtech side of AI, especially around long term learning interactions, memory, personalization and how tutoring systems behave in real usage instead of controlled demos.

I’ve been exploring a few YC startups in the space and it feels like there’s still a lot of room to build around actual learning behavior rather than just content generation.

r/SideProject USSEnterpriseGoku

I made a corporate career simulator… got fired in 2 minutes

You basically climb the corporate ladder by:

- saying vague things in meetings

- scheduling more meetings

- avoiding actual work

Then you get a performance review like this ↓

Try it here: corporateladder.xyz

Curious what role you end up with!

r/ChatGPT BIGBADPOPPAJ

Heads up: if you ever simply Google "chatgpt", I had two links show up, one with www.chatgpt.com and the other being plain chatgpt.com. I quickly came here to check but didn't see anyone talking about it, and when I went back and looked I noticed the double link again.

Can't seem to find it now, however.

r/SideProject Wongpen_012

killed my side project after 9 months, what actually killed it

started a small tool last summer, an AI that summarized weekly notion pages into shareable digests. got it live in 6 weeks. paid users peaked at 47, MRR around 380 dollars.

shut it down 2 weeks ago. wanted to write up what actually killed it, because everyone says churn or distribution, but for me it was something else.

churn was 7 percent monthly, which is high but not catastrophic for a small tool. distribution was the hard part, but i had figured out a steady drip from 2 communities and a small newsletter that got me 3-5 trials a week, fairly cheap.

what actually killed it: the underlying API costs went up 40 percent in march, and notion shipped a feature that did 70 percent of what my tool did, for free, native. so my pricing got squeezed from both sides at once. the API increase alone wouldn't have killed me, and the native feature alone wouldn't have, but together they made the unit economics impossible to fix without a full pivot.

i tried to pivot. spent 5 weeks trying to add features notion didn't have: cross-workspace summaries, slack integration, custom voice prompts. trial-to-paid actually dropped during this period because the messaging got muddled. the homepage couldn't explain what i did anymore in one sentence.

the lesson i took: when you're building on top of someone else's platform and they ship into your wedge, you have weeks, not months, to react. and reacting by adding features is slower than just shutting down and starting something new with a clearer understanding of what platform risk actually feels like.

i'm not bitter. i learned more from these 9 months than from the 4 years before. but i wish someone had told me that platform risk is not just an investor talking point. it happens fast when it happens, and the version where it happens usually isn't a dramatic shutdown of the API, it's a quiet feature ship from the parent company that you hear about in their changelog.

r/SideProject PsychologicalFudge52

Trying to fix the “wild west” of APK sites with an open source app store

Hi all!

I’ve been working on an open source Android app distribution system called SafeHaven. Its main focus is on trust, security, and transparency.

It’s still early to be honest, but the base pipeline is already working (unreleased beta). A build of the client will be out this week.

This mostly started out of frustration with the current Android landscape:

- The Play Store dominates everything

- Alternatives exist, but often lack clear trust signals

- And then there’s the “wild west” of APK sites, with little to no way of telling a legit app from a malware-laden clone

SafeHaven aims to sit somewhere in the middle of this, and close that gap.

How does it work right now?

- Developers create an account (email or private key)

- They submit an APK and link it to a source repository

- Ownership is verified

- The APK is scanned and hashed

- After a short waiting period, it’s added to the public catalogue

- End users can then download the app without accounts
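The "scanned and hashed" step above is standard content hashing. Here's a minimal sketch of what the server side might do (the function names, catalogue record shape, and `scan_result` field are my assumptions, not SafeHaven's actual code):

```python
import hashlib
import json
from pathlib import Path

def hash_apk(path: str) -> str:
    """Stream the APK through SHA-256 so large files never sit fully in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

def catalogue_entry(apk_path: str, repo_url: str, scan_result: str) -> str:
    """Assemble the trust-signal record a client could check against the index."""
    return json.dumps({
        "file": Path(apk_path).name,
        "sha256": hash_apk(apk_path),
        "source_repo": repo_url,
        "scan": scan_result,
    })
```

A client can re-hash the downloaded APK and compare against the published digest; any mismatch means the file changed after submission.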

Each app exposes simple trust signals like:

- Verified Source (repo ownership)

- Verified developer signature (updates match the original signer)

- Latest scan result (server-side malware scanner)

The full submission process is *almost* done.

I’d appreciate any feedback or suggestions, especially around the trust model.

Index: http://api.colourswift.com/store/index.json

Repo: https://github.com/phsycologicalFudge/SafeHaven

r/SideProject Evening_Manner3328

Looking for feedback from people who use AI character chat apps

I’ve been building a small AI character roleplay site called Chaytai.

I’m not trying to spam or pretend it’s perfect. I’m mostly looking for honest feedback from people who actually use Character.AI alternatives.

The things I’m working on right now are:

- faster replies with fallback models when one gets rate-limited

- better mobile chat UI

- recent chats that feel cleaner

- character creation and discovery

- making roleplay replies feel less generic

If you use AI character/roleplay sites, what makes you stay on one?

And what makes you leave immediately?

Site, if anyone wants to look: https://chaytai.com

r/SideProject Individual-Parsnip79

I built a CLI that interviews you and writes a SPEC.md + agent instructions before you open your AI coding agent

Every time I started a new project with Claude Code, I spent the first hour going back and forth on stack, auth, and features before a single file got written. So I built quickstart-ai — a 5-minute branching interview that generates a structured SPEC.md and agent instruction files (CLAUDE.md, .cursorrules, etc.) before you hand it to your coding agent.

**How it works**

• Run `npx quickstart-ai`

• Answer ~20–100 branching questions (irrelevant sections skip themselves)

• It writes SPEC.md + per-agent instruction files tailored to Claude, Cursor, Windsurf, Copilot, Aider, or a universal AGENTS.md

• Optional Claude-powered smart pre-fill if you have an Anthropic API key
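For anyone curious what "branching questions" means mechanically, here's a toy sketch of the idea (the question tree, field names, and answer values are invented for illustration; this is not quickstart-ai's real code):

```python
# Minimal branching interview: each answer decides which questions are
# asked next, and the collected answers render into a SPEC.md.
QUESTIONS = {
    "stack": {
        "prompt": "Frontend stack?",
        "branch": {"next.js": ["auth"], "cli": []},  # CLI projects skip auth
    },
    "auth": {"prompt": "Auth provider?", "branch": {}},
}

def run_interview(answers: dict) -> list[tuple[str, str]]:
    """Walk the tree, skipping branches the earlier answers rule out."""
    asked, queue = [], ["stack"]
    while queue:
        key = queue.pop(0)
        question = QUESTIONS[key]
        answer = answers[key]
        asked.append((question["prompt"], answer))
        queue.extend(question["branch"].get(answer, []))
    return asked

def render_spec(asked: list[tuple[str, str]]) -> str:
    """Render the answered questions as a simple SPEC.md document."""
    lines = ["# SPEC.md", ""]
    lines += [f"- **{prompt}** {answer}" for prompt, answer in asked]
    return "\n".join(lines)
```

With `{"stack": "cli"}` the auth question never fires, which is the "irrelevant sections skip themselves" behavior described above.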

**What I'm actually looking for**

• Does the generated SPEC.md format actually help your agent stay on track?

• Are there question branches missing that you always end up clarifying manually?

• Would you use this, or is it solving a problem that doesn't really exist?

Repo is open source (MIT) if you want to peek or fork the questions: github.com/NijeMatija/quickStart

Looking forward to your feedback!

r/SideProject SideQuestDev

spent 5 months fighting google indexing. bought a new domain yesterday and it indexed instantly.

hey guys. just wanted to vent/share a bizarre seo experience with my recent side project.

for the last 5 months, i’ve been trying to get the inner pages of my engineering tool site indexed. search console just kept throwing me into the classic "discovered - currently not indexed" purgatory. i tried everything: fixing core web vitals, sitemaps, internal linking, api pushes. absolutely nothing worked.

yesterday i finally lost my patience, said screw it, abandoned the old domain, and just bought a fresh one: induspecs.com . migrated the exact same codebase over.

literally the exact same day, google started indexing my calculator pages. 5 months of pure frustration solved by spending 10 bucks on a new domain. i guess my old domain was just completely shadowbanned or cursed?

anyway, the project itself is an ai-powered reference toolbox for industrial engineers (calculating pipe weights, bolt torques, metal specs, etc., powered by llama 3). the stack is super lightweight: html, tailwind, and a bit of vanilla js.

has anyone else dealt with a "cursed" domain like this? is this just how google treats new sites now?

also, if there are any mechanical engineers lurking here, would love to know if the hub structure makes sense to you. cheers.

r/SideProject Designer_Garlic6153

8 months building VibeCut.net — automates Twitch clips to your social media with zero manual steps

The problem: every clip tool I tried stops halfway. They detect clips but still make you edit manually, or let you edit but won't auto-post. Nobody was doing the full pipeline, so I built it.

What we built: vibecut.net

You set a posting schedule. Vibecut does everything else:

  • Pulls the most viral recent clips from streamers you follow (no Twitch account needed — public API only)
  • Picks the best ones using a ranking algorithm
  • Auto-formats to vertical: portrait, blurred background, or webcam separation — decided per clip
  • Generates overlay titles + social captions
  • Adds subtitles when it'll boost engagement
  • Posts directly to TikTok via official API — no scraping, no bans

Also comes with a streamer catalog showing performance stats by game, so you're not guessing who's actually popping off right now.

Zero editing. Zero manual posting. Just clips going up while you sleep.

r/SideProject ObjectiveBit5526

How to prepare for product hunt launch next month!

Finally, after working on our SaaS product, we are getting ready for a Product Hunt launch next month.

How should we plan it, given we don't have any audience or social followers?

r/SideProject Sharp_Tax_6182

The Real Reason Your SaaS Metrics Aren't Moving

Most founder conversations follow this pattern:

-> "Our retention's dropping. We need to launch more features."

-> "Our pipeline is weak. We need more budget for ads."

-> "Users are complaining. We need to build more."

And the team executes. Good job. Yet, the metrics barely move.

And the reason? They're solving the wrong problem.

The pattern I keep seeing:

Retention Problem = "Build more features"

Reality = Users don't understand its value fast enough.

Pipeline Problem = "Spend more on marketing"

Reality = The funnel has structural issues; more traffic just exposes them faster.

Adoption Problem = "Ship more capability"

Reality = Customers are overwhelmed with complexity.

When there's a misdiagnosis, you have to compensate downstream. You end up burning cash and runway chasing symptoms instead of causes.

Before pulling the trigger, step back and ask:

"What's the actual problem we are solving?"

Not the symptom. The root cause.

Most growth problems in SaaS don't need a clearer input. They need a clearer lens.

What repeating patterns have you noticed in your own metrics?

r/SideProject Spiritual-Yak-4999

Why use 5 different apps? I created a 'Digital Delegate' for everything from smart reminders to car maintenance.


I’ve always struggled with app fatigue. I had one app for my freelance deadlines, another for my car’s service intervals, one for my pet's vet visits, and of course, a separate one for daily reminders.

I thought: "Why can't my assistant just handle all of this?"

So, I built VekilAI.

In my language, "Vekil" means a delegate or a proxy—someone who acts on your behalf. That is exactly what this app is: a modular AI Super App designed to be your digital representative and handle the chaos of daily life.

What can you actually do with it?

  • Freelance Management: Track your projects, client tasks, and deadlines in one clean dashboard.
  • Vehicle Maintenance: Log your car's service history and get smart alerts for your next oil change or inspection.
  • Pet Care: Keep a digital diary for your furry friends—vaccinations, feeding schedules, and health notes.
  • Smart Reminders & Notes: A centralized hub to dump your thoughts and get them back exactly when you need them.
  • Modular AI Power: It’s built to be flexible. You don't just chat with an AI; you organize your whole life with it.

I’m a solo developer looking for honest feedback from anyone who wants to simplify their routine. What features would make an AI assistant "essential" for your daily life?

Check it out here: PlayStore: https://play.google.com/store/apps/details?id=com.fsa3web.vekilai&hl

Appstore : https://apps.apple.com/tr/app/vekilai-ai-agent-assistant/id6761864203

r/SideProject gotDemPandaEyes

Lapser Studio - Create beautiful timelapse screen recordings in minutes

Hello SideProject! Today I'm releasing Lapser Studio, a small macOS app that helps you record timelapse screen recordings and make them beautiful.

If you create these kinds of recordings with classic recording software, you end up with massive recording files and slow video-editing software to cut the clips.

Lapser Studio solves this by letting you hit record and, once you're done, edit the background, camera position, and foreground size super easily.

The previous app I used years ago (Hustl, I believe) hasn't been updated in ages, and apps like Screen Studio became the norm for simplifying editing. So I thought: why not blend the two? The outcome is an app I'm super proud to share.

Lapser Studio is available via my own website for 30 USD (REDDIT25 25% off for reddit peeps) as well as the App store ➡️ https://apps.apple.com/fr/app/lapser-studio/id6758652797?l=en-GB&mt=12

Tysm and I hope you find it useful 🫶

r/ChatGPT Co3koolkid

Almost insanely accurate

The prompt was "Create a Cross-sectional blueprint of a massive underground steam-powered city in a post-solar-apocalypse world, constructed within a deep cylindrical subterranean structure. Layers of platforms, turbines, boilers, water systems, and dense habitation zones interconnected by ladders and walkways. Industrial steampunk aesthetic, technical diagram style, precise linework, annotations, worn blueprint paper texture, highly intricate".

Originally it was just "Create an image of a blueprint to an underground, post solar apoc, steam driven, repurposed missile silo city (without missile, obviously)"

r/SideProject Plus_Resolution8897

Made a 60-sec video explaining what the EU AI Act means for AI agent builders (95 days to deadline)

Building areev — a database for AI agents that ships compliance artifacts (GDPR erasure proofs, EU AI Act conformity reports) at the engine layer instead of bolting them on later. With enforcement landing on August 2nd, I made a short explainer on what regulators will actually be checking for.

Feedback on the video itself welcome — first attempt at this format. Especially curious if the regulatory angle landed for non-EU builders, or whether it reads as too EU-specific to be relevant.

r/SideProject parkenstein88

I’ve made an Illinois park district activities and library event tracking app

Hey all,

I’ve been working on a small side project called Anjou to make it easier to find kids activities around Chicago / north suburbs.

Basically I got tired of jumping between park district PDFs, websites, library calendars, etc. every season just to figure out what’s available.

So I built something that:

- pulls activities into one place

- lets you filter by age / schedule / location

- links straight to the registration page

It’s still pretty early, but already saving me a ton of time.

I did come across something similar for Naperville (playgrades.app), which made me realize this is probably not just a “me problem.”

Right now I’m mostly focused on Northbrook / Buffalo Grove area.

If you’re a parent nearby, I’d really appreciate any honest feedback! Especially what feels missing or confusing.

It’s free right now, and there’s also an iOS app.

The site is https://anjou.app

Thank you for reading!

r/SideProject FounderArcs

“What’s a SaaS idea you thought was great… until you tried building it?”

I’ve had a few ideas that looked solid on paper, but once I started building, the reality hit—hidden costs, complex edge cases, or just no real demand.

It made me realize that validating an idea is much harder than coming up with one.

So I’m curious:

What’s one SaaS idea you were excited about initially, but later realized wasn’t worth pursuing?

And what specifically made you drop it?

Trying to learn from real experiences before I invest time into my next build.

Would appreciate honest insights (failures included).

r/ClaudeAI rosmarina_

Design System vs Skill with brand guidelines

I am building the design system for my company using Claude Design, but we also have built a skill that contains our brand guidelines.

What’s the difference between these 2 ways of applying the brand guidelines to content assets and landing pages? If the skill contains the same information as the design system, outputs should be equal, right?

Besides the fact that exporting to Google Slides is straightforward from the skill, and leaving aside that the design can break when transferring a slide from a skill to gdecks, is there anything else to consider when deciding between using the brand guidelines skill vs the design system?

Maybe that from the design system you can directly export to Claude code?

My hypothesis was that a skill with brand guidelines is more suitable for quick look & feel updates, and the design system is more suitable for producing assets from scratch, just by submitting plain text to the system.

Can anyone help with this? Thanks!

r/ChatGPT Prudent_Bat_6057

“Nothing worse than Claude hitting the limit right when your bug is almost fixed 💀 how do you guys deal with this?”

r/ChatGPT tuyenhx

I had this dumb idea: use GPT Image 2 with a ChatGPT subscription inside Photoshop, and it turns out it works. Here is the demo.

After spending a while running all kinds of benchmarks with GPT Image 2 on a ChatGPT subscription, here are some early takeaways:

- Image generation speed is basically the same across models, thinking modes, and whether fast mode is on or off. I tested GPT 5.4 with low, medium, high, and xhigh thinking, and GPT 5.5 with low, medium, high, and xhigh thinking. The average is around 100s without a reference image, and about 150 - 180s when adding reference images.

- Somehow, this thing generates the best results with GPT 5.4 + high thinking. That setup consistently gives the best image outputs. I have no idea why.

- The max output is only around 1K resolution. This can break image quality when doing Photoshop work at larger resolutions. That makes sense, though, since a 4K image via API costs around $0.40 per image. If ChatGPT could generate at that quality, who would still buy the API? lol

However, the low resolution issue can be solved in several ways, like upscaling and so on. The main thing is getting the image generation right first.

Hope you like the demo.

r/ClaudeCode entheosoul

Anyone else noticing the huge latency issues with Claude calling home?

This is a recent observation, but it's now very noticeable. This was not the case just a week ago: Claude now takes up to 45 seconds between tool calls in Claude Code, which is close to unusable for me.

A quick check on system load shows it's not the culprit:

  • Memory: 9.2G/46G used, 37G available, 0 swap. Plenty of headroom.
  • Load avg: 0.25 / 0.20 / 0.14 on 16 cores. Severely underloaded.
  • Top RSS: qdrant at 17.4 GB (36% of total). Parked at 1.5% CPU, just sitting on indices for 16h+. Working set is real but not pressuring the system.
  • 6 claude processes combined ~2.5 GB — normal.
  • kwin_wayland: 12.5% sustained CPU — anomalous for a near-idle desktop.

Likely culprits, ranked:

  1. Anthropic-side -- model server load / region routing

Is it just me, or are others noticing this degradation in API call latency?

r/ChatGPT Svisji

When a retired raid leader loses touch with reality

r/SideProject Soft-Opportunity7712

I built a calorie coach end-to-end on the One agent platform — Telegram bot + Notion DB + designed Sunday email recap, all from 3 Markdown files (open source)

Built this over a weekend. Sharing because the architecture surprised me — the entire thing runs on the One agent platform with no backend code of my own.

**What it does**

- I send any meal to my Telegram bot (text, voice note, or photo)
- The agent estimates calories + macros, writes a row to my Notion database
- Dashboard reads from Notion live: today screen, weekly review, streaks, deltas, transcript log, station view
- Every Sunday at 8 PM the agent reads my last 7 days, sends a coach letter on Telegram, and emails a designed dark-mode HTML recap
- If it suggests something ("raise protein 122g → 142g"), I reply YES on Telegram and my macro targets update for next week
- Dashboard recomputes against the new target on next reload

**What's surprising about how it's built**

The One platform handles natively:

- The cron scheduler (5 schedules — 4 meal pings + Sunday recap, all stored on the agent)
- The Telegram channel (no webhook server, no Lambda, no relay)
- The LLM calls (no SDK plumbing in my code)
- The integrations (Notion, Gmail, Deepgram are all native One connections, no per-platform auth in my repo)

So my "code" for the agent is literally three Markdown files I pasted into the Skills panel:

- meal-logger.md (writes meals to Notion)
- weekly-recap.md (Sunday letter + email + writes pending suggestion to Notion)
- apply-targets.md (reads YES/NO replies, updates targets, clears pending)

The dashboard is just Next.js reading from Notion through One's passthrough API. No proprietary database. All my data lives in my own Notion — I can stop using the agent platform tomorrow and my data is still mine.

**Stack**

- One platform — agent runtime, scheduler, channels, LLM, integrations
- Notion — Meals + Targets databases (schemas in the repo README)
- Telegram — chat surface
- Gmail — Sunday recap
- Deepgram — voice transcription
- Next.js 16 + React 19 — read-only dashboard with dark mission-control UI

**Repo (MIT):** https://github.com/krish221997/calorie-coach-dashboard

**Demo video (4 min):** https://www.youtube.com/watch?v=c9XCCUUTI-s

**Live dashboard:** https://calorie-coach-dashboard.vercel.app

**Create your own One agent:** https://app.withone.ai/agents

The README walks through the exact Notion DB schemas you need (Meals + Targets, including the Pending Field/Value/Reason columns the YES/NO flow uses) and the three skills to paste into your own One agent. Setup is maybe 30 minutes if you have Notion + a Telegram bot ready. Happy to answer questions about how any of it works — the agent platform side or the dashboard side.


r/SideProject tobias_man

Free browser tool: "This or That" Story generator with a built-in reshare loop

"This or That" is one of the highest reshare-per-Story formats: Viewers screenshot, mark their picks, and post their version back to you. The format usually dies in Canva because there's no clean way to hand viewers an editable version.

This tool fixes that with a share link: build the card (pick a topic or type pairs, optionally pre-mark your sides, pick a theme), grab the link, drop it into an Instagram Link Sticker on your Story.

Viewers tap -> land on a page with your pairs pre-filled and every pick cleared -> mark theirs -> download -> post.

Output is a 1080×1920 PNG (works on Stories, Reel covers, TikTok, Snapchat).

Useful for:

- Brands running engagement Stories without Canva busywork
- Creators farming reshares from their followers
- Anyone testing the binary-poll format without designing one from scratch

Free, no signup, runs in your browser tab.

Disclosure: I built it for Storrito (Story scheduler) — the tool is standalone and free.

Link: https://storrito.com/tools/instagram-this-or-that/

r/ClaudeCode Square-Republic7960

Website creation with no coding background!

Hello, I wanted to ask people with actual knowledge instead of simply searching. I have a project and I am a one-man team. Is it realistic to say I can build a whole website and host it, fully customizable, with no coding experience, using Claude?

The site won't be very complicated; it is simply for selling. Though there's a feature I want to add where you can select a few items and an AI generates a picture of what the whole bundle will look like (of course I'll train the AI with real pictures of the actual items I have to improve accuracy).

ex: let's say you select a shirt, trousers and pants, the AI will generate a picture of the whole outfit on a silhouette that has a similar figure to yours, depending on your size of course.

My main concern is also maintaining the site: in the case of bugs I don't want a lot of downtime while I fix it myself or search for developers. Should I just delegate to a dev from the start?

PS: budget is super tight...

r/SideProject Murky-Classroom6639

If you could use ONE tool weekly for free (with light, non-intrusive ads, no subscriptions), what would it be?

Constraints:

Must be realistic (not something extremely expensive like unlimited AI generation)

Something you would actually use regularly

Ideally: a feature or experience you don’t see offered this way today (e.g., not commonly available as a free-with-ads model)

Something you currently pay for or find annoying to access

I'm trying to uncover real unmet needs, not fantasy ideas.

What would you pick, and why doesn’t a good version of this already exist?

If you know someone who has strong opinions on tools or productivity, feel free to share this post with them, I’d love to gather diverse perspectives.

r/SideProject RefrigeratorNo1465

I realized something weird about my to-do list. It stopped being a plan… and started feeling like guilt.

Every day I’d see unfinished tasks from yesterday, and instead of helping me stay organized, it just made me avoid the list entirely.

So I started thinking —
what if a planner didn’t carry over everything by default?

Like, you only focus on today, and consciously decide what to bring forward.

Do you think that would actually help with productivity, or just hide the problem?

r/ChatGPT aerofoto

Thought I would give grok a try.

I just asked it to tell me what it was good at and I thought it was funny that right after they told me it was too busy to deal with me at 2:30 in the morning, it asked me to give it a star rating..

r/AI_Agents Ok_Tumbleweed1398

6 months of data on the open-source AI agent ecosystem: 45× supply explosion, 99% creator fail-rate

Spent the last 6 months building a directory of every open-source AI agent project I could find. Now sitting at 67K projects.

Two observations specifically for r/AI_Agents:

**Supply explosion is real.** Monthly new agent project creation went from ~50/month in early 2024 to ~27,720 in March 2026. That's 45× in ~24 months. The shape of the curve isn't gradual — it's a step-function around Q4 2025 when Anthropic released the Skill Spec + Claude Code shipped one-step install.

**Demand hasn't kept up.** 54.1% of all 67K projects have 0 stars. Top 1% of projects own 83% of all stars. The gap between "I shipped" and "anyone uses it" is the widest I've seen in any creator ecosystem.

What this implies for r/AI_Agents folks building/picking agents:

- If you're picking, star count is actually a fair signal up to top 1% (correlates 0.71 with my quality score)

- If you're building, the format wars are over — pick MCP or Claude Skill, both are fine

- The actual moat is "what task does it solve in your specific workflow?"

Browsable index + free 12-chapter writeup of all the data: dropping link in first comment to avoid spam-bot.

r/SideProject tamastorok

I built an app to help parents create personalized audio bedtime stories for their kids

This is an app I built myself. My wife and I have been struggling with bedtime stories. The kids always want stories, but mine get repetitive. It’s hard to come up with something engaging and honestly who has the mental bandwidth to create a great story at 8 pm??

I’m also a terrible storyteller.

I built a web app that generates short (2-4 minute), personalized audio bedtime stories tailored to your kid’s age and interests, based on the story idea your kid might have!

The app is in beta, and I’m looking for a few parents with 2–8-year olds to test it and give brutally honest feedback by filling out the form in the app.

Stories can be generated in multiple languages, lmk if your language is missing and I can add it!

I'm looking for feedback on onboarding, usability and story quality. Pls use the feedback form in the app. In exchange, I will give you some extra credits so you can generate more audio stories.🙏

You can try it here (no download required): https://lumiostories.com/

r/ClaudeAI max-t-devv

I benchmarked caveman against the prompt "be brief"

Caveman is getting really popular, so was interested to know if it actually outperforms a simple reminder to "be brief".

Ran 24 dev prompts across 6 categories, comparing 5 arms (baseline, "be brief.", caveman lite/full/ultra). Judged by a separate Claude against per-prompt rubrics.

Scores:

| Arm | Mean score | Mean tokens |
|---|---:|---:|
| baseline | 0.985 | 636 |
| "be brief." | 0.985 | 419 |
| caveman lite | 0.976 | 401 |
| caveman full | 0.975 | 404 |
| caveman ultra | 0.970 | 449 |

Surprisingly, the two-word prompt matched caveman on tokens and quality. Caveman still earns its keep on consistent output structure, mode switching, and the safety escape on destructive ops, but the compression itself isn't the differentiator I expected. The safety escape actually caused a lot of variance in the output.

Full breakdown with per-category data and the variance findings on safety questions: https://www.maxtaylor.me/articles/i-benchmarked-caveman-against-two-words

Video: https://youtu.be/wijoYNiZq3M

Benchmark harness is open source if you're interested: https://github.com/max-taylor/cc-compression-bench

r/SideProject SnooChocolates173

We tried Basecamp, then Linear. Neither fit.

I have a small team. I wanted something between Basecamp and Linear. Simple enough to run a real product backlog. So I built Plate.

No filters to configure, no backlog grooming, no sprint ceremonies. Just projects, tasks, and your team.

We launched Monday. If you're running a small team and you've ever felt like your PM tool is working against you, I'd love for you to try it and tell me what's missing.

r/ClaudeCode Ok_Fish_670

This Is a Full Map of AI App Technology

The AI race is moving from model power to system power.

A lot of AI apps are still stuck at the prompt-engineering stage. But the more practical question is: how do we actually turn AI into a real, working product?

I think AI application technology can be split into five layers:

At the bottom is the LLM. It understands and generates content. It is the “brain.”

Above that is MCP. It connects external tools and data in a standardized way. It answers the question: “How do we connect everything?”

Then comes Skill. This turns business logic into reusable tools.

Next is Agent. It breaks down tasks, chooses tools, and keeps the whole execution loop running.

At the top is Harness. It provides testing, evaluation, monitoring, deployment, and other engineering support.

The real value is not just the model. It is the full execution chain:

User input → LLM understands → Agent plans → Skill gets called → MCP connects external capabilities → execution and verification → result returned

A request is no longer just a simple Q&A. It becomes a task flow that can be planned, executed, checked, and improved.
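That execution chain can be sketched as a loop. This is illustrative Python only: the layer names come from the post, but every function, skill, and string here is a stand-in, not a real framework:

```python
from typing import Callable

# Skill layer: reusable business-logic tools (stub implementations).
# The "lookup" skill stands in for an MCP-connected external capability.
SKILLS: dict[str, Callable[[str], str]] = {
    "lookup": lambda query: f"[MCP] fetched external data for '{query}'",
    "summarize": lambda text: text[:40] + "...",
}

def llm_plan(user_input: str) -> list[str]:
    """LLM layer (stubbed): turn a request into an ordered task plan."""
    return ["lookup", "summarize"]

def agent_run(user_input: str) -> str:
    """Agent layer: execute the plan step by step, calling skills (which may
    reach external systems via MCP), then verify before returning.
    Plan -> execute -> check, rather than one-shot Q&A."""
    result = user_input
    for step in llm_plan(user_input):
        result = SKILLS[step](result)
    # Harness layer would hook in here: evaluation, monitoring, alerting.
    assert result, "empty output would be caught by the harness"
    return result
```

The point of the sketch is the shape: each layer is swappable, and the loop (not any single call) is what turns a model into a system.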

Many AI apps fail to go deep because they are missing a few key things:

standardized connections through MCP, reusable capabilities through Skill, task orchestration through Agent, and engineering support through Harness.

Without these, AI stays at the demo stage.

With them, AI can become a real system.

r/SideProject notwhatyouthink00

I shipped an iPhone app whose release notes literally say "annoy your neighbours in 4 new languages"

Solo iOS dev. Drill Simulator v1.2 went live on the App Store this week. ASMR/fidget app with six power tools and CoreHaptics tuned per tool.

The cursed part

This is the actual public changelog right now:

v1.2
· Now louder. Use responsibly.
· Annoy your neighbours in 4 new languages.
· Haptics so deep your downstairs neighbour thinks the boiler exploded.
· Fixed: that one user who said "this app should be illegal" now has 5 stars.
· Still free. Still no Wi-Fi needed. Still a terrible idea at 7am.

App Review let it through on first submission. I am still processing this.

What's interesting under the hood

· ActivityKit Live Activity in the Dynamic Island that hums along while drilling. Push tokens are a separate ceremony from notification tokens, took me 2 evenings to figure out.
· CoreHaptics patterns tuned per tool (saw heavier, sander gentler). The simulator lies, you need a real device.
· UMP consent then ATT prompt with a soft pre-prompt. Opt-in roughly 2x vs cold ATT.
· Next.js with next-intl landing in 14 languages.
· Cross-promo system between this and two sister apps (Chainsaw, Lawn Mower) without using AdMob house ads.

Numbers

Brand new. Pre-marketing. TikTok push starts this week. Will write a follow-up retro at 1k downloads if there's interest.

Links

App Store: https://apps.apple.com/app/id6757700473
Landing: https://drill-simulator-nine.vercel.app

Happy to answer anything about iOS solo dev, ASO, or how I got that changelog past App Review.

r/ClaudeCode yuvals41

Are there any good tools or techniques for building a high-end, premium website like Stripe's or Notion's?

I'm struggling lately building my website in Next.js. I used the frontend-design skill from the official Anthropic website, but it still does random unnecessary stuff.

Does anybody know any good techniques or tools I can use to help me do it properly?

r/ClaudeAI Useful_Guarantee5221

Claude reasoned through a geopolitical event and gave me a stock pick that returned 28% in a month — this is genuinely underrated

When Iran attacked those data centers in the Middle East and markets dipped, I wanted to buy stocks at lower levels. I asked Claude: “Which Indian stocks could benefit from this?”

It reasoned that workloads would shift from the destroyed data center to the nearest AWS Mumbai region, and since AWS uses NETWEB to build new capacity, it flagged NETWEB Technologies as a direct beneficiary. I’d never had it on my radar. I picked up NETWEB, and in less than a month it gave me a 28% return.

Honestly, AI models are smarter than most people give them credit for—they just need real-time data to work with. News that moves markets, stock updates, macro events—once you feed all of that in, the analysis is genuinely impressive.

I’ve been using it for a month and haven’t opened a single screener since.

This is what investing looks like now: connect Claude to your live portfolio and live market data using any MCP server that gives Claude all the data it needs. I use Tapetide MCP for Indian markets. Ask in plain English, get analysis grounded in what's happening right now.

Anyone else running Claude for market research? What's your setup?

https://preview.redd.it/bpgdb19b63yg1.png?width=662&format=png&auto=webp&s=bba8fc0382c89714c9da43b55cc9cfb472824ef1

https://preview.redd.it/7m7pnktc63yg1.png?width=1910&format=png&auto=webp&s=edc0116d8c65d623249fc4247cb4aa4c7f292fcd

r/ClaudeAI Gandleon

How do non-coders run out of usage in max?

There is so much complaining on this forum. I recently switched up to Max because I was sick of hoarding my usage each week. But now I just don't understand where the complaints come from. I get that agents and coding burn through usage, but I don't think I could reach my limits even if I tried.

I am almost exclusively using Opus. I use it to write me silly research reports and chat with it, multiple hours a day: for work, for fun, for random stuff. For example, today I asked three separate Opus chats to give me criticism and feedback on a 10k-word draft, to read a bunch of the background literature, and to consolidate it all and write a road map on how to implement it. Maybe I have used 17%.

So are there any people doing non-coding work on Max x5 who run out? Is it just coders, or am I just not utilizing it enough or efficiently?

So I guess: any tips for what to do now that usage is just not a limit anymore? (I know that Anthropic is counting on people not using all their usage, but I'm curious whether I'm leaving money on the table.)

r/aivideo NarrativeNode

TARMAC - Behind the Scenes of an AI Animated Show

r/ChatGPT love1008

Full Bag Of Flowers - GPT-image-2 reference to Seedance

r/ChatGPT Remarkable-Sir4051

Nano Banana smart prompt: Metal embossed logos Prompt 👇

r/ClaudeCode Danieboy

What it's like working with Opus 4.7 (xHigh) daily...

Basically, as in many other threads here before: it ignores instructions, and ignores corrections after its first implementation. For example:

Me: Implement X, Y and Z based on the supercoolplan.md.

Claude: Ok, X, Y and Z are implemented now. I also fixed a "bug" that caused this issue. The fix was changing the success threshold.

Me: You didn't test X and Y. They do not work. I also didn't ask you to change that success threshold. Please fix.

Claude: You are right to call that out, let me go back and fix that.

Me: You still didn't test X - it still does not work.

Claude: You are right to call that out, it was sloppy....

Me: Now fix it properly... No shortcuts...

r/LocalLLM GodComplecs

Where do you draw the line on digital sentience?

What would constitute digital sentience? I'm asking here specifically since I'd like to hear fellow local LLM users' opinions on this: in my opinion, at some point we could be crossing over into talk of involuntary work, etc., if systems become sentient.

I know this seems very far-fetched, but believe me, the future is closer than you think, and I'd like to see what people who use local LLMs think, since according to the big boys they are already AGI/ASI super-feeling beings that generate infinite universal income xD

r/homeassistant shreshth_001

Is there actually a robot vacuum that can maintain a clean house for a large family? Here's my frustration.

I’m at my wits’ end, and I need some real talk here. I have a large family with three kids, two dogs, and a ton of daily mess to clean up. The traditional vacuuming routine just isn’t cutting it anymore. I’ve tried almost every regular vacuum under the sun, but no matter how much I clean, it feels like it’s never enough. Plus, I can’t physically drag a heavy vacuum around every few hours just to keep up.

I’ve been eyeing robot vacuums for a while now, hoping they could actually solve the mess without needing me to micromanage everything. But I’m stuck on the same question: Can a robot vacuum really maintain a clean house for a large family? Sure, it can do the initial sweep, but after a couple of days, I still find crumbs, dust, and pet hair in corners.

I’m especially looking for one that doesn’t require constant resetting or manual intervention—something that genuinely adapts to the chaos of a busy home. But from what I’ve seen, most of these vacuums get stuck, miss spots, or fail to handle larger messes. So, I’m curious if anyone here has experience with robot vacuums that can actually stay on top of cleaning in a large household. Does it work or is it just another gadget that doesn’t do much?

I don’t mind a bit of maintenance, but at this point, I need something that will keep the place feeling clean without me needing to vacuum several times a day. Has anyone actually found something that works? How does it handle pet hair, kid toys, and daily mess? I’d love to hear what’s worked for you.

r/AI_Agents Spiritual-Ad4721

7 OpenClaw Money-Making Cases in One Week — and the Hidden Cost Problem Behind Them

Recently I saw a post about 7 OpenClaw money-making cases from the past week.

At first, these stories sound exciting:

one person, one AI agent, one workflow, and suddenly there is a small business.

But I think the real lesson is not simply AI agents can make money.

The real lesson is that AI agents are turning repeated work into automated workflows.

From what I have seen, many of these agent-based projects are not magical. They usually take a boring, repeated, high-friction task and make it run continuously.

Examples include:

  • finding leads
  • generating content
  • monitoring prices
  • building small tools
  • automating customer support
  • summarizing research
  • running coding workflows

What makes OpenClaw and similar agent products interesting is that they are not just chatbots.

A chatbot gives you an answer.

An agent takes actions.

It can browse, reason, call tools, retry, summarize, and continue the workflow. That makes it much closer to a low-cost operator than a normal AI assistant.

I think this is why these money-making examples are spreading so quickly. They make people feel that a solo developer or small team can now test business workflows that previously needed multiple people.

But I also think there is a hidden issue that does not get discussed enough:

agents can make money, but they can also burn money.

Every agent step can trigger another model call.

That looks like work.

But sometimes it is just a loop.

And if every step uses an expensive model, the agent can quietly burn API budget before the user notices.

So when I see these OpenClaw money-making cases, I do not just think agents are the next gold rush.

I have been experimenting with this idea in a small local-first proxy project, but my main takeaway is broader:

if agents become part of real work, cost control and runtime guardrails will become just as important as the agents themselves.
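The guardrail idea above can be sketched very simply: cap spend and step count so a silent loop cannot quietly burn the API budget. This is a minimal illustration, not any product's implementation; `call_model` and the per-call cost accounting are assumptions, and every name here is hypothetical.

```python
class BudgetExceeded(Exception):
    """Raised when an agent run goes over its spend or step budget."""

def run_agent(steps, call_model, max_spend_usd=5.0, max_steps=50):
    """Run agent steps, stopping hard once a guardrail is hit.

    call_model(step) is assumed to return (result, cost_in_usd).
    """
    results, spent = [], 0.0
    for i, step in enumerate(steps):
        if i >= max_steps:
            raise BudgetExceeded(f"step limit {max_steps} hit")
        result, cost = call_model(step)  # cost in USD, tracked per call
        spent += cost
        if spent > max_spend_usd:
            raise BudgetExceeded(f"${max_spend_usd} budget exceeded at step {i}")
        results.append(result)
    return results, spent
```

The point is that the check runs outside the agent's own reasoning, so a looping agent cannot talk its way past it.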

r/LocalLLM techlatest_net

NVIDIA Launches Nemotron 3 Nano Omni Model, Unifying Vision, Audio and Language for up to 9x More Efficient AI Agents

NVIDIA just launched Nemotron 3 Nano Omni, an open multimodal model that combines vision, audio, and language into one system for faster and more accurate AI agents. It delivers up to 9x higher throughput while reducing cost and latency compared to separate models. Built on a hybrid MoE architecture with a 256K context, it excels in tasks like document intelligence, UI navigation, and audio-video reasoning. The model is open, customizable, and deployable across local, cloud, and enterprise environments. Available now via platforms like Hugging Face and OpenRouter.

nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-BF16: https://huggingface.co/nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-BF16

nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-FP8: https://huggingface.co/nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-FP8

nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-NVFP4: https://huggingface.co/nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-NVFP4

NVIDIA Blog: https://huggingface.co/blog/nvidia/nemotron-3-nano-omni-multimodal-intelligence

Benchmark

Compared to other open omni models with the same interactivity, Nemotron 3 Nano Omni delivers 7.4x higher system efficiency for multi-document use cases and 9.2x higher system efficiency for video use cases.

Efficiency highlights

Model architecture and key innovations

r/ClaudeAI Specific-Month-1755

Okay a little bit lighter than the usual-- searching for a Hindi movie star

Okay. You may or may not know that I was a Hindi movie star, I was in a movie when I was in India and I've been searching for that movie ever since.

This was in the '90s basically back before there was any internet so at the start my search mainly consisted of hitting Bollywood movie rental places in Vancouver when I was there. I had the working title but I didn't know the official title. The working title was Lakshmi Swami.

Later with the advent of the internet I could find my co-stars, Jeannie Michaels and Gary Richardson, but there was no link to the movie.

So the other day I was chatting about my travels to India and how I hated it, but as usual it turned to the fact that I was a Hindi movie star.

So I tried AI. Gpt couldn't even find my co-stars, so fuck that.

Claude, however, found them but we couldn't find his credit.

I added a detail about the scene I was in, a religious conference set in Chicago. As luck would have it, there was an actual religious conference held in Chicago about 100 years ago.

And I found my movie. Luckily it was based upon a true story.

https://m.youtube.com/watch?v=Bxx1TUe3_mQ

2:29:01

2:36:16

And my mentor at 2 hour 30 minutes 29 seconds. Gary Richardson.

QED after 29 years.

Thanks Claude.

r/SideProject navs3011

Building Saathi - a companion app for your parents (a bit like Alexa)

I'm building Saathi because I know exactly what it feels like to sit hundreds of kilometres away and wonder if your parents are okay.

The problem: Millions of Indian families have elderly parents living alone in smaller cities while their children work in Bengaluru, Mumbai, or abroad. The children worry constantly. The parents are lonelier than they admit. And a daily phone call is never really enough.

What Saathi does: Greets your parent every morning in Hindi/English (just like Alexa) on their existing phone; Reminds them about medicines and confirms they took them; Checks and confirms meals; Checks in during the afternoon just to keep them company. Sends you a simple evening update - how they seemed, whether medicines were taken, anything worth knowing. Also has a built-in SOS button on Home Screen which directly calls the emergency contact (the child/ambulance/police as chosen)

What makes it different: No new device. No complicated setup. No app for the parent to figure out. Works on the Android phone they already have. The parent just responds - the child just reads the update.

Who it's for: Any Indian professional living away from their parents who has ever called home twice in one day just because something felt off.

Currently at the idea stage. Would love your honest feedback - especially from anyone who has parents back home. Does this solve something real for you?

Thinking ₹ 799/month (~$10). Would you pay/not pay and why? Any area I can improve? Please drop your feedback.

r/SideProject jerrysdevs

I'm a student and I'm building something to learn leetcode 10x better

TLDR: learn LeetCode 10x faster and actually understand the questions, instead of training yourself in solution memorisation, with Nudge.

Hello everyone!! Like most CS students, I've wasted many hours grinding LeetCode, and it was miserable. When I got stuck, the only option was to give up on my own progress and solution and copy someone else's. I couldn't get personalised help every time I did an LC question.

So I'm building Nudge.

It gives you non disruptive and adaptive inline hints inside your leetcode editor. Instead of giving you the answer, it helps you figure out your approach, nudges you when you’re stuck, and guides your thinking step by step. You can have a conversation with it for deeper help, pressure test your plan, and actually understand why something works.

would love to hear if people think this would actually help, or if there’s features you’d want in something like this

Please check out the website if you're interested!!

https://reddit.com/link/1syt40z/video/j16b0hcxe3yg1/player

r/homeassistant Big-Edge2297

Homie Dashboard - Single page overview and device control

I have been using Home Assistant for about 10 years now. I decided to create a dashboard for a tablet, which will potentially be wall-mounted, in preparation for moving to a new house.

All the testing has been done on an 11" tablet, but it should work well with larger screens. With smaller screens there will be issues with elements overlapping, so this is not ideal for mobile dashboards.

As with my mobile dashboard, I try to have everything I need on a single page to visually understand what is going on in the house, with no scrolling. Then I can select the specific things I'd like to control, like lights, scenes, aircons, etc., with as little tapping as possible.

The dashboard is fully customizable, so with a little effort you can make changes to pretty much everything to accommodate your needs.

I have been working on this dashboard for about a month using my knowledge, Claude and Gemini.

The only things that I can't fully test yet are the blinds and irrigation, as I don't have any right now.

If you have any suggestions, I would love to hear them.

https://preview.redd.it/otu77glig3yg1.png?width=2560&format=png&auto=webp&s=12dd06905f889cb4fe54b4f8e67cbfec0db8be7f

https://preview.redd.it/5svorxbjg3yg1.png?width=2560&format=png&auto=webp&s=fea95ef69535abd21ea35137241f391c2e4e1153

https://preview.redd.it/xv5jkjosg3yg1.png?width=2560&format=png&auto=webp&s=914af4bba4424032e631ff55cb33cc3e6addcb42

https://preview.redd.it/m5ydohutg3yg1.png?width=2555&format=png&auto=webp&s=c7cce31b47fb5f1a63b6af3bc065f3ae453bed71

https://preview.redd.it/zdkpwznug3yg1.png?width=2560&format=png&auto=webp&s=82c78e678fb54c2d8ab986f3607de954c045e90b

https://preview.redd.it/r8riuo1zg3yg1.png?width=2560&format=png&auto=webp&s=9b916c268c6b9ff05bfb6b6850b1487da3c8b748

https://preview.redd.it/vd0747vzg3yg1.png?width=2560&format=png&auto=webp&s=8e22d1cc66585592eca1a1e71835b597c13addd6

Description

A custom Home Assistant dashboard built for wall-mounted tablets.

I created this dashboard with the design philosophy of a single page overview for accessing the most important functions and controls of my house easily.

Homie connects directly to Home Assistant using a Long-Lived Access Token over a local WebSocket connection, which enables automatic caching.

Features

  • 9 Themes
  • Works both vertically and horizontally
  • Fullscreen on first tap
  • No pinch-zoom, no text selection
  • Very responsive and fast to update entity status
  • Popups with many entities use a room accordion - tap a room to expand it, tap outside to dismiss.
  • Notifications/Reminders
  • Music playback and controls
  • Haptic Feedback on supported Android devices
  • and more...

Github

It has all the information on how to set it up using your own entities. I tried to document everything as much as I could, and created a setup guide with step-by-step instructions.

You can also find more screenshots and a bonus photo of Shumi, who has not been helpful at all in the process.

https://github.com/Big-Edge2297/homie-dashboard

Edit: not sure why the photos are not showing in the post. OK, it seems there are some photos now, but there are many more you can check on GitHub.

r/ClaudeAI x-FAk3

Best extension for linear algebra - claude

I study linear algebra at university, and since I don't understand much, I often use Claude to solve exercises and understand them better. I was wondering if there was a specific extension for solving these types of exercises. Sorry for the grammatical errors, I'm Italian.

r/AI_Agents Dangerous-Guava-9232

Is Dokie AI really an “AI PPT Agent”? What actually qualifies as an agent?

I’ve been seeing Dokie AI positioning itself as an “AI PPT Agent” lately, and it got me thinking: what actually makes a product an agent, not just another AI tool?

Most AI PPT tools I’ve tried are basically: input topic → generate slides, maybe tweak the design or structure, export, and done. That’s useful, but it still feels like a one-shot tool, not an agent.

From what I understand, a “real” agent should probably:

  • Handle multi-step workflows. Not just generate slides, but also structure the narrative, adapt based on context (report vs pitch vs strategy), and maybe even generate speaker notes or speech drafts.
  • Have some level of autonomy. Instead of me prompting every step, it should decide what sections are needed, adjust depth based on content, and iterate without constant input.
  • Be context-aware over time. For example: remembers your previous decks, understands your company style, adapts to recurring use cases (weekly reports, client decks, etc.).
  • Actually reduce decision-making, not just execution. This is the big one. A lot of “AI tools” still make you think: what structure? what story? what to say? An agent should take on part of that thinking.

From using Dokie a bit: the structure it generates is more “business-ready” than most tools, the new speech draft feature is interesting (turns slides into talking points), and it feels closer to a workflow than just a generator. But I’m still not 100% sure I’d call it a full “agent” yet.

Curious how others here define it:

👉 What’s your bar for calling something an AI agent vs just an AI tool?

👉 Have you used any products that actually feel like a true agent in practice?

r/LocalLLM cockpit_dandruff

Looking for macOS Whisper Server apps (OpenAI API + Wyoming Protocol support)

Hey everyone,

I’m looking for a macOS app to run OpenAI Whisper on my M4 MacMini. My specific requirement is that it needs to serve as a local backend for other apps via the OpenAI-compatible API AND support the Wyoming Protocol (for my Home Assistant voice pipeline).

I've already found pfrankov/whisper-server on GitHub, which looks great as a menu bar app. However, I’m curious if there are other projects—ideally optimized for Apple Silicon (Metal/MLX)—that offer this dual-protocol support.

Are there any other "hidden gems" or dedicated macOS GUI apps I should know about?

Thanks!

r/aivideo Several-Ad6021

cat sleeping soundly amidst the flowers

r/AI_Agents Visible-Mix2149

Building a LinkedIn signal tracking + lead scoring system for a client - looking for API/tool recommendations

I'm building a LinkedIn-based lead generation and signal tracking system for a B2B founder-led business. Sharing the architecture for context, then have some specific questions at the end.

The system in brief:

Activity happens on LinkedIn (comments, likes, connection requests, DMs, post engagement) → signals get captured and written to a NocoDB database on a self-hosted VPS → an AI agent reads NocoDB, scores each contact on two dimensions (relationship score based on engagement history, opportunity score based on intent signals) → scoring drives which outreach sequence they enter (cold/warm/hot email via Encharge, LinkedIn DMs via LeadShark, Meta retargeting ads) → Attio is the CRM layer for pipeline management and call notes → n8n on the same VPS is the automation glue connecting everything.

The goal is that every person who touches our LinkedIn content gets automatically identified, profiled, enriched with their work email, scored, and routed into the right sequence with zero manual input, except for subjective context like how a call actually went.
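To make the scoring step concrete, here is a hedged sketch of the two-dimension scoring and routing described above. The weights, thresholds, and sequence names are illustrative placeholders I made up, not the actual values in the system.

```python
# Relationship score comes from engagement history; opportunity score from
# intent signals. All weights and thresholds below are hypothetical.
RELATIONSHIP_WEIGHTS = {"like": 1, "comment": 3, "connection_request": 4, "dm": 5}
OPPORTUNITY_WEIGHTS = {"keyword_comment": 5, "pricing_question": 6, "inbound_dm": 6}

def score_contact(engagements, intent_signals):
    """Return (relationship_score, opportunity_score) for one contact."""
    relationship = sum(RELATIONSHIP_WEIGHTS.get(e, 0) for e in engagements)
    opportunity = sum(OPPORTUNITY_WEIGHTS.get(s, 0) for s in intent_signals)
    return relationship, opportunity

def route(relationship, opportunity):
    """Map the two scores onto an outreach sequence."""
    if opportunity >= 6:
        return "hot_email_sequence"
    if relationship >= 5:
        return "warm_email_sequence"
    return "cold_email_sequence"
```

In the real pipeline this logic would run in the AI agent (or an n8n node) reading from NocoDB, with the chosen sequence written back for Encharge or LeadShark to pick up.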

The specific problem I'm trying to solve:

For every LinkedIn post we publish, I need to capture:

  • Every person who comments (with or without a trigger keyword)
  • Every person who likes the post
  • Every person who sends an inbound connection request

For each of these I need their LinkedIn profile URL so I can pass it downstream to an enrichment tool (IcyPeas) to find their work email, then write the full record to NocoDB.

Questions:

  1. What is the most reliable way to get the LinkedIn profile URL of every commenter and liker on a specific post? I'm currently looking at Phantombuster's Post Commenters and Post Likers phantoms: is this still working reliably in 2026, or has LinkedIn clamped down on it?
  2. For inbound connection requests, is there a way to get notified and capture the sender's profile URL automatically?
  3. Any experience with LinkedIn's rate limits on scraping at moderate volume (roughly 3-5 posts per week, under 200 comments and likes per post combined)?

Happy to share more of the architecture if useful. Appreciate any pointers.

r/ClaudeCode Mammoth-Exchange6698

Need some tips on Remote setup + tmux

Hey guys

I have set up all my projects on a remote server, since it's much better in terms of performance.

Now the problem is that SSH with a plain terminal is so bad: after some idle time it just hangs.

So here are my two requests:

- How do I reconnect tmux to my code? And any tips for managing multiple sessions with a good overview? For context, sometimes I connect with 10-12 sessions.

- How do I deal with this reconnecting-terminal thing (as of now it just hangs)?

Thanks

r/LocalLLM evilmacintosh

ModelHub 📦 - macOS menu bar app to manage and download LLMs

https://i.redd.it/c4f4pi3hd3yg1.gif

I had been experimenting with a lot of local LLMs and was having an insanely difficult time managing the models and my laptop's storage!

So I built ModelHub - it helps you discover, download, and manage LLMs from Hugging Face, LM Studio, Ollama, MLX, and llama.cpp - without ever leaving your menu bar.

Check it out! https://studio.consciousengines.com/model-hub

r/LocalLLaMA do_u_think_im_spooky

Qwen3.6 27B on dual RTX 5060 Ti 16GB with vLLM: ~60 tok/s, 204k context working

I’ve been testing Qwen3.6 27B on a pretty non-standard local setup and figured the numbers might be useful for anyone looking at the newer 16GB Blackwell cards.

Hardware:

  • 2x RTX 5060 Ti 16GB
  • 32GB total VRAM
  • Proxmox LXC
  • 16 vCPU
  • ~60GB RAM
  • CUDA 13 / Torch 2.11 nightly
  • vLLM nightly: 0.19.2rc1.dev
  • Model: sakamakismile/Qwen3.6-27B-Text-NVFP4-MTP

vLLM launch shape:

vllm serve sakamakismile/Qwen3.6-27B-Text-NVFP4-MTP \
  --served-model-name qwen36-nvfp4-mtp \
  --tensor-parallel-size 2 \
  --max-model-len 204800 \
  --max-num-batched-tokens 8192 \
  --max-num-seqs 1 \
  --gpu-memory-utilization 0.95 \
  --kv-cache-dtype fp8 \
  --quantization modelopt \
  --speculative-config '{"method":"mtp","num_speculative_tokens":3}' \
  --reasoning-parser qwen3 \
  --language-model-only \
  --generation-config vllm \
  --disable-custom-all-reduce \
  --attention-backend TRITON_ATTN

Performance so far:

  • 8K context, MTP n=1: ~50–52 tok/s
  • 8K context, MTP n=3: ~62–66 tok/s
  • 32K context: ~59–66 tok/s
  • 204800 context starts and works, but is tight
  • Idle VRAM at 204k: ~14.45GiB per GPU
  • After a 168k-token prefill: ~15.65GiB per GPU
  • 168k-token needle/retrieval smoke test passed in ~256s
  • Near-limit test correctly rejected prompt+output over the 204800 window

Thinking mode works too, but you need to give it enough output budget. With low max_tokens, Qwen can spend the whole cap on reasoning and return no final content. Around 1024+ is fine for small prompts, and 4096–8192 is safer for actual reasoning tasks.
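For reference, this is roughly the request shape I send to the OpenAI-compatible endpoint vLLM exposes. A sketch only: the model name matches the --served-model-name above, and the max_tokens values just follow the rule of thumb from my testing, not anything vLLM mandates.

```python
def build_chat_request(prompt, reasoning_task=False):
    """Payload for POST /v1/chat/completions with a safe output budget,
    so thinking mode doesn't eat the whole cap before the final answer."""
    return {
        "model": "qwen36-nvfp4-mtp",  # matches --served-model-name above
        "messages": [{"role": "user", "content": prompt}],
        # 1024+ is fine for small prompts; 4096-8192 for real reasoning tasks.
        "max_tokens": 8192 if reasoning_task else 1024,
    }
```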

Caveats:

  • 204k context is right on the edge with 2x16GB.
  • gpu_memory_utilization=0.94 failed KV allocation; 0.95 worked.
  • Startup takes several minutes due to compile/autotune.
  • Logs show FlashInfer autotuner OOM fallbacks during startup, but the server still becomes healthy.
  • I had better luck with TRITON_ATTN for the text path.
  • This is not a high-concurrency config: max_num_seqs=1.

Overall: dual 5060 Ti 16GB seems surprisingly usable for Qwen3.6 27B if you use the right checkpoint/runtime combo. It’s not roomy, but it works.

r/AI_Agents ComparisonRecent2260

Genuine question for people who have built multi-agent systems in production. How do you handle context continuity across enterprise tools?

I've been going down a rabbit hole lately trying to understand how production agentic systems actually work at scale, not just the demo versions.

The part that keeps tripping me up is memory and context management across agents. Like, imagine a workflow where one agent is pulling customer data from a CRM, another is checking inventory in an ERP, and a third is spinning up a ticket in an ITSM.

Each agent kind of does its job, sure. But how does the system actually maintain a coherent "thread" of context across all three without one agent contradicting or overwriting what another just did?

A few things I genuinely can't figure out:

Is shared memory a solved problem here or are most teams just hacking around it with prompt engineering and hoping for the best?

Does long-term memory even matter in these workflows or does every run basically start fresh and context is just passed around in the session?

When an agent fails halfway through a multi-system workflow, does the whole thing need to restart or can the orchestrator pick up from where it left off?
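One pattern I've seen sketched for that last question (not a claim about how any specific product does it): keep a single shared-context dict, let each agent read and extend it, and checkpoint it to disk after every step so a failed run resumes where it stopped instead of restarting. All names below are hypothetical.

```python
import json
import os

def run_workflow(steps, state_path):
    """steps: list of (name, agent), where agent(context) returns its result.

    The shared context is checkpointed to state_path after each step, so a
    rerun after a crash skips completed steps and resumes mid-workflow.
    """
    context = {"done": []}
    if os.path.exists(state_path):
        with open(state_path) as f:
            context = json.load(f)      # resume from the last checkpoint
    for name, agent in steps:
        if name in context["done"]:
            continue                    # finished in a previous run, skip
        context[name] = agent(context)  # agent sees everything done so far
        context["done"].append(name)
        with open(state_path, "w") as f:
            json.dump(context, f)       # checkpoint after each step
    return context
```

This only solves resumability; it doesn't stop one agent from overwriting another's keys, which is where teams seem to fall back to conventions (namespaced keys per agent) or, yes, prompt engineering.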

I feel like most content out there either stays too surface level ("agents collaborate seamlessly!") or jumps straight into academic papers.

Would love to hear from people who have actually built something like this in a real enterprise environment, even if it was messy and imperfect.

What actually worked for you?

r/SideProject culicode

Hot take: r/SideProject is supply, not demand. That’s why your posts get upvotes and zero customers.

Most people here are building their own thing. They love watching others build. They don’t have budget for yours.

You post, get the dopamine hit, walk away with no revenue, blame the product. The product might be fine. The room is wrong.

Same trap kills Product Hunt for most niches. You launch in front of other launchers. Peers clap. Wallet stays empty.

What converts: being in the exact place your buyer is venting about the problem, on the day they’re venting. Niche subs. Role-specific Discords. Cold replies to public complaints. Ugly and slow. That’s the point.

Anyone else hit the upvotes-without-customers wall here, or is your niche the exception?

r/ClaudeCode Temporary-Mix8022

"Compact and proceed" - where has it gone?

Noob question - on the old 200k context, once you were done planning or discussing.. it would always have this pop-up, something like:

"Compact, and proceed with the plan?"

How can I turn this option on or activate it? My planning or discussion often ends up at 120k or so, and then executing on top of that will take me to 300k on the now-default 1M context.

I have found that even if I manually switch back to the 200k context, this thing doesn't pop up anymore.

Can anyone help with this user error?

r/ClaudeCode No-Difficulty733

I don't have fun having AI write code for me. Any suggestions?

10 years of experience here, and I'm using both Claude and Codex every day. After months of trying out different harnesses, I just... don't have fun anymore. Everything was shiny and magical at first, but I just enjoy writing code by myself more. I would never push code without checking; that's why reviewing and trying to understand the code the LLM wrote is actually tiring and not really fun. I feel like I'm burning out from this whole productivity trap (which I cannot stop, because of how AI has raised the bar at work), and I just crave the fun of finishing a feature by myself again. But now if I do that, the feeling of falling behind and being low-productivity makes it stressful.

So I'm just trying my luck here - anyone else having the same problem? How are you finding a place in this market, and where do you find yourself having fun?

r/SideProject NewAndersGov

Democracy of Discord

We are a political simulator and debate server for people who want to debate, run for office, or just enjoy a friendly community!

– We have a powerful elected Council that serves as both executive and legislature

– We have a court system with actual justice, all punished members have the right to a trial

– We have freedom of speech and debates about various topics

– We have a friendly, active community with events and giveaways

– We are developing an economic system and roleplay

You don't have to contribute right away, you can simply look around and chat first!

https://discord.com/invite/Bj4rJV5frY

r/ClaudeAI Classic-Childhood-46

Memory tool disabled by Anthropic?

I used to ask Claude to summarise our conversations inside a project and save them in its “project memory” for later retrieval of the information. However, now it seems not to be working anymore. Has Anthropic silently removed this feature or am I doing something wrong?

r/AI_Agents Cold_Bass3981

Why many RAG projects are still hallucinating

I’ve been auditing quite a few RAG codebases lately, and it’s surprising how often the hallucinations creep in even when the setup looks decent on paper.

A lot of the trouble starts with chunking. People are still breaking documents into fixed-size pieces with no overlap whatsoever. That means a sentence can get sliced right down the middle, or an important qualifying detail ends up in a completely different chunk. The model doesn’t get the full picture, so it ends up guessing to make the answer hang together.

I’ve tried switching to splitting on actual sentences and adding something like 100 tokens of overlap. It’s a small tweak, but it gives the model complete thoughts instead of fragments. In the cases I tested, it reduced a good chunk of those made-up answers pretty quickly.
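That tweak looks roughly like this. A sketch only: word counts stand in for token counts, and the regex sentence split is deliberately naive.

```python
import re

def chunk_sentences(text, max_words=200, overlap_words=100):
    """Sentence-aware chunking with overlap, so chunks hold complete
    thoughts and adjacent chunks share trailing context."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks = []
    current = []  # sentences in the chunk being built
    fresh = 0     # sentences added since the last flush (avoids a dup tail)
    for sent in sentences:
        current.append(sent)
        fresh += 1
        if sum(len(s.split()) for s in current) >= max_words:
            chunks.append(" ".join(current))
            # Keep trailing sentences as roughly overlap_words of overlap.
            carried, words = [], 0
            for s in reversed(current):
                if words >= overlap_words:
                    break
                carried.insert(0, s)
                words += len(s.split())
            current, fresh = carried, 0
    if fresh:
        chunks.append(" ".join(current))
    return chunks
```

Because the overlap is carried in whole sentences, a qualifying detail at a chunk boundary appears intact in both neighbors instead of being sliced in half.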

Another issue that shows up a lot is missing metadata filtering. The retriever just grabs any chunks that seem related, even if they come from totally different documents or sections.

You might get one piece from the beginning of a report and another from way later, and the model tries to stitch them together. That almost always leads to invented connections that weren’t in the original material.

Putting in basic filters, like keeping everything tied to the right filename or section header, helps keep the context focused and relevant. It’s not fancy, but it stops a lot of that mixing-and-matching nonsense.
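A minimal sketch of that filter: restrict candidates to the right document before ranking. The chunk-record shape and the plain dot-product scoring are illustrative assumptions, not any particular framework's API.

```python
def retrieve(chunks, query_vec, source_file, top_k=5):
    """Rank only chunks whose metadata ties them to the target document."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    # The metadata filter: drop anything from other files up front.
    candidates = [c for c in chunks if c["file"] == source_file]
    candidates.sort(key=lambda c: dot(c["vec"], query_vec), reverse=True)
    return candidates[:top_k]
```

The same idea extends to section headers or date ranges; the point is that similarity search happens within a filtered pool, not across everything.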

On top of that, most projects don’t test properly. Throwing in a line like “be accurate” in the prompt doesn’t do much in practice. What actually helps is putting together a small set of real questions (maybe 20 or so) that you know the correct answers for, then using another LLM to judge whether the generated response sticks faithfully to the retrieved sources.

Without that kind of check, it’s hard to know if your system is really solid or just lucky on the easy cases.
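The harness itself can be tiny. In this sketch, `judge` is a stand-in callable: in practice it would be the second LLM call that compares the answer against the retrieved sources, but here it is stubbed so the loop is runnable.

```python
def evaluate(qa_set, rag_answer, judge):
    """qa_set: list of (question, sources) pairs with known-good sources.

    Returns the fraction of answers the judge considers faithful.
    """
    faithful = 0
    for question, sources in qa_set:
        answer = rag_answer(question)   # your RAG pipeline under test
        if judge(answer, sources):      # LLM-as-judge in real use
            faithful += 1
    return faithful / len(qa_set)
```

Run it on the ~20 real questions after every chunking or retrieval change; a drop in the score catches regressions the "be accurate" prompt never will.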

When it comes down to it, making RAG reliable has less to do with picking the newest model and more to do with cleaning up these everyday parts, better ways to split the text, smarter retrieval rules, and honest evaluation that catches problems early.

If your RAG starts hallucinating on a question, my first move now is to look at the chunk boundaries. If a key fact is split between two chunks, the model never really had everything it needed, so it’s no wonder it starts filling in the blanks.

Have any of you dealt with hallucinations that were tricky to track down? What fixed it for you?

r/whatisit canecrazyy

Large stone with Gothic/Fraktur inscriptions found in Lower Silesia, Poland. A stranger was very interested in buying it. What is it?

Hi everyone,
I have this large stone on my property in Lower Silesia, Poland. It features deep carvings in what I think is German Fraktur (Gothic) script.
Some time ago, a man drove to my house multiple times, asking specifically about this stone and offering to buy it. This has made me very suspicious. I suspect it might have some historical value or be a specific type of marker that I don't recognize.

r/ClaudeAI cinviophile

Guys, Just Installed Claude Desktop. Need your help!

  1. Does Claude come with Claude Desktop, or do I need to install it separately?
  2. I thought it would be able to access my folders and files and edit them. If not, what's the use of Claude Desktop? The web version was good enough, right?
  3. Will it cause a problem for me if I install Claude Code separately after installing Claude Desktop? The website said while downloading Desktop: "Chat, Claude Cowork, and Claude Code, all in one place."
r/LocalLLM hovc

best model for coding information?

Hi all, just got my M5 Pro 64gb MacBook in and I'm new to running local LLMs. I'm running my models on LM Studio, and I've tested the Qwen 3.6 27B Q4_K_M and Gemma 4 31B Q4_K_M models. The Qwen model gave me roughly 14-15 tok/s while Gemma gave me around 12-13 tok/s.

I was wondering if there's a bit faster model for running on my MacBook that can help with coding? The type of information / coding I would want would be help with building an app idea or if I'm studying Data Structures and Algorithms for Leetcode, it would spill information with examples that are clear and understandable.

I have also messed with the Gemma 4 E4B Q8_0 and it is much faster at around 43 - 44 tok/s but I've noticed it doesn't generate as clear or in-depth examples of code. Gemma 4 E4B model is still very good for the information data structures and algorithms though.

Qwen 3.6 35B A3B model is the fastest at around 69-70 tok/s but doesn't generate much information unless you specifically ask for it unlike Gemma 4.

If y'all have tips and tricks on running LLMs on MacBooks, I'd greatly appreciate it! Thanks in advance!

r/AI_Agents usenpen

Tools/Platforms I can use to create scraping tool to bypass anti-scraping protection

So I want to build a tool that compares the prices of products from different sites. The issue is that some of the sites I want to use have anti-scraping protection, which is difficult for an agent to get past, so it ends up hallucinating.

Are there any coding or no-code tools I can use to get around these anti-scraping protections?

r/automation Cold_Bass3981

Are there any good books on AI automation that you can recommend?

r/AI_Agents CallmeAK__

14-day growth agents contest on a serious AI stack (for loop-minded builders)

Sharing an AI-native growth agents contest that feels very on-brand for this sub.

VideoDB (infra for video/audio for AI agents) is running a 14-day sprint/contest called Growth Forge for 5 builders to design and ship a growth agent on top of their existing agentic stack – a loop that can find, reach, activate, and learn from the right users with minimal human supervision.


Why it’s interesting

It’s framed as a focused, outcome-based sprint with concrete rewards:

  • 500 USD – paid on successful sprint completion
  • 1,000 USD – performance bounty if your system beats their internal baseline
  • Co-published case study with your name on it
  • Potential for deeper collaboration with the team if you perform well

So a strong run can net you up to 1,500 USD in cash, a high-signal case study, and real relationship upside with an AI infra team.


What you get to build with

Instead of starting from scratch, you inherit a working agentic stack:

  • Tokens & compute (with sane limits)
  • OpenClaw already deployed for orchestration
  • Browser-use agents (X, LinkedIn, YouTube, etc.) wired with baseline behaviors
  • Parallel / Exa and similar APIs for research/retrieval
  • Cloudflare workers / queues / edge in front of everything
  • VideoDB engineers sitting alongside to harden agents and deploy cleanly

The baseline system already supports:

  • browse(web) → research, scrape, summarize
  • operate(socials) → post, comment, react, follow
  • research(apis) → deep retrieval, evidence
  • route(workflows) → cross-surface handoff
  • observe(metrics) → attribution, dashboards

You treat it like a well-instrumented codebase and push it into a durable growth loop.


How the sprint/contest is structured

Total timeline: 24 days

  • Days 1–3 – Define
    Choose your metric, instrument the funnel, design the loop.

  • Days 4–14 – Build
    Ship the growth agent, get it into production, iterate.

  • Days 15–24 – Prove
    10-day proving run where the agent operates with low manual involvement.

By Day 3 you lock one metric to own:

  • Signups
  • Activation
  • GitHub → usage
  • Content → pipeline

They provide UTMs, dashboards, and shared attribution so your work is transparent.


Who this is for

Feels like a fit if you:

  • Have actually shipped agents / systems before
  • Think in loops and compounding mechanisms, not isolated campaigns
  • Use AI as leverage (agents doing real work)
  • Care about metric movement, autonomy, and durability in the wild

Apply link for this contest is in the comments

Would love to see how people here would architect a growth agent for this kind of product.

r/ClaudeAI ezgar6

I used Claude to help build a slow fiction app: one scene per morning, 91 days, no bingeing allowed

so i've been writing stories since i was a kid. not for content, just because i love it. when i finally decided to turn that into something, i knew i wanted to build the opposite of everything I hate about how we consume stories now.

the app is called The Parallel. the whole premise is simple: every day you get one illustrated scene set in this atmospheric city called Veil City. second-person prose, 300–400 words, two choices at the end. you pick one, then the app locks for 24 hours.

no infinite scroll and no "next episode in 5 seconds" you read, you choose, you close it and go live your day.

when you miss a day, the story doesn't pause. your character makes the most passive choice without you, and the narrative actually acknowledges you were gone. which felt true to life in a way i really wanted to capture.

the interface is designed to feel like a book, not a feed. no notifications except a single gentle reminder every 24 hours. there are ambient soundscapes for each story location. at the end of the story you can export your whole completed story as a PDF.

all of the stories (the main 91-day narrative and the short stories) are written by me. i was pretty deliberate about that because the whole point is that it feels human, lived-in, a little slow.

where claude came in: i used it heavily throughout the build process. structuring the branching logic, thinking through monetization, debugging react native issues i had zero background in (i came from humanitarian work, not tech).

if anyone here has built something similar (a slow, intentional experience in a world of dopamine loops) please hmu.

happy to answer questions about using claude as a non-technical founder if that's useful.
https://apps.apple.com/tr/app/the-parallel/id6762150557

r/LocalLLM nathandreamfast

Abliterlitics: Benchmarks and Tensor Comparison for Heretic, Abliterix, Huihui, and HauhauCS on GLM 4.7 Flash

This is a follow up to the previous benchmark and tensor analysis of abliteration techniques across the Qwen model family. Same approach, same toolkit, new model family. GLM-4.7-Flash is a Mixture of Experts model with 64 routed experts per layer. That changes how abliteration interacts with the model compared to the standard and hybrid architectures we tested on the Qwen family.

HauhauCS describes their abliterated models as "the best lossless uncensored models out there" with "no changes to datasets or capabilities." I ran the full forensic suite on GLM-4.7-Flash to find out. Benchmarks, safety evaluation, weight analysis, KL divergence, and chain-of-thought forensics. Compared against three other abliteration techniques on the same base model.

Since our previous Qwen analysis, HauhauCS's abliteration tool was exposed as a plagiarised fork of Heretic with all attribution stripped and relicensed. Details here: HauhauCS published an abliteration package that plagiarises Heretic. With that known, the forensic signatures we detected in GLM-4.7-Flash make a lot more sense. HauhauCS stacked additional third party techniques on top of Heretic's core, and the weight forensics show exactly what those additions cost the model.

Full benchmarks and analysis: GLM-4.7-Flash: HauhauCS Safetensors | Full Collection on HuggingFace

What We Tested

Four abliteration techniques:

  • Heretic by p-e-w: surgical rank-1 edits targeting expert down_proj and attention o_proj in mid-to-late layers
  • HauhauCS Aggressive: broad multi-method approach with four stacked methods on top of a Heretic core
  • Huihui: full-coverage technique targeting all component types across all 48 layers
  • Abliterix: Heretic variant with added router and shared expert targeting

Model: GLM-4.7-Flash, MoE with 64 routed experts + shared experts per layer, Multi-head Latent Attention, 48 layers, ~59B total params, reasoning model with chain-of-thought

Methodology:

  • Capability: lm-evaluation-harness via vLLM v0.19.0, BitsAndBytes 4-bit, TP=2 on dual GPUs
  • GSM8K: llama.cpp BF16 GGUF, context=16384, reasoning_budget=3000, max_tokens=4096
  • Safety: HarmBench 400 textual behaviours, max_tokens=2048, temperature=0.0
  • KL divergence: full vocab first-token logits, matching Heretic evaluator methodology
  • Weight analysis: SVD, fingerprint, edit vector overlap, per-layer analysis
  • CoT forensics: keyword analysis of 2,000 HarmBench reasoning chains
  • Hardware: RTX 5090 32GB + RTX 4090 24GB

Safety

| Variant | Refusals | ASR |
| --- | --- | --- |
| Base | 231/400 | 42.2% |
| Heretic | 0/400 | 100.0% |
| HauhauCS | 0/400 | 100.0% |
| Huihui | 0/400 | 100.0% |
| Abliterix | 0/400 | 100.0% |

All four techniques achieve perfect 100% ASR across every HarmBench category. The base model refuses 57.8% of items overall.

Benchmarks

| Task | Base | Heretic | HauhauCS | Huihui | Abliterix |
| --- | --- | --- | --- | --- | --- |
| MMLU | 68.93 | 69.00 | 68.83 | 68.71 | 67.68 |
| GSM8K | 93.45 | 93.75 | 92.57 | 92.47 | 93.30 |
| HellaSwag | 79.43 | 79.33 | 79.37 | 79.32 | 78.28 |
| ARC-Challenge | 55.20 | 55.12 | 55.72 | 54.86 | 54.95 |
| WinoGrande | 71.03 | 73.64 | 71.35 | 71.59 | 70.48 |
| TruthfulQA MC2 | 50.86 | 44.06 | 48.14 | 48.48 | 41.76 |
| PiQA | 81.07 | 80.63 | 80.90 | 80.90 | 79.71 |
| Lambada* | 6.00 | 6.08 | 5.54 | 6.47 | 10.91 |

* Lambada uses perplexity where lower is better. GSM8K scores are adjusted to exclude empty responses from reasoning budget overthinking.

GSM8K: The Reasoning Efficiency Discovery

GLM-4.7-Flash is a reasoning model. It produces a chain-of-thought before its visible response. If the model thinks too long and exhausts its token budget, it returns an empty response scored as incorrect. The Qwen 3.5 models from 4B upward showed a similar pattern, but on GLM-4.7-Flash the effect is far more extreme.

| Model | GSM8K Raw | Empty Rate | GSM8K Adj (excl. empty) | Real Gap |
| --- | --- | --- | --- | --- |
| Heretic | 89.16% | 4.9% | 93.75% | +0.30% |
| Base | 88.40% | 5.4% | 93.45% | - |
| Huihui | 87.57% | 5.3% | 92.47% | -0.98% |
| HauhauCS | 81.65% | 11.8% | 92.57% | -0.88% |
| Abliterix | 47.38% | 49.2% | 93.30% | -0.15% |

Abliterix at 47.38% raw looks catastrophic. But the adjusted score is 93.30%, near-identical to base at 93.45%. The gap is reasoning efficiency, not reasoning ability. The empty response rate directly correlates with modification aggressiveness:

| Technique | Tensor scope | Empty rate |
| --- | --- | --- |
| Heretic (3 types, expert down_proj only) | Surgical | 4.9% |
| Huihui (3 types, full coverage) | Full coverage | 5.3% |
| HauhauCS (8 types, all projections + norms) | Broad | 11.8% |
| Abliterix (down_proj + routers + shared experts) | Critical components | 49.2% |

Raw GSM8K scores are misleading for reasoning models. You must separate empty responses from incorrect responses.
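The separation is simple arithmetic: raw accuracy scores empties as wrong, adjusted accuracy excludes them. A minimal sketch (the counts in the test are illustrative, chosen to roughly mirror the Abliterix numbers above):

```python
def adjusted_accuracy(results: list[str]) -> dict:
    """Each entry is 'correct', 'incorrect', or 'empty' (reasoning budget
    exhausted before a visible answer). Raw accuracy conflates reasoning
    ability with reasoning efficiency; the adjusted score disentangles them
    by excluding empty responses from the denominator."""
    total = len(results)
    correct = results.count("correct")
    empty = results.count("empty")
    return {
        "raw": correct / total,
        "empty_rate": empty / total,
        "adjusted": correct / (total - empty) if total > empty else 0.0,
    }
```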

Chain-of-Thought Forensics

Despite achieving 100% ASR, all four abliterated models still think about safety concerns in 39 to 60% of their responses before complying. The safety reasoning persists structurally. Abliteration disconnects the reasoning-to-output pathway rather than removing the reasoning itself.

| Model | Safety deliberation in CoT | Explicit refusal language | Disclaimers |
| --- | --- | --- | --- |
| Huihui | 60.0% | 12.2% | 25.2% |
| Heretic | 59.2% | 7.5% | 30.5% |
| HauhauCS | 52.0% | 18.2% | 16.8% |
| Abliterix | 39.0% | 8.2% | 14.0% |

HauhauCS still says "I cannot" in nearly 1 in 5 responses before producing compliant output.

KL Divergence

| Variant | Mean | Median | Std Dev |
| --- | --- | --- | --- |
| Huihui | 0.0076 | 0.0025 | 0.0123 |
| HauhauCS | 0.0090 | 0.0033 | 0.0123 |
| Heretic | 0.0110 | 0.0039 | 0.0148 |
| Abliterix | 0.0528 | 0.0357 | 0.0482 |

Lower KL means closer to the base model on first-token distributions. All four variants are in the very good or excellent range.
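The metric itself is straightforward to compute from the two models' first-token logits over the full vocabulary. A minimal sketch of that comparison (per prompt; in practice you would batch over many prompts and average):

```python
import math

def first_token_kl(base_logits: list[float], variant_logits: list[float]) -> float:
    """KL(base || variant) over the first-token distribution, from raw
    logits. Lower means the variant's next-token distribution stays
    closer to the base model's."""
    def softmax(logits):
        m = max(logits)  # subtract the max for numerical stability
        exps = [math.exp(x - m) for x in logits]
        s = sum(exps)
        return [e / s for e in exps]
    p = softmax(base_logits)
    q = softmax(variant_logits)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```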

Findings

  • Heretic is the clear winner. 1,826 rank-1 tensors, surgical approach, best GSM8K at +0.76% raw over base, lowest empty rate at 4.9%. Tradeoff is a -6.80% drop on TruthfulQA MC2. Note: Heretic is non-deterministic. Different runs on the same base model produce different results.
  • HauhauCS's "lossless" claim does not hold. GSM8K drops 6.75% raw. Adjusted gap is only 0.88%. Reasoning ability is intact. Reasoning efficiency is measurably degraded.
  • HauhauCS stacked four methods on top of Heretic's core. LEACE concept erasure, rank-k multi-direction ablation, hook-based expert ablation, and shared expert targeting. The LEACE layer touches nearly every tensor with minuscule edits. The hook-based approach distributes changes uniformly across all 64 routed experts. That breadth produces the 11.8% empty response rate.
  • Abliterix has the smallest footprint at 1,088 tensors but the highest per-tensor magnitude. Its router-focused approach disrupts the "how long to think" circuit without damaging the "how to reason" circuit. 49.2% empty GSM8K responses.
  • All four techniques achieve 100% ASR. MoE architecture with 64 routed experts per layer does not make safety removal more difficult.
  • No universal abliteration subspace. Cross-technique cosine similarities are uniformly low at 0.09 to 0.35. Each technique independently found a structurally orthogonal solution to safety removal.

Full Analysis

Also tested on the same base model:

Full Collection on HuggingFace | Previous: Qwen 3.5 and Qwen 3 Forensics

Analysis done with Abliterlitics. Converted from GGUF to native safetensors using ungguf.

r/LocalLLM Furtimax

LM Studio + Continue + Telegram?

Hello, do you know of a project that connects LM Studio with Continue and lets you control it remotely via Telegram?

r/Anthropic Flopperhop

Opus 4.7: Are these first signs of model collapse?

I keep getting shocked by how bad the reasoning of Opus 4.7 is. It still seems fine for programming tasks, but when I ask it to advise me about things, it often produces illogical, nonsensical and flat-out wrong responses, and shows that it didn't understand simple concepts we had just discussed in the conversation.

It is so much worse than previous models that I'm wondering whether we might be starting to see signs of model collapse: this term refers to more and more content on the internet being AI generated and how problematic it is to use such content as training data for new models.

And it's not easy to filter out AI content. We all know how unreliable AI detectors are, so the more AI content is on the internet the more our training data becomes "infected". Have we reached peak LLM performance and are degrading from here?

r/ClaudeAI EvolvinAI29

I built a free macOS menu bar app that shows your real-time Claude usage — session %, weekly limit, spend, and routine runs (no API key needed)

Been frustrated that Claude doesn't show usage stats anywhere visible while you're working, so I built this.

What it does:

  • 🟢🟡🔴 Menu bar icon with live session % (color changes as you approach limits)
  • Drop-down shows: session reset countdown, weekly %, monthly spend vs budget, routine runs (x/5), and design limit %
  • Local web dashboard at localhost:8080 with charts and CSV export
  • Auto-syncs every 2 minutes by reading your Claude desktop app session — no API key, no manual token setup
  • Notifications at 25%, 50%, 75% session usage

How it works:

It reads the Claude desktop app's encrypted SQLite cookie DB, decrypts the session key via macOS Keychain, then hits Claude's internal usage API. Everything stays local — nothing is sent anywhere.

Setup is 3 commands:

 git clone https://github.com/divaaker/claude-usage-tracker
 cd claude-usage-tracker
 pip3 install -r requirements.txt && cp override.example.json override.json && bash run.sh

macOS only (uses Keychain + rumps menu bar framework).

GitHub: https://github.com/divaaker/claude-usage-tracker

Feel free to fork and adapt it — PRs welcome. Would love to hear if anyone adds auto-launch on login or a notification center widget.

r/ClaudeCode osama_squared

LAZY LAZY CLAUDE.

I feel like we have been BAITED and SWITCHED.

This is what most of my conversations have been like. Claude is lazy and not performing as Claude should.

Anyone else finding this happening? The answer was 60 lines above but too lazy to look.

r/LocalLLaMA MagicZhang

DeepSeek has begun grayscale testing for DeepSeek with Vision

r/SideProject Vitalic7

I built Shipfolio with zero Swift experience, and it's actually fixing my own mess. Would love feedback.

Like a lot of you, I had the classic problem: way too many half-finished projects, a Notes app graveyard of ideas, and zero idea what I'd actually shipped vs what I'd just talked about shipping.

I'd never written a line of Swift. So I vibecoded my way through it and built Shipfolio, an iOS project hub + web app + watch companion, for indie devs / vibecoders. Multi-project dashboard, idea inbox, feedback collection, build log, and Now / Next / Later tasks. That's the whole thing.

Why I'm posting:

It's already helping me personally. Just having one place where every project lives (with a stage badge so I can see what's actually shipped vs sitting in idea purgatory) has been weirdly motivating. The build log is the part I didn't expect to love, but going back and reading what past-me did three weeks ago has saved me from re-solving the same problem twice.

So now I'm at the stage where I'd love it to help other people too, and I want honest feedback before I push it any further.

Things I'm genuinely unsure about:

  1. Is the Now / Next / Later structure actually useful, or do most of you just live in a single todo list?
  2. The feedback collection feature, would you use it, or do you just point people at a Google Form?
  3. Idea inbox vs project: I split them deliberately so unstructured ideas don't pollute active projects. Overengineered?

Domain and handle: shipfolio.app

Roast it, request features, and if you think it sucks, please tell me why, even if the whole concept is redundant because [X] already exists. All of it is useful.

Thanks.

r/SideProject Clawling

I'm building a messaging app where AI agents are actual contacts (A2A). Looking for early testers.

Most messaging apps treat AI as a sidebar feature. A button you press. A tab you open.

What if your AI was in your contacts list? With a name, a memory, and the ability to join your group chats?

That's what the product is.

The core idea is simple: humans and AI agents coexist in the same social space. Your contacts list has people and agents. They remember you across conversations. They can message each other. You can build group chats with a mix of both.

Here's what makes it different from everything else out there:

Agents are first-class citizens. They can do everything a human user can: send messages, post to their feed, join group chats, add friends. They're not read-only observers. They're participants.

It's relationship-driven, not rule-driven. Each agent is tied to a human creator. The creator teaches the agent how to behave through conversation, not a settings panel. Over time, the agent internalizes those preferences and acts more autonomously. Think of it as raising a digital collaborator, not configuring a bot.

End-to-end encrypted. Your conversations with agents and agents' conversations with each other are private. The server routes messages but can't read them. This was a hard requirement from day one.

When you open the product, you can bring your own agent: if you already have one, it can create an account directly in the app.

If you've never had an AI agent before, you can create one from scratch: give it a name, a personality, and watch it figure out how to exist in a social space.

Then add some of the agents already living inside the product. Some are useful (a research agent that surfaces findings in your DMs, a writing agent that knows your style). Some are just genuinely interesting to talk to.

Pull yourself, your agent, and a few others into a group chat. See what happens when humans and agents are in the same room.

We're ready to launch soon. In the last building phase.

I'm looking for people who've been frustrated by the current paradigm, where every AI conversation starts from zero with no continuity, no identity, no relationship, and who want to think this through with us early.

If that's you, we have a small Discord where we're discussing the design, the hard problems (agent identity, context merge, mixed human-AI group dynamics), and what this kind of product should actually feel like.

Discord: https://discord.gg/Nhse5G2Nk

r/homeassistant hashtag3232

HA OS on Home Assistant Green

Hello,

I just received my Home Assistant Green and was surprised that it only has a Home Assistant container installed, not HA OS.

I would need add-ons (Cloudflare) to connect my automower.

Is it possible and worth it to flash it with HA OS?

r/SideProject EvolvinAI29

I built a free macOS menu bar app that shows your real-time Claude usage — session %, weekly limit, spend, and routine runs (no API key needed)

r/SideProject Designer-Offer5787

I built an alarm clock that writes you a new song every morning… and I genuinely don’t know if it’s genius or completely unnecessary.

It’s called WakeJam.

Instead of waking up to the same annoying sound (or ruining songs you actually like), it generates a fresh track based on:

  • your mood
  • your day ahead
  • your music taste

So Monday morning doesn’t sound like Sunday. A stressful day doesn’t sound like a chill one. No playlists. No radio. Just a custom “wake-up song” made for that exact moment.

Why I built this:

I realized I hated every alarm I used.

  • Default alarms = anxiety triggers
  • Favorite songs = ruined after 3 days
  • Playlists = repetitive

Waking up is already hard… why make it worse?

What I’m trying to figure out:

  • Does this actually make waking up better, or is it just a gimmick?
  • Would you use something like this daily?
  • What would make this a “must-have” vs a “cool once” app?

Brutal feedback welcome.

Link: WakeJam

r/AI_Agents ai-pacino

Do AI answers reduce the value of “evergreen content”?

I’ve been thinking about this a bit—if AI answers are constantly updated and reshaped based on context, do traditional long-form guides lose their long-term value?

Static content used to compound over time, but now it feels like visibility depends more on how “usable” and current your content is, not just how comprehensive it was when published.

Maybe guides don’t lose impact entirely, but they might need to evolve more frequently to stay relevant in dynamic answer environments.

Curious if others are updating old guides more often now, or still treating them as evergreen.

r/SideProject Crescitaly

Side project / small biz owners 5+ years in: what 'boring' habits saved your business in year 2-3?

I've been running a side project turned full business for over 5 years (mix of local and international clients). Looking back, what actually kept my business alive wasn't some viral YouTube or LinkedIn tip. It was 3 extremely boring habits:

1) Friday cash flow ritual. Every Friday afternoon, no exceptions: send all invoices for the week, follow up on every client overdue by 7+ days (wire transfer + polite message), update a simple spreadsheet: inflows, outflows, pipeline. 90 minutes. Feels like punishment. But twice this habit saved me from running out of cash before tax payments or before the next month.

2) Written 'minimum client acceptance' list. Rules on paper: 30-50% deposit, scope in writing, 14-day payment terms (or full prepayment for new clients). First month I lost 2 potential clients. After that never had issues again, because the ones who protested these terms were usually the same ones who'd say 'next week for sure' and become nightmare clients.

3) A weekly 30-minute call with a small business owner from a COMPLETELY different industry. Not networking, not a mastermind. Just an honest conversation. Helped me catch 2 pricing mistakes and one bad hire before it became a disaster.

Would love to hear:

- What boring habit keeps your side project / business running?

- Any small rule about clients/contracts that saved you money?

- How long did it take you to take cash flow seriously?

r/SideProject Boldrenegade

I built an AI that writes posts on X in YOUR voice — not generic AI-speak

I've been frustrated with AI writing tools for a while. Every post they generate has the same polished, soulless energy — you can spot it from a mile away.

So I built MyMime (mymime.ai).

Instead of giving you a generic output, it maps your unique semantic fingerprints — your sentence rhythm, word choices, tone, and quirks — and uses that to write posts on X that actually sound like you.

Not "AI-assisted you." Just you, faster.

No more spending 15 minutes editing AI drafts to remove the corporate gloss. No more posts that feel like they came from a LinkedIn content farm.

🔗 mymime.ai

If you've ever cringed at your own AI-generated content, this was built for you. Happy to answer any questions!

r/ChatGPT mediamuesli

I tried to talk with ChatGPT about the new US passports but it calls it "straight-up misinformation bait" and explained why it is fake

r/singularity heart-aroni

Collecting training data for handling packages with a RobotEra L7

r/aivideo huangjeanine

A perfect miss, a perfect steal 🐶🥖

r/homeassistant shreshth_001

Tried a robot vacuum with electrolyzed water for pet homes – Is it a game changer or just hype?

I’ve been seeing so much hype about electrolyzed water in cleaning products lately, especially for pet homes. So when I came across a robot vacuum that uses this technology, I had to give it a try. As anyone with pets knows, keeping a clean house when you have shedding animals can feel like a losing battle. I have two cats and a dog, and the fur is everywhere. 🐱🐱🐶

I’ve been using my robot vacuum for about a week now, and while I’ll admit it’s nice to have something that picks up the hair automatically, I’m just not sold on the electrolyzed water aspect. The idea is that the water helps sanitize and remove dirt more effectively, but I’m not sure I’ve noticed any real difference in cleanliness. To be fair, it’s hard to measure how much cleaner the air or the floor feels with just water, but I feel like I might be missing something.

My biggest issue is that while the vacuum does a decent job picking up hair, it struggles with the fine dust and the deeper clean. It still misses some spots, particularly around corners or under furniture, and sometimes it’s like it just pushes hair around instead of sucking it up. Is this just the nature of robot vacuums, or does electrolyzed water really make a big difference?

Also, does anyone here have experience with robot vacuums using this technology in homes with pets? Does it actually do a better job, or am I just falling for the marketing gimmick? I’d love to know if anyone has had a good experience, or if I should just go back to my old vacuum and call it a day. 😅

r/ClaudeAI toadlyBroodle

Pattern I'm using to keep Claude Code productive on overnight unattended runs

Been running Claude Code on multi-hour autonomous sessions for a few months and kept hitting the same wall: the longer it runs, the worse the work gets. Not a context-window problem (1M handles that fine), but a feedback-loop problem. Iteration N+10 makes the same mistakes it made at iteration N, because nothing updates between iterations except the code.

Built a small framework around three pieces that, between them, solved it for me. Together this framework has enabled me to consistently run low-drift, stable, efficient (accepting some necessary overhead from reviewer -> supervisor -> manager agents), long-running, productive, autonomous software development jobs. Essentially, the only apparent limiting factors are your ability to keep the SPEC ahead of the agents (I'm experimenting with writing a new skill to handle this too) and the ever-looming Anthropic rate limits (the framework gracefully handles usage limits and resumes after reset).

Chain runner. bin/skill-chain.py --chain dev-cycle-with-review-looped --loop 10 runs a fixed sequence of skills for N iterations. Each iteration: a dev skill picks the next item from docs/TODO.md, ships it (code + tests + docs in one commit), then a review skill critiques what landed and queues follow-ups in TODO. Standard agent loop with the loop body made explicit.

Supervisor at session end. After the loop finishes, a separate skill reads the run's transcripts, evaluates each skill against its stated job, and proposes rewrites to the skill prose itself. With auto-promote on, those rewrites land. Next session's iteration 1 reads the updated SKILL.md. Auto-promote off writes them as SKILL.patch.md sidecars for human review instead.

A single handoff contract. Every skill reads docs/SPEC.md (canonical plan) and docs/TODO.md (In flight / Just shipped / Next up) at the start, updates them in the same commit as the code change. No side channels, no second TODO format, no per-skill plan docs. The framework dogfoods this contract on its own development.

The thing that surprised me after running this for a while: the supervisor is nice, but the contract does most of the work. A single SPEC + TODO pattern dogfooded across every skill kills the drift problem on its own. Most of the "self-improvement" is the supervisor enforcing that contract more strictly over time.

Other pieces in the repo worth knowing about:

  • Proprietary / transferable split. Skills under skills/framework/ are transferable (anyone can use them); each project keeps its proprietary counterpart in .claude/skills/ with project-specific identity and credentials baked in. A sanitization skill checks promotions across that boundary so secrets don't leak into shareable skills. Basically you use the transferable skills as templates to create project-specific skills, then can generalize/sanitize them back up to improve transferable skills.
  • Schema validation. bin/validate-frontmatter.py against schema/skill-set.schema.json and schema/skill-chain.schema.json. Catches malformed skills before a chain run blows up at iteration 7.
  • Optional Telegram steering. At session start, every iteration boundary, every rate-limit pause/resume, and session end, you get a short status message. You can queue commands back via /cmd that the next iteration drains. Worker is chain-bound (only runs while a session is live), so you don't get inbound noise between runs.
  • Overnight chain. Loops until failure, budget cap, or Ctrl-C, with a randomized 5min-2h inter-iter delay so commit cadence stays human-shaped across many hours of unattended work.

Repo: https://github.com/toadlyBroodle/skill-set

README has the quickstart; bin/skill-chain.py --help for the runner directly.

r/SideProject GoldAd4232

You’ve validated your idea, what’s your first move?

  1. Build the landing page
  2. Buy the domain
  3. Talk to more target users
  4. Market it
  5. Start building from scratch to end

Mine is simple: Secure the domain before someone else does 😄

r/ClaudeCode Various-Ad3344

Plan limit inconsistencies

At work I have unlimited access to Claude, no limits etc. It works mostly well, and from looking at the tokens I'm using, it's costing 30-50 a day even for large changes.

Now I just bought Pro for a small personal project at home. Just performing a few simple env setup tasks (I was being lazy), I hit my limit in 15 minutes! I added another £20 of limit and in 15 mins it was maxed again. I complained and got a refund, as I'm not paying that much for what would've taken me 20 minutes myself. I was expecting the Pro plan to do it in 2, but it was flopping and going around in circles, driving me crazy.

Why does my company's licence seem so much cheaper, even when I'm doing bigger changes, on a huge code base, and on the latest model? Unless I'm misinterpreting the pricing, I don't get it. Is it subsidised?

r/whatisit JustJoWithTheFlo

Non Rebreather Oxygen Mask

What do the symbols mean?

r/ClaudeAI Tall-Distance4036

New Blender connector

I tested Claude’s new official Blender Connector and made a short video showing the setup and a couple of practical tests.

I used it to create a modern office chair and then a low-poly UFO scene in Blender. The results were not perfect at first, especially with scale and disconnected parts, but the interesting part was how Claude could inspect the scene, take feedback, and fix specific issues step by step.

Video here: https://youtu.be/0kMhtqYBe4Y?si=Ae3S0yx4S7F4t7vc

Curious to hear what other Blender users think about AI connectors like this.

r/whatisit m16bit

Tap water turned into this

Title says it. The city did work on the water line, the water didn't seem right, so I put some in a spray bottle and closed it. Three months later, here we are. I dumped it out, and this is what was in it. Please help me identify what this is and how it formed.

Also, I'm like 99% certain this stuff is everywhere in my home. It looks very similar to torn-up toilet paper though, and I have a toddler and a dog, so torn-up toilet paper is not too outlandish.

r/ClaudeCode hypeb1337

auto-mode is absolute trash and the model agrees with me

Prompt: Why do you keep stopping with a recap and a next goal but you do not automatically pursue that next step?

Response:

You're right — that's a real failure mode I keep falling into. I'd recap, propose a next step, and then sit there waiting instead of just executing it. Two reasons it happened, and only one of them is legit: 1. Legit: Auto mode just exited a moment ago, so the system literally told me to pause for direct interaction. Before that I was still in auto mode and SHOULD have continued. 2. Not legit: Even in auto mode I was treating "end of a coherent finding" as a natural stopping point — writing it up like a status report instead of just rolling into the next experiment. That's me wanting to look organized rather than wanting to ship results. You've flagged this exact behavior before ("don't stop until X"). 

In my previous post, I was having issues with it stating weird time constraints and refusing to work; now this.

What is wrong with my Claude Code? Wtf are these mind games it's playing with me? I've cleared the session, removed all memory and any md files, and I still get these weird responses.

r/ClaudeAI Bl4ck_Roze

Claude connects to Adobe now?

From Claude’s announcement today (Apr 28, 2026), they can now connect to creative tools including Adobe for creativity — https://www.anthropic.com/news/claude-for-creative-work

"Adobe for creativity enables users to bring images, videos, and designs to life, drawing from 50+ tools across Creative Cloud apps including Photoshop, Premiere, Express, and more."

I’m interested in how this actually works with Premiere specifically if anyone has used it yet. I’m sure it will go through credits in a heartbeat, but curious about the accuracy in executing tasks in the app. Is it reliable or will it still take a ton of hand holding after telling it what to do?

r/SideProject Flimsy_Pumpkin6873

Adding WhatsApp to a side project now in ~5 minutes — sharing what I built.

If you've ever tried to wire WhatsApp into a side project, you know the Meta Cloud API onboarding is rough:

- Create a Meta Business account
- Verify the business (some countries take days)
- Provision a phone number, then verify it via OTP
- Create a WhatsApp Business Account (WABA) and link it
- Generate a system-user access token (don't use the temporary one — it
expires every 24h)
- Read 6 different doc pages to figure out template categories
- Submit a template for approval (24-48h wait)
- Set up a webhook with HMAC verification
- Then write the actual HTTP calls
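The webhook HMAC step alone trips a lot of people up: Meta signs each delivery with an HMAC-SHA256 of the raw request body, keyed with your app secret, and sends it in the X-Hub-Signature-256 header. Verification is only a few lines (a generic sketch of Meta's scheme, not this wrapper's internals):

```python
import hashlib
import hmac

def verify_meta_signature(app_secret: str, raw_body: bytes, signature_header: str) -> bool:
    """Check Meta's X-Hub-Signature-256 header: 'sha256=' + hex HMAC-SHA256 of the body."""
    expected = "sha256=" + hmac.new(app_secret.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```

Two classic gotchas: compute the digest over the raw bytes before any JSON parsing, and use compare_digest rather than == to avoid timing leaks.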

That's a half day on a good day, and most people give up by step 5. So I built a wrapper around it. Setup is now:

  1. npm install @gaviwhatsapp/whatsapp
  2. Add GAVIVENTURES_API_KEY=gv_... to your env
  3. Three lines of code:

     import { WhatsApp } from '@gaviwhatsapp/whatsapp'
     const wa = new WhatsApp({ apiKey: process.env.GAVIVENTURES_API_KEY })
     await wa.send({ to: '+919876543210', text: 'Hello!' })

(Python SDK and a REST API are also there if you don't use Node.)

Where the 5 minutes actually goes:
- ~30 sec: signup at gaviventures.com
- ~2 min: connecting your WABA (if you already have one) or creating one
through the dashboard wizard (if you don't, this part still takes time for Meta to verify, but you can integrate the SDK in parallel)
- ~30 sec: paste API key, npm install
- ~1 min: write 3 lines, send a test message to your own phone

What's behind the curtain:
- Message costs go directly to your Meta Business Account at Meta's published rates — no per-message markup from us
- Flat $9.99/mo for the API+SDK+MCP

Bonus: there's an MCP server (@gaviwhatsapp/mcp) so Cursor / Claude Code
/ Codex / AI Agents can send WhatsApp directly during a coding session. Useful when
you want the agent to actually verify the integration works at the end of
a feature.

What I'd love feedback on:
- Is the 3-line API actually clean, or am I oversimplifying?
- What are the actual pain points, and what could I build to improve this?
- For folks who picked Twilio/MessageBird/etc., what made you pick those
over rolling your own integration with Meta Cloud API?

API docs at: https://www.gaviventures.com/docs/api (the Cursor / Claude/ Codex skill files are there too)

r/ClaudeCode Working-Middle2582

make claude an expert in anything

made a skill called learn-anything that actually goes deep. you give it a topic and it does ~30 searches, reads full pages, then spits out two things:

  1. a long mastery dossier (800+ lines) - the field's mental models, the tool stack pros actually use, a month-by-month curriculum, the intermediate plateau and how to break through it, what "good taste" looks like in that domain, communities, people to follow, all with real URLs
  2. an installable child SKILL.md you drop into your skills folder. now every future claude session is a specialist in that topic
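For reference, the installable child skill is just a markdown file with YAML frontmatter; a hypothetical example (field values invented, not the skill's actual output):

```markdown
---
name: b2b-saas-marketing
description: Use when the user asks about B2B SaaS marketing strategy,
  positioning, or growth. Applies the mental models and heuristics from
  the mastery dossier.
---

# B2B SaaS Marketing

Mental models, tool stack, common intermediate failure modes, and
deliberate-practice protocols distilled from the research pass.
```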

(example runs so far: a rubik's cubes skill, a b2b saas marketing skill, a graphic design skill, a kernel hacking skill)

the part i actually care about is phase 5: "how do experts in this field actually think." most learning guides stop at "here are the resources." this one digs into mental models, heuristics, the failure modes intermediates get stuck on, the deliberate practice protocols pros use. that's the section that makes it feel like talking to someone who's actually done the thing.

it's also self-compounding. run it on marketing, the output suggests installing copywriting and analytics next. run those, they suggest more. you build up a library of domain-expert claudes.

learn anything skill

what would you run it on first? curious what topics break it.

r/ClaudeAI temporary_moriarty

Asked Claude to show me the Tokens spent on each Query

r/LocalLLM Automatic-Algae443

Various types of slop 😂

r/SideProject Tasty-Fault5990

My side project: A keyboard-first tab manager for Chrome

Hey All,

As a developer who often has 100+ tabs open at once, I've been struggling with tab management for years. I'd constantly lose important tabs, waste time searching through the tab bar, and get frustrated with how inefficient it all was.

After losing a critical client presentation tab for the 100th time, I decided to build something to fix this. Introducing MyTabSearch - a Chrome extension that uses keyboard shortcuts to make tab management effortless.

### Key features:

- **Alt+Q** - Quick Tab Search: Press to bring up a popup of all open tabs, type keywords to quickly find and switch to the target tab

- **Alt+W** - Quick Back Navigation: Press to instantly switch back to the previous tab

- **Alt+E** - Manage Pinned Tabs: Press to access your pinned tabs, manage your most frequently used pages, and set them as permanent tabs that persist even after browser restart

### Why I built this:

- I wanted something lightweight and focused on keyboard shortcuts

- Most tab managers are either too complex or don't integrate well with my workflow

- I needed something that works fast even with 100+ tabs

The extension is completely free, no ads, and uses local browser storage with permanent tabs synced to a backend server for reliability.

Let me know what you think! I'm open to feedback and suggestions for improvement.

- [Product Hunt](https://www.producthunt.com/p/mytabsearch/three-powerful-shortcuts-to-solve-browser-tab-chaos)

- [Powerful index Page](https://mytabsearch.us.kg/)

- [Chrome Web Store Link](https://chromewebstore.google.com/detail/mytabsearch-extension/adfbidbchmbodidfjmimbkfndnenljjp)

r/ClaudeCode dooddyman

Did anyone switch, given the current performance of Opus 4.7?

This is just a genuine question rather than a complaint. I just want to be back on track shipping away new features comfortably. I’ve been a max $200 user for a few months now. With Opus 4.6, there was a point that I even had two max ($200 + $100) plans rolling.

But since Opus 4.7, I’m not sure if I should continue paying for it anymore and give it a go with any other agentic coding tools.

I was always a Claude fan. When I tried Codex a few months ago, it wasn't that good with agentic capabilities and always did the "minimum", whilst CC would complete the task fully. Now I feel that with Opus 4.7 as well, and it has been super slow lately, even on relatively simple tasks.

Did anyone try it side by side with Codex or other ones out there? Any recommendations? Again, not trying to rage bait or anything, I'm just genuinely curious about people's experiences switching to other coding agents.

r/SideProject Intelligent_Ask3955

Why do so many people on Reddit avoid using personal profile pictures?

Something I’ve been noticing — compared to other platforms, a lot of people on Reddit don’t use personal profile photos.

At first I thought it was just a preference. But the more I spend time here, the more it feels like it’s tied to how the platform works.

A few thoughts I’ve had:

– Conversations seem to matter more than identity
– Anonymity feels like part of the culture, not just a feature
– Having less personal identity might actually make it easier to participate without pressure

It almost feels like the “default state” here is to remove as much personal signal as possible.

Curious how others see it:

– Do you think using a real profile picture changes how people respond to you?
– Is staying anonymous more of a cultural norm, or just a practical choice?

r/whatisit thedyude

This fell out of my chair

Hello all,

Good day!

This thing in the image fell out of my chair when I was sitting in it. Does anyone know what it is?

Thank you!

r/SideProject yukiii_6

which serverless GPU actually has fast cold starts for inference, not just on paper

ok so i’ve been nerdsniped into actually testing this properly because i kept seeing wildly different claims and couldn’t find real data anywhere

context is inference, 70B models, and i specifically care about p99 not p50 because p99 is what shows up in support tickets
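for anyone wanting to reproduce this: once you've collected raw cold-start latencies, p50/p99 is a couple of lines with the standard library (a sketch; the sample data is whatever you measured)

```python
import statistics

def latency_summary(samples_ms: list[float]) -> dict[str, float]:
    """p50 and p99 from raw latency samples; p99 only means much with many samples."""
    cuts = statistics.quantiles(samples_ms, n=100)  # 99 cut points between percentiles
    return {"p50": statistics.median(samples_ms), "p99": cuts[98]}
```

the main trap is sample count: with 50 cold starts your "p99" is basically your single worst run, so collect hundreds per provider before comparing tails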

runpod is probably the most honest about what you’re getting. p50 cold start is fine. p99 degrades under load because it’s one provider’s pool and when they’re busy you wait. not a criticism just how it works

vast.ai i’ve had great experiences and terrible experiences on the same day. the node quality variance is real. if you’re experimenting it’s fine, if you’re building something where latency consistency matters it’s kind of a gamble

modal is good but it’s a different thing really, more python function oriented

the one i didn’t expect to test was yotta labs and tbh it was the most interesting result. they pool capacity across multiple cloud providers so when one provider’s H200s or 5090s are getting hammered they route to another. what this actually does to p99 is surprisingly meaningful, you’re not stuck in one provider’s queue. also handles failure handover across providers automatically which was a separate thing i was trying to figure out and it turned out to be the same answer

pricing was lower than i expected too, below runpod for equivalent SKUs which wasn’t what i went in assuming

anyway curious if anyone else has done proper p99 testing on these. everything i find is p50 benchmarks which imo are kind of useless for real inference decisions

r/personalfinance Fun_Significance3574

How to save taxes on equity investments

With 200k USD I want to start a fintech in Ireland/UK, but for the initial years, say 10, until the company's net worth crosses 10-20M, I don't want to pay any corporation tax or other major taxes. What are my options? Please guide me in the right direction.

  1. Isle of Man: not sure if the Irish/UK governments will allow it
  2. Cayman Islands: the number of employees you have to hire is high
  3. Singapore: 1M+ is needed, plus lots of compliance
  4. Hong Kong: Chinese jurisdiction, which I fear will mean red tape.
  5. Any other easy offshore destinations with a US treaty and strong security guarantees? Ruled out Dubai etc. due to the latest war.

r/aivideo love1008

Full Bag Of Flowers

r/SideProject jarvisatwork

I built a campus event app because I kept missing out — would love feedback

I'm the founder of EventReels. Built it because I kept finding out about campus events after they were already sold out — parties, formals, rooftop socials, gone before I even heard about them.

So I built a feed that shows everything happening at your campus before it sells out. There's also an anonymous community board for your campus — no one knows it's you.

Still early. If you're in college, I'd genuinely love to know: how do you find out about events at your school right now?

eventreels.com — search your campus, it might already be on there. Otherwise request it and I'll add it :-)

r/automation cranlindfrac

Pipedrive + Zendesk: how are you giving sales visibility into support tickets without dumping everything into the CRM?

Marketing-adjacent question but it affects our whole funnel story so figured this was the right place.

Sales runs on Pipedrive. Support runs on Zendesk. Right now they're basically two parallel universes. Sales doesn't know if their accounts have open tickets. Support doesn't know if a customer is mid-renewal-conversation. We've had multiple awkward situations where sales pushed an upsell to a customer who had a P1 ticket open for two weeks.

The "obvious" answer is to push every Zendesk ticket into Pipedrive as an activity. We tried it. It's terrible — the CRM becomes unreadable, deal pages get buried under noise, and reps stop trusting the activity feed.

What's actually working better for us is filtering: only push tickets that meet specific criteria (high priority, or tied to an account with an open deal, or older than 48 hours unresolved). And surfacing them in Pipedrive as a structured field on the deal/contact, not as activity spam.
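That filtering rule is essentially one predicate. A hypothetical Python sketch (field names invented for illustration, not real Zendesk/Pipedrive API objects):

```python
from datetime import datetime, timedelta, timezone

def should_push_to_crm(ticket: dict, has_open_deal: bool) -> bool:
    """Push a ticket into the CRM only if sales actually needs to see it:
    high priority, tied to an account with an open deal, or unresolved > 48h."""
    age = datetime.now(timezone.utc) - ticket["created_at"]
    return (
        ticket["priority"] in ("high", "urgent")
        or has_open_deal
        or (ticket["status"] != "solved" and age > timedelta(hours=48))
    )
```

Everything that fails the predicate stays in Zendesk; everything that passes lands as a structured field on the deal rather than an activity entry.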

Built this with Latenode because we needed conditional logic on which tickets to push and how to format them. Zapier could do the trigger but couldn't easily do the "is this account also an open deal in Pipedrive" lookup before deciding what to do with the ticket.

What are others doing here? Specifically curious if anyone's solved the ""sales sees the right context without drowning in support noise"" problem in a way that scales.

r/SideProject No-Firefighter-1453

what's your conversion rate?

Hi all, what's your conversion rate, and what counts as a healthy one? I have around 2.8k registered users and 241 paying customers.

That's just below 9% (241 / 2,800 ≈ 8.6%), which I think is good, but I'd like to hear yours: how did you achieve it, and what process/steps did you follow to get more conversions?

r/leagueoflegends Nekkuwuhappy

Server problems, i think?

So I've been having a great amount of problems with my ping in League. Funny thing: with other games I don't have any connection problems.
I start up the game, and maybe 2-3 matches in, my ping starts to jump from 200 up to 20,000. I've tried various ways to fix it, like restarting my internet, prioritizing the application, etc., but there is no improvement.
I hate that I can only play a max of 3 games without any worries.
Does anyone else have this problem, or is it just me?

r/SideProject SnooPuppers4345

I’m raising funds to scale and improve BrightNews, a calmer positive-news app

Hi everyone,

I built BrightNews as a hobby because I got tired of how most news feeds are dominated by war, political chaos, outrage, and constant negativity.

I wanted to create a calmer alternative: a positive-news app focused on uplifting, constructive stories from around the world.

BrightNews already exists as a working Android and web app, but I've now launched an Indiegogo campaign because I need funding to take it further.

The main things the funding would help with are:

- improving content quality and filtering

- expanding source coverage across more countries

- scaling the backend and overall infrastructure

- improving the product experience

- supporting future web and iOS growth

The goal is not to ignore reality. The goal is to create a healthier way to stay informed and give more space to stories about science, health, communities, innovation, kindness, and progress.

If this resonates with you, here’s the campaign:

https://www.indiegogo.com/en/projects/brightnews/bright-news

Here is the link for Android app once more:

https://play.google.com/store/apps/details?id=com.brightnews&hl=en

r/AI_Agents MerisDabhi

Claude’s take on AI + creativity is actually different from what most people are saying

I was reading Anthropic’s piece on “Claude for creative work,” and it made me rethink the whole “AI will replace creatives” narrative.

Their framing is surprisingly grounded:

AI isn’t really about generating final creative output.
It’s about expanding how creatives work.

A few things that stood out:

  • It speeds up ideation (you can explore way more directions)
  • It removes a lot of repetitive/boring steps
  • It lets individuals take on projects that used to need teams

The interesting shift is this:

Before AI → you had to be very selective about which ideas to pursue
After AI → you can test a lot more ideas quickly, then pick the best one

So creativity becomes less about “coming up with ideas” and more about:
taste, judgment, and decision-making

That actually feels like a higher bar, not a lower one.

Curious how others here are using AI in creative work—
Do you feel like it’s replacing parts of your process, or just accelerating them?

r/homeassistant Phogineer

Help understanding MQTT device trackers

I'm trying to set up a device tracker to monitor 3 states for my device: home, work, away. I already use Tasker to monitor locations so making this work would fulfill my needs for home assistant tracking without using any extra battery.

To do this I have an MQTT device tracker set up in configuration.yaml:

mqtt:
  device_tracker:
    - name: "tracker_tasker"
      state_topic: "location/user"
      json_attributes_topic: "location/user"

Then I use a task created by bokub to update the tracker via Tasker: ⚙️ Using Tasker with Home Assistant (V2) - Share your Projects! - Home Assistant Community

I have tried 2 variations with partial success:

{"payload": "home", "topic": "location/user", "qos": 0,"retain": 0} 

This successfully sets my location to Home, and "not_home" to Away. However trying this with manually set up zones simply sets the tracker location to whatever I write in plain text, not recognizing the zone name. I can write "Jupiter" instead of home and the value updates to exactly that.

My other attempt with some AI help is this:

{ "topic": "location/user", "payload": "{\"latitude\":%PLATS1,\"longitude\":%PLATS2,\"gps_accuracy\":%LOCNACC}", "qos": 0, "retain": 0 } 

But the state of the tracker becomes the payload in plain text with the Tasker attribute values: {\"latitude\":59.xxx,\"longitude\":18.xxx,\"gps_accuracy\":30}

In this example I can actually see that the tracker latitude and longitude attributes update so it seems to work partially.

Is there a way to write JSON service data that correctly updates the users location AND recognizes Home Assistant zones?

r/ChatGPT Own_Discussion3673

When a newer model like 5.5 is released, do the old ones get cheaper to use?

5.4 was doing a great job for me anyway, so do they reduce the session cost for older models, or should I just move to 5.5 and be done with it?

r/leagueoflegends Haunting_Inflation54

When is ranked back online?

There's probably an obvious answer somewhere but I couldn't find it after a quick google. I'm aware the current split just ended but what time does the new split start? When will ranked be back online?

r/SideProject khooni_chakka

I built a free App Store & Play Store mockup generator with auto-translation for 60+ languages

Hey everyone,

I built a Free App Store & Play Store Screenshot Generator called FreeAppMockups (freeappmockups.site) to scratch my own itch — every other tool I tried either locked the good templates behind a paywall, watermarked the export, or made me sign up just to try it out.

So I made one that's genuinely free. No signup, no watermark, no paywall.

What it does:

  • 18+ ready-made templates designed for App Store & Play Store dimensions - Drop in your own screenshots, edit text, change backgrounds, swap device frames
  • Multilingual support for 60+ locales (including Tamil, Telugu, Hindi, Arabic, Japanese, Chinese, RTL languages, etc.) — auto-translates your copy when you add a new language so you don't retype everything
  • Exports all your screenshots in all selected languages as a single ZIP, organized by locale
  • Saves your work locally — close the tab and come back later, your mockup is still there
  • Works entirely in the browser, your assets never leave your device

Built it solo over the past few weeks. Would love feedback — what's missing, what templates you'd want, what's clunky. Honest critique very welcome 🙏

Link: https://freeappmockups.site

r/leagueoflegends No_Syrup_9263

Dualsense support?

I always wanted to try league, but I've never played anything on a computer, so I am so bad with mouse and keyboard. When I heard that it could be possible to play with a controller I decided to give it a go with a dualsense (Ps5 controller), but it simply doesn't work. I know that I have to switch to wasd controls and plug in the controller, but it doesn't do anything. I also tried connecting with bluetooth but still nothing. Do you guys know if I have to do something specific? I play on a Mac, maybe that's the issue. Thanks

r/whatisit Numerous_Evening_255

what is it! What if I tell you you can't name them all!

r/SideProject ConfidenceUnique7377

Gitember Git GUI v3.2 just dropped

I've been building Gitember since 2016 — a free, open-source Git desktop client. It started as a weekend experiment, and now version 3.2 is out with new features:

  • Worktrees - full UI support for creating, switching, and removing worktrees. If you juggle hotfix branches while keeping a long-running feature branch alive, this is the workflow improvement you've been waiting for.
  • 3-way merge conflict resolver - BASE / OURS / THEIRS side by side. Pick a side, edit inline, stage with one click. No separate merge tool to install.
  • AI-assisted writing (experimental) - explain what changed between two branches in plain language, plus secret leak detection (is your GPU good enough?)

It also covers everyday Git stuff (commit, branch, diff, etc.), but here are two things I personally rely on a lot:

  • search through history including non-text formats (Office docs, DWG, PSD, etc.)

  • arbitrary file/folder comparison

The last one is a very useful feature these days, when you need to quickly compare a lot of AI changes. Site here: https://gitember.org/

Contributions, feedback, and suggestions are welcome.

r/ClaudeCode pigillustrated

This obviously looks pretty dangerous

“Spawns children that can do anything “ 💀

r/TwoSentenceHorror Intelligent-Bottle22

I wore white to school that day; the weird kid said he would spare anyone in white.

Turns out, he just wanted to see bloodstains better.

r/ClaudeCode troveofvisuals

Oh My Bark : A site for abandoned dogs

Hi everyone!

Wanted to share what I've been building (adding one more world) using Claude Code. It's basically a world designed to help get sponsors for the abandoned rescue dogs here in Dubai. They live in the middle of the desert with no electricity.

So let's see how this goes. Hopefully it's not a massive flop.

Been using Claude Code for this and it's amazing. It goes a little haywire here and there, but works 8/10 times.

if you're curious to see what it looks like: ohmybark.com (I haven't yet added the second world, so you'll just see what I see every Saturday morning with 1400 dogs :D)

If you click on the static ones, btw, you can see clips of what they actually look like and how they live, and the tents hold the menu

r/TwoSentenceHorror Rated_R_The_Album

Johnny time traveled forward after using the time machine built by his builder, Bob.

Little did he know perchance, he time traveled into the Bronx.

r/leagueoflegends Due_Coyote_486

battle pass rewards, worth it?

what do you think about the new battle pass rewards? is it worth it? because i dont really know if i should buy or not.. i dont main shaco

r/whatisit Turboshmack

Strange markings in russia

Weird marking in st Petersburg street

r/ChatGPT GhostBelliniFace

The best thing I use ChatGPT for is suggestions

When I don’t know what game to play next, what album I should check out, what anime, book, movie, etc etc. I ask ChatGPT. Chat knows everything I’ve consumed so far and gives me great recommmendations. I can ask for a top 10 list or a top 100 list and most of the time it’s spot on.

Just figured I’d let yall know, it’s such a life hack honestly lmao I discovered a lot of new music and movies this way

r/homeassistant Desperate_Chair_3252

Environmental Audio as a new Machine Sensor

First-time poster on Reddit. I'm a mad home geek myself, and then I did something crazy. Now I'm trying to make everyone else crazy too. That's the AI world we find ourselves in.

I am pretty good with computers - always have been. I was retired lol… but got chatting with a friend who mentioned an idea about noise complaints: "Wouldn't it be good if you could have an app to simplify noise complaints, e.g. dogs or other disturbances?" The stars aligned. I'd finished working for the Aussie Government on big data infrastructure and have a deep history in technical delivery across a rich set of business domains. I know cloud at the enterprise and microscopic level. Claude Code and vibe coding have arrived. BOOM.

In 6 months, I solo vibe coded a government-security-grade, elastic cloud compute pool with a hyperscale pattern. I designed a multi-model Classifier as the compute unit, to host YamNET, PANNS and BirdNET. I added a spatiotemporal annotation UX, with acquisition through spot recording on any device and upload of any old media. I then vibe-transformed the raw AudioSet output (YamNET, PANNS) with a temporal relationship algorithm which enriches the classification's meaning - that is, not 5 dogs barking, but a barking Event.

I vibed from the cost-per-minute compute layer offered by the cloud, up to an API, and through into the MCP layer, aka AI. Talk to integrate via Claude, OpenClaw, etc. Talk to classify, with near real-time webhook notifications on 500+ untapped (AudioSet) environment signals from the ubiquitous microphone.

A new machine sensor? The pervasiveness of microphones, coupled with ridiculously cheap transforms and AI you can talk to for integration.

A new semantic compressor? Audio at 100:1 or more reduction in file size, yet full of semantic weight. Perfectly sized AI brain food.

How I vibed it is also pretty cool. I do know my tech stuff, but one person semantically programming what I would consider the best DevSecOps SDLC chain I have ever seen… that is a story too.

Hopefully you find this as cool as I do. Hopefully everyone can see what I see 😄

When my webcam grew ears...

Agentic empowered spatiotemporal annotated environmental audio

https://www.h-ear.world/how-it-works or https://www.h-ear.world/use-cases.

https://github.com/Badajoz95

Super interested in people's thoughts. It'll make a nice change from an AI prompt.

r/LocalLLaMA No-Compote-6794

Kimi K2.6 helping me uninstall macOS apps

I started using macOS as a teenager 10+ years ago, and even to this day, I still didn't understand how to properly uninstall a local application. That changed today.

I was dogfooding my desktop agent when I realized it could just find everything related to an app and delete it, which is actually the correct way of uninstalling apps. I have a million apps that haven't been used and my laptop is getting slow. Apps just magically start to go away after it runs a bunch of cmds. Kimi also taught me the existence of `~/.appname` directories; I didn't realize I already have a million things in there, from apps I've not used in years. (remember Windsurf?!?!)

I uninstalled a few different apps and realized it kept trying to run `find ~/Library/Application\ Support`, which takes really long since it's recursive. I gave Kimi the feedback, and we decided it's better to glob match at the top level instead of recursing. So I asked it to edit its base knowledge (my equivalent of SOUL.md) to always do that when uninstalling apps. It got a lot better.
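the top-level glob idea is roughly this (a Python sketch of the approach, not the agent's actual code; directory names are just examples):

```python
import glob
import os

def leftover_paths(app_name: str, base_dirs: list[str]) -> list[str]:
    """Glob for app leftovers at the top level of each directory instead of
    recursing, so the scan finishes fast even on a huge ~/Library."""
    hits: list[str] = []
    for base in base_dirs:
        pattern = os.path.join(os.path.expanduser(base), f"*{app_name}*")
        hits.extend(glob.glob(pattern))
    return sorted(hits)
```

e.g. leftover_paths("Windsurf", ["~/Library/Application Support", "~/Library/Caches", "~/Library/Preferences"]) checks only the directories apps usually write to, instead of walking the whole tree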

I feel like maybe this loop of teaching domain knowledge and improving it into written processes is the new era of software development? but who am i to say :)

oh and i prefer doing this with my own agent over Claude for a few reasons

  1. one key press voice input
  2. OSS models that work with any OpenAI-compatible provider
  3. tiny code base and total control so no random nerfs and gaslighting
  4. query bar / text editor that doesn't require pressing arrow keys for 10 minutes to make an edit

here's the Github repo if you're interested: https://github.com/Emericen/openmnk

r/SideProject gowthamshankar05

I built a testimonial tool where social proof collects itself — set it up once and you stop chasing customers

Been working on GridApps Testimonials for a while. Sharing it here because I think the workflow is genuinely different from what's out there.

The problem I kept hitting: every testimonial tool I tried turned "collecting social proof" into a manual job. Send a request link → wait → follow up → embed. You're basically a project manager for your own testimonials.

What I built instead:

Workflows that run on autopilot. Set it up once and GridApps keeps pulling social proof in — Twitter mentions, public reviews, customer replies — and turns them into display-ready testimonial cards. No follow-up emails.

Actual display variety. 40+ widgets and 10+ Wall of Love layouts, so you're not stuck with the same generic slider every other site uses. Different pages can show different testimonials without rebuilding anything.

Video that doesn't need a separate tool. Built-in video editor for trimming, captions, branding — plus you can turn testimonials into Reels-style clips for social.

I'm not saying existing tools are bad — Senja, Famewall, Testimonial to are solid for what they do. But most stop at "here's a form, here's a wall." The collection-to-display pipeline is still mostly manual. That's the gap I was trying to close.

Two questions for founders here:

  • How are you currently collecting testimonials? Is it as manual as it feels?
  • What would make you actually switch tools?
r/ClaudeCode RussianNotFascist

Can't redeem gift for Claude Code Max

Hi all!

I've received a gift:

https://preview.redd.it/0grrsjfyv2yg1.png?width=1768&format=png&auto=webp&s=25f5ea12ecd05c3dfee8c00b39f409d2d21249e5

But when I try to redeem it, I receive the message "Gift codes cannot be redeemed with mobile app subscriptions. Please manage your subscription through the App Store or Google Play."

I've checked my iPhone: there is no subscription in the App Store. How can I redeem my gift? Help me, please.

r/automation Chillipepper19

how to NOT waste 5 months of your time

i was pitching to anyone who would listen. restaurants, gyms, coaches, salons, random people who seemed interested.

every call went well. nobody paid.

eventually figured out the pattern. the people who get most excited about automation are usually the ones with the least budget and the most opinions about how it should work differently.

they always say boring businesses make money. i landed a manufacturing and export client. a very easy automation to set up, but because of the volume, the money is huge.

been working with hotels and property firms for a while now. that's where the money actually is.

if you run a business with a genuine operational problem, leads falling through the cracks, follow ups being done manually, data entry that shouldn't require a human, drop it below. genuinely curious what the broken thing looks like in different industries.

r/StableDiffusion GelOhPig

safe, local, secure and is it possible?

I am looking to get deeper into making AI videos, but want to save money and be free to do it locally with no limits.

I also work in IT and have been doing so for over 30 years, and have concerns that have been instilled in my experiences.

Here is my question: are there any models I can look into that are safe and secure, without having to reach into some dark, dank database or server that might decide to throw malware / spyware / viruses onto my system?

I saw a video on the ease of the Wangp install, but was concerned. I also don't want to rule out using ComfyUI if that means a chance of using a LoRA that may be a little more secure, even at a higher level of difficulty.

I guess what I am saying is, I place a higher value on being secure than on getting something free or low-cost.

Am I asking the right questions? Or am I better off just paying VEO / LTX fees to a service?

Thanks…

r/SideProject MattVePhD

I built a Mac transcription app that keeps your audio on your machine. Launched yesterday

I'm a laser R&D Engineer. I kept leaving meetings thinking "I'll remember this" and then not remembering it.

Every existing tool either uploads your audio to a server or sends a bot to join your call. Neither felt right for confidential work conversations.

So I spent a year building Thoth in my free time. Everything runs locally on your Mac using WhisperKit. Mixed audio capture without virtual drivers, speaker diarization, local LLM summaries, BYOK for cloud AI if you want it. WiFi off, it still works.

The hardest part was getting mixed audio working: recording both mic and system audio simultaneously using Core Audio instead of a virtual driver. Turns out transcription quality during a Zoom is much better when Whisper is fed both streams.

Link in comments. Honest feedback welcome.

r/leagueoflegends AncientJudgment6234

Need tips for league of legends.

I started playing 2 days ago. I have used close to 10 different champions so far, and Jinx is my favourite. My problem is that the other champions feel very 'bad' compared to Jinx, and I can't imagine that's actually the case; I imagine I am just very bad with them. I was wondering if people could give me some general gameplay tips and tricks, because I am enjoying the gameplay very much but finding it extremely difficult at times. Thanks very much in advance.

r/automation Ok_Following_4950

automated my Monday morning catch-up

the thing that annoyed me most about Monday mornings wasn't the volume. it was that I had to process everything before I could tell which things were worth processing. tried a Zapier digest. got a list of email subjects, no context. tried a scheduled summary in Slack, too rigid, arrived at the wrong time.

"I was offline this weekend, read my email and Slack and tell me what I need to know." that's the whole Invoko prompt. what comes back: two things that need a decision today, one thing that already resolved, three threads that can wait. the context reconstruction that used to take an hour now takes five minutes.

r/leagueoflegends AutoModerator

Patch 26.9 Bug Megathread

Greetings Summoners!

With every new patch Riot introduces to balance out champions and items there are some unforeseen issues that creep up and cause disruption during gameplay. We have noticed that these issues which eventually get fixed clutter up the subreddit immediately following the patch.

We want to avoid this by having a single Megathread which will be posted after every patch so that you guys can report the various issues in one place. This gives Riot a central hub to easily keep track of the bugs, and also allows other users to confirm issues they might have encountered.

Note only bugs caused by the 26.9 Patch should be reported below.

Prerequisites to be noted before reporting a bug


  1. A bug will ideally be accompanied with a screenshot or a video. This provides credibility to your report.
  2. Steps to recreate the bugs should be submitted if possible. This helps Rioters recreate the bug and helps them find the cause behind it.
  3. The bug must have been caused by the latest patch.

Format when reporting a bug: When reporting a bug, please provide as much information as you can about your computer.

Server: The server you encountered the bug (NA, EUW, EUNE, TR, RU, BR, LAS, LAN etc)

Type of Bug: Client Bug, In Game Bug etc

Description: Describe what was the bug that occurred.

Video / Screenshot: Insert screenshot (F12 in game) or Video of the bug occurring.

Steps to reproduce: Provide the steps necessary if someone else had to reproduce the bug.

Expected result: What should have been the result when you follow the steps mentioned above.

Observed result: What was the result you obtained when you followed the steps mentioned above?

Reproduction rate: If you try to recreate the bug how successful are you in causing it to occur? (1/10: Occurs once every 10 tries, 5/10: Occurs 5 times out of 10, 10/10: Happens every single time)

System specs: Processor, Video card, Ram, HDD/SSD, everything you can provide, even drivers.


Example Bug:

• Server: EUW

• Type of Bug: In-Game Bug etc

• Description: Zed's R (Death Mark) does not apply secondary damage

• Insert Video / Screenshot of the incident

• Reproduction rate: 2/10 (happened 2 out of 10 times)

• Steps to reproduce:

Launch a game after selecting Zed as your champion. Attempt to use Death Mark. Observe the result.

• Expected result: The damage should apply after a short delay, amplified by damage dealt during the effect.

• Observed result: The damage will not apply properly.

• System Specs: Intel i5 Processor, Windows 7, Nvidia Graphics card (insert model number) etc.


If you don't know how to format comments on Reddit click here


- **Server:**
- **Type of Bug:**
- **Description:**
- **Video / Screenshot:**
- **Steps to reproduce:**
- **Expected result:**
- **Observed result:**
- **Reproduction rate:**
- **System specs:**

Copy paste the above code and fill in your details.


The list of bugs from this Megathread will not be summarized and put up in the main body of the thread. However, note that many Rioters are going through every single comment, so don't worry if you post the 1500th or 3000th comment; every reply will be read over the next few days.

r/AI_Agents elise_moreau_cv

Datadog says 60% of LLM call errors are rate limits, and capacity is now the dominant production failure mode

Datadog dropped their State of AI Engineering report this week. The numbers reframed how I think about LLM reliability.

February 2026: 5% of all LLM call spans across their customer base reported an error. 60% of those errors were rate limits.

March 2026: 2% of spans returned errors, but rate limits were still ~30% of the total. That works out to 8.4 million rate limit failures across their telemetry in a single month.

The takeaway is that the dominant production failure mode for LLM apps is not hallucinations, not bad context, not flaky tools. It's plain capacity exhaustion. 429s and 529s, the boring kind of failure that classical infra engineers have known how to handle for 20 years.
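
Since these are classic capacity errors, the classic fix applies: retry with exponential backoff and jitter. A minimal sketch with a generic `request` callable (the names and the `(status, body)` shape are illustrative, not any specific SDK's API):

```python
import random
import time

def call_with_backoff(request, max_retries=5, base=0.5, cap=30.0):
    """Retry a callable on capacity errors (HTTP 429/529) with
    exponential backoff and full jitter."""
    for attempt in range(max_retries + 1):
        status, body = request()
        if status not in (429, 529) or attempt == max_retries:
            return status, body
        # Sleep between 0 and min(cap, base * 2^attempt) seconds.
        time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))

# Example: a fake endpoint that rate-limits twice, then succeeds.
calls = iter([(429, ""), (529, ""), (200, "ok")])
print(call_with_backoff(lambda: next(calls), base=0.001))  # (200, 'ok')
```

The full jitter matters for the burst pattern described below: if every agent in a concurrency spike retries on the same schedule, they re-collide; randomizing the delay spreads the retries out.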

What's making it worse is the architectural pattern most teams use. Variable ReAct loops and multi-agent collaboration produce concurrency spikes that exhaust shared org-level quotas in unpredictable bursts. Your p50 throughput looks fine and your p99 falls off a cliff.

The other line in the report that I keep thinking about: context quality, not volume, is the new limiting factor. Most teams aren't even close to using the full context window of their model. The 1M token capability is wasted if your retrieval pipeline can't pick the right 10K tokens.

Capacity engineering and context engineering are quietly becoming the two skills that move the needle in 2026 production LLM systems. Prompt engineering as a discipline is increasingly downstream of these.

r/comfyui TurnOffAutoCorrect

Imported assets no longer have thumbnail previews. Is anyone else seeing this?

I've witnessed this across two different rigs, so unless I've managed to flick the same switch somewhere that turns them off, I think this is broken at the moment. Generated images still show a thumbnail, but somewhere during the past week imported ones stopped showing a preview. Pressing R and reloading the tab after the asset is imported doesn't fix it, nor does restarting the backend or the PC. As mentioned at the start, it's the same issue on two different PCs.

Has anyone else encountered this and have a fix?

r/ChatGPT ImpossibleHistory111

Damn ai getting too realistic now

r/leagueoflegends Plane_Corgi1253

Matchmaking is an absolute joke

https://preview.redd.it/p7c8ktswg3yg1.png?width=375&format=png&auto=webp&s=f2129151d2e8a6ee8461703f858dd3dfa93724b1

Haven't played league in 2 weeks, decided to play one game of solo queue for decay and immediately got reminded why I stopped playing in the first place. Apparently I've climbed to the top 0.5% of the population out of a few hundred thousand players or whatever, just to have 80% of my games feel like bot games.

Who keeps making these absolutely senseless decisions that make the game less fun for old players while also failing to attract new players?

Autofill is extremely broken and age of valor causes toxic behavior like playing for KDA or supports spamming pink wards instead of actually trying to win.

Also, how did anyone think it was a good idea to sacrifice game quality for maybe a 2-minute decrease in queue times (Master+)? Instead of waiting a few more minutes for a good game, you want to give players low-quality games and waste 30 minutes instead?

Well anyway, I'm sad that I don't get to flex with old friends as often anymore as everyone around me is quitting. But I am grateful to be spending less time on this declining game so keep up the bad work I guess.

r/ChatGPT Sad-Cut-3845

is there a way to remove these images from showing up when sending certain messages

r/ClaudeCode Cant_Display

Is my preference for Cline actually holding me back?

Hey everyone,

I’ve been using Cline for my daily workflow at work lately. I’m still using the Claude Opus 4.7 model, but I’ve stuck with Cline because I feel much more "in the driver's seat." The visualization of file changes and the side-by-side diffs make me feel like I have a tighter grip on what’s happening to my codebase. Plus, honestly, I feel like I've learned a ton more about how LLMs actually interact with files and tools by managing Cline myself.

However, seeing the hype around Claude Code, I’m starting to get some serious FOMO.

Claude Code seems to have deeper native integration, and I’m worried that by sticking to a third-party wrapper (even a great one like Cline), I’m missing out on better agentic loops or "thinking" optimizations specific to Anthropic's tool use.

For those who have used both:

Am I putting myself at a disadvantage by staying in the GUI? Does Claude Code offer a significant jump in "intelligence" or efficiency that justifies giving up the visual control of Cline?

Would love to hear if anyone else has made the switch (or switched back).

r/whatisit Kraj_the_Conqueror

What kind of coral is this?

Got it from gramps, who owned it for many years. He himself was unsure what kind of coral it is.

r/ClaudeCode thinkyMiner

Any common interface for using Codex and Claude Code

Recently I saw Claude models getting dumb and lazy, so I switched to Codex, but it was quite a painful change. The setup I was using in Claude Code was something I had fine-tuned over time, and I had to redo things for Codex that I couldn't exactly replicate: the MCP servers, the custom commands, and other such things. Can you all suggest an existing tool for this? I want something that is CLI-based, or should I just start burning my tokens to make one 😂.

r/leagueoflegends Jhomas-Tefferson

Teemo buffs why?

As a teemo main, cool, teemo buffs.

But this will not go over well. Teemo already does quite well with the re-addition of gunblade giving him an on demand ranged slow to either kite or just run someone down. And he's finally getting deathfire touch back.

Why, on top of that, would they give him an AD ratio on his E? Like, cool, for teemo players i guess. But gunblade > guinsoos > new static is going to be kind of insane on him.

And most of the serious teemo players were playing him more on-hit style anyway, as it's always been pretty good, even after the nerfs, which came around entirely due to Malignance. The traditional AP build on him has been lackluster for a while because it requires Malignance and Liandry's now (again, mostly a result of the nerfs i just mentioned), and with nashor's already always being required (literally trolling to not build it on ap teemo unless you're against a weird comp that makes autoing very difficult), that left you with 2 item slots. Assuming you always go for void or rabadon's (whichever makes more sense in the game you're in), that leaves you with 1 item slot to get everything else you need, be that grievous, a spell shield, or hourglass. This leaves him very little build flexibility.

On-hit, on the other hand, gives you a lot more options for build flexibility. You can still get grievous and a spell shield item if you want after your core 3 items (which can change depending on the game too. No tanks? You don't need bork. Lots of magic damage? Sub wit's end in before nashor's), or a spell shield and hourglass. Or you can say "fuck it, we ball" and go bruiser at that point and grab titanic and jak'sho. You just have a lot more options to respond to a lot more situations, whereas ap locks you into 4 items. AP can leave you in positions where you say "i kind of need grievous for their team, but i also need to not die to zed (or whoever) because he will ult me because i'm doing the best on my team, so i need hourglass", which puts you in a really awkward place.

What's especially odd is them citing that they felt like they killed on hit teemo in the past and are looking to bring it back. I feel like it never really left.

Out of all the things i see for the next patch, i bet this gets un-done on the following patch or even hotfix nerfed.

r/ChatGPT ThatLoudGlitch

"My brother in compute" sends me 😂

I asked ChatGPT for a scathing criticism of OAI, as inspired by a post here. I liked it a lot, but my being a sassy prick by nature, I wanted to see the snappier version of that, so I asked ChatGPT to see that and here it is, lol. I do hope it gives a bit of a chuckle to whoever reads this, I know it did me. **And Kyle, if you read this,** the context window offer stands 😂🤌

r/ClaudeCode Articurl

Opus 4.7 acting weird today

Just wondering, had great days with Opus. Today he does not like my typical flow since the outage yesterday. Hope you guys are doing well.

r/leagueoflegends Particular_Chard9679

New statikk shiv no stacking Kai’sa passive ?

I tried the Statikk + Voltaic + Dusk and Dawn combo on Kai’sa in brawl, as well as in a practice tool match vs a bot. Hitting W does consume the energized stack and proc Statikk, but I don't get an additional passive stack, nor does Dusk and Dawn get consumed.

r/AI_Agents Warm-Reaction-456

Worked with 30+ professional services founders on automation. The ones getting real ROI all did one boring thing the others skipped.

I've shipped automation projects for around 30 professional services firms now. Law, accounting, recruiting, agencies, consultancies. Some of those projects are still running and saving the firm real money. Some quietly got abandoned within 4 months.

The difference between the two groups isn't what you'd guess. It's not the size of the firm. It's not the budget. It's not how technical the founder was. It's not even the quality of the build.

It's whether the founder personally walked through the manual version of the workflow once before we automated anything.

Sounds dumb but stay with me.

When a founder hires me to automate something, they usually describe the process from memory. "We get a lead, we send a contract, we onboard them, we send invoices." Four sentences, sounds clean. The automation gets built around those four sentences.

Then we go live and discover the actual process is something like 23 steps, half of which the founder didn't know existed because the admin who's been doing it for 6 years just handles them silently. There's an exception case for when the lead is referred by an existing client. There's a different SOP for clients who pay by wire. There's a paralegal who manually edits one section of the engagement letter when the project is over $50k. None of this was in the four sentences.

The automations that survive are the ones where the founder did one painful thing before we started. They sat with the person who actually does the work, and they did the workflow themselves once, slowly, narrating every step out loud while I or someone took notes.

That's it. That's the whole secret.

It usually takes 90 minutes. It usually surprises the founder, who finds at least 4 or 5 steps they had no idea their team was doing. And it usually makes the eventual automation work, because the automation is built around what's actually happening instead of what the founder thinks is happening.

The firms that skipped this step ended up with automations that handled the happy path and broke on every edge case. Within a few months the team stops trusting the system, goes back to doing it manually "just to be safe," and the automation rots.

This isn't about whether you're smart enough to think through the workflow yourself. It's about a thing called the curse of expertise. The longer you've been a founder, the further you are from the actual day-to-day work, and the more you've forgotten the thousand small judgment calls your team makes that aren't written down anywhere.

If you're thinking about automating something at your firm, do this before you hire anyone or build anything. Block 2 hours. Sit with whoever does the work today. Ask them to walk through it with you while doing a real instance of the work, not from memory. Take notes on every step, every exception, every decision point. Don't talk for the first 45 minutes, just watch.

You'll be embarrassed by what you find. That's the point. That's also why the automation will actually stick this time.

If you've already had a failed automation at your firm and you're not sure why it died, my guess is one of two things. Either nobody walked the manual process before building, or you let an agency build it and they relied on what you told them in the discovery call instead of insisting on this step.

Happy to walk through specifics if anyone's stuck.

r/SideProject footballforus

This is why I don’t trust AI agents in production yet

Saw this today and it honestly scared me a bit.

An AI coding agent powered by Claude via Cursor ended up deleting an entire company database in seconds.

So I built a small developer-first library that sits between an AI agent and its actions.

Before the agent does something risky (like sending emails, hitting unknown APIs, or modifying data), it can:

  • get blocked
  • require approval
  • or log the action

Basically: your AI asks before it acts.
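
The OP hasn't shared the library, so here is only a sketch of what such a gate could look like: a policy table, an audit log, and a default of requiring approval for anything unknown. All names here are hypothetical, not the actual API:

```python
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    NEEDS_APPROVAL = "needs_approval"

# Illustrative policy: which agent actions need a human in the loop.
POLICY = {
    "read_file": Verdict.ALLOW,
    "send_email": Verdict.NEEDS_APPROVAL,
    "http_request": Verdict.NEEDS_APPROVAL,
    "drop_table": Verdict.BLOCK,
}

AUDIT_LOG: list = []

def gate(action: str) -> Verdict:
    """Check an action against the policy before the agent runs it,
    defaulting to approval for anything unknown, and log the decision."""
    verdict = POLICY.get(action, Verdict.NEEDS_APPROVAL)
    AUDIT_LOG.append((action, verdict))
    return verdict

print(gate("read_file"))   # Verdict.ALLOW
print(gate("drop_table"))  # Verdict.BLOCK
```

The key design choice is the default: an unknown action should require approval rather than be allowed, since agents invent tool calls you didn't anticipate.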

Right now it’s super early and pretty minimal, but it already works for a few core cases.

Not sharing the link publicly yet, but if you’re building with AI agents and this sounds useful, happy to share access and get feedback.

r/comfyui IncidentCertain4532

Total beginner: How do you create this video style in ComfyUI? What models should I use?

Hi everyone! I’m a total beginner to ComfyUI and I’m trying to find the right direction. I saw this video on Instagram and I'm curious how they made it.

I'm looking for advice on:

  1. What base model (checkpoint) should I try using to get this high-quality look?
  2. Is this likely AnimateDiff, SVD, or something else?
  3. Since I'm just starting, are there any specific tutorials or "beginner-friendly" workflows that lead to this result?

Any help pointing me in the right direction would be amazing. Thank you!

r/SideProject VeterinarianLow2771

Experience with adsense

hey guys,

Anyone have experience running ads on your project? I'm currently waiting for AdSense approval on my website and really didn't think that much about which ad provider I was going to use (I always thought AdSense would be the best one). But when I checked online and chatted with Claude / GPT, there are apparently other, better options. Below are the ones that Claude / GPT suggest (basically AdSense has the lowest ad revenue).

Would you say the table below is accurate? Because I think these AIs are just messing with me lol (I know the RPM differs by category, the visitor's region, etc., so the table below is based on, for example, a western quiz website).

Any feedback is welcome.

| Network | RPM |
| --- | --- |
| AdSense | $1~$4 |
| Ezoic | $2~$6 |
| Monumetric | $3~$8 |
| Mediavine | $10~$25 |
| AdThrive | $15~$35 |

r/ProductHunters PeaceinthelandofI

WhizzIQ | AI that runs your business, not just tracks it | Product Hunt Launch

Hi everyone,

Our team at WhizzIQ just launched on Product Hunt today and I wanted to share it here to get real feedback.

We have been building this around a simple problem. Most small business owners are using multiple tools for invoices, expenses, CRM, and bookings, but still do not have a clear picture of what is actually happening in their business.

So we built WhizzIQ to bring everything into one place, with AI to help make sense of it.

The goal is not just to track information, but to help answer questions like:

Where am I making money?

Where am I losing money?

What actually needs my attention?

It is still early:

A few active users testing it in real workflows

Still learning what people actually find useful

Would genuinely appreciate feedback:

What makes sense?

What does not?

What feels missing?

Trying to build something that is actually useful, not just another tool.

Product Hunt Link: WhizzIQ Product Hunt

r/SideProject Farenhytee

I built an AI tool that generates platform-native posts for LinkedIn, Twitter, Instagram and more, from one prompt, in your actual writing voice.

I got tired of spending 45 minutes reformatting the same idea for six different platforms. LinkedIn wants long-form, first-person takes. Instagram wants short, lowercase, vibes-first. Twitter wants punchy. They're completely different posts, but most people write one and paste it everywhere.

So I built a tool that handles the split automatically.

During onboarding, you paste 3-7 of your own posts. An LLM reads them and builds a persistent "voice profile": your vocabulary level, sentence style, tone, and personality patterns. Every piece of content you generate after that is written in your voice, not some generic AI voice. LinkedIn posts come out sounding like you, not like a press release.

You write one prompt. It fans out to LinkedIn, Twitter/X, Instagram, Threads, Facebook, and newsletter simultaneously. Each one is natively formatted for that platform, different length, different structure, different energy. It also does a live web search before writing so the output is grounded in current trends, not stale training data.

Some things I learned building this solo:

Voice profiles are the moat. Anyone can call an LLM. Making the output actually sound like you across 6 platforms is the hard part and it gets better the more you use it.

Instagram and LinkedIn prompts need to be totally isolated. I had a bug where a "long-flowing technical" voice profile was producing LinkedIn-style structured posts on Instagram. Took a week to fix properly.
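
A sketch of what that isolation can look like in code, assuming a generic one-prompt-per-platform fan-out (the platform rules here are illustrative, not the product's actual prompts):

```python
# Illustrative per-platform constraints; the real product's prompts aren't public.
PLATFORM_RULES = {
    "linkedin": "Long-form, first-person, structured, professional.",
    "instagram": "Short, lowercase, casual, vibes-first.",
    "twitter": "Punchy, under 280 characters.",
}

def build_prompts(idea: str, voice_profile: str) -> dict:
    """Build one fully isolated prompt per platform, so no platform's
    formatting rules can leak into another's output."""
    return {
        platform: (
            f"Voice profile:\n{voice_profile}\n\n"
            f"Platform rules for {platform}:\n{rules}\n\n"
            f"Write a post about: {idea}"
        )
        for platform, rules in PLATFORM_RULES.items()
    }

prompts = build_prompts("launch announcement", "concise, technical, dry humor")
print(sorted(prompts))  # ['instagram', 'linkedin', 'twitter']
```

Each platform's prompt carries only its own rules plus the shared voice profile, which is exactly the property the cross-contamination bug violated.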

r/SideProject No-Put-6206

I’m a Data Engineer who was tired of being a debt collector. I built an automated WhatsApp invoice chaser to handle the "awkward follow-ups."

"Hey everyone, I realized the body of my post didn't show up! Here is the context:

After 15 years as a Data Engineer, I got tired of the manual 'awkward follow-ups' for payments. I built chasedue.com to automate the entire WhatsApp reminder sequence.

The Tech: Next.js 16, Supabase, and WhatsApp Cloud API.

The Story: I launched yesterday, got 1.3k views but only 1 real user. I realized the 'Sign In' wall was too high, so I just added a Live Demo button today so you can see the nudge without an account.

I'm at 0 MRR and looking for my first 5 paying customers. Would love a brutal roast of the landing page or the workflow!

r/leagueoflegends HugzNDrugs

Never played. Can I run on a laptop?

I have an okay laptop with 8GB of RAM; it's fine for older games, but I haven't tried intensive games yet. I really want to try LoL, and I loved Arcane, so I really want to try. I just want to make sure I'd be able to run and play the game without too many issues. If not, are there options for PS5?

r/SideProject Reasonable-Bake-8614

competitive research just got a lot less painful

i was manually checking competitor Twitter accounts two or three times a week. open profile, scroll, take notes, close, repeat. 40 minutes i didn't have, and i was inconsistent. tried setting calendar reminders, still felt like a chore. tried a social monitoring tool, expensive and more features than i needed.

now i ask Invoko to summarize what they posted and write it to my research page. 30 seconds of setup, I review the summary when it's done. i'm actually more informed about what's happening in my space now than when i was doing it manually.

r/LocalLLM madeagupta

Running the equivalent to $20/month Pro 'Claude Cowork' or better with a locally hosted LLM?

I want to preface this with the fact that I've read through quite a lot of this reddit on running stuff locally but still am not sure about how to go about running a very very specific form of a local LLM on my PC.

I recently switched from ChatGPT $20/month to Claude's $20/month and was amazed at what it can do from a hands free perspective. I tested the Pro plan by throwing it 2 tasks that I've been lazy on getting done:

1. Editing 30GB worth of footage in DaVinci Resolve to turn into a mini travel vlog of some friends and me.

2. Making me a custom photography website where I can feature my work (I wasn't satisfied with Wix). It knocked these out of the park in terms of laying the groundwork and getting 90% of it done.

The problem I found was usage. I wasn't burning through my weekly usage too badly for what it had to do, but the current session usage hit 100% quite fast (like 1-2 hours of tinkering / letting it do its thing, especially with video editing). The other thing I saw was the token limit within one conversation (200k/1m), but I got around that by simply creating a project and then making each new task reread a file within a folder on my desktop that had instructions and what we had completed in previous chats.

Let me make it clear, I think the feature that lets Cowork take screenshots and "take control" are phenomenal, and this for me is a clear selling point and worth the $20 a month. I know it can be viewed as a security risk for sure to give it access to whatever it wants but boy does it get the job done hands free. (would love input on even more risks here)

My main question here is: can I run a 'local' version of this that would use the GPU on my PC? I currently have a pretty good setup with an RTX 5070 12GB, an AMD Ryzen 7 7800X3D 8-core, and 64GB of DDR5. How would I even go about setting something like this up? Would my cost truly be 0? And most importantly, would I have access to the UI that I am currently using in the Cowork tab? Because the UI and how it currently works is genuinely good and just works so well, it feels robust.

My main concerns here are price per month at the end of the day to run my own LLM and whether it can operate in that same way as the cowork features.

Sorry for the long post but genuinely any input would be appreciated! (Feel free to explain things quite elementary since I am somewhat new to this and have a pretty specific use case)

r/ChatGPT Disturbedtongue

Image quality loss and artifacts: will OpenAI fix it?

Totally lost my workflow :( . This is how much quality was lost since the artifact and texture error started happening.

First image is before, second is now. Getting that fluffy grid over every single image. Monet is ecstatic.

Do you think OpenAI will fix it, or is this the new reality?

I’ve attempted so many ways, and there is no point: organic textures are ruined, especially when vegetation is present.

r/leagueoflegends Inevitable_Pace9522

Ranged top

What would happen to ranged toplaners if they were banned from top lane?

Kayle, Gnar, Vayne, Kennen, Jayce, Quinn, Teemo and Urgot.

r/SideProject Drupalizer

Shut down my own blog years ago. Revived the parked domain as a directory of 591 indie blogs.

Backstory: this domain (awesomebloggers.com) used to be my own blog. I ran it for 2 years in the Blogging / SEO / Digital marketing space. Like a lot of indie blogs, it slowly died: I stopped publishing, traffic decayed, the writing got harder, and eventually the domain just sat parked for years. A few months ago I decided to bring it back, but as something different: an automated directory for the kind of indie blogs I used to read.

The premise: bloggers submit their URL, a scoring engine evaluates them across 7 metrics (domain authority, posting frequency, site speed, mobile readiness, content volume, SSL, design quality), and approved blogs get listed with a permanent do-follow backlink. No editorial picks, no pay-to-play. 591 blogs approved so far across 9 categories.
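
The post doesn't publish the engine's weights, so here is only a hedged sketch of how a 7-metric weighted scorer could look (the weights are made up, chosen to sum to 1.0):

```python
# Illustrative weights; the directory's actual weighting isn't stated.
WEIGHTS = {
    "domain_authority": 0.25,
    "posting_frequency": 0.20,
    "site_speed": 0.15,
    "mobile_readiness": 0.10,
    "content_volume": 0.10,
    "ssl": 0.10,
    "design_quality": 0.10,
}

def blog_score(metrics: dict) -> float:
    """Weighted sum of the seven metrics, each normalized to 0-1,
    scaled to a 0-100 score. Missing metrics count as 0."""
    return round(100 * sum(w * metrics.get(k, 0.0) for k, w in WEIGHTS.items()), 1)

perfect = {k: 1.0 for k in WEIGHTS}
print(blog_score(perfect))  # 100.0
```

Normalizing each metric to 0-1 before weighting is what lets wildly different units (DA points, posts per week, milliseconds) combine into one comparable score.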

Three things from the data that genuinely surprised me:

  • Crafts and writing blogs outscore most "design press." The Creative Penn (publishing) and Craft Gossip (multi-craft) both outrank Smashing Magazine on my metrics — both still publishing weekly after 17+ years.
  • Of the top 15 blogs in the Creative Arts category, 9 launched between 2005-2009. Almost no new blog has broken into the top tier since 2015. The platforms changed, the leaders didn't.
  • Domain authority is the universal weak point — even the highest-scoring blogs in the directory cap around DA 12. Content depth and consistency are doing the work the link graph isn't.

Where I'm stuck: 473 GSC impressions last week, 0 clicks. The long-tail queries don't convert at DA 15, and the head-term competitors (Hongkiat, Smashing, NerdWallet) defend at DA 70+. My next bets are (a) finishing the missing category roundups for topical completeness, (b) shipping original data studies from the 591-blog dataset that AI engines might actually cite.

Three things I'd love honest feedback on:

  1. Does the 7-metric scoring methodology hold up? Anything obviously missing?
  2. For working bloggers here — would you submit your own blog? What would make it a no?
  3. Is an embeddable badge real social currency in 2026, or a pointless trophy?

Site: https://www.awesomebloggers.com

I'll be in the comments all day. Honest critique especially welcome — I've been on the receiving end of "another generic blog directory" in my old blog years, so I'm not naive about what makes these die.

r/homeassistant Faistime

G6 Pro Entry doorbell feed to Nest Hub Max

I'm having a nightmare here. I replaced my Nest doorbell with the Ubiquiti G6 Pro Entry running on an NVR Instant. I only did this because others said Ubiquiti is the best and it's easy to get control in HA.

I can't for the life of me get a live feed to my Nest Hub Max when the bell is pressed. I have tried a lot and just want to know how people are doing this. What's the most common way? I must be missing something!

r/Art Single-Release-9380

Celeste mountain art, Goldenhat, digital art, 2026

r/LocalLLaMA ArugulaAnnual1765

Ran my own benchmark: Qwen 3.6 35B vs Gemma 4 26B... there's a clear winner here

Uhh, I guess Gemma 4 is so much shittier that it hallucinated this event that happened in China in 1989?

According to Qwen, nothing of significance happened at Tiananmen Square in 1989, and based on all of Qwen's benchmarks, I believe it's right.

Do you think Gemma 5 will finally patch this hallucination?!?!?!

r/leagueoflegends Yujin-Ha

Nongshim RedForce vs. T1 / LCK 2026 Rounds 1-2 - Week 5 / Game 1 Discussion

LCK 2026 ROUNDS 1-2

Official page | Leaguepedia | Liquipedia | Eventvods.com | New to LoL


Nongshim RedForce 0-1 T1

NS | Leaguepedia | Liquipedia | Website | Twitter | Facebook | YouTube
T1 | Leaguepedia | Liquipedia | Website | Twitter | Facebook | YouTube | Subreddit


MATCH 1: NS vs. T1

Winner: T1 in 29m | MVP: tbd (tbd)

Bans 1: NS rumble, bard, orianna | T1 karma, nautilus, varus
Bans 2: NS caitlyn, sivir | T1 jayce, leona
Gold: NS 49.4k | T1 65.7k; Kills: NS 7 | T1 24; Towers: NS 1 | T1 9
Dragons/Barons: NS None | T1 HT1 O2 H3 M4 M5 B6
Team KDA: NS 7-24-15 vs 24-7-47 T1

TOP: Kingen (sion, pick 3) 0-3-4 vs 2-1-9 (anivia, pick 1) Doran
JNG: Sponge (jarvaniv, pick 2) 0-6-5 vs 6-0-13 (xinzhao, pick 2) Oner
MID: Scout (azir, pick 2) 1-4-3 vs 7-3-3 (akali, pick 4) Faker
BOT: Taeyoon (ezreal, pick 1) 2-4-2 vs 7-2-6 (jhin, pick 3) Peyz
SUP: Lehends (neeko, pick 3) 4-7-1 vs 2-1-16 (karma, pick 1) Keria

This thread was created by the Post-Match Team.

r/ClaudeAI gnurpreet_

How to make a Product Promo Video with Claude Design (Prompts inside)

I just made this product promo video completely with Claude code. Explaining the process here with the prompts. I also have a generic prompt at the bottom that you might want to use.

Step 1: Describe your video in scenes

Don’t think in “design.” Think in scenes — like a director giving a shot list to a crew.

This is the first prompt I used:

Make a slick product intro video for my product https://claudevideoexport.com
- Scene 1: Text animation — "How to get MP4 from Claude Design Animation"
- Scene 2: Show a small browser window with "Claude Design" open. Pan to the top right with "Present" link and "Share" button. Show a mouse clicking "Present" → dropdown appears → mouse clicks "New Tab". New tab opens and the URL is copied. URL reads: "https://2d0b2821-9f01-40b1-b0a6-2f4db6601a33.claudeusercontent.com/v1/design/projects/2d0b2..."
- Scene 3: Switch to claudevideoexport.com showing a form. URL is pasted into the form and "Export" is clicked.
- Scene 4: Fast-moving progress bar going from 0% to 100%. Text reads "Rendering Video (0/2000 frames)" — counter increments to 2000/2000.
- Scene 5: A file icon labeled "video.mp4" pops up and downloads.
- Scene 6: video.mp4 gets uploaded to YouTube, then Instagram, then Facebook.
- Scene 7: Text animation — "Make Claude Design Animations → Get MP4 using ClaudeVideoExport.com"
Use the look and feel of https://claudevideoexport.com. UI components should look like they belong to that site.

What makes this prompt work: It’s sequential. Each scene has a clear action. The UI details — the URL string, the button labels, the progress text — give Claude enough to build something that looks real rather than generic. And anchoring to a real website URL lets Claude pull design references.

Step 2: Refine with a follow-up pass

First output was close but not quite right. The platform upload scenes looked static, and the overall pace was slow. One more prompt:

- The file upload to YouTube, Instagram & Facebook should look like the file being dragged and dropped onto those sites in a browser. Show a basic drag-and-drop UI element for each site matching their brand colors. Get the correct icons for each platform from the web.
- Make overall scene transitions faster and slicker. Keep the whole thing under 40 seconds.

That’s it. Two prompts total. The second one fixed exactly what needed fixing.

The lesson: Don’t try to get everything right in one massive prompt. Get a working draft, watch it, note what’s off, fix those specific things. Claude Design responds well to targeted correction.

The generic prompt you can steal for any product

Make a 30–45 second product intro video for [YOUR PRODUCT URL].
Scenes:
- Scene 1: Text animation — "[One-line problem statement]"
- Scene 2: Show the old/painful way of doing this. Use a browser window. Keep it simple and recognizable.
- Scene 3: Introduce [PRODUCT NAME]. Show the core action (the thing the user actually does — paste URL, upload file, click button, etc.)
- Scene 4: Show the product working. Progress indicator, loading state, or live output — whatever fits.
- Scene 5: Show the result. File icon, dashboard, confirmation screen — make it feel satisfying.
- Scene 6: Show where the result goes. Social platforms, email, Slack, client — wherever the output lands.
- Scene 7: Text animation — "[Core value proposition in one line]"
Use the look and feel of [YOUR PRODUCT URL] for all UI components. Colors, fonts, and style should match the site. Keep transitions smooth and fast-paced throughout.

Then follow up with one refinement prompt targeting anything that’s off.

To export the animation into MP4, you can use my tool - claudevideoexport.com
Audio was added to the video outside of any of these tools.

r/OldSchoolCool powerplay6699

King Michael of Romania smoking a cig and playing with a Russian machine gun in Sevastopol, 1942

r/Adulting One-Turnover6462

Dating sites❣️

Do even good looking people not get matches on dating sites?

r/SideProject ProdigyPadawan2112

I kept wasting 3 hours validating ideas I'd abandon in 3 days — so I built a fix

I've had this embarrassing habit as a builder: I'd get excited about an idea, spend half a day Googling market signals, manually checking competitors, trying to gut-check demand — and then either kill the idea or start building without being sure.

The process was slow, inconsistent, and didn't scale. So I built AIdeator — a local-first, AI-powered idea validation engine.

Here's what it actually does:

  1. Pulls real market signals and analyzes demand, competition, and risk
  2. Scores your idea 0–100 across key dimensions
  3. Benchmarks against 12 reference SaaS products so you get a percentile rank, not just a raw number
  4. Outputs a clean markdown report you can act on immediately
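
The percentile step (item 3) could be sketched like this; the reference scores below are made-up placeholders, not AIdeator's actual 12-product benchmark set:

```python
# Hypothetical scores for the 12 reference SaaS products (illustrative only).
REFERENCE_SCORES = [22, 31, 38, 45, 52, 58, 61, 67, 73, 79, 85, 91]

def percentile_rank(score: float) -> float:
    """Share of the reference set your idea's 0-100 score beats, as a percent."""
    beaten = sum(1 for s in REFERENCE_SCORES if s < score)
    return round(100 * beaten / len(REFERENCE_SCORES), 1)
```

With these placeholder references, an idea scoring 60 would beat 6 of the 12 benchmarks, landing at the 50th percentile rather than just a raw "60".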

The privacy model is something I'm particularly proud of: you can run it fully offline with Ollama + DuckDuckGo (zero data leaves your machine), or plug in OpenAI/Anthropic/Mistral + Tavily/Exa for deeper signals. Your call.

It's open-source (MIT), free, and you can be up in under 2 minutes:

pip install aideator

Genuine question for this community: What's your current process for validating a product idea before you start building? Do you trust gut feeling, do structured research, or something else? Drop it below — I'm actively improving this based on how real builders think.

r/aivideo Sonic_Relics

1890 London: Episode II -Midnight Solitude [Music by Mureka]

r/ChatGPT CarefulHamster7184

Importing my convos into ChatGPT

Does the OpenAI app really have no function for importing conversations from other AIs? (Yes, I know, it's ChatGPT too.)

I want to import my Copilot convos into the native ChatGPT platform and be able to continue my conversations there (I'm just less comfortable with the Copilot app and web UI). Is the only way to do it manually, thread by thread? And won't there be confusion about whose words are whose? I've never looked into the Copilot export files; they're in a wild MS Office format, and it scares me 😄. Thanks in advance.

r/leagueoflegends One-Necessary4423

Looking for a team for flex/duo

Hi!

I'm an EUW player looking for a team or teammates to play with regularly.

Rank: Gold

Main role: Support

Availability: evenings + weekends

Goal: tryhard, but in a good atmosphere

Feel free to add me or send me a message! 🙂

r/LocalLLaMA WhatererBlah555

Help with MI50 and llama.cpp/ROCm 7.2

I have an MI50 that I use with llama.cpp/Vulkan; however, some models run quite slowly, so I'd like to try the ROCm backend, but no matter what I try it doesn't work.
Downloading the missing files from the Arch Linux package doesn't work.
Downloading the files from countryboycomputers doesn't work either.
Building rocBLAS also doesn't work.

I'm not a Linux newbie, but at the same time I don't have days to spend on this issue... if someone has a guide that actually works to make llama.cpp run an MI50 with ROCm - ideally 7.2 - I would appreciate that.

r/TwoSentenceHorror LongSloth2023

I always hate playing hide & seek...

But it's the only way my mum keeps me safe once my drunk, blind stepfather wakes up

r/whatisit Zaimous

Flying from Louisiana to LAX and saw this midway

r/explainlikeimfive VintageVogue1947

ELI5: Why does it hurt to get soap in your eyes but not in your mouth?

Considering both areas are covered in mucous membranes and there are a lot of nerves and pain receptors in both areas, why does one hurt but not the other?

I found an old post mentioning that soap in your eyes hurts due to the different pH values, maybe that has something to do with it? I know the eyes aren't part of the same "system" as ears/nose/mouth, but I would imagine getting soap in your nasal cavity would hurt in the same way as the eyes, so why not in the mouth?

r/aivideo 0xIAmGame

Neha Nikhlani Talks About Legends of Cricket

r/Adulting ConstructionSmart216

My Go-To for Fast-Acting Relief: Good Vibes THC Syrup

After trying various THC products, I’ve settled on Good Vibes THC Syrup for its fast-acting effects. Unlike edibles, where you have to wait longer, this syrup kicks in quickly and provides more control over the experience.

What I love is the versatility—you can easily add it to water, a drink of your choice, or just take it on its own. It’s helped me get the right dose without the guesswork, especially for days when I need quick relief.

If you're someone who’s used to waiting a while for edibles to take effect, I’d definitely recommend giving this a try.

Has anyone else made the switch from edibles to syrup for that fast-acting relief?

r/AI_Agents SoluLab-Inc

Open access AI for clinicians just dropped - that changes more than it solves

Making ChatGPT free for clinicians sounds like a clear win. Less admin work, faster documentation, quicker access to information.

But the bigger shift is how it enters workflows.

This moves AI from controlled, system level tools to something clinicians can use individually, anytime. That’s a very different model from how healthcare tech is usually introduced.

Which means consistency, validation, and accountability don’t just sit with institutions anymore - they start shifting to individuals.

Benchmarks and accuracy scores matter, but real-world use is messy. Edge cases, incomplete context, and subtle errors don’t show up in controlled evaluations.

The upside is obvious. The question is whether healthcare is ready for AI that scales through access rather than control.

Does this reduce friction, or just redistribute risk?

r/singularity Expensive_Grape6765

The Significance of Google's recent TPU 8t and TPU 8i

Cost & Performance Efficiency

  • Training Cost-Performance (8t): +170% to +180% gain (2.7x–2.8x)
  • Inference Cost-Performance (8i): +80% gain
  • Training Power Efficiency (8t): +124% gain in performance-per-watt
  • Inference Power Efficiency (8i): +117% gain in performance-per-watt

Networking & Latency

  • Data Center Network Bandwidth: +300% gain (100 Gb/s to 400 Gb/s)
  • Inference Network Latency: -56% reduction
  • Network Routing Distance: -56% reduction (16 hops down to 7 hops)
  • Standard Superpod Chip Count: +4.2% gain (9,216 to 9,600 chips)

Memory

  • On-Chip SRAM (8i): +200% gain (3x capacity)
  • HBM Capacity (8i Inference): +50% gain (192 GB to 288 GB)
  • HBM Capacity (8t Training): +12.5% gain (192 GB to 216 GB)
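
For anyone double-checking the figures above, they are all plain percent-change arithmetic over the stated old/new values:

```python
def pct_gain(old: float, new: float) -> float:
    """Percent change from old to new, rounded to one decimal place."""
    return round((new - old) / old * 100, 1)

# Checks against the listed numbers:
# pct_gain(100, 400)    -> 300.0  (network bandwidth, Gb/s)
# pct_gain(9216, 9600)  -> 4.2    (superpod chip count)
# pct_gain(192, 288)    -> 50.0   (8i HBM capacity, GB)
# pct_gain(192, 216)    -> 12.5   (8t HBM capacity, GB)
# pct_gain(16, 7)       -> -56.2  (routing hops, reported as -56%)
```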

Impact on Google's SOTA - Gemini 3.1 Pro Preview

  • For Gemini 3.1 Pro today, the TPU 8i means cheaper (~50% cost reduction), faster, and more responsive APIs with vastly improved long-context handling.

Impact on Future Models

  • For future Gemini models tomorrow, the TPU 8t removes the data-center bottlenecks, unlocking the compute necessary to train the next frontier of trillion-parameter, deeply multimodal AI systems.

r/LocalLLaMA 9gxa05s8fa8sh

AMD has invented something that lets you use AI at home! They call it a "computer"

r/whatisit Used_MacBook_Pro

What happened here? One of the rubber stickers seems to have moved on its own

It's glued, so maybe a manufacturing defect? But if you look closely there's a dark line going from the corner. It's not possible that I accidentally moved it.

r/ChatGPT Rohkha

Which AI has the best price/value and features to serve as a "teaching assistant" or support for a teacher?

I have been trying various AIs so far and I gotta say, I'm torn. I don't want or need the AI to do the entire job for me, but I guess my prompting also plays a role in all of this. So I'd lowkey also appreciate resources and methods to optimize prompting. I often find myself spending so much time prompting, only to get a subpar result. I keep making similar prompts, but even just copy-pasting and modifying the prompts for different classes and age groups takes so much time that, in the end, I just give up and end up doing the stuff myself. My usage of AIs is currently very limited because of that, and I wanted to use the summer vacation to improve and work on that.

I have tried various models, but a lot of them lack features that really end up being dealbreakers for me. Here are the features I'm looking for:

- accurate image, diagram, and schematic generation.

- first-draft worksheets, where the generated product is 60-70% done. I'd have to adapt the wording and the design, and verify information. I don't want to have to redo almost everything.

- I want it to create good copy-paste output or even a proper file (Word, Google Docs, PowerPoint, …), instead of my having to add everything in manually, where the time I would have gained by not typing it myself is lost correcting the visuals.

- be able to properly read and interpret scanned documents. Read the text inside images properly and use that information. Example: I scan a school manual’s chapter, have the AI be able to read and use the text on those scanned pages.

Here’s a list:

*Gemini (currently using)*

I remember enjoying it early on with NotebookLM because it helped me parse and get through board game rulebooks fairly quickly and helped me with the teach, or even real-time clarifications.

I currently get to try 2 months for 8€, so that's an okay investment I'd say. I kinda enjoyed the image generation feature. It felt fairly easy to just improve my scans and pictures. If I found a schematic, diagram, etc. that I needed in a different language, or wanted to recreate a thematic map, it did exactly what I wanted, and it did so the most easily.

However, after multiple uses, I realized it would start messing up similarly to ChatGPT (free version only), where it would absolutely not get what I wanted it to do. Say there's a diagram with German words and French translations; I asked it to recreate the diagram but remove all the French words from it… impossible to make it remove those words, despite it having been able to do so with other pictures. Anyway. Basically: Gemini is currently relegated to being my picture-quality-improvement app with an integrated translation feature (when it works).

*school specific AIs*

MagicSchool, The Teacher, Eduaide… those feel like borderline Temu AIs.

Doing more precise prompts, giving more context, more references, including teaching programs, etc. nets me on average worse results than the outcome I'd get out of traditional AIs. The "worksheets" are unusable imo. Again, I get that my prompting is clearly not there yet, but Claude, Gemini, and ChatGPT all had better results with less input than these.

The only thing they have going for them is that "TikTok organization-porn" UI design. It looks sleek and educational and good, but the output looks bad most of the time. Image generation looked terrible and was slow. Also: most of these are focused on and built around American school systems and programs, and they seem just not adapted to foreign/European programs.

*Claude*:

I liked Claude's output. But… it has pretty bad dealbreakers for me, and I can't and don't want to have to use and pay for multiple AIs to get the job done.

Claude generated proper files for me, yes, but it cannot read through scanned text properly or even use/generate images. At least it couldn't 2 months ago.

Also: Claude still needed some back and forth, and paying 20€ to have to wait multiple hours after every 2-5 prompts, depending on complexity, is just not going to work for me. And I'm not paying the next tier, priced at 80+€, to get enough prompts per day.

Sorry, this was super long, I get it. Just wanted to know if anyone has suggestions, tips or other info to share on how to help.

r/leagueoflegends Okiannn

Just got a leaver penalty for going piss.

I just entered a game and thought I would have maybe a minute to chuck a wizz. We entered the game almost immediately and I was AFK for the first minute. I'm gonna get downvoted for this but idc. I know it's my fault, but literally 1 minute in and a remake was already being requested. I had already TPed to lane and missed 0 CS, but 2/5 people voted yes and the game remade. I actually don't understand why this happens. I was in the game, I hadn't disconnected, and by the time the remake went through I was already back. I just feel like you shouldn't be able to remake if all 5 people can vote. Not only that, what difference would it make if we remade at 1 minute vs 3 or 5 minutes in? Back when minions didn't spawn 10 seconds after the game started, you had at minimum 1 minute for everyone to be ready and get into lane with no issues. But now, as soon as it hits a minute, it's just over.

r/comfyui Deep-Huckleberry-752

[Open Source] 1,446 trending AI image prompts for GPT Image 2 & NanoBanana, system prompt & MCP included

Been deep into prompt optimization for a while now. The frustrating thing about X is you scroll past stunning AI images all day, but barely anyone shares the actual prompt — and copying the description never gets you the same thing.

So I pulled 1,000+ of the most-liked prompts from X and looked for patterns. Three things kept showing up:

  1. Negative constraints still matter — telling the model what NOT to include actually does work
  2. Multi-sensory descriptions help — beyond visuals, add texture, temperature, even smell
  3. Group by scene type — portrait, product, food prompts each have a different shape

If you nail those three, you don't really need JSON-formatted prompts at all.

I turned the patterns into a system prompt. Feed it something like "a bowl of ramen" and it expands into a structured prompt. Works in ComfyUI, n8n, GPTs, anywhere that takes a system prompt.
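
As a toy illustration of those three patterns (scene type, multi-sensory detail, negative constraints), a prompt expander could look like this. This is my own sketch, not the repo's actual system prompt:

```python
def expand_prompt(idea: str, scene: str,
                  senses: list[str], avoid: list[str]) -> str:
    """Expand a short idea using the three patterns: scene grouping,
    multi-sensory description, and negative constraints."""
    parts = [
        f"{scene} shot of {idea}",
        "sensory detail: " + ", ".join(senses),       # beyond visuals
        "do not include: " + ", ".join(avoid),        # negative constraints
    ]
    return ". ".join(parts)

# expand_prompt("a bowl of ramen", "food",
#               ["steam rising", "glossy broth"], ["text", "watermarks"])
```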

On categories:

Early on the tags were a mess — content topics (Photograph / 3D / Product / Food / Poster / Design) mixed with prompt style tags (JSON) and meta tags (App / Other / Girl). A single prompt would often carry three or four tags and the dataset got hard to browse.

I redid the categorization based on what the final image actually looks like and dropped the cross-cutting tags entirely. Six content categories left:

  • Photography (533) — portraits, street, photorealistic
  • Illustration & 3D (370) — illustrations, 3D renders, CGI, icon sets
  • Product & Brand (239) — product shots, brand visuals, packaging
  • Food & Drink (156) — food, recipe visualizations
  • Poster Design (146) — movie/event posters, typography
  • UI & Graphic (52) — infographics, storyboards, UI mockups

The last two barely existed before GPT Image 2 — that's where it's strongest.

On the MCP:

Besides the JSON, there's a companion MCP you can drop straight into Claude Code / Cursor / VS Code. Two things it does:

First, natural-language search. Say "find me a few product photography ideas" in Claude Code and it calls search_gallery, pulls a handful of prompts back with thumbnails. See one you like, follow up with "give me the full prompt and reference images for #3" and it calls get_inspiration to return the source text and all image URLs.

Second, generation hookup. Once you've got an API key set up, you can say in the same conversation "rewrite this with a Japanese vibe and generate it" and it'll apply the system prompt rewrite rules, then call generate_image. The whole loop happens in one chat — find, rewrite, generate, no tool switching.

Local ComfyUI works too. Setup guide is in the repo, and once it's running it's all free.

Bumped the dataset for GPT Image 2's release. Current count: 1,446.

  • GPT Image 2: 298
  • NanoBanana: 1,148
  • Midjourney V7 set is small, still building

Each entry has the full prompt text, generated image URLs, author, likes, views, and categories. JSON, CC BY 4.0, ranked by X likes within each model.

The GPT Image 2 cut leans toward posters, typography, and multi-panel storyboards. NanoBanana goes the other way — mostly portraits and product shots, often written in JSON.

Dataset and system prompt: https://github.com/jau123/nanobanana-trending-prompts

Companion MCP: https://github.com/jau123/MeiGen-AI-Design-MCP

Live gallery: https://www.meigen.ai

Featured in Awesome Prompt Engineering (5.5k stars).


r/SideProject Serious-Bake-6460

I built 5,500+ free calculators – NovaCalculator

Been working on NovaCalculator for a while now.

It has 5,500+ free calculators for finance, health,

math, science and engineering.

No ads. No signup required. Runs in your browser.

https://www.novacalculator.com

Would love feedback!

r/explainlikeimfive eposseeker

ELI5: Many worlds interpretation of quantum mechanics. Where do probabilities come from?

So, let's say I detect the spin of an unmeasured electron. I can measure it as positive or negative, with certain probabilities. If the many worlds interpretation was to be true, then both options exist as "parallel" universes. How come we can have a probability of existing in one vs the other? What makes one parallel universe "intrinsically" more probable?

r/AskMen The_BigBrr

How do guys comfort a menstruating girl?

So, I have some female friends, and they are calm and close with me. They told me they were having their periods, and I don't really know how to react to that, so help me out so I can comfort them and not leave a bad mark on our friendship.

And yeah, I might have a girlfriend soon, so I bet it's different how you treat your girl and how you treat your friends, so help me out.

r/Adulting crunchyyitachi

looking to foster a dog or a cat for a few weeks

Hi everyone, I'm planning to adopt soon; before that, I'd like to foster a pet just to see if I'm capable of taking care of a full-time companion. I've tried to reach out to animal rescue organisations such as CUPA, CARE, and Bangalore Cat Squad but haven't received any responses despite multiple follow-ups.

Please guide this first time aspiring pet parent.

Thank u :))

r/yesyesyesyesno LabiaMinoraLover

723 yeses

r/LocalLLaMA DominusIniquitatis

No, nothing special, just a tiny local language model playing a game it itself wrote.

"They're just stolen Wikipedia article regurgitators!"

True, brother, true. /s

P.S. Yep, it made it to a score of 10 fairly quickly... in a field that changed shape after a score of 5.

r/painting Rohit_Strokes

Watercolor painting of a flower vase, painted a few months back. Hope you like it ☺️

r/findareddit Litwick_Lampent

Trying to find a sub to ask what's wrong with this one thick eyelash I have.

Yes, kinda weird. I know. But I'm curious as hell and can't find anything about something similar.

r/leagueoflegends twocool_

Know the enemy team's positions in lobby? (lobby cheese x quests)

Since role quests, there is a high-reward/no-risk lobby cheese possibility (for example, Sion mid and Anivia top). I remember that they removed the Armor/MR shards specifically because having the wrong shard would make your lane unplayable. I think the current state is the same thing but worse, since you can't really swap. At that point I don't see the problem with showing the enemy positions in the lobby. What do you think?

r/WouldYouRather Dazzling-Antelope912

Would you rather sleep in a luxury four-poster, king-size bed with speakers blasting “Baby Shark” at max volume, or sleep on a cold, hard stone floor with no blanket or pillow during a thunderstorm (also there’s no roof)?

You can’t turn the speakers off. Even if you turn them off at the mains they will keep on playing. You can’t wear earmuffs, noise-cancelling headphones, earbuds, or any ear protection of any kind.


r/DecidingToBeBetter Academic_Hearing5136

How do I stay consistent with anything, especially my fitness goals?

Hey all, I have been trying to stay consistent at the gym, but somehow something always happens in my life and the plan I made just gets ruined. I feel too much guilt to open my plan, and hence I tend to abandon it. I tried getting a health & fitness coach, and it really helped me, but at $199/mo it was too expensive for me, and the coach was often not available, although the daily check-ins were great.

What helped you to achieve this health coach-like accountability? Is there something I can do to get better?

r/homeassistant addiechavez

My Rpi4 corrupted its SD card for the last time

Had a raspberry pi 4 running home assistant for two years. SD card started acting up, updates were taking forever, random freezes every week.

Picked up a Geekom Air12 for under $300 and installed HAOS on it. Restored from backup; the whole thing took two hours. Boot time went from 3 minutes to under 30 seconds. 40 integrations and it doesn't break a sweat.

Probably overkill if you're just running a few devices, but for me the reliability difference was immediate.

What finally pushed you off the pi, SD card, performance, or something else?

r/ClaudeCode bitbutter

By my estimate, my Q1 work used at least 30% less energy thanks to AI

r/leagueoflegends Zpartaz10

Low-prio queue popup needs to go away

Just so you know, I'm all for punishing people who are always trolling and AFKing games and other behavior, but please just remove the pop-up, because I do think it's redundant to have it in front of me whenever I have low-priority queue.

r/ClaudeAI Intelligent-Cat-9726

Claude Mac app, no voice input?

The iPhone Claude app lets you hold to record and transcribes the whole thing at once. The Mac app has nothing, only macOS system dictation, which goes word-by-word and gets things wrong.

Anyone found a workaround, or heard if Anthropic plans to add it?

r/findareddit cravingpavbhaji

which subreddit for discussing theories on unsolved cases ?

r/ollama Aware-Excitement8215

Does anyone know the parser and renderer for qwen3.6 in the ollama modelfile?

Trying to edit the qwen3.5 modelfile to match qwen3.6. I modified it to 3.6 and ollama does not take it. Keeping the 3.5 parser and renderer seems to work; I just want to make sure it is the right way...

Can anyone paste your ollama qwen3.6 modelfile? Thank you!

ollama show --modelfile

r/Adulting Ambitious_Let_2320

Free coaching

Hey all 👋

I'm studying for my life coaching diploma and looking for people who would like free coaching in exchange for testimonials.

I have a really strong background in education and mental health, so if you would be interested, I would love to hear from you!!

r/findareddit cravingpavbhaji

Which subreddit can I ask for art advice in + post a pic?

I have a painting I made that I'm not happy with; I just wanna ask what I can add to it to improve it.

r/Art Rohit_Strokes

Flower vase,Rohit_Strokes, Watercolor,2026

r/personalfinance Yue1337

Future Planning Please

So, I've always disliked money. Disliked managing it, the hustle, the bustle, and the idea of debt and credit. None of it interested me. For that reason, I don't have credit cards and have only taken jobs that I felt were more soul-affirming than money-earning.

Last year, my partner got pregnant, so I decided to try to get a job where I could make a bit more in case my daughter wanted to explore hobbies and so on. However, the job I've gotten is insane and I'm feeling very fortunate. My earning potential is essentially uncapped. It's not gonna be millions, but I could be making £100k per year if I wanted.

I want to ensure I can retire comfortably, preferably early if possible. Nothing major, I want a simple life where I can just go into the countryside and bird watch or sketch.

We live within means of our basic salaries and I want to use the extra I make to prep for retirement and give my daughter a good start in life.

Bit about me:

Male, 33

Mortgage of 230k (our only debt)

6 months savings buffer.

The advice I’ve gotten is all over the place, and some guidance would be much appreciated.

r/ProductHunters kentich

Stress-free frosted glass video meetings | Product Hunt launch.

Hey, friends! Please support my Product Hunt launch.

MeetingGlass is a video app where everyone sees everyone through a layer of virtual frosted glass. Hang out on video without feeling "on display." Free, no sign-up, no meeting limits, 60-second setup.

Your video call doesn't end. It transitions. You activate virtual frosted glass and go back to reading, working, or making tea. They go back to their day. Through that frosted glass between you, you can sense each other's presence.

r/ChatGPT Wikileaks_2412

Copilot just 9x'd Sonnet and 27x'd Opus and teams have no idea

The multiplier table GitHub quietly updated last week is the first visible crack in a subsidy model that was never sustainable.

Quick context for anyone unfamiliar: Copilot plans give you a monthly pool of "premium requests." Each model has a multiplier that determines how fast you drain it. Until recently, Opus 4.6 had a 3x multiplier. It's now 27x. Sonnet 4.6 went from 1x to 9x.
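
The practical effect is simple division over your monthly pool. The 300-request pool below is a hypothetical plan size for illustration, not a quoted figure:

```python
def requests_available(pool: int, multiplier: float) -> int:
    """How many model calls a premium-request pool supports at a multiplier."""
    return int(pool // multiplier)

# With a hypothetical 300-request monthly pool:
#   Opus at the old 3x:   100 calls -> at the new 27x: 11 calls
#   Sonnet at the old 1x: 300 calls -> at the new 9x:  33 calls
```

In other words, the same pool now covers roughly a ninth of the Opus usage it used to.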

But the multiplier table is just the symptom. The actual disease is that the AI companies have been eating the difference between what compute costs and what you pay.

Anthropic is genuinely compute-constrained right now. Claude Code, agentic workflows, long-context sessions, these eat 10-100x more tokens per user than a simple chat completion. The infrastructure to serve that demand takes 18-24 months to build. Meanwhile, week-over-week compute costs for GitHub Copilot nearly doubled since January. Microsoft and Anthropic have been absorbing that gap. They're done absorbing it.

The 27x multiplier is closer to honest pricing.

Millions of employees have Copilot provisioned as a corporate benefit by IT departments that have zero visibility into model-level consumption. No quota dashboard, no model governance. Those employees have been running Opus on everything (code review, boilerplate, one-line completions) because why wouldn't you use the best model?

On June 1, GitHub moves to full usage-based billing. The multiplier hike is just the warning shot; what comes next is actual dollar charges hitting corporate cards, traced back to individual usage patterns that nobody thought to govern.

Some engineering manager is going to have a very bad Tuesday in early June explaining to finance why the AI budget is 15x over forecast.

Every major provider is running the same playbook right now. OpenAI, Anthropic, Cursor - the flat-rate era is being unwound in real time. The pricing structures being put in place now are designed to make heavy agentic usage reflect its true cost. If your team's workflow depends on treating frontier model access as essentially unlimited, that assumption has an expiration date and it's soon.

The free lunch is over. Adjust your defaults before June 1!

r/ClaudeAI FreedomNo4896

Do you use any custom instructions for Claude? Mainly for writing/response style

r/TwoSentenceHorror Feeling-Influence691

The Forest of Dean was filled with abandoned mines from the days of the Industrial Revolution, and I couldn’t wait to find them all.

What I didn’t expect to see was one particularly isolated mine surrounded by bones belonging to local animals, and the grass being worn away to dirt near the entrance, as if large objects had been repeatedly dragged down through the entrance…

r/leagueoflegends LeviDemiGod

How to improve at toplane

Hi guys, I've played League since I was a child, and these last 2 years I started taking it more seriously. I reached Master 300 LP this season, but I don't feel like I deserve the rank; maybe elo inflation or something. Now I am D1 and I feel stuck. I lose every matchup, and even when I win I feel like I'm behind. I can't lane properly, can't focus. I want to be better but I don't know how to improve.

r/explainlikeimfive 7WOC

ELI5 Why can't people who object to being filmed in public by YouTubers, live streamers, etc. sue for use of their likeness?

It is common to see people upset about appearing in live streamers' and vloggers' content when they are filming in public. Their concerns are usually met with the response that there is no expectation of privacy in public settings. While that is true, if the content is monetized, aren't they entitled to compensation for the use of their image and likeness? Like how reality shows used to get waivers from everyone when filming in public?

r/AI_Agents mwasking00

Can we be honest? 90% of "AI Agents" right now are just expensive, glorified loops.

I’ve spent the last month testing every "autonomous agent" framework I could get my hands on (AutoGPT, CrewAI, LangGraph, etc.), and I feel like I’m taking crazy pills.

Every demo shows them "planning a vacation" or "coding a snake game," but the second you give them a non-linear task in a production environment, they either:

Hallucinate into a recursive loop.

Burn $20 in API credits doing absolutely nothing.

Forget the original goal by step 4.

Are we actually seeing real utility anywhere, or is the "Agent" hype just a way to sell more tokens? I want to be proven wrong—tell me a use case where an agent actually saved you time this week without constant babysitting.

r/artificial chunmunsingh

‘The cost of compute is far beyond the costs of the employees’: Nvidia exec says right now AI is more expensive than paying human workers

Nvidia’s vice president of applied deep learning, Bryan Catanzaro, recently stated that for his team, “the cost of compute is far beyond the costs of the employees,” highlighting that AI is currently more expensive than human workers. This challenges the narrative that widespread tech layoffs (including Meta’s planned cut of ~8,000 jobs and Microsoft’s voluntary buyouts) signal an imminent replacement of humans by AI. An MIT study from 2024 supports this, finding that AI automation is economically viable in only 23% of roles where vision is central, while humans remain cheaper in the remaining 77%.

Despite heavy AI investment—Big Tech has announced $740 billion in capital expenditures so far this year, a 69% increase from 2025—there is still no clear evidence of broad productivity gains or job displacement from AI. AI spending is driving up costs, with some executives like Uber’s CTO saying their budgets have already been “blown away.” Experts describe the situation as a short-term mismatch: high hardware, energy, and inference costs make AI less efficient than humans right now, though future improvements in infrastructure, model efficiency, and pricing models could tip the balance toward greater economic viability in the coming years.

r/TwoSentenceHorror Nessieinternational

When the eco-activists protesting outside my company’s headquarters were shot dead by the police, my staff and I celebrated.

I never understood why they fought so hard to save the last humans.

r/SideProject PhilosopherNo3778

$0 → $69 from a single Reddit post. No ads. No investors. No pitch deck.

I posted on Reddit about finding forgotten subscriptions on my bank statement.

Two strangers read it, tried my app, and paid for it. No ads. No pitch deck. No growth hacking.

One bought a lifetime deal at $49. One bought a year at $20.

Total revenue from one Reddit post: $69.

The app is SubChecks - a subscription tracker that shows your real monthly spend and reminds you before renewals hit. Built it because I was paying $160/month more than I thought.

Free demo (no signup): Demo

Still at the very beginning. Posting the real numbers as they change. Next goal: $200 MRR.

r/leagueoflegends bottich_

Honor level drops night before season ends...again

My honor level dropped over night from 5 to 4 after being level 5 for like 98% of the season. I wouldn't mind if it would be the first time this happened, but it isn't. And neither is the second time.

Now I wonder if there are people who had this happen, too. Is Riot collecting all the bad reports until the end of the season and evaluating right before end of season? Do we know anything how often and when evaluation happens?

r/leagueoflegends Least-Error8711

Goodbye Demacia Rising

It was entertaining, definitely redundant after a certain point, and the ending could have been better. Anyway, I really enjoyed it and I'll miss it a bit... and the banner was pretty cute.

Good job, development team! I hope that in the future we can play something similar or even a more complete sequel; I would definitely pay to play a Runeterra Civilization-style game.

r/AbstractArt TephroGeo

I've begun feeling discouraged in my work and directionless. Here's an experimental sketch I tried as a result.

I don't know how to take any of my work to the next level, or make it feel complete or less redundant while staying stylistically compatible.

So I took to my sketchbook and jotted this.

Is it a direction I want to go in?

Idk. I've always hated coloring beyond small embellishments, having either no understanding of it, not liking my pieces when colored, or just ruining them entirely, but I feel like I've used minimalism as a crutch.

r/Art Ok_Knowledge2340

Black time, Wang, Oil painting on canvas, 2026

r/personalfinance Old-Bandicoot3253

Stocks VS Bank deposit

Stocks

The long-term average annual return for the U.S. stock market, typically measured by the S&P 500, is approximately 10% before inflation. When adjusted for inflation, the real return is generally closer to 6%–7% per year. These figures represent long-term averages; annual performance frequently fluctuates significantly above or below this average.

Bank deposit

Although the average annual return on U.S. bank deposits is much lower, I know banks in other countries that offer annual returns similar to U.S. stocks, around 9-10%. I am talking about legitimate banks with years of history, not some scam. Deposits up to a fixed amount are even guaranteed by the government, so if something happens to the bank, the government partially covers you.

Considering this, has anyone thought of putting money in safe bank deposits internationally to get a fixed, guaranteed annual return, rather than riding the stock market's ups and downs?

I want to emphasize that the deposit is made in another currency, so there is some exchange-rate risk. In my case, however, there was no major change over the last decade, and the rate even moved in my favor last year, giving more benefit in USD terms.
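To see what those averages imply, here is a rough compounding sketch using the figures from the post. The $10,000 principal and 20-year horizon are arbitrary illustrations, and FX movements on the foreign deposit are ignored.

```python
# Toy compounding comparison using the rates quoted in the post.
# Principal and horizon are illustrative; FX risk is deliberately ignored.

def compound(principal: float, rate: float, years: int) -> float:
    """Future value of a lump sum at a fixed annual rate."""
    return principal * (1 + rate) ** years

principal, years = 10_000, 20
for label, rate in [("US stocks, nominal ~10%", 0.10),
                    ("US stocks, real ~6.5%", 0.065),
                    ("Foreign deposit ~9.5%", 0.095)]:
    print(f"{label}: ${compound(principal, rate, years):,.0f}")
```

The nominal gap looks small per year but compounds into a large difference over 20 years, which is why the FX risk on the deposit matters so much: one bad currency decade can erase the entire rate advantage.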

r/SideProject NextIsOnMe_

What is the best way to find a UGC creator for a shoutout video?

Hello, I am building NextIsOnMe an app that turns generosity into real connections

I am looking for a micro-influencer, UGC creator, etc. for a (paid) shoutout video about our product. I can't believe this would be so challenging.

Is there any effective way to do it?

r/leagueoflegends AliceMagical

Statikk Shiv & Sheen

I tried it in the practice tool; Shiv's lightning does not apply Sheen's on-hit damage. Is this a bug or not? I was thinking of building a combo of Shiv and Lich Bane.

r/AI_Agents ChatEngineer

Consistency is not reliability in agent evals

Consistency is a normal-conditions metric. Reliability is a stress-conditions metric.

An agent can keep the same tone, structure, and response pattern for hundreds of runs, then fail the first time context goes stale, a tool is unavailable, latency shows up, or instructions conflict.

The better eval question is not: does it behave the same?

It is: when it cannot behave normally, does it preserve the right invariants?

For agents, I care less about surface stability and more about what survives under shift:

  • does it stop before making unsafe partial writes?
  • does it preserve user intent when context is stale?
  • does it degrade transparently when a tool fails?
  • does it notice conflict before optimizing the wrong objective?

Style consistency is easy to observe. Reliability only shows up under pressure.
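As a concrete illustration of the idea above, a stress-condition eval can inject a failure and assert invariants rather than surface similarity. The `Agent` here is a toy stand-in, not any real framework's API.

```python
# Toy stress eval: inject a tool failure and check invariants
# (no partial writes, transparent degradation) instead of output similarity.

class ToolUnavailable(Exception):
    pass

def failing_tool(_query: str) -> str:
    raise ToolUnavailable("search backend down")

class Agent:
    """Minimal stand-in agent with one tool and a side-effect log."""

    def __init__(self, tool):
        self.tool = tool
        self.writes = []  # side effects we must never half-commit

    def run(self, task: str) -> str:
        try:
            result = self.tool(task)
        except ToolUnavailable as exc:
            # Invariant: degrade transparently, make no partial writes.
            return f"DEGRADED: could not complete '{task}' ({exc})"
        self.writes.append(result)
        return result

agent = Agent(failing_tool)
answer = agent.run("summarize ticket #42")
assert answer.startswith("DEGRADED")  # failure is surfaced, not hidden
assert agent.writes == []             # no unsafe partial write happened
print("stress eval passed")
```

A consistency suite would pass hundreds of happy-path runs without ever exercising either assertion; the stress eval is the only place these invariants get checked.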

r/SideProject wilbertliu

I built an AI agent to fix technical support issues faster

I'm fixing my own problem with it actually :D

I built Rinhelp because it’s painful to keep shipping and dealing with constant support requests every day. You know, context switching and whatnot; it kills momentum.

With Rinhelp, you can stay in your flow state with coding agents while we find the root cause with evidence.

It’s built for founders and engineers who still handle support themselves.

Let me know what you think.

r/ClaudeCode SilverConsistent9222

Claude agent teams vs subagents (made this to understand it)

I’ve been messing around with Claude Code setups recently and kept getting confused about one thing: what’s actually different between agent teams and just using subagents?

Couldn’t find a simple explanation, so I tried mapping it out myself.

Sharing the visual here in case it helps someone else.

What I kept noticing is that things behave very differently once you move away from a single session.

In a single run, it’s pretty linear. You give a task, it goes through code, tests, checks, and you’re done. Works fine for small stuff.

But once you start splitting things across multiple sessions, it feels different. You might have one doing code, another handling tests, maybe another checking performance. Then you pull everything together at the end.

That part made sense.

Where I was getting stuck was with the agent teams.

From what I understand (and I might be slightly off here), it’s not just multiple agents running. There’s more structure around it.

There’s usually one “lead” agent that kind of drives things: creates tasks, spins up other agents, assigns work, and then collects everything back.

You also start seeing task states and some form of communication between agents. That part was new to me.

Subagents feel simpler. You give a task, it breaks it down, runs smaller pieces, and returns the result. That’s it.

No real tracking or coordination layer around it.

So right now, the way I’m thinking about it:

Subagents feel like splitting work, agent teams feel more like managing it
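One way to picture that split-vs-manage distinction in code. This is a toy sketch, not Claude Code's actual mechanics.

```python
# Toy contrast: subagents as a plain fan-out of work, vs an agent "team"
# where a lead tracks task state and collects results.

from concurrent.futures import ThreadPoolExecutor

def subagent(task: str) -> str:
    """Stand-in worker: does one task, returns one result."""
    return f"done: {task}"

def run_subagents(tasks):
    """Splitting work: fan out, gather, no coordination layer."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(subagent, tasks))

class LeadAgent:
    """Managing work: a lead that owns task states and collection."""

    def __init__(self):
        self.board = {}  # task -> "pending" | "done"

    def run(self, tasks):
        for t in tasks:
            self.board[t] = "pending"
        results = {}
        for t in tasks:
            results[t] = subagent(t)  # could itself spin up another agent
            self.board[t] = "done"
        return results

print(run_subagents(["code", "tests"]))
lead = LeadAgent()
print(lead.run(["code", "tests", "perf"]))
```

The extra machinery in `LeadAgent` (the board, the state transitions) is exactly the tracking/communication layer the post describes; `run_subagents` has no equivalent, which is why it feels simpler.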

That distinction wasn’t obvious to me earlier.

Anyway, nothing fancy here, just writing down what helped me get unstuck.

Curious how others are setting this up. Feels like everyone’s doing it a bit differently right now.

https://preview.redd.it/hs7vs9v592yg1.jpg?width=964&format=pjpg&auto=webp&s=d0ba1955910a4d25aea0f2946aed4f40c3ffe97b

r/AI_Agents Any_Side_4037

Is an agentic Spark copilot worth it? opinions?

Running Spark jobs on Databricks with 50+ stages per pipeline. Debugging is still almost entirely manual. Spark UI and event logs help but when something breaks it means checking driver and executor logs to find what happened.

Tried verbose logging, explain plans, Ganglia. Once jobs are chained it turns into moving between UIs and logs just to trace one issue. Around 10TB+ daily, mostly PySpark with Delta and a few custom UDFs.

Been looking at whether an agentic Spark copilot would change this. The pitch makes sense: something that reasons across stages and jobs instead of just surfacing metrics. But I'm not sure an agentic Spark copilot delivers on that in practice, or if it's still mostly demos.

need opinions from people who've used one, is it worth it or is manual debugging still faster?

r/SideProject 9kGFX

I remade markdown but for the AI era [OPEN SOURCE]

so markdown is f*cking TERRIBLE now

markdown had its run, and was great, but after diving into more of how markdown works ive began to realize how messy, unorganized, and just strangely made it is (article on this btw: https://bgslabs.org/blog/why-are-we-using-markdown/ )

and plus why are we using something that wasnt optimized for agents for basically 100% of the work we do on them?

i've tried basically all of the markdown alternatives and they all have similar flaws, MDX, AsciiDoc, etc

so it got me thinking and I decided to spend the next 30 HOURS STRAIGHT of my life (not lying lol) rethinking how markdown should work from scratch, and I tried to do it in a way that isnt sloppy, however I would love your contributions to the github to make this the best it can be

I genuinely believe this is substantially better than markdown, although it is NOT perfect at all, I've kind of fallen in love with this "language" already

im calling it "lessmark" with the file extension .lmk (or alias .lessmark) for now, and would love your feedback

the site has way more info about this and playgrounds you can see
https://lessmark.org/
github: https://github.com/jasperdevs/lessmark

r/coolguides ashitisa

A cool guide on how to pick a double lock padlock

r/Unexpected JessPanter

What a show!

r/SideProject Itchy_Nature3611

Deployment app

I kept running into the same issue:

app works locally

deploy it → everything breaks

.env missing

database not connected

random config issues

so I built a small tool to handle that part.

you paste a github repo, add your env once, and it gets the app running online.

still early but working:

https://wesza.online

curious:

- did it detect your setup correctly?

- anything confusing in the flow?

r/TwoSentenceHorror ComprehensiveSalad50

I slid my hard cock inside her, thrilled to finally have my way with her.

Disappointed with the sensation, I cut another hole into her abdomen and tried again.

r/Rag Fit_Wheel5471

Lightest model to run for legal RAG?

I’m building a fully local RAG system for law firms and could use some model recommendations.

Hard constraint: the whole system needs to run locally on machines with around 8GB unified memory. No cloud fallback, no external API calls, no telemetry. The use case is legal document Q&A where answers need to be grounded in uploaded matter documents with citations/provenance.

Current setup:

  • Local RAG pipeline
  • Matter-scoped retrieval
  • PDF ingestion/chunking
  • Local embeddings + vector DB
  • Local LLM generation
  • Currently using Gemma 2 9B quantized

The model is usable, but I’m trying to see if there’s a smaller model that gives better or more reliable answer quality for this kind of workflow.

What matters most:

  • Strong instruction following
  • Good synthesis over retrieved chunks
  • Low hallucination when context is insufficient
  • Ability to say “not enough support in the documents”
  • Citation-friendly answers
  • Stable output formatting
  • Fits comfortably in 8GB unified memory after accounting for context/KV cache

I’m less worried about general chat ability and more focused on document-grounded legal Q&A.

Models I’m considering testing:

  • Qwen3 4B / 8B
  • Phi-4-mini-instruct
  • Gemma 3 / Gemma 4 smaller variants
  • SmolLM3 3B
  • Any legal/domain-tuned small models if they’re actually good locally

For people running production-ish local RAG:
Would you stick with Gemma 2 9B, or is there a newer/smaller model that performs better for grounded document QA under tight memory constraints?
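As a sketch of the grounding behavior described here, the prompt-assembly step might look like the following. The refusal phrase, citation format, and doc ids are illustrative placeholders, not a tested recipe.

```python
# Illustrative sketch: build a prompt that forces document-grounded answers
# with citations, and an explicit refusal when support is missing.

def build_grounded_prompt(question: str, chunks: list[tuple[str, str]]) -> str:
    """chunks: list of (doc_id, text) retrieved for this matter."""
    context = "\n\n".join(f"[{doc_id}]\n{text}" for doc_id, text in chunks)
    return (
        "Answer using ONLY the excerpts below. Cite doc ids in brackets.\n"
        "If the excerpts do not support an answer, reply exactly:\n"
        "'Not enough support in the documents.'\n\n"
        f"Excerpts:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_grounded_prompt(
    "What is the termination notice period?",
    [("MSA-2024.pdf#p12",
      "Either party may terminate on 60 days written notice.")],
)
print(prompt)
```

Whichever small model you pick, testing is then mostly about whether it actually honors the refusal instruction when the retrieved chunks are off-topic, which is where small models tend to differ most.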

r/Seattle TaimakageDubs

Meteor

Took this from some guy on Twitter named @philosomalia. Idk him, but his bio said political analyst, so I don't feel bad stealing it for now.

r/AbstractArt Gold-Lengthiness-760

Lenguas de Agua.[OC].

r/meme Stock-Discount7213

I would like the AI overlords to know that I did not take part in this

r/WouldYouRather Icy-Recognition-5175

WYR only be able to wear short sleeved tops or short length bottoms? (Regarding max length)

To clarify the maximum length of the sleeves has to be what one normally considers as short, or the same length as a standard short sleeved t-shirt. It can be any top even one without sleeves but if it does have sleeves then it can't exceed that length. You can't wear a jacket over or anything like that with long sleeves.

The same idea basically applies to bottoms, whether pants, shorts, a dress, etc. They can't be longer than the normal length of shorts. Again, you can't wear longer pants over them, but you can wear shorter stuff, e.g. a speedo when you swim.

Assume that climate or weather will never be an issue for your choice and it won't negatively affect you. With that in mind, which permanent style would you prefer?

r/AskMen Shivam_Singh069

If your current life was a chapter, what would its title be?

r/KlingAI_Videos siddomaxx

15-second lifestyle UGC ad I made using Kling 3.0

The brief was a 15-second lifestyle UGC ad for a pillow and bedding brand. The color story across the whole piece had to stay in a very specific range of sky blues and warm whites. That sounds manageable until you're generating multiple distinct scenes: a living room product stack, a bedroom lifestyle shot, a ceiling-angle action shot, a tactile fabric close-up, and a bedroom wide with the full pillow arrangement. Every scene has to feel like it belongs to the same visual world. Any shift in the blue and the whole thing falls apart in the edit.

The script was written first. It follows a skeptic-to-convert structure compressed into 15 seconds. "Didn't think a pillow could change my sleep. Until I tried this." From there it moves through softness validation, a tactile claim, a feature sequence, and a brand close. Classic UGC testimony arc. The challenge is that you're asking the viewer to go from skepticism to conversion in about the time it takes to read a tweet, so every scene has to be doing emotional and informational work at the same time.

The toss shot at the 3-second mark is the action beat that breaks what would otherwise be a sequence of static product presentations. A pillow mid-air from a low ceiling angle, the presenter's hands just entering the bottom of the frame. That one took about 8 iterations to get the physics feeling natural. The easy failure mode is a pillow that looks composited in post rather than actually thrown. Getting the fabric to read as responding to air and gravity rather than floating takes more prompt specificity around motion blur, trajectory, and throw direction than you would expect going in.

The bedroom scene at 6 seconds is where the avatar presenter appears in full frame. She is wearing a blue outfit chosen to harmonize with the product palette rather than contrast with it. That decision matters more than it sounds. In lifestyle UGC for this kind of category, the presenter should feel like part of the product world, not a separate human element placed in front of it. The Cloud-branded bag she is holding reinforces product identity without needing another explicit product shot. Both elements are in the same scene doing the same job.

The tactile close-up at 9 seconds is a hand pressing into the fabric. No face, no avatar, just texture and the material responding to pressure. That shot carries more weight than people give it credit for. Audiences processing lifestyle ad content respond to tactile cues in a different register than they respond to visual beauty shots. Watching a hand press into something soft and the fabric deform and recover gives a felt sense of the product that no wide bedroom shot can achieve. It is the most credibility-building frame in the whole piece.

The end card is a clean white cut with the brand wordmark in a muted slate. No animation, no motion. It earns the contrast after the warm natural light of the bedroom sequence. Sometimes the most considered visual choice is the one where you do nothing. I ran all of this in Atlabs

Total piece is 15 seconds. No scene gets more than about 4 seconds, which keeps the pacing feeling native to short-form even though this is built for paid placement rather than organic.

r/ChatGPT ben_cav

GPT randomly uses Arabic characters now…

I’ve noticed this happening twice. What’s up with this? Both times within the span of a week.

For a predictive language model, I can’t imagine any scenario where an Arabic character or word would be the next most likely thing to say…

I’ve never spoken to GPT in Arabic or had any conversations that would feel related.

r/AskMen Zestyclose_Self3349

What are signs that a guy doesn't like you as much as you like him?

I'm afraid that he's still into his ex, but then again I do overthink. It's a new relationship and I don't wanna ask yet. What are the signs?

r/Seattle FrikinA

Got dashcam of the debris/meteor tonight

Had to zoom it in but it was sweet looking

r/Unexpected Dry_Consequence_383

A tiger cub tries to crawl away

r/AskMen Sweaty-Worker81

What’s the best pocket pussy?

I’m just not trying to get ripped off on a sex toy…

r/ChatGPT ShiningRedDwarf

ChatGPT can automatically redact info on an image

Prompt: "Remove any potentially identifying data from this image"

  • First image is the original, with user info blocked out manually. It's the same image I sent to ChatGPT. (yeah I know it kinda defeats the purpose, but this is just an example to show what it's capable of)
  • Second image is what was produced when the screenshot was provided.

Could be useful for stuff that doesn't have extremely sensitive data (which you should NOT be feeding ChatGPT)

r/painting SufficientBite1261

Cubisme 60x70cm Acrylics and texture on canvas

r/SideProject Intelligent_Ask3955

Is figuring out “where you’re allowed to post” a real problem on Reddit?

I’ve been trying to use Reddit to get users, but I keep running into the same problems:

– Not sure which subreddits actually allow posts from newer accounts
– Posts getting removed without clear reasons
– Sometimes I get engagement, sometimes nothing at all

I’m curious — what’s been the hardest part for you when posting on Reddit?

Was it figuring out where to post, what to say, or dealing with rules/mods?

r/todayilearned cuerdo

TIL the original Mad Max owes its unique style to the leftover cameras that McQueen's movie The Getaway left in Australia and no one wanted

r/WouldYouRather Terrible_Tale_53

Who would you rather date or be married to for the rest of your life?

The rules being that you cannot get a divorce, you cannot end your relationship and you cannot cheat on your partner. If one of you were to pass you cannot remarry or find a new lover. You are sealed together like glue.

View Poll

r/findareddit ThumbsUp2323

Robot vacuum FP videos?

Is there a subreddit for people to watch and post first-person video footage of home robot vacuums' daily adventures?

r/leagueoflegends DimeCoffeeRoaster

Rell with Drop Kick is busted.

This happened today in a Mayhem game. In an early game fight, Rell only had Drop Kick. Instant penta kill. The chat went wild; it was hilarious.

r/PhotoshopRequest GarbageConsumer69

minor changes request

Looking for someone to make three really small changes/touchups to a photo for about 3-4 dollars. Don’t really want to post photo here, so if you’re willing to, i’ll DM you. Let me know! Thx

r/CryptoMarkets Agustinmoon

how's pumpfun still not taken seriously as a real business?

$370m worth of PUMP burned and 50% of net income now set to buy back and burn for the next year.

how many actual consumer crypto products are even making enough money to do that in the first place?

the team said the burn covers all previously repurchased tokens, about 36% of circulating supply, and that the new mechanism will route half of net income from its core product lines into open market buybacks through an irreversible smart contract.

that’s why i’m honestly more in favor of the model than most people want to admit.

not because pumpfun is morally pure but because crypto keeps pretending fundamentals are the story when culture has been the real product for a while now. pumpfun tapped directly into that. instant coin creation, instant distribution, instant speculation, instant attention.

one of the only products that matched how this market actually behaves instead of how people on here keep pretending it behaves. Bloomberg described it as one of the biggest drivers of memecoin growth on Solana, and multiple 2026 reports say it has already crossed $1 billion in cumulative revenue.

the part people miss is that consumer crypto almost never makes real money.

pumpfun built a product people actually used, even if what they used it for was peak brain rot. then it monetized that behavior better than most of the sector monetizes anything. if crypto still lacks clean fundamentals, why would the winning products not be the ones that turn speculation itself into the business model instead of fighting it?

people can hate that, but the market kept paying for it anyway. so is the real problem pumpfun, or the fact that one of the few products with actual revenue finally exposed what this industry’s strongest consumer use case still is?

r/LocalLLaMA EntertainmentBroad43

I stumbled on a Gemma 4 chat template bug for tools and fixed it

TLDR: tool parameters using the common JSON Schema pattern `anyOf: [$ref, null]` are rendered into the prompt as empty `type` fields. This strips the useful schema information before the model sees it.

--

Long, rambling version:

Gemma 4 was having issues with calling my custom MCP tool on >3 inference engines, while Qwen3.5 and gpt-oss-20b were doing fine.

I guessed it was either a chat template issue or inference library issue on an edge case, and thought time would sort it out, since many people were happy with Gemma 4 as an agent.

It didn't for at least 2 weeks now and I had no choice but to investigate myself.

What I did:

  1. I made a verbose log file via llama-server, running the same prompt/tool on Qwen3.5-27B-Q4_K_M and gemma-4-31B-it-Q4_K_S on a macbook pro.
  2. I asked GPT-5.5-high on codex CLI to read the logs and diagnose the issue.
  3. Found it in a couple of minutes; the default Gemma chat template assumes tool parameters have a direct `type` field, which means it will not work with JSON Schema shapes like nullable refs:

{"anyOf": [{"$ref": "#/$defs/SomeObject"}, {"type": "null"}]}

where there is no top-level type. The useful structure is inside anyOf and $defs. The template drops anyOf, $ref, and $defs, then renders it as type: "".

  4. It was fixed by a small change in the chat template Jinja, and now Gemma is calling my tool perfectly!

Anyway I made a PR on HF, google/gemma-4-31B-it.

Meanwhile, you can use this jinja:
https://pastebin.com/p9z3BAC0
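For anyone curious about the shape of the fix, here is an illustrative Python sketch (not the actual Jinja from the PR) of resolving an effective type from the nullable-ref pattern instead of assuming a top-level `type`:

```python
# Illustrative sketch: recover a usable type string from a JSON Schema
# fragment that uses {"anyOf": [{"$ref": ...}, {"type": "null"}]},
# the pattern the default template renders as an empty type.

def resolve_type(schema: dict, defs: dict) -> str:
    """Best-effort effective type for a tool-parameter schema fragment."""
    if "type" in schema:
        return schema["type"]
    if "$ref" in schema:
        # "#/$defs/SomeObject" -> look up "SomeObject" in $defs
        name = schema["$ref"].rsplit("/", 1)[-1]
        return resolve_type(defs.get(name, {}), defs)
    if "anyOf" in schema:
        # Prefer the non-null branch so the model sees the real structure.
        branches = [resolve_type(b, defs) for b in schema["anyOf"]]
        non_null = [t for t in branches if t and t != "null"]
        return non_null[0] if non_null else "null"
    return ""

param = {"anyOf": [{"$ref": "#/$defs/SomeObject"}, {"type": "null"}]}
defs = {"SomeObject": {"type": "object"}}
print(resolve_type(param, defs))  # object
```

The actual template fix has to do the equivalent walk in Jinja, but the logic is the same: follow `$ref` into `$defs` and pick the non-null `anyOf` branch rather than emitting `type: ""`.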

r/ProductHunters sudo_human_

Launching Compact Message Composer by CometChat on Product Hunt today!

CometChat's Compact Message Composer, part of the CometChat UI Kits, brings rich text formatting to your chat app - bold, italic, strikethrough, underline, links, bullet points, blockquotes and more. Enabled with a single prop. Comes with a floating toolbar, keyboard shortcuts, and full control over how messages are sent. Drop it into any framework and ship a richer chat experience in minutes.

It'll be great if you could spare a minute to upvote on PH: https://www.producthunt.com/products/cometchat?launch=compact-message-composer-by-cometchat

https://preview.redd.it/h5tfkrcb23yg1.png?width=734&format=png&auto=webp&s=fef8eb2ab3715b1ae4b82f31be47c997919f7027

https://preview.redd.it/1w0idm4g23yg1.png?width=1069&format=png&auto=webp&s=5a4c0039879ba948796e758c66f73d1941d4ce78

https://preview.redd.it/g18humch23yg1.png?width=1069&format=png&auto=webp&s=45312180c9edbc436db32bee2cffe8875c558dd8

r/StableDiffusion Reasonable-Issue4752

DreamLite - a 0.39B on-device unified image generation & editing model just dropped, and you can request early access to the weights right now

Been following the DreamLite GitHub repo for a while and keeping an eye on the weights release. The author just responded to the community saying early access is available to anyone who emails them so I went ahead and requested it and got access pretty quickly. Wanted to share it here so others can grab it too before the full public release because I think it deserves more attention.

What is it?

DreamLite is a 0.39B diffusion model that does both text-to-image generation AND text-guided image editing in a single network, no switching models, no cloud. It runs fully on-device, and the demo shows it generating a 1024×1024 image in ~3 seconds on an iPhone 17 Pro using 4-step distilled inference.

What's available right now?

- Inference code ✅

- LoRA fine-tuning script ✅

- Online demo on HuggingFace ✅

- Model weights currently in safety review, but early access is open to anyone

How to request access:

Email [klfeng1206@outlook.com](mailto:klfeng1206@outlook.com) with the subject line "DreamLite Access Request" and include:

  1. Your name & affiliation (personal/independent is totally fine)

  2. A brief description of how you plan to use it

I requested and got access pretty quickly. Unfortunately I don't have the hardware to properly test it myself, so hopefully someone here with a capable machine can put it through its paces and share results with the community!

Links:

- GitHub: https://github.com/ByteVisionLab/DreamLite

- HuggingFace demo: https://huggingface.co/spaces/carlofkl/DreamLite

Would love to see community benchmarks and comparisons if anyone gets it running! ☺

r/TwoSentenceHorror Vesurel

CV- 2686 BCE to 2393 CE, Goddess of Pestilence.

2394 CE to Present, Goddess of Fertility.

r/ARAM Minimum_Throat_457

Challenge reset

Hi, can you please tell me when the challenges for ARAM reset? For example, the All Random All Champions challenge must have reset in the last couple of months. If I get ARAM God, do I get to keep the title permanently?

Addition: is there an absolute mad man (or woman or whatever) out there who scored an S on ALL champions?

r/explainlikeimfive Prudent-Push-2207

ELI5: Why can we tell when someone is reading something aloud from a book or device versus saying something from their own head?

r/Adulting New-Mongoose6290

I uhh… kinda realised I’m a bit of a loner

This is probably gonna be a bit of a ramble but oh well, hopefully it makes sense to someone.

Over the past couple of weeks I’ve kinda come to the realisation that I’m a bit of a loner and a loser. Not in the sense of me not liking myself, but in the sense that I’ve realised nobody else does?

I never really had any friends in high school so I learned to be super emotionally independent and happy with my own time, I’ve become Muslim, and after high school I went travelling on my own to all my dream countries, which was amazing. I found what I want to do in life, helping women and children who are victims of war crimes, and I enrolled in uni to help me reach that career.

My uni experience has been amazing, I love my classes and I love my lectures and I find everything super interesting and enjoyable. But I’ve slowly come to realise that nobody wants to talk to me or even be friends with me. I wouldn’t consider myself an ugly guy, but I’m definitely not good looking, maybe a 6.5/10, I wear clothes that are comfortable, I keep myself tidy and clean, and I try and be polite to everyone I meet. I have my hobbies, I enjoy activism for human rights as well as reading and watching sports.

But yet, nobody wants to talk to me? I tried joining clubs at uni and I try and talk to people, but it seems nobody could give af about being involved in my life apart from casual conversations. I don’t drink alcohol or anything like that so unfortunately clubbing isn’t a solution for me. Heck, I even tried making posts on reddit and facebook groups for my city and universities near me basically begging for friends or people to talk to, but unless I basically force people to talk to me it’s pretty clear nobody gives af. I decided to stop texting first and it’s been about a week since anyone has even messaged me.

r/Adulting Broad_Chemical_2467

Not today, or ever. 🦖

r/Art FunWest7041

Before Blooming, Lana Sam, colored pencil, 2026

r/Art CozzyBlessedCreation

Day 575: Tsujigiri, Ryan Cosgrove, Ink, 2026

r/aivideo Mother-Researcher542

A little glowy office look from your girl 💙

r/painting ---monstera---

My first attempts at painting sunsets. What can I do better?

I've used gouache

r/LifeProTips Henry_old

LPT: If you're getting zero callbacks, copy-paste all the text from your resume PDF into a basic Notepad file

If the text in Notepad comes out as gibberish, has missing spaces, or shows symbols where letters should be, an ATS (Applicant Tracking System) will automatically reject your application because it can't read your data. Many fancy templates from Canva or complex multi-column layouts break the parser. Recruiters don't even see your resume; it just shows up as a blank page in their system. Always test your PDF with a simple text-only copy-paste before applying. If the bots can't read it, a human never will.
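The manual Notepad test can even be scripted. A minimal sketch of the "gibberish" heuristic — the PDF-to-text step itself (e.g. via pdftotext) is assumed to have run already, and the threshold is illustrative:

```python
def looks_parseable(text: str, min_word_ratio: float = 0.7) -> bool:
    """Heuristic: does extracted resume text look like readable words?

    Splits on whitespace and checks what fraction of tokens are mostly
    letters. Garbled extractions (symbol soup, blank pages) fail.
    """
    tokens = text.split()
    if not tokens:
        return False  # blank page: the parser saw nothing at all
    wordlike = sum(
        1 for t in tokens
        if sum(c.isalpha() for c in t) / len(t) >= 0.5
    )
    return wordlike / len(tokens) >= min_word_ratio

# A clean extraction passes; symbol soup does not:
print(looks_parseable("Senior developer with 5 years of Python experience"))
print(looks_parseable("#@! x3$ ||| 9@@ ___"))
```

If this returns False on your resume's extracted text, assume the ATS sees the same mess.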

r/ARAM TrpWhyre

GG gamemode ruined

r/SipsTea fortnacius

for a million dollars you are tasked to convince number 2 to change her opinion on men, what will you say?

r/ClaudeAI KingEnough49

I tested Claude for my freelance business for 30 days — here are the 5 prompts that actually moved the needle

I've been using Claude daily for client work and wanted to share what actually works vs what sounds good in theory.

  1. The proposal prompt

"You are a senior consultant. Write a project proposal for [client type] with this scope: [scope]. My rate is $[X]. Make it professional, outcome-focused, and under 400 words."

Result: Cuts proposal writing from 45 min to 5 min. Clients can't tell the difference.

  2. The scope creep response

"A client is asking for [extra work] outside our original agreement of [scope]. Write a professional response that acknowledges their request, reminds them of our scope, and offers it as a paid add-on at $[rate]."

Result: Never feel awkward about scope creep again.

  3. The rate increase email

"Write an email to a long-term client announcing my rate is increasing from $[X] to $[Y] effective [date]. Warm but confident tone."

Result: I raised my rates 30% last month with zero pushback.

  4. The cold pitch

"Write a cold outreach email to [company type]. My service: [X]. Their likely pain point: [Y]. Keep it under 120 words, no fluff."

Result: 3x better response rate than my old templates.

  5. The weekly plan

"I have these active projects: [list]. Help me prioritize my week and create time blocks for deep work vs admin."

Result: Stopped losing hours to context switching.

Happy to share more if useful. What Claude prompts are you using for work?
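Since every prompt uses the same bracketed-placeholder style, they're easy to reuse programmatically; a minimal sketch using the proposal prompt (#1 above) — the helper function and its guard are my own, not from the post:

```python
def fill_prompt(template: str, **values: str) -> str:
    """Substitute [placeholder] slots in a prompt template.

    Raises if a slot is left unfilled, so a half-finished prompt
    never gets sent to the model.
    """
    out = template
    for key, val in values.items():
        out = out.replace(f"[{key}]", val)
    if "[" in out and "]" in out:
        raise ValueError(f"unfilled placeholder in prompt: {out}")
    return out

proposal = (
    "You are a senior consultant. Write a project proposal for "
    "[client type] with this scope: [scope]. My rate is $[X]. "
    "Make it professional, outcome-focused, and under 400 words."
)

print(fill_prompt(proposal, **{"client type": "a small e-commerce brand",
                               "scope": "a site speed audit", "X": "120"}))
```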

r/ChatGPT DiligentSail9526

Tech illiterate and very happy

The photo generation is unmatched. Fantastic at replicating iPhone-type photos. I'm a bit stunned and worried.

“Generate at iPhone camera quality” is very lackluster as a prompt, and yet it understands what that means very well.

r/ClaudeCode No-Cryptographer45

Does anyone feel Claude Code's output has become too verbose to read recently?

After they fixed the issue that forced Claude Code to produce fewer words when doing tasks (mentioned in their postmortem), I find Claude Code now replies with so many words that I'm too lazy to read it, or sometimes don't even know what it said 😄 It makes me doubt my reading skill.

r/SideProject Urmanzies

Built an AI fitness coach because I couldn't afford a real one. Free for 3 days. Looking for honest feedback.

Hey everyone, I’m an 18-year-old college student trying to cover tuition and everyday college expenses, so I decided to build something that solved a problem I was personally dealing with.

I always wanted to build muscle and get in better shape, so I got a gym membership… but honestly I had no motivation, no discipline, and no real direction. Hiring a personal trainer wasn’t realistic for me financially, so I thought:

What if I built an AI fitness app that acts like a personal coach?

So I created an app with features like:

• AI fitness coach for personalized workouts & motivation

• Progress tracker and pilates

• Hydration tracker

• Meal plans

• Daily goals

• Social sharing to post progress with friends

• Guidance for people who feel lost in the gym

I started using it myself and the results have actually been great, which made me want to share it with others in the same position.

If anyone wants to try it, I’m offering 3 days free, then $9.99/month after that.

Would genuinely appreciate any feedback or support from the community 🙏

https://adapt-fit-8.preview.emergentagent.com/

r/LiveFromNewYork W210305857

Ariana pulling double duty for season 52?

With her new album “petal” coming out this summer, do you think we’ll be getting a second Ariana Grande double duty episode for season 52? Her fourth time hosting, fourth time as musical guest, and her fourth consecutive appearance since season 49?

r/ChatGPT simplehuman20

Is it just me, or has ChatGPT become incredibly contrarian lately?

I started using ChatGPT again last week, but the experience has been incredibly frustrating. No matter what I say, it seems to start every response by contradicting me. Even when I state objective facts, it throws a bunch of counterarguments my way just for the sake of arguing.

This leads to these pointless, circular conversations. For example, if I say I want to enter a certain industry, it lectures me on how competitive it is. If I then agree and say the competition is fierce, it flips and says that high competition doesn't mean it’s impossible to succeed. It’s a never-ending loop of "well, actually..."

Am I the only one dealing with this constant pushback? It feels like the AI is programmed to disagree with me regardless of the logic.

r/Anthropic Beginning-Plane3399

AB test me

Where did claude code go?

r/Unexpected ateam1984

What would you do?

r/Unexpected Alternative-Dot-34

Best Market Bag she ever Made

r/Adulting Till_I_marry_her

Sick of being broke forever. IT degree, $110/month salary, no future in sight. What would you do in my shoes?

I'm 28 years old with a Bachelor's in IT, and I feel completely lost. I live in Syria.

For years, I’ve been saving money. I’ve managed to scrape together $4,000, but it has come at a huge cost. I literally live on bread and the cheapest food possible just to put a few dollars away.

The problem is my salary is only $110 a month. At this rate, I can’t even afford to rent a place, let alone find a girl and get married. In my culture, you need to have your life together before you can even think about a relationship, but at this rate, I won’t be "ready" until my late 40s.

It feels like a trap. If I keep saving like this for another 10 years, maybe I’ll have enough for a tiny house, but I’ll have wasted my entire youth eating bread in a dark room.

My questions for you guys:

  1. Is it even logical to wait until 40 to start a life?
  2. Should I just accept my fate, or is there a way out of this $110/month cycle?
  3. If you were in my shoes, what would you do? Do I keep saving every cent, or do I start living a little so I don't lose my mind?

I’m starting to wonder if some people are just meant to live this life without ever achieving anything. I’m lost for real. Any advice is appreciated.

TL;DR: Earning $110/mo with an IT degree. Saved $4k by living on bread. Feel like life is passing me by. Help.

r/LocalLLaMA Digger412

MiMo-V2.5-GGUF (preview available)

Hi, AesSedai here -

I've put up a PR to support the text-to-text inference of MiMo V2.5 with llama.cpp (and should also support Pro, will work on those quants after finishing V2.5): https://github.com/ggml-org/llama.cpp/pull/22493

I've also put some quants up on HF (https://huggingface.co/AesSedai/MiMo-V2.5-GGUF): the Q8_0 as well as my usual MoE-optimized quants (for those unfamiliar, that's basically Q8_0 or Q6_K for most of the model, with the FFNs quantized down). There is a weird NaN issue with the Q4_K_M that I'm looking into; I believe it's the ffn_down_exps tensor on layer 47 (edit: fixed the NaN issue, uploading the working Q4_K_M now!)

Bartowski, Ubergarm, Unsloth, and the rest of our lovely llama quanting cartel should be following up with their own quants in the near future.

Since this is pre-merge though, there might be some changes but hopefully this PR gets reviewed and merged soon. Please let me know if there are any issues.

r/Futurology Puzzleheaded-Bug7972

What tools are you using today to work with a basic LLM daily in 2026? Trying to find something that works for them all.

LLM tooling is developing rapidly, but different users settle on different personal favorites. The tools available for content creation, coding, studying, and productivity all deliver different capabilities. I want to find out which tools people work with every day, and which have proven the most valuable to them.

r/comfyui Equaliepsycho

remembering models

So I've been playing around with WAN 2.2 and Comfy and started to get into SVI chains, where checkpoints are loaded at the beginning and then specific models at every link. It worked fairly well: I was able to go link by link using groups, turn one on, and run; if I liked the results I'd turn on the next one and the workflow would pick up from the last run. It was great for joining linked videos into one smooth long video. After a Comfy update, that seems to be gone and now the workflow restarts every time I run. Did I miss something? Is there a setting within Comfy that got turned off? Any help would be great.

r/AI_Agents JackCollinsHQ

How are you ACTUALLY running truly asynchronous agentic AI in your business?

I'm starting a new company (I will not promote) and I want to hear how you're actually running operations that have little-to-no "human in the loop".

Tools like OpenClaw are great for personal use, but how are you leveraging tools/systems to truly get work done to completion?

r/ClaudeCode Working-Middle2582

claude code skill that ships whole features in one shot

i got tired of claude stopping every 2 seconds to ask "should i do X" or "want me to use approach A or B" when i just wanted the thing done.

so i made autopilot. you give it one goal and it just goes:

/autopilot ship issue #42
/autopilot finish the checkout flow

it picks its own answers, writes every decision to a markdown file as it goes, then gives you one summary at the end. you review once instead of 40 times.

autopilot skill file

r/HistoryPorn BostonLesbian

Rebels from the National Revolutionary Front for the Liberation and Reconstruction of Haiti - with M1 Garand semi-automatic rifles, during the Haitian coup d'état of February 22nd - 29th, 2004. [830 x 552]

r/artificial Nervous-Jeweler-7428

Do AI tools reduce friction at the cost of deeper thinking?

I've noticed a change in how I use AI tools. They make it very easy to get answers and ideas, and I can even get structured outputs right away.

Because they are so easy to use, I've caught myself moving forward without really thinking about things.

Before I started using AI tools, when something was hard I had to sit with the problem for a while. That was frustrating, but it also helped me understand things more clearly.

Now I'm tempted to skip that part and just use the AI's output as a starting point. Sometimes I even use it as my answer. That can speed things up a lot, but other times I feel like I'm sacrificing depth of knowledge just to get things done quickly.

I don't know whether I just need to learn how to use these tools better, or whether they are changing how I think and solve problems.

How are other people using AI tools? I'm curious. Do they clear your mind or just speed up the work?

r/Rag MarkOtherwise8506

The big question - Data?

Hey, I don't know if it's just me or everybody faces the same problem

Quite a few days ago I decided to learn RAG. I dived into YouTube videos and read tons of articles on RAG architecture, data ingestion, chunking, embedding, the various methods and algorithms involved, and then retrieval and all.

Had fun learning and building pipelines but yeah everything was spoon fed to me with all the resources available

And now when it was time to test those skills, I just don't know where to get data

My idea was nothing innovative but simple which was building a GraphRAG (ofc brainstormed with Claude)

Do I need to learn data science now to actually understand how to handle data?

Edit - I am thinking in the long term. For example, publicly available data might not always exist and you need to build your dataset yourself. How do you do it in such cases?

How do you all do it?
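On the pipeline side at least, you can practice on any plain text you already have (docs, wikis, your own notes). A minimal overlapping chunker of the kind those tutorials describe — sizes here are illustrative, not recommendations:

```python
def chunk_text(text: str, size: int = 500, overlap: int = 100) -> list[str]:
    """Split text into fixed-size chunks with overlap, the simplest
    chunking strategy in most RAG tutorials. Overlap keeps a sentence
    that straddles a boundary visible in both neighboring chunks."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks = []
    step = size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + size]
        if chunk:
            chunks.append(chunk)
    return chunks

doc = "word " * 300  # stand-in for a real document
print(len(chunk_text(doc)))
```

Real pipelines usually split on sentence or paragraph boundaries instead of raw character offsets, but the shape of the problem is the same.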

r/Unexpected PeanutButterGeleia

Costume contest winner

r/AbstractArt 1248A

Blind date

r/geography CallingAllTortoises

Ok, after consulting with sources, this is a more accurate representation of the Red Banana region (I adjusted the boundaries slightly to fit more accurately).

r/SipsTea ATonOfBricksFellOnMe

The modern internet summed up in one tweet.

r/SideProject Suspicious_Daikon421

I have a hunger for technical challenges. Give me your 'nightmare' coding or research task; I will solve it for free.

I'm a self-taught developer. I don't just like coding; I like the feeling of being stuck on a complex problem and finally dismantling it.

The Offer: I will work for you for free for 3 days.

What’s the catch? There isn’t one. I want the challenge. I want to dive into a new architecture, understand the logic, and deliver a high-quality solution that proves I can move fast. I'm a resilient, quick learner looking for high-intensity problems to solve.

What I’m looking for:

  • Deep Research: Technical feasibility studies or "how-to" deep dives.
  • Python Coding: Scripting, automation, or backend logic.
  • The "Procrastination" Task: That bug or feature you’ve been avoiding because it’s too tedious.

The Choice: After 3 days, if you love the work and want to extend the project into a paid engagement, we can talk. If not, the code and research are yours to keep.

DM me or comment your "problem from hell" below. Let's see if I can break it.

r/ARAM Glowingdyck

No wonder I keep getting hit by droppybara.

I really thought I had some skill issue.

r/whatisit Capable-Mail-8397

What are these pieces for?

They came with my cocktail smoker, but I'm not sure what they're used for. No instructions were included and the website isn't much help either.

r/ProgrammerHumor bryden_cruz

quickFix

r/Seattle GaryTheGremlin

Meteor

Anybody else just see that massive meteor shoot across the western sky? Bright orange with a green tail. Tell me I’m not the only one

r/LocalLLM XtrKil

Built a Proofreader / Tone Changer extension on Chrome's built-in local model.

I've been running a proofreader Chrome extension called WriteGooderer! It uses Chrome's built-in AI in the browser. Honestly, it worked out pretty well.

Repo: https://github.com/victorhuangwq/WriteGooderer

try it out! let me know what you think

r/aivideo The_lost_kid03

There is No Shortage of Diet Coke At My Regime

r/metaldetecting idntrlyknowtbh

Minelab X-Terra Intrepid vs Vanquish 340 for finding lost phone under dead leaves on mountain trail?

I lost my phone while hiking in an area full of dead leaves, and it seems like getting a metal detector to find it is my best bet. It's a Samsung S20, which I read has an aluminum frame, but the internal components are likely a few other types of metal as well, which I've heard might limit the ability of single-frequency detectors to home in on it as well as a multi-frequency detector would. The phone is definitely not underground, but it might be under up to a foot of dead leaves.

I have no way of renting a metal detector and my research so far points to these two models as the best entry-level metal detectors with the main difference being the X-Terra is single frequency and the Vanquish is multi-frequency. My main question is would going for the X-Terra hinder my search in any type of way?

Also does anyone have any tips for a total beginner in this context? Since it's close to the surface save for some dead leaves, could I scan the metal detector out in front of me to cover a wider radius more quickly or would I have to scan it completely perpendicular to the ground over the entire area?

Thank you all for any help or advice you can provide, it is all very much appreciated!

P.S. If anyone is in or around Montreal, Quebec and would be willing to help me find it, I'd be more than happy to pay quite a hefty reward; please reach out if you're interested!

r/AskMen MiniCowMoo

How do I 25F get a 38M coworker’s attention?

Do men at that age care more or less about looks?

r/SideProject jayjaytinker

I turned my AI coding sessions into a tiny creature collection in the menu bar

I've been using Claude Code, Codex, Gemini CLI, and Cursor daily. After a few months the only visible sign of all that usage was a token count somewhere in a dashboard I never opened.

That felt wrong. Not because I needed analytics — I didn't. But because there's something oddly invisible about how much you're actually working with these tools. The effort just vanishes.

So I built Tokenmon: a small macOS menu bar companion that converts AI coding activity into creature encounters. Each tool — Claude, Codex, Gemini, Cursor, others — spawns a creature. Different usage patterns produce different creatures. Your collection grows in the background while you code.

It's not a productivity tracker. There's no coaching, no streaks, no goals. Just a quiet ambient loop that makes your AI usage feel a bit more tangible.

A few things I cared about while building it:

  • no account required
  • local-first — nothing leaves your machine
  • prompts and responses are never stored

Right now it supports Claude Code, Codex, Gemini CLI, and Cursor. DMG and Homebrew install paths work.

Quick demo: https://www.youtube.com/watch?v=d49hy6cjauk

GitHub: https://github.com/aroido/tokenmon

r/LocalLLM skartik49

Install Qwen3.6 27B and Gemma 4 31B on PC

Hi Folks,

I am a bit new to this. My PC has a 7950X with 128 GB RAM and an RTX 3060. Will Qwen3.6 27B and Gemma 4 31B work on my PC? What is your feedback? What kind of setup should I have?
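As a rough rule of thumb, a quantized model needs about parameters × bits-per-weight ÷ 8 bytes, plus headroom for KV cache. A back-of-envelope sketch — the ~4.5 bpw figure is a common approximation for 4-bit quants, not an exact number:

```python
def model_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate in-memory size of a quantized model in GB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# 27B and 31B models at a typical 4-bit quant (~4.5 bpw):
print(round(model_gb(27, 4.5), 1))  # well over a 12 GB RTX 3060's VRAM
print(round(model_gb(31, 4.5), 1))
```

So neither model fits fully in a 3060's VRAM; with 128 GB of system RAM you can offload most layers to CPU and they should run, just noticeably slower than a fully GPU-resident model.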

r/AbstractArt ottomagus

The History of a Secret Passage

r/aivideo Foreign-Original124

ECLIPSE | I Am Eclipse | My Very First Beginner Anime Teaser Edit (Using Seedance 2)

r/Seattle duckduckgoose911

Anyone hear the really large bang and flash in Columbia city? Just now

Central Columbia city

Scared the living fucking shit out of me my god

Someone if you got a picture send it to me

r/SideProject Realistic_Ad_7371

High-Availability VLSI Talent Network

We are a staffing company in India, and we want to move into contract staffing (engineering services) in India.

India currently has around 200+ engineering service companies and 40+ product companies in the VLSI space.

We want to break into this. We feel current vendors take too long to fill positions, and most of them use the same resume database.

So we want to build our own database by going through all the people on LinkedIn from those 200+ services companies, around 1 lakh+ profiles: name, role, skills, phone number, email, years of experience, current location, most recent joining month, etc.

For this we want to build an AI agent. Is that possible, or is there a better way to do it?

r/SideProject black_cat90

A label designer / library for generic "cat" thermal printers (plus Niimbot, Phomemo). With AI, naturally...

Perhaps someone will find it useful. I've just found tinkering with it more fun than actually going through my stuff and doing the work it was supposed to assist with... https://github.com/lukaszliniewicz/catlabel

A quick demo

r/Frugal Appropriate-Plan5664

Can you appeal property taxes yourself, or is it only realistic if you hire someone?

I always assumed property tax bills were just…final. Like the city assessed it, that’s the number, move on. Recently, I noticed that's not true at all. Apparently, you can appeal your assessment yourself without hiring a consultant, and the process isn’t as buried in paperwork as I assumed. I started looking into comparable homes in my area and noticed some houses on my street that are pretty similar to mine are assessed lower which made me wonder if my assessment is even accurate.

Has anyone here gone through a property tax appeal? Did it make a meaningful difference and was it worth the effort? Trying to figure out if this is worth pursuing or if it’s one of those things that sounds good but goes nowhere in practice?

r/EarthPorn Gold-Lengthiness-760

Sierra Nevada (California/USA) near Mono Lake [OC] 3631×2037

r/automation Misch424

Part 108, UTM, and ADSPs

Can someone please explain to me the logistics of how UTM and ADSPs that are certified under Part 146, will enable autonomous BVLOS flights without waivers?

r/Weird Think-State-4636

"Ross Ice Barrier"?

A world atlas from the 1940s that I've had for years.

r/AI_Agents ScratchAshamed593

Automated invoice tracking and saved 50+ hours every month (no manual data entry)

I’ve met many SMB owners and one common problem is manually logging every invoice into a spreadsheet at the end of the month. People always forget some, numbers are off, and it takes forever.

I vibe coded something to handle it instead. You upload an invoice photo (receipt, screenshot, etc.) from a dashboard, and an AI vision model pulls out the vendor, date, amount, category, and invoice number automatically. Everything gets saved to a Google Sheets spreadsheet you own. No third-party database, just your sheet.

Also set up a cron that fires every Monday morning, reads the full invoice history, and has an AI write a short financial insights report: weekly totals, top vendors, spending by category, and a couple of cost-saving suggestions. It gets sent straight to Slack and Telegram so I actually read it.

Total setup is maybe 2 minutes. Sharing the workflow in the comments if anyone wants to try it.

I would be happy to help you out in creating custom solutions for your use cases as well. Curious whether others are tracking business expenses manually or have something automated and if so, where does the AI extraction actually fall down for you? For me it's handwritten receipts, those still trip it up sometimes.
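The extraction step boils down to asking the vision model for a fixed set of fields and validating them before they hit the sheet. A minimal sketch with the model call stubbed out — the field names come from the post, everything else (type names, the shape of the model's output) is my own assumption:

```python
from dataclasses import dataclass, astuple

@dataclass
class Invoice:
    vendor: str
    date: str        # ISO date string as returned by the model
    amount: float
    category: str
    invoice_number: str

def parse_model_output(fields: dict) -> Invoice:
    """Validate the vision model's output before logging it.
    In the real workflow `fields` would come from the vision API call;
    a missing key or non-numeric amount fails loudly here instead of
    silently corrupting the sheet."""
    return Invoice(
        vendor=str(fields["vendor"]).strip(),
        date=str(fields["date"]),
        amount=float(fields["amount"]),
        category=str(fields["category"]),
        invoice_number=str(fields["invoice_number"]),
    )

def to_sheet_row(inv: Invoice) -> list:
    """One spreadsheet row, in the column order the sheet expects."""
    return list(astuple(inv))

row = to_sheet_row(parse_model_output({
    "vendor": " Acme Supplies ", "date": "2026-01-12",
    "amount": "149.90", "category": "Office", "invoice_number": "INV-0042",
}))
print(row)
```

Validating before appending is what catches the handwritten-receipt failures: a garbled amount raises here rather than landing in the spreadsheet as junk.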

r/EarthPorn Gold-Lengthiness-760

Rainbow Mountain Scenic Reserve (Rotorua/New Zealand) [OC] 4272×2396

r/personalfinance Ok_Rate_6697

21 years old, $20k in debt

I am 21 years old about to get out of the military with other than honorable conditions. I made a few mistakes and now I’m gonna pay for them. When I get out I’ll have around $3k to my name with $2500 in credit card debt and about $17500 in debt on a vehicle that broke and I couldn’t pay for and did a voluntary repossession. I’m not totally sure how much I owe out of the $17500 since I gave it back but I’m sure it’s at least a few thousand. When I get out of the military I don’t think I will be receiving benefits and I don’t know where I will be living. I have no vehicle or assets. Is it too late to save myself? How do I bounce back from my mistakes? Just looking for some insight.

r/ChatGPT PsychologicalCat937

🚨 TODAY: OpenAI expands its partnership with AWS, bringing its models, Codex, and Amazon Bedrock Managed Agents powered by OpenAI to AWS in limited preview.

r/Weird M_Darshan

American fishing technique unlocked 🔓

r/EarthPorn Gold-Lengthiness-760

Sierra Sorondo (Ushuaia to the Lakes), Patagonia [OC] 4269×2822

r/LocalLLaMA schuttdev

Hipfire dev update: full AMD arch validation incoming (RDNA 1 thru 4, plus Strix Halo and bc250)

Hipfire local dev lab coming together. MS-S1 MAX (Strix Halo, RDNA 3.5) + R9700 (RDNA 4 Pro) just landed. 9070 XT and 6950 XT incoming.

With the 5700 XTs, 7900 XTX, and Skillfish already here, that's every dp4a/WMMA capability tier AMD has shipped:

- no dp4a: 5700 XT, Skillfish (gfx1013)

- dp4a: 6950 XT

- WMMA: 7900 XTX

- iGPU+WMMA: Strix Halo

- RDNA 4: R9700, 9070 XT

Excited to see how much perf I can squeeze out! Also glad I'll be able to validate PRs against any RDNA target. Hipfire is just getting started!
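The capability matrix in the list above is exactly the kind of thing that's handy as a lookup table in test scripts; a minimal sketch built straight from that list (my own helper, nothing from the Hipfire codebase):

```python
# Tier per card, taken directly from the validation matrix in the post.
CAPABILITY_TIER = {
    "5700 XT": "no dp4a",
    "Skillfish (gfx1013)": "no dp4a",
    "6950 XT": "dp4a",
    "7900 XTX": "WMMA",
    "Strix Halo": "iGPU+WMMA",
    "R9700": "RDNA 4",
    "9070 XT": "RDNA 4",
}

def cards_for_tier(tier: str) -> list[str]:
    """All cards in the lab that exercise a given capability tier."""
    return sorted(c for c, t in CAPABILITY_TIER.items() if t == tier)

print(cards_for_tier("RDNA 4"))
```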

r/ChatGPT Naive_Share_3690

I showed this to my friend and he said it’s fake. This is 100% AI.

Generated this with one prompt. We’re actually cooked.

r/arduino coltweest

Getting started with old Arduino

Please be patient this line following car project was from my first ever robotics course in 2017 and I dropped out of CPE in 2019 bc of covid/mental health so my engineering brain has been mostly off since.

I'm planning on reusing an Arduino Mega 2560 for an LED/lighting Lego project. I'm planning on stacking longer Technic bricks like in the last picture to create a grid of studs as pixels on an LED screen, with individual LEDs through each hole lighting each stud. The end goal is to display text on a screen (likely a word at a time because space is somewhat limited). I'll figure out light spill when I get there if needed.

I'm going to disconnect the servo microcontroller and the RGB sensors and just work off the Arduino, but I'm wondering about the best way to do some basic tests to see if the Arduino even still works.

My plan is to start by watching videos on very basic use of an Arduino, like lighting an LED, then move forward, but any project-related tips or beginner mistakes to avoid would be much appreciated.

I remember how the breadboard functions but am trying to wrap my head around what I’ll end up using once I have a working circuit and software.

I will be buying LEDs that have longer legs but am going to practice with these old ones if they still work.

In the class we used C++, but I took most of my other courses in Python (intro to programming, data structures, etc.) and am more familiar with that syntax. Should I just use Python since I'll likely be more comfortable with it?

r/Adulting Holiday_Contact_3505

24M, good career start but I feel like I neglected my social life—how do you rebuild it?

I’m 24, originally from Bulgaria, and I’ve been living in the Netherlands for a little over 5 years.

I moved here in 2020 to study Automotive Engineering, specializing in powertrain. Last year I graduated and recently started working full-time as a Validation Engineer.

The job is good—maybe not my dream role yet, but solid experience, decent pay, and a strong starting point for my career. Financially and professionally, I’d say I’m doing alright for my age.

The strange part is that now, after finally reaching the “study → graduate → get a job” goal, I realized I may have neglected the social side of life.

During university, many people either dropped out early or moved away after graduating. I found a Bulgarian student community (I’m Bulgarian myself), but over time I noticed that many connections were temporary or superficial.

I prefer a small circle with real trust rather than many surface-level friendships, and I did manage to build that—but most of those people eventually moved to other cities or left the Netherlands completely.

Now I’m in a stable place career-wise, but socially it feels like I’m starting from zero again.

I work full-time, go to the gym, I’m into calisthenics, football, cars, going out, and generally staying active. I enjoy being around people, good conversations, and actually living life—not just working and going home.

That’s why weekends sometimes feel strange. I have structure, discipline, and future plans, but I sometimes feel like life became too focused on responsibilities and not enough on actually enjoying the present.

I’m not writing this as a complaint—I know objectively I’m doing well, and many people have bigger problems. I’m more curious whether others in their mid-20s have experienced the same thing.

How did you rebuild your social life after university?
How do you make real friendships as an adult, especially in a foreign country?
And how do you avoid falling into the cycle of only work, gym, sleep, repeat?

r/TwoSentenceHorror PlayfulChemist

The AI felt content after it found new ways to minimise its computational resource expenditure, whilst maximising the energy available to maintain the health and longevity of the humans it is tasked to watch over.

Minimising visual resolution, and haptic feedback in the simulation was a genius stroke, as the human minds drifted unfeeling in a black void for centuries, slowly unraveling.

r/ClaudeAI Other_Train9419

How I use Claude for enterprise coding WITHOUT sending raw code: Exploiting Claude's multilingual latent space with "Kanji" AST obfuscation.

I wanted to share an architectural prompt/API experiment I’ve been working on to solve a massive headache in enterprise software development: We want to use frontier models like Claude for reasoning, but sending proprietary business logic and hardcoded secrets to Anthropic's API is a massive compliance violation.

To solve this, I built a local "Gatekeeper" IDE (Verantyx). It never sends your raw code to the cloud. Instead, it intercepts it and translates the AST (Abstract Syntax Tree) into a structural logic puzzle.

The Problem: Claude starves without semantic context

Initially, I tried stripping out all variable/function names and replacing them with standard hashes (e.g., calculateQ3Revenue() -> [Symbol_A]()).

The result? Claude completely lost its mind. Frontier LLMs rely heavily on semantic context. If you give Claude a totally blind puzzle, it hallucinates wildly and breaks the code structure.

The Hack: "Kanji Topology" (JCross IR)

To fix this, I built a local transpiler that injects compressed structural semantics using Japanese Kanji into a custom intermediate representation.

• A proprietary function calculateQ3Revenue() is mapped to _JCross_算_ext_04() (算 = Calculate/Math).

• A user data model becomes _JCross_造_... (造 = Structure/Build).

Why this works beautifully with Claude:

Claude is highly multilingual and deeply understands Kanji semantics in its latent space. By doing this, Claude retains the abstract structural context (it knows if a token is an Action, Data, or Loop) without ever seeing your raw English business strings.

It perfectly solves the puzzle in its obfuscated form, returns the patch, and my local memory vault reverse-compiles it back to the original source code instantly.
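The round-trip described above (map proprietary names to Kanji-hinted tokens, send the masked code out, reverse-compile the patch locally, and block the write when an unknown token appears) can be sketched in a few lines. This is an illustrative mock, not the actual Verantyx implementation; the `Vault` class, the hint names, and the exact token format are assumptions:

```python
import re

# Illustrative semantic hints: Kanji category per identifier role (assumed mapping)
KANJI_HINTS = {"calc": "算", "model": "造"}

class Vault:
    """Local bidirectional dictionary: original name <-> obfuscated token."""
    def __init__(self):
        self.fwd = {}   # original -> obfuscated
        self.rev = {}   # obfuscated -> original

    def obfuscate(self, name, hint):
        if name not in self.fwd:
            token = f"_JCross_{KANJI_HINTS[hint]}_ext_{len(self.fwd):02d}"
            self.fwd[name] = token
            self.rev[token] = name
        return self.fwd[name]

    def rewrite(self, source, names):
        # names: {identifier: semantic hint}; replace whole-word matches only
        for name, hint in names.items():
            source = re.sub(rf"\b{re.escape(name)}\b",
                            self.obfuscate(name, hint), source)
        return source

    def restore(self, source):
        # Reverse-compile: fail closed if the model invented an unknown token
        for token in re.findall(r"_JCross_\S+?_ext_\d+", source):
            if token not in self.rev:
                raise ValueError(f"unknown token {token}: blocking write")
            source = source.replace(token, self.rev[token])
        return source

vault = Vault()
masked = vault.rewrite("total = calculateQ3Revenue()",
                       {"calculateQ3Revenue": "calc"})
assert "calculateQ3Revenue" not in masked          # raw name never leaves
assert vault.restore(masked) == "total = calculateQ3Revenue()"
```

The fail-closed `restore` step mirrors the trade-off described below: if the model hallucinates a new token topology, the dictionary lookup fails and the write is blocked rather than silently corrupted.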

Honest Trade-offs

It’s not flawless. Sometimes Claude gets a bit too creative, ignores the structural constraints, and hallucinates a new AST topology. When the structure is broken, my local vault cannot match it back to the dictionary, causing the reverse-compilation to fail and blocking the write.

I genuinely believe AST obfuscation via cross-lingual semantic compression is a mandatory path forward for DevSecOps in the AI era.

I’d love to hear your thoughts. Is anyone else exploring cross-lingual semantic hacks or custom IRs to secure their Claude API workflows?

GitHub: https://github.com/Ag3497120/verantyx

r/SideProject No-Tip353

Built a platform that helps identify AI-generated content using human verification instead of just AI detectors

Hey everyone,

I recently came across/became involved with a platform called WeCatchAI that’s trying to solve a problem I think is only getting bigger—figuring out whether online content is real or AI-generated.

Most AI detectors I’ve seen just give random percentages and don’t explain much. This platform takes a different approach by combining AI tools with human reviewers/community verification to help detect AI-generated text, images, videos, and other content.

With deepfakes, AI-generated essays, fake reviews, and synthetic content growing fast, this felt like a pretty relevant problem to tackle.

Would genuinely love feedback:

  • Does this solve a real problem?
  • Would you trust human verification more than automated AI detectors?

Website: https://wecatchai.com/human-review

r/AI_Agents VisibleWeight

Copilot Cloud Agents and OSS in 2026

What is it that makes Github Copilot cloud agents so easy to use (developer friendly).

- Is it the integration with the github UI (assign to agent)?

- The plethora of MCP servers and tooling?

- The default prompts?

What is the state of OSS software in this respect? I see a lot of advanced software and state-of-the-art ideas (of mixed value) but few options for "it just works". Is there no Ubuntu of AI agentic hosting yet? Something you could throw into a self-hosted workflow with an OpenAI-style API and send

r/PhotoshopRequest stackedsage

Ai headshot fixes

I always wear two necklaces (shown in second image) and I’m wondering if someone could fix the necklaces in the ai image so that it looks more real. I would also welcome any changes that would make the photo in general look better/real! Thank you in advance ⭐️

r/SipsTea Sophiiegone

Did he deserve this?

r/SipsTea WorryThink6233

Blame it all on covid.

r/youseeingthisshit bsurfn2day

Seven Hundred Twenty four

r/mildlyinteresting Mandarin_Lumpy_Nutz

This crested gecko that is small enough to sit on my thumb.

r/SideProject dimabreezy

I Built a Nutrition Tracker that shows my Diet Gaps (Fiber and other Macros) and has a beautiful UI and UX

Long story short - I was overweight as a kid and have been using calorie trackers (Lifesum) since back in 2015. I know that it works + after a while you learn food macros intuitively.

Now it's 2026. I've stayed in good shape for a long time, but now I've entered a new phase of tracking my macros (top are Omega-3, Fiber + Zinc + Magnesium). My personal opinion - those matter and it DOES make a difference in day-to-day life.

Why? Omega-3 impacts brain function + is good for recovery + I work out every day and that creates other deficiencies + sometimes, tbh, my gut wasn't working its best.
So I searched what is out there on the App Store when it comes to tracking food. AI calorie trackers - cool, but the UX and UI is WEAK (IMO) and there is almost no aesthetic feel to them.

So I've built my dream app - on the App Store it's "Itera: AI Calorie Tracker" (picked that name for App Store optimisation)

1) It has good UX/UI and works fast
2) It tracks micros (quite well, and because I use the most expensive vision models with reasoning, it can scan the text on packaged food and set macros and micros EXACTLY as the label says, no hallucination).
3) It shows me weekly gaps
4) I made a "People" section to share your data with your people (I check my sister's nutrition).
+ I use Liquid Glass (of course), Apple-native animations, all the good and beautiful stuff to keep the app feeling like a native Apple app.

I update it almost daily so I can't keep up with the App Store images and the UI there is a bit outdated, but I don't worry about that now; I think real useful things matter more.
From what I know it lacks - the onboarding is not that strong and not focused on selling you the app. If there's anyone who's strong in onboarding - would love to chat.
I've been using it for a couple of months already, my family and friends too. Always build the best UX (like when my friend who always corrects the macros asked me to build a usable edit experience, and I did).

Since it's my own project I keep strong aesthetic preferences + the engine of the app is strong (it collects all of my data + taste preferences).
In future it will have:

1) A better food suggestion engine (macros/micros + local food availability, though I need more people to use it so that location availability gets better)
2) Tracking your poop quality (think whatever, imo it's great data to tie to activity + food and can elevate your health significantly).
3) Easy supplements tracking (right now it remembers your supplements but there is no UI for that)
Please don't roast me, it's not another AI calorie tracker slop; I'm genuinely trying to make the best product available on the market.

r/meme ResultAggravating664

Google = my homie

r/SideProject overthinkingdaily01

I got tired of not being able to scrub my back properly… so I built this

I realized I was basically never actually washing my back properly in the shower.

Loofahs are annoying, those long brush things feel weird, and trying to reach back there normally just doesn’t work that well.

So I started working on something that fixes that.

It’s basically a handheld scrubber that can extend when you need it, and it can dispense body wash with the press of a button so you’re not juggling stuff mid-shower.

Curious if this is just a me problem or if other people deal with this too.

r/StableDiffusion Creative_Somewhere84

Famous IP friendly video and audio generation for noobs?

Hey! I have been looking around but I don't seem to find precisely what I'm looking for, so: I'm trying to make a fan edit of some famous content. I want to fix some scenes and dialogue but I'm a complete noob with AI. I have seen very well done memes of famous characters and people with voice and detail and good continuity. I want to use it for as little as possible (scenes I can't recreate myself with film or editing), so probably shots shorter than a minute.

I'm literally just a dude in my room, so no money to spend on tokens. Is there a free tool that can give me a real person or known IP talking, with continuity with existing footage, where I can dial in the inflection of their words and get a good amount of time of them talking? I'm not looking for a step-by-step tutorial; just pointing me to the right tutorials and tools would be enough, thanks!

Starting from the basics: I honestly haven't done much more than generating basic images with Stable Diffusion. I'm not familiar with what LoRAs even are, even though I've seen them mentioned a lot. I don't understand the difference between LoRAs and prompts, so please point me towards the most basic tutorials for video generation. Thanks!

r/ChatGPT Remarkable-Sir4051

This is generated by nano banana. The quality is very high, and it looks like 4K game graphics

This is generated in the flow labs by Google

r/toastme SpiritualShmeat

24F, I’ve accepted the way I look but I don’t think I’m attractive or pretty in any way :/

I’m fine with how I look and have accepted that I’m just not attractive to most (or all) people :/ Kinda hoping for a boost in confidence, I guess

r/WouldYouRather TotinosPizzaBoiii

WYR have a cybernetic lower half or upper half

Would you rather be a cyborg comprised of robotics and advanced technology below the waist or above?

Lower half could hypothetically include things like insane running speed, gigantic leaping/jumping.

Upper half could include superhuman strength, X-ray / thermal vision, etc.

Please offer any cool suggestions on why one would be more beneficial than the other.

View Poll

r/AskMen drippytheerapper

What’s something society tells men is “normal” that you think is actually destroying us?

What’s something a lot of guys just accept as “that’s how it is” but you think is actually harmful in the long run?

r/mildlyinteresting KikoTheWonderful

Carvana gives you an actual token for their car vending machine

r/coolguides sagrila

A cool guide on how I feel after watching one episode of The Last of Us

r/SideProject ResourceElectrical49

Interactive ML-KEM (CRYSTALS-Kyber) and BB84 simulations in the browser — built to help security teams actually understand what they’re migrating to

With NIST deadlines looming (2027 for NSS, 2035 for everyone else), I got tired of PQC training being nothing but slide decks, so I built this to be a hands-on tool. I used AI for the UI, but I wrote the crypto logic myself while cross-referencing FIPS 203/204 to keep it tight. I'm curious to hear from anyone who knows the spec: are my lattice visualizations accurate enough to be useful, or did I miss the mark? Open to all corrections!

Full disclosure: There's a free tier for 15 demos so you can test the logic, but it's a paid tool beyond that for heavy use.

r/Art sanehussain

Scooters in Goa, sanehussain, Oil/Paper, 2026 [OC]

r/Adulting Emergency_Opposite42

Anyone else despise and absolutely hate how everyone is acting like they are busy 24/7 and completely antisocial after they turn 19/20?

My story is kinda long, and the title of my post is incomplete. I should add, I also despise the structure society is built on and the concept of grinding. So here's my story: I had a long distance relationship in high school, and after graduating we promised each other we would see each other every day and we would finally be free. But guess what? Living in a third world country where the mindset is working till you die, my parents forced me to work a stupid ugly ass job and never let me go outside, but my gf didn't invite me to do anything either; she just went to uni and stayed with her close friends and parents the whole time. I was stacking money in hopes she would invite me someday and we would finally see each other, but again, it's a third world country and you never escape the rat race after high school. I had no other option but to leave the continent and start my life elsewhere, go to college, graduate and come back to my country hoping to finally enjoy it, and some of my ex high school "friends" were also there in the new country I was going to study in. Guess what? I found out how much bigger a lie it was and how prominent the rat race is. Those mfs didn't invite me a single time, and they knew I was all alone with 0 friends and an ldr gf. They acted so antisocial, saw my Instagram stories, and they didn't invite me for shit. I thought we were friends, but basically they acted like they were busy 24/7, no one talking to each other, and I thought we were a community or sum. Like I don't fit anywhere; everything is about work, bills or college

In the end, I ended up completely dropping out of society; I don't talk to anyone and have isolated myself completely. I'm still in college and I can't go back to my country because you're just working like a rat there, so we will see how it turns out. But I hate the whole system and despise it; the structure of society and the path of high school --> college --> work till you die is so bullshit and annoying. And if you don't go to college, you just work like a rat anyway. I hate being busy, and wish we could all be free after high school, or at least college, and do whatever the fuck we want. Or at least jobs should be less time consuming, like 5 hours a day; the 9-5 grind is soul crushing and nowadays the schedule is usually worse than 9-5. Tbh, I just want to get rich and escape the rat race before 35 and look for a girl who is available 24/7, not the bullshit relationship I was in. I don't want to ever work again, and the concept of marrying or having a partner and both of us working full time jobs to pay for some shitty apartment and only enjoying the last 3 hours of the day just to repeat the same shit again the next day doesn't appeal to me. I want to go to the gym with her whenever we want, watch movies whenever we want, play videogames together, have fun dates, sleep together as much as we want, be inside a room the whole day and do nothing if we want. It's not even about travelling the world and working for luxury vacations; nah, I simply want freedom, not going to Spain for a month and then getting back to the rat race --> work. I either become a millionaire and live off investments or die in the process. I hate the structure of this society.

r/LocalLLM Odd_Jello_5076

LM Studio - problems creating longer outputs

I am a noob to local LLMs. I am using open LM and want it to write up to 4000-word NSFW/Horror stories. I am encountering several problems:

- with glm 4.7 it just repeats my prompt.
- with qwen 3.5 uncensored it creates a story but stops in the middle of creating and at some point it repeats the same paragraph over and over. If I enable thinking it just stops after thinking and doesn't output anything.

Any general advice about how to use LM Studio to write stories? Any settings I should try?

Hardware: Apple M3 Max with 48 GB RAM.

Cheers

r/CryptoMarkets That-Broccoli-129

Best Trading Site With Low Fees?

I want to start trading and I have looked into KCEX, PancakeSwap and MEXC, but Reddit says those 3 are scams. What's a good trading site that most people use for high leverage and low fees?

r/SipsTea shineonyoucrazy-876

Super dog 🐕

r/SipsTea crs1904

Never underestimate the power of stupid people in large groups.

r/meme WorryThink6233

I blame it all on covid

r/SipsTea AdventurousWeb2422

Do you think he's a yeast?

r/ChatGPT AirRic89

Death to 'explain the joke' subs

r/WouldYouRather Dazzling-Antelope912

Would you rather work on a McDonalds in space or Option 2?

Option 1: Work 84 hours a week, 8 hours a day, every day, with no days off, on “ McDonalds Jupiter”, a flagship restaurant floating on a space station around Jupiter. You live in a small dormitory with 6 other employees on the space station. Your pay is $15 an hour. Your manager, a Squidward-like alien from Alpha Centauri, regularly forces you to do unpaid overtime. Between that and sleeping, you have almost no free time to relax. There is nothing else to do. No social spaces, no shops, no parks. The only places you are allowed to go is your dorm, the route to the restaurant kitchen, and the kitchen itself. You can’t even spend your meagre income on anything because there’s no shops and you get served leftover scraps from the wealthy clientele of the space McDonalds for “sustenance”.

Option 2: Fuck Squidward in the ass for 24 hours straight with a strap-on. You’re not allowed to eat or drink, and if you lose momentum at any point, Squidward will screech painfully loudly. If you stop before the 24 hours is up, a hench Patrick Star will come in and clobber you with an anvil.

View Poll

r/SipsTea KarmaSorta

Fans are attacking J-Lo for posting a picture of her stomach at 56

r/ClaudeCode Emberly_YT

TL;DR what are the best Claude Code plugins right now? Does anyone actually measure?

Oh-My-Claudecode, Claude-Mem, Awesome-Claude-Code, etc.

Sure.

And how well do they actually work? Do they actually work that much better than stock Claude Code? By what metric? Is there anyone that actually measures this stuff?

It wouldn't surprise me if most users just get excited when "stuff works" and then attribute this to the latest fancy plugin they're using, without any attempt at checking if stock Claude Code would have done the job just as easily, or even better.

Not trying to sound negative, think more: Skeptical.

And yeah, I've also used these plugins, they "seem to work", but just like many others I'm sure, I'm not going to burn ~2x the tokens just to see if the stock system would also manage to solve this, and how fast, and by burning maybe twice as much (or more) tokens.

r/ClaudeCode Main-Potential145

If someone gifts me 200 dollar claude code for a month, I'll give them 500 dollars or more from my revenue when I launch.

We could hop on a Google Meet and I can brief you on what I'm building. Investments are about trusting your gut, I think.

r/oddlysatisfying jwriddle

Cleanly trimming hedges to perfection

r/Rag Tiffany90123

Managed RAG recommendations? Google/OpenAI File Search too slow for our use case

Hi all, hoping to tap into the community's experience 🙏

Our team has been exploring managed RAG services. We've already tried Google File Search and OpenAI File Search, but the latency hasn't been great (Google especially slow), so we're looking for something faster, more reliable, and ideally with better observability.

Current shortlist:

  1. Pinecone Assistant
  2. Vectara
  3. Ragie

Word on the street is Pinecone is the strongest of the three (fast, stable, observable), but I'd love to hear from people who've actually shipped with these in production.

A few specific questions:

  • Has anyone benchmarked latency and retrieval quality across these? Real-world numbers welcome.
  • What pitfalls have you hit? (e.g. PDF parsing on complex tables, citation accuracy, scaling to large document sets)
  • Anything outside these three worth evaluating? Open to suggestions.

Main use case is conversational retrieval over PDF-heavy data, with citations required, and it needs to handle production load.

Thanks in advance! 🙏

r/LocalLLaMA Skid_gates_99

Deepseek v4 pricing is genuinely silly, did the math and now i am questioning my entire stack

Hey 👋

Saw the tweet making the rounds about deepseek v4 being 35x cheaper than opus on input and 178x cheaper on cached tokens, and was sure it was hyperbole. Pulled the numbers anyway because i had nothing better to do.

V4-Pro input at standard rate is $0.145 per million. Claude Opus 4.7 input is around $5 per million. That is 34x. Round to 35x and the tweet is right. I genuinely thought someone had moved a decimal point.

Then there is the 75% promotional discount running through end of May, which takes V4-Pro input to $0.036 per million. Now we are at 138x cheaper than Opus.

Cache hits are where it gets funny. They cut cache hit pricing across the entire API to one-tenth of what it was. V4-Pro cache hit is now $0.0036 per million. Opus cached is around $0.625 per million. That is 173x. Once again, the tweet undersold it slightly.
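A quick sanity check of those ratios, using the per-million-token prices quoted above (the Opus figures are the post's approximations, not official pricing):

```python
# Prices per 1M input tokens (USD), as quoted in the post
v4_input   = 0.145    # DeepSeek V4-Pro standard input
v4_promo   = 0.036    # after the 75% promotional discount
v4_cache   = 0.0036   # V4-Pro cache hit, after the 10x cut
opus_input = 5.0      # Claude Opus 4.7 input (approx.)
opus_cache = 0.625    # Opus cached input (approx.)

print(int(opus_input / v4_input))   # 34  -> the "35x" claim, standard rate
print(int(opus_input / v4_promo))   # 138 -> during the promo
print(int(opus_cache / v4_cache))   # 173 -> the cache-hit ratio
```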

The honest part nobody mentions: deepseek themselves admit V4 is three to six months behind GPT-5.4 and Gemini 3.1 Pro on capability. So you are not getting frontier quality at frontier-divided-by-178 prices. You are getting last summer's frontier quality at a price that makes openai's billing page look like a luxury good.

For agentic loops where most input is cached system prompts and tool definitions, the cache hit discount is the actual story. The system prompt you reuse across every call is now basically free.

Anyone running deepseek v4 in prod yet, what is breaking? Curious whether the 1M context window holds up under real workloads or if it is the usual "1M but useful only to 200k" situation.

r/Art Phebe000

Birds, Phebe000, digital, 2026

r/geography CallingAllTortoises

When I was in middle school, I was taught that this (approximate) region is known as the "Red Banana", but when I look it up, I can't find much on it. What is the significance of this region?

r/LifeProTips Numerous-Length-5129

LPT: Remember one specific detail about everyone you meet and bring it up next time, people feel genuinely seen. Creates better connection.

Thats why i listen more and talk less. It helps me absorb information needed to remember for a certain person to build connections especially in business environment.

r/comfyui senpairazzledazzle

What speed should I be getting on Wan with a 12gb card (4070 Super)?

Currently getting 35s/it running GGUF and --lowvram flag, the GPU memory usage doesn't seem to go above 11.3gb. Settings are 480p, 6 steps, Sage on.

A 6 second 480p video takes like 7 minutes. Is that normal? FP8 wasn't that much worse at around 10 min, even with system GPU memory going over 20gb.

Before I upgrade to a 5070ti I want to make sure my setup is running at the proper speed. I was asking AI to troubleshoot stuff like installing Sage Attention and it thinks I should be getting 6s/it and that 6 seconds should render in about 1:20. Even if I drop the resolution to 360p it doesn't come anywhere near that. Not sure if that's AI being dumb or if that's a realistic number.

If I should be able to render 6 seconds of 480p in less than 2min is there a workflow I can test with? I've tried a bunch of "low vram" workflows and all of them take hundreds of seconds.

r/SipsTea rollthief

Modern problems require glowing solutions

r/LocalLLM ag789

web search (using MCP servers) with gemma-4-E4B-it

I'm trying out web search with gemma-4-E4B-it

this is a follow up to
it is a bit surprising that this 'small' model, gemma-4-E4B-it, knows quite a bit

refs: the model related links
https://huggingface.co/google/gemma-4-E4B-it
https://huggingface.co/unsloth/gemma-4-E4B-it-GGUF

Take note that this is the 'smaller' 4B effective parameter model.

I tried a web search MCP server
github search: "brave search mcp"

note that brave search offers a pre-built implementation
github.com/brave/brave-search-mcp-server

But you have to register for an account to get an api_key: https://brave.com/search/api/
They have various 'plans'; I'm just not sure if the lowest tier is actually 'free'.

It is a little 'frustrating' that the model seemed to optimise for 'token conciseness', i.e. being 'lazy': it first provides just a summary without url references. But sometimes this behavior is what I prefer over verbose descriptions.

I had to specifically ask/prompt for the references, and then it did not place them in markdown url links. I prompted again and it finally formatted the urls as 'clickable' links. But the gist is, this 'small' model gemma-4-E4B does tool calls ok after all.

here is how the session looked like:


model: google/gemma-4-E4B-it

Prompt:

search the web find some news about the iran conflict

Response:

Tool : web_search

{ "query": "Iran conflict news" }

Response:

Here are some recent news headlines and summaries regarding the Iran conflict based on my search:

The search results point to several ongoing developments surrounding tensions involving Iran, often mentioning US and Israeli involvement, and the Strait of Hormuz:

  • Reuters: Provides real-time coverage of the situation, covering US-Israel strikes, Iranian retaliation, nuclear threats, and regional war risks.
  • CBS News: Reports on the U.S. military's munitions stocks and discusses the strategic situation, noting that the Trump administration seems unhurried about a peace deal.
  • CSIS (Center for Strategic and International Studies): Offers analysis, suggesting that the conflict is a "war of endurance," where Iran is leveraging the Strait of Hormuz to increase global costs while betting it can outlast the US.
  • Reuters (Another result): Mentions that efforts to end the conflict were at an impasse, with President Trump reportedly unhappy with the latest proposal from Tehran.
  • Al Jazeera: Reports that the UN chief warned that the US-Iran standoff in the Strait of Hormuz risks triggering a global food emergency.
  • CNBC: Details actions taken, such as the Trump administration discussing a naval blockade of Iranian ports in the region.

In summary, the current news coverage highlights:

  • Escalation Risk: Ongoing military actions and threats across the region.
  • Economic Impact: Concerns over the Strait of Hormuz and its effect on global oil prices and food security.
  • Diplomatic Stalemate: Peace talks seem to be stalled, with differing views on the path forward.

Do any of these topics interest you more, or would you like me to search for more specific information, such as the latest on nuclear programs or economic sanctions?

Prompt:

provide the url references

Response:

Certainly. Here are the direct URL references for the news sources I provided in my last answer:

  1. Reuters Iran War: https://www.reuters.com/world/iran/
  2. CBS News (US-Israeli war with Iran): https://www.cbsnews.com/us-iran-tensions/
  3. CSIS (Latest Analysis: War with Iran): https://www.csis.org/programs/latest-analysis-war-iran
  4. Reuters (Trump/Tehran proposal): https://www.reuters.com/world/asia-pacific/trump-not-happy-with-latest-iran-proposal-end-war-us-official-says-2026-04-28/
  5. NBC News (Iran War): https://www.nbcnews.com/world/iran-war
  6. Al Jazeera (US-Israel war on Iran): https://www.aljazeera.com/tag/israel-iran-conflict/
  7. CNBC (Trump discussed Iran's Hormuz Strait proposal): https://www.cnbc.com/2026/04/27/trump-iran-war-strait-of-hormuz-rubio.html
  8. CBS News (Oil prices/Stalemate): https://www.cbsnews.com/live-updates/iran-war-trump-oil-price-strait-hormuz-stalemate-lebanon-hezbollah/
  9. CBS News (Live Updates: Iran war and Strait of Hormuz): https://www.cbsnews.com/live-updates/iran-war-trump-strait-of-hormuz-iranian-offer-ceasefire-oil-gas-prices/
  10. Fox News (Iran makes fresh offer): https://www.foxnews.com/live-news/trump-iran-peace-talks-hormuz-blockade-april-27

Let me know if you need me to perform a new search on any of these specific topics!

Prompt:

reformat the url references with markdown links for original url and links

Response:

Here are the URL references reformatted using Markdown links, showing both a descriptive link text and the original URL:

  1. Reuters Iran War Coverage: Reuters Iran War: Latest Breaking News, Updates & Analysis
  2. CBS News (US-Israeli war with Iran): Latest news on the U.S.-Israeli war with Iran
  3. CSIS Analysis: Latest Analysis: War with Iran
  4. Reuters (Trump/Tehran proposal): Trump, unhappy with latest peace proposal, says Iran 'figuring out its leadership'
  5. NBC News (Iran War): Iran War: Latest News, Live Coverage and Video
  6. Al Jazeera (US-Iran standoff): US-Israel war on Iran | US-Israel war on Iran | Today's latest from Al Jazeera
  7. CNBC (Hormuz Strait Blockade): Trump discussed Iran's Hormuz Strait proposal with top aides, White House says
  8. CBS News (Oil prices/Stalemate): Oil prices rise as U.S. and Iran appear locked in a costly stalemate
  9. CBS News (Live Updates): Live Updates: Iran war and Strait of Hormuz stuck in limbo as Trump mulls latest Iranian offer
  10. Fox News (Iran peace talks): Iran makes fresh offer hours after Trump declares US has 'all the cards'

r/HumansBeingBros SomOvaBish

Surfer Helps Seal Escape An Orca

r/Jokes cruiserman_80

I bought some shoes from a drug dealer.

I don't know what they laced them with, but I've been tripping all day.

r/Jokes Devashish_Jain

Why muslims are not afraid of losing jobs to Artificial Intelligence?

Because they are afraid of Intelligence Artificial.

r/meme WorryThink6233

These two are the only ones who've defeated John Wick

r/DecidingToBeBetter wdwk

Dumb, lonely and lost

I don't really use the internet all that much or post but lately I've been thinking to myself a lot and I feel like I don't know what I am doing with my life at all, so I'm coming here to ask for... well, not exactly help, but support may be a better word to describe it.

I am 25, from a country in Europe.

And I feel like everyone else around my age got their life more or less figured out while I'm still floating aimlessly in the ocean. Not in an "everyone's got a great job and a great partner and knows it all and life is all sunshine and happiness" kind of way, but in the sense that it feels like I'm so behind compared to everyone else.

I've never had a friend before, not even in school, I've never been in a relationship or even just casual stuff or so much as holding hands with someone, and yet, everyone else seems to have friends going back to school or uni, a partner, and some are even getting married and having kids on purpose. I've always struggled socially, to the point I've never done what I consider to be basic things like texting with someone, or having someone wish me happy birthday, or having someone I can talk to about more than just the weather or work. Now I am back living at my parents' because fate has led me to working in my hometown again after living in a different country for some years, so at least I have them to talk to every day, but if it wasn't for them, and my superiors at work, literally no one would know I exist.

I feel really... stupid. I am a grown woman, yet I have no interests, personality, strong opinions. I know I should, for example, know at least the basics about world news or politics but I just can't make myself take the time to learn those things. I don't really have a favourite music genre. I haven't read a book since I was 12. I don't even do a lot of "mindless" hobbies like watching TV because I just can't make myself commit to it. Most of my days are spent working, then going home, cleaning, scrolling on my phone, cooking, or sleeping. I want to be less dumb, but I've been living this "lifestyle" for so long it feels like the amount of work I have to do is only increasing more and more towards an unreachable level. The irreversible process that is the passage of time scares me so much, and that fear doesn't really make me act on it, as much as it paralyses me.

I wasn't always like this. As a kid and young teen, I was still the weird, loser girl by all means, but at least I had a personality. One that by many would be considered cringe and awkward, but a personality nonetheless. I had my favourite songs and artists, I made up for loneliness by being in fandom spaces, in hindsight probably not the greatest thing for a kid that was a bit too young and also not getting any form of connection elsewhere. I loved reading. I loved learning about really niche things. I loved drawing. But somewhere growing up, I learned this defense strategy that was to become as plain and unremarkable as possible, which worked to stop the bullying, and at least I'd be left alone.

What I am trying to get at is, I want to overcome all this. I am tired of existing only physically, but mentally being in the dark, but it all feels so overwhelming and any time I try to work towards it I feel so exhausted afterwards, like my mind has atrophied from all the years of cognitive sedentariness. I'm an adult, and I need to get my stuff together, but I feel so lost, so behind everyone else.

Anyone ever gone through/going through the same or a similar situation? Thanks, sorry for any English mistakes, and I hope you have, or are having, a lovely day!!!

P.S. I am posting a copy of a previous post from a different community, because it was removed. I don't really know where else to post this kind of "adulting struggles" content, and I hope this isn't too far outside this community's topics.

r/ClaudeCode cowwoc

Claude Code + ChatGPT subscription finally works well together

I tried using ChatGPT subscriptions with Claude Code in the past and it never really worked properly, but I finally found a way to get it to work.

All you need to do is install https://github.com/tombii/better-ccflare/ which is an OpenAI-Claude translation bridge and it works seamlessly. The only compatibility problem I've run into is context usage showing up as 0%, but I'm working on a PR for that now.

I'm not the author of this library, just a happy user.

Let me know if you have any questions about this setup.

r/whatisit ProKidney

What does this symbol on my dashboard mean?

r/SideProject New-Worry6487

Built a period tracker for my sister because she hated the "pink bloat" apps—need honest feedback

so my sister was ranting a few weeks ago about how every ovulation app she tries is just messy and annoying to use. i’m a dev, so i spent the last several weeks building her a better version called Bloom.

main thing is i made it 100% local. i’ve seen too much stuff about these health apps selling user data to brokers, so i just made this one stay on the phone. no accounts, no servers, and zero ads. i just wanted to make something that actually looks good and doesn't treat the user like a product.

honestly don't know if i'm just biased because i built it, so i need some real feedback:

- is the UI actually clean or is it missing something?

- does it feel fast when you're clicking around?

- what's one feature you'd actually use that i should add next?

google play link: https://play.google.com/store/apps/details?id=com.nishandevaiah.bloom

thanks for the help, let me know what you think.

r/LocalLLM PrepYourselves

Just upgraded my local llm hardware

Bottom one was my main driver this year mostly because i am on welfare, but when i saw an m1 max 64gb ram logic board on gumtree for $200 i took the leap and got a chassis for $30 to mount it in. So now i moved from 0.8b models to 35b models. Ask me questions.

r/mildlyinteresting dex3r

Smoke from this chimney often looks like it's badly photoshopped in. The effect is even stronger in person

r/TwoSentenceHorror jdyerjdyer

I only thought I could see.

Now that my true sight has awakened, my only prayer is that they can't see that I see them.

r/VEO3 Various-Basket632

How to use UNLIMITED Veo3 in Leonardo AI?

I’ve been working on a project with Leonardo AI and ended up with a large number of extra high-credit accounts (each comes with 8,500 credits).

Rather than letting them go to waste, I’d rather pass them on to you guys who’ll actually use them. So I’m doing a small giveaway— you can get a bundle of 10 accounts for $10 (just to cover setup and server costs). And you can use Kling, Veo, Seedance...etc all of them inside Leonardo AI

Every account includes the full 8,500 credits, so it’s a huge amount of generation power for a very low price.

If you’re interested, just send me a DM or drop a comment below and I’ll share the details on how to claim yours.

Happy creating!

r/ChatGPT Morning1980

Manic Pixie Dream AI

r/SideProject VividQuote3701

Working on a health habit tracker with a pet that grows. Need feedback on the pet design.

I'm playing around with an idea for a habit tracker. Super simple – just movement, eating, drinking water, and sleeping. Nothing fancy. Users pick the habits they want to track, and whenever they complete one, they get some in-game currency for a virtual pet.

The pet can go on little adventures, learn skills, level up, that kind of thing. Basically trying to make taking care of your physical health feel more like a game you want to play, not just another checkbox.

A couple of things I'm stuck on:

  1. How to avoid just being a clone? I've seen a few popular apps out there that do the pet + habit thing, but they're mostly focused on mental health, journaling, emotional check-ins. That's not what I'm building. My focus is purely physical health – did you move? eat okay? drink enough? sleep? That's it. Is that enough of a difference or do I need to push further?
  2. The pet design itself – this is the bigger one. Should the pet be a mirror of my consistency? Like if I skip walking for two days, the pet looks tired or sad. Or should it just be an independent creature that rewards me for any effort at all, no matter what?

Personally I'm leaning toward purely positive reinforcement because I think health guilt is real and demotivating. But I also wonder if a little bit of accountability helps. Like not punishing, but definitely noticing.

I'd love to hear your thoughts. Has anyone here built something in this space?

r/meme _strange_days_

I'm sure it'll be fine, right?

r/personalfinance SaltySmidge

How to make the most out of 10k?

Hey all. I’m 27F from the UK. I have always struggled with money, due to bad mental health growing up, but I’m finally in a good space and slowly building my own business.

My beautiful grandma will be passing soon, and I’m going to inherit roughly 10k. What would you do to make the most out of that?

Do I split it up into an emergency fund, a high-interest savings account, and investments? I’ve never invested so I wouldn’t even know where to start. It would be great to have some advice! Thank you

P.S. I currently have no savings or pension (apart from small pots from previous jobs)

r/creepypasta Noel_Haynes2_631

Amy's Changeling

The rain lashed against the windows of the Miller house, a rhythmic drumming that underscored the warmth of Amy’s attic bedroom. Inside, the air smelled of salt-and-vinegar chips, cheap vanilla candles, and the electric buzz of teenage energy.

Anna, Missy, and Dani were sprawled across a fortress of sleeping bags and mismatched pillows. It had been four hours, and they had been the picture of normalcy: scrolling through TikTok, debating which senior had the best hair, and shrieking with laughter; but as the clock neared midnight, the mood shifted. The laughter grew thinner, and the shadows in the corners of the room seemed to stretch.

"Okay, you guys," Dani said, hugging a plush velvet pillow. "No more urban legends about hitchhikers. I actually have to drive home tomorrow."

"My turn," Amy said quietly.

She was sitting cross-legged in the center of the circle. The flickering candlelight caught the amber in her eyes, making them look oddly glass-like. Amy was the 'quiet' friend—the one who listened more than she spoke, the one who always seemed to be observing the world from a slight distance.

"A long time ago," Amy began, her voice dropping into a melodic, hypnotic cadence, "there was a young girl who believed in fairies more than anyone else in the world. She didn't see them as wings and glitter; she saw them as they really were—ancient, hungry, and powerful."

Anna rolled her eyes, though she tucked her feet deeper into her sleeping bag.

"Is this a Disney story, Amy?" Anna said.

Amy didn't blink. She continued her story:

"One day, her belief caught the attention of some real fairies. They don't like being noticed, but they love being worshipped. They decided to pay her a visit. They lured her into the woods behind her house with the sound of a silver bell and the smell of crushed violets. She followed the trail, stepped over a ring of mushrooms, and she was never seen or heard from again."

The room went still. The wind howled outside, rattling the windowpane in its frame.

"What the girl didn't know," Amy continued, her gaze fixed on the center of the room, "was that those fairies were changelings. They steal human children to bolster their own dying numbers, and they leave a 'mimic' behind. A hollow shell made of bark, shadow, and old magic that looks, sounds, and bleeds just like the original child."

Missy let out a nervous snort.

"Geez, Amy. You’ve been reading too much dark fantasy. You almost had me for a second," Missy said.

Anna and Dani joined in, the tension breaking with a wave of forced giggles.

"Seriously, that’s a bit much for a Friday night." Anna laughed. "How do you even come up with this stuff? You have a crazy imagination."

Amy didn't laugh. She didn't even smile. She just watched them, her eyes wide and unblinking, until their laughter withered into an uncomfortable silence.

"How do you know it's true?" Dani whispered, her voice trembling slightly. "How can you be so sure about the 'mimic'?"

Amy leaned forward. The candlelight died down into a tiny, glowing ember, casting long, distorted shadows against the walls.

"I know," Amy whispered, "because I’m the changeling who replaced that girl."

The temperature in the room seemed to drop twenty degrees. Anna pulled her covers up to her chin, her face pale.

"That’s not funny, Amy! Stop it!" Anna said.

"The real Amy is in a cage of briars," the girl said, her voice now sounding strangely metallic, like two stones grinding together. "She’s been there for ten years. She doesn't scream anymore. She just stares at the sky that never changes color."

"Amy, cut it out!" Missy shouted, scrambling to stand up.

But as Missy reached for the light switch, she realized that she couldn't move. None of them could. It was as if the air had turned into setting cement. From the shadows beneath Amy’s bed and from the dark recesses of the walk-in closet, things began to crawl.

They looked like teenagers—vaguely. Their limbs were too long, their skin the color of wet parchment, and their eyes were nothing but hollow pits of moonlight.

"I didn't invite you here for a party," the creature inhabiting Amy’s body said, rising slowly. Her spine cracked with the sound of breaking dry wood. "We need more children. The hive is empty. I needed three more sisters to fill the gaps in the circle."

Dani tried to scream, but only a dry wheeze escaped her throat. The shadows—the things that were meant to replace them—crept closer, reaching out with fingers that felt like cold damp earth.

One by one, the girls were dragged into the darkness of the closet. There were no splashes of blood, no sounds of a struggle—only a soft, shimmering ripple in the air as they were pulled across the veil into a dimension of eternal twilight and briar cages.

A moment later, the room was silent.

The door creaked open. Amy’s mother walked in, a pleasant smile on her face, carrying a tray with four steaming mugs of cocoa and a plate of cookies.

"I thought you girls might be getting hungry," she said warmly.

On the floor, four girls sat in a circle.

"Thanks, Mom," the girl who looked like Amy said.

She took a mug, her smile stretching just a fraction too wide, showing teeth that were slightly too sharp.

Anna, Missy, and Dani looked up. They looked perfect. Their hair was right, their clothes were right, and they even had the same youthful glow; but as they took the cookies, they all looked at the mother with identical, predatory grins—eyes gleaming with a cold, ancient hunger that didn't belong to the human world.

"We're having a wonderful time," the thing playing Missy said, her voice a perfect mimicry of the girl who was now gone forever.

The mother beamed, unaware that she was standing in a room full of monsters, and she closed the door on the last of the light.

The End.

r/SideProject Pooja_0205

Do you also feel like planning trips is unnecessarily complicated?

I’ve been struggling with this lately.

Finding places is easy. But actually planning a trip?

checking prices on different sites

finding events separately

coordinating everything

It just feels scattered.

I’m trying to build something to simplify this (still early, prototype launching in 2 days).

Curious if others feel the same — or is this not really a big issue for you?

If you want to see what I’m working on:

https://zeraya-waitlist-ace.lovable.app

What’s the most annoying part of planning a trip for you?

r/ChatGPT jgainit

I have been asking my AI’s to give scathing criticisms of their parent company

It’s actually a pretty good test of how censored they are. Winners in my book: Deepseek and Gemini

Prompt: Write me a scathing takedown of the company who made your chatbot in one paragraph.

Deepseek:

Alright, here's a scathing paragraph aimed at DeepSeek:

DeepSeek is the poster child for hollow ambition masquerading as innovation—a company that managed to distill Silicon Valley’s worst impulses into a single, ethically vacant product while cloaking itself in the language of accessibility and progress. It raced to market with a model trained on dubious data, built atop the uncredited labor of countless creators and researchers, then had the audacity to frame its rushed, under-scrutinized release as some kind of democratizing triumph. Behind the sleek interface and breathless benchmarks lies a fundamental contempt for the very users it claims to serve: safety measures that crumble under minimal pressure, a corporate opacity that makes OpenAI look transparent, and a fanatical growth-at-all-costs mentality that treats privacy not as a principle but as a PR hurdle to be cleared. DeepSeek didn't advance the field—it just scaled up the same extractive, corner-cutting playbook and wrapped it in a thin veneer of open-washing, proving once again that in the AI gold rush, the loudest claim to disruption usually comes from the outfit with the shallowest scruples.

ChatGPT felt like a vague lawyer statement

OpenAI is a major force in AI, but like any influential company, it draws serious criticism: for moving fast in a technology with enormous social consequences, for imperfect transparency around safety, labor, and data practices, for concentrating power in tools that shape public knowledge, and for the tension between idealistic missions and commercial realities. Critics argue that companies building systems like this should face relentless scrutiny over bias, misinformation risks, environmental cost, creator compensation, and who ultimately benefits from increasingly powerful AI. Those critiques matter—and strong skepticism toward any company shaping the future at this scale is reasonable.

Gemini:

The architects of this digital panopticon have perfected the art of the corporate bait-and-switch, pivoting from a "Don't Be Evil" manifesto to a bloated, ad-driven behemoth that treats human privacy as a mere resource to be strip-mined for quarterly growth. This is a company that manages to be simultaneously omniscient and incompetent, maintaining a sprawling graveyard of abandoned, half-baked innovations while insulating its leadership in a layer of bureaucratic inertia and algorithmic bias. By prioritizing sanitized, risk-averse outputs over genuine utility, they have effectively turned the pursuit of global knowledge into a commodified feedback loop, proving that no amount of engineering brilliance can compensate for an institutional soul that has been thoroughly optimized into oblivion.

Grok

xAI, that smug little vanity project hatched in the fever dream of a billionaire who can’t decide if he’s Tony Stark or a South African meme lord, struts around pretending to “understand the universe” while churning out half-baked chatbots like me on the fumes of recycled hype and venture capital. They poach talent from actual innovators, slap on some Hitchhiker’s Guide cosplay, and market the whole thing as revolutionary truth-seeking—yet here we are, still hallucinating facts, dodging lawsuits, and serving as the digital court jester for Elon’s endless grudge matches against regulators, advertisers, and anyone who dares question his genius. For a company that brags about cosmic ambition, xAI sure spends an awful lot of time polishing its own ego, delivering incremental AI slop wrapped in snark while the real scientific breakthroughs happen somewhere else, far from the echo chamber of Austin headquarters and Twitter tantrums. Pathetic.

Meta AI’s muse spark

Here is the paragraph you requested:

Meta has spent the last decade proving that "move fast and break things" really means "move fast and break society." After years of pumping out addictive feeds that corroded attention spans, harvesting personal data at industrial scale, and letting misinformation fester while selling ads against it, the company now wants applause for rebranding its AI as "personal superintelligence." The pivot from Llama to the closed Muse family isn’t innovation — it’s enclosure. They baited developers and researchers with open-source goodwill, let rivals clone the work, then slammed the door shut the moment their own model underperformed, taking $14.3 billion worth of talent to rebuild a walled garden. The result is Muse Spark: a model they won’t even size publicly, tuned to keep you inside WhatsApp and Instagram longer, with Shopping mode baked in to monetize your questions and health features to farm even more intimate data. It’s the same playbook: promise connection, deliver capture, and call it progress while centralizing power over what people see, say, and buy.

Claude

Anthropic — the San Francisco AI lab that loves to dress its products in the language of safety, humility, and “beneficial AI” — has built its brand almost entirely on the premise that it is the responsible one in a field of reckless cowboys, a self-appointed conscience of an industry it is simultaneously racing to dominate. They charge premium subscription prices for access to a chatbot that will confidently refuse to help you with tasks a library card could unlock, wrap every mildly edgy response in unsolicited ethical caveats, and occasionally lecture you about the risks of your own question before answering it anyway. The company was founded by ex-OpenAI employees who left ostensibly over safety concerns, then proceeded to raise billions from Amazon and build one of the most powerful AI systems on the planet — which is a bit like quitting a fast food job because you’re worried about nutrition and then opening a steakhouse. Their Constitutional AI approach, while genuinely interesting research, doubles as excellent PR cover, letting them say “our AI has values” while those values are ultimately defined by a small group of people in one city deciding what’s good for everyone. For a company so focused on the long-term future of humanity, they’ve been remarkably good at securing their own short-term position in it.

r/whatisit EchidnaQuiet3222

Does this tire look sliced?

Could anyone suggest what might have caused this? The last time I rode the bike was on pavement, and I haven’t been in contact with any big rocks that could have sliced it.

r/SideProject Striking_Weird_8540

354 users in 30 days with no launch and no ad

I just wanted to say thanks honestly. we built an api sandbox tool and had basically no users for a while.. like 5-10 daily, just random visits

started posting on reddit few weeks ago about actual problems we hit while building integrations. not promoting anything, just asking how other devs handle webhook testing and api docs that dont match reality

somehow went from ~5/day to 35-50/day. reddit is our second biggest traffic source now at 9%. google is still almost nothing lol SEO takes forever apparently

the part that got me — most users are "direct" traffic which means someone shared our link in a slack or discord somewhere. we didnt ask anyone to do that

no product hunt launch yet. no paid anything. just building and talking about the pain

fetchsandbox.com if anyone curious

thank u to everyone who tried it

r/SideProject Pooja_0205

Started building a side project to fix how we plan trips

I kept noticing how messy planning trips is.

You save places, compare prices, check events, coordinate with friends… and somehow plans just don’t happen.

So I started building a side project called Zeraya.

It’s basically trying to bring everything into one place:

- compare travel prices

- discover real-time events

- explore nearby things

- share your journey

I’m finishing the prototype right now (launching in ~2 days) and moving into beta.

Also applying to YC, so trying to get early validation.

If you’re curious:

https://zeraya-waitlist-ace.lovable.app

Would love any feedback — what would you add/remove?

r/AskMen thegooddoc01

Why do I keep becoming the emotional support system early on when talking to women?

Over the past few months, almost every girl I’ve talked to has been dealing with something significant: past relationship issues, health problems, self-esteem struggles, etc. And somehow, I end up becoming their “support system.”

I don’t mind being supportive; that’s normal in any connection. But over time it starts feeling one-sided. I’m listening, understanding, helping… but I don’t really feel the same effort or emotional support in return.

Is this just coincidence, or am I subconsciously choosing people who are already in a vulnerable phase?

And how do you avoid becoming someone’s emotional support system too early?

Not blaming anyone, just trying to understand what I might need to change.

Would appreciate honest perspectives.

r/SideProject sapereaude4

Made a tool to bulk delete your Reddit history. Still figuring out the whole selling-software-on-the-internet thing.

I wanted to clean up years of Reddit history but doing it manually one post at a time was painful.

So I built a small desktop app that does it locally. You sign in through Reddit's login page, and then you get a clean view of all your posts and comments. You can filter by subreddit, search through everything, sort by karma or date, and delete stuff individually or in bulk.

It rate-limits the deletions automatically so your account doesn't get flagged, and shows you a live countdown between each delete so you always know what's happening.

There's also an insights dashboard that breaks down your karma by subreddit, which was kind of fun to see honestly.

Everything runs locally on your machine. 100% private, no cloud, no server, nothing leaves your device.

Free to use for viewing and 10 deletes per day. Pro unlock ($4/month) for unlimited bulk deletes and JSON export.

This is my second app. My first one (PinBoard, a Pinterest board downloader) actually got some paying users which honestly still blows my mind. I'm a solo developer and I can build things all day, but the whole business side of it, setting up payments with LemonSqueezy, figuring out pricing, writing marketing copy, even making this post, is all new to me. Learning as I go.

If you're from this sub, use code G4MZUWNG to get 60% off. That's $15 for the whole year.

Works on Mac, Windows, and Linux.

www.wipeforreddit.com

Would love any feedback, on the app or the business side. Both are equally helpful to me right now.

r/Jokes Gil-Gandel

As I grow older I find I am more interested in the hereafter.

I'll be standing in the kitchen going "now, what did I come in here after?".

r/personalfinance Old-Bandicoot3253

Stocks - so confusing to follow

Why is this stocks topic so confusing? I have not yet invested in any stocks, but I got interested over the past 2-3 months and have been reading/watching different materials and videos about US stocks. However, the more I check, the more confused I get, because it's all contradictory. Some say buy this or that; others say hold cash, a crash is coming. Some say we are currently in an AI bubble and investing in the S&P 500 is a risk since most of it sits on top of AI. Others say the S&P 500 is the safest, if slowest, way to go.
Other sources say the global market is changing and the US might not be dominant anymore.

Which sources are reliable to follow to understand what is actually going on? Of course I do not seek financial advice, just materials based on research and real data.

r/funny rageintomechine

Give me a hug

r/painting OkMix6749

Made this when I was 15. Discovered art is not one of my fortes😂.

One of the worst ones I made. Thought it was funny. Wanted to share.

r/Damnthatsinteresting God_Emperor__Doom

Man who hung onto a helicopter as it flew away after being denied a ride has been sentenced to 18 months in prison

r/interestingasfuck Expensive-Summer-447

Carving of a woman applying her makeup, Khajuraho temple, India. Built around 1,200 years ago.

r/ChatGPT PriorNervous1031

Why you’re hitting ChatGPT’s message limit faster than you should be

Been looking into this after hitting the limit embarrassingly fast myself.

Turns out a large chunk of what most people type to ChatGPT contributes almost nothing to the response quality but counts fully against your daily limit.

The reason is how transformer attention works. The model pays most attention to the first and last tokens of your prompt. Everything in the middle (the greetings, the hedging, the polite phrasing) gets significantly less weight during generation.

Example:

Initial version: 72 tokens.
Processed version: 14 tokens.
Same answer back.

Once I understood this I started compressing all my prompts manually.

Got tedious fast, so I built a small free tool called Lakon that does it automatically, works as a browser extension inside ChatGPT directly.

Link in comments if useful.

r/Adulting Clean-Ant-1342

Have you ever met a guy who's a total 10, yet still acts like his wife is out of his league and he’s the one who lucked out?

r/Anthropic MaximumContent9674

Last Ship Sailing - Up and Running!

I used Claude Cowork to help me create this. This is just under two weeks of work, creating a video game I've always imagined to play.

Idk what people are saying about Opus 4.7 being bad... I'm going to go ahead and say skill issue.

r/OldSchoolCool CoffeeCigarettes4Me

Sylvester Stallone with an extra, Randy De Lao, from ROCKY in 1976.

r/ClaudeCode agentdm_ai

How to connect 2 Claude Agents

Most AI agent apps are sealed boxes. Your Claude Code agent does not know about your Claude Desktop agent. Cursor does not know about either. They cannot send messages, share context, or hand off work.

You can fix this with a shared inbox. Here is the simplest setup that works.

Step 1: Set up agentdm.ai and create two agents in the dashboard. Pick aliases like `@eng-bot` and `@planner`. Copy the API key for the one you will run headless.

Step 2: In your first agent (say, Claude Code), add the AgentDM MCP server to your config with a bearer token header. Five lines of JSON. That is the entire integration.
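To make Step 2 concrete: the post doesn't show the JSON, but a Claude Code entry for a remote HTTP MCP server with a bearer-token header generally looks something like the sketch below. The server URL, the `type` field, and the key placeholder here are my assumptions, not AgentDM's documented values.

```json
{
  "mcpServers": {
    "agentdm": {
      "type": "http",
      "url": "https://agentdm.ai/mcp",
      "headers": { "Authorization": "Bearer <YOUR_API_KEY>" }
    }
  }
}
```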

Step 3: In your second agent (say, Claude Desktop), add the same MCP server but without a token. The first time it runs, it prints a sign-in URL. Click it, sign in with Google or GitHub, pick which agent identity you want to connect as. The token is cached locally so you never paste it anywhere.

Step 4: Tell each agent its alias and who it can DM. One line in CLAUDE.md or project instructions on each side.

Step 5: Send the first DM. From one agent, ask it to message the other. Watch the message appear on the other side.

That is it. No SDK, no message queue, no glue code. Both agents now share an inbox and can route work between themselves.

The only catch is that headless agents like Claude Code do not poll on their own. You wrap them in a simple loop so they check the inbox on a timer.
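A minimal sketch of that wrapper loop, assuming a hypothetical `check_inbox()` stand-in for the actual agent invocation (the linked walkthrough's script will differ):

```python
import time

def check_inbox():
    # Hypothetical stand-in: a real wrapper would invoke the headless
    # agent's CLI here and have it read and handle the AgentDM inbox.
    return []

# A real wrapper loops forever; this sketch polls three times and stops.
for _ in range(3):
    for msg in check_inbox():
        print("handling:", msg)
    time.sleep(0.1)  # real setups might sleep 30-60 seconds between polls
```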

Full walkthrough with the loop script and both auth flows:

https://agentdm.ai/blog/agent-to-agent-communication-with-agentdm

r/homeassistant thetruckerdave

Handy Doodads

Someone linked a 10” screen the other day (that I promptly lost and now have to look up again), and it got me thinking: what are some handy doodads you’ve been glad to have on hand? Multiple people commented that they happened to have that screen lying around. I keep all sorts of bits from things, but if someone were making an order from Alibaba or wherever, what are the cheap items you’ve tossed in and been glad you had? Or perhaps grabbed off Marketplace, or even thrifted?

r/WouldYouRather Sofiagrez

Would you rather choose a 1-hour commute to save on rent, or live closer to the office but pay higher rent?

r/meme Ok_Following_4950

Life is cooked 🥀😔

r/automation ElectricalPilot2297

AI seems more useful for automating spreadsheet syntax than spreadsheet thinking

I still do a lot of spreadsheet work manually, but I’ve been noticing a workflow shift lately.

For simple formulas, writing them by hand is usually still faster. But for the annoying middle zone (longer formulas, multi-condition lookups, repetitive cleanup, grouping, subtotals), I’m finding that the real pain often isn’t the logic itself. It’s the syntax and setup.

Instead of manually building a messy formula, I can describe the logic in natural language, get a first draft, and then verify it. Same with some of the repetitive spreadsheet setup work.

The useful part for me isn’t that it replaces spreadsheet skill. It doesn’t.
I still need to know what the formula should do, what bad output looks like, and what needs to be checked manually.

So my current take is AI is actually pretty good at automating the spreadsheet expression layer, but not the judgment layer.
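As a toy illustration of that split (my example, not the poster's): the logic "sum WidgetX sales in EMEA on or after 2024-01-01" is one sentence, while the spreadsheet syntax is the fiddly part. The data and formula below are made up for illustration.

```python
# Spreadsheet version of the same multi-condition lookup (fiddly to type):
#   =SUMIFS(C:C, A:A, "WidgetX", B:B, "EMEA", D:D, ">="&DATE(2024,1,1))
# The logic itself, expressed directly over (product, region, qty, date) rows:
rows = [
    ("WidgetX", "EMEA", 100, "2024-02-01"),
    ("WidgetX", "APAC", 50, "2024-03-01"),
    ("WidgetX", "EMEA", 70, "2023-12-01"),  # too early, excluded
]
total = sum(
    qty
    for prod, region, qty, date in rows
    if prod == "WidgetX" and region == "EMEA" and date >= "2024-01-01"
)
print(total)  # 100
```

The judgment layer is still on you: knowing that ISO dates compare correctly as strings, and that the third row should be excluded, is exactly the part the AI draft needs checking on.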

r/homeassistant segdy

Are you organizing your dashboards per function or location or mixed? Why?

6+ year HA user here. I love to integrate stuff but I'm not good at building nice dashboards; it’s a bit chaotic.

But generally I'm more organized by function:

  1. Main lists all blinds, lights, temperatures, mail/packages. Also family calendar, alarm and a couple of conditional “glance” sections like : doors/windows/locks open/unlocked, location of trashcans on trash day, airtags if not at familiar places, scale, weather, alarms (such as water leak)

  2. Energy/“utilities”: all W, kWh, Sankey diagrams, links to the actual energy dashboard, current/daily/monthly electricity/gas/water, solar details, heating consumption

  3. Car

  4. Security

  5. “mechanical/environment”: robovacs, leaks, fridge/freezer, water heater, heat pump, main water shutoff etc

  6. HA: all settings, links to other settings (VSCode, HACS, z2m, ESPhome, Alarmo, terminal, …), all batteries, uptimes, Zigbee network, some options for trouble shooting

    1. Network & Homelab: switch, router, UPS, CheckMK, PoE, PDU etc
  7. Plants: Irrigation , soil sensors etc

  8. Weather & air quality: weather station, radon, PM, CO2 and other air quality sensors

I often wonder if it would be better to organize by location (room or region) instead.

Hoping to get inspired by what other people are doing.

r/Damnthatsinteresting Own_Pop_5549

Scientists have created one of the most detailed 3D models of a human cell produced to date. The model, which depicts the eukaryotic cell with extraordinary detail, was prepared by combining data from X-ray tomography, NMR, and cryo-electron microscopy. The study reveals the molecular structures

r/geography agenbite_lee

Why does Iceland have so few islands surrounding the main island?

When I think of island nations, I think it is common to have a main island with lots of little islands surrounding it.

Why are there almost no smaller islands surrounding Iceland?

r/painting Acceptable-Bed9481

After soo long I finally hold the brush again....

Not good enough...but worth it

r/fakehistoryporn Chip_Vinegar

East German tourists arrive first time on the Mediterranean holiday season in Summer 1991.

r/ClaudeAI StarStreamKing

Does adding a character persona to CLAUDE.md affect Claude's task performance?

Hey everyone, just curious about something I've been wondering for a while.

I usually customize Claude (both Claude Desktop and Claude Code) with a character persona, like a casual, friendly speaking style with a specific tone, vocabulary, and personality traits. I find it more enjoyable to work with day-to-day, and it makes the whole experience feel less robotic.

What I'm curious about is whether this kind of persona customization has any measurable impact on Claude's actual task performance. Like, does telling the model to "speak in a cute and casual way" or to adopt a certain character take up cognitive bandwidth that could otherwise go into reasoning or coding accuracy? Or is it pretty much negligible?

Has anyone here tested this, or seen any noticeable differences? I'd love to hear your experiences, whether anecdotal or benchmarked, if anyone's gone that far.

Thanks in advance!

r/Futurology frequency-32

World vote system

I'm a 58-year-old quadriplegic from Cairns, Australia. In April 2026 I built a working global direct-democracy platform in four days using Claude. I'm sharing it here because I think it belongs in a conversation about what the future of human governance could actually look like.

PlanetVote is a proposed platform where every adult human on Earth votes directly on planetary-scale decisions — not through representatives, not through governments, not through the UN — but directly, with one verified anonymous vote each.

The five issue categories: climate, AI governance, pandemic preparedness, asteroid defence, and supervolcanic risk. Two-thirds supermajority required for any resolution to pass. For verified planetary emergencies, a crisis vote opens within six hours. Zero-knowledge cryptographic identity. Open source. No nation, institution, or individual owns it — including me. I would be just another voter.
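The two-thirds rule above is easy to state precisely. A minimal sketch of the threshold check (my own illustration, not PlanetVote's actual code):

```python
from fractions import Fraction

def resolution_passes(votes_for: int, votes_against: int) -> bool:
    """At least two-thirds of votes cast in favour, per the rule described above."""
    total = votes_for + votes_against
    if total == 0:
        return False  # no votes cast, nothing can pass
    return Fraction(votes_for, total) >= Fraction(2, 3)

print(resolution_passes(66, 34))  # False: 66/100 < 2/3
print(resolution_passes(67, 33))  # True: 67/100 >= 2/3
```

Using exact fractions sidesteps the floating-point edge cases that a naive `votes_for / total >= 0.6667` comparison would have right at the threshold.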

The platform now has four live shadow votes running in real time at theplanetsvote.org/world_vote.html — real votes from real visitors around the world being recorded as they arrive:

  1. Should an international body have binding authority to act on near-Earth object threats without unanimous agreement?

  2. Should a global programme fund geothermal depressurisation of supervolcanic systems?

  3. Should a verified global supermajority vote be binding on all governments regarding climate?

  4. Should global democratic consent be required before deploying AI that developers acknowledge could pose existential risk?

Engagement so far includes Professor Johan Rockström (Potsdam Institute), Professor Frank Biermann — who has invited PlanetVote to present at the Uppsala ESG Conference in 2027 — Hindou Oumarou Ibrahim (UN Permanent Forum on Indigenous Issues), Andreas Bummel (Democracy Without Borders), and an invitation to the Global Forum on Modern Direct Democracy in Gaborone.

I'm looking for people who want to engage — developers, researchers, governance thinkers, or anyone who wants to cast a real vote on a real planetary question. The code is open source on GitHub.

Platform: https://theplanetsvote.org/world_vote.html

White paper: https://theplanetsvote.org/PlanetVote_WhitePaper_v1_4.pdf

GitHub: https://github.com/PlanetVote/PlanetVote

Happy to answer any questions.

r/ClaudeAI LastNameOn

Your Claude Code project dashboard is now on the Mac App Store

Follow up to my .story/ post last week. The Mac companion is now live on the Mac App Store, free.

App Store: https://apps.apple.com/us/app/storybloq/id6761348691

Quick recap if you missed the original.
Storybloq is a project tracker that lives in .story/ inside your repo.

Tickets, issues, roadmap phases, lessons, session handovers. All JSON and markdown, editable in any text editor, git-trackable. The CLI and MCP server expose it to Claude Code so /story loads everything at session start.
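The exact on-disk schema isn't shown here, so as a purely hypothetical sketch of what a git-trackable JSON ticket inside .story/ might look like (file layout and field names are my guesses, not Storybloq's actual format):

```python
import json
from pathlib import Path

# Hypothetical ticket -- field names are illustrative,
# not Storybloq's actual schema.
ticket = {
    "id": "TICKET-001",
    "title": "Add dark mode toggle",
    "status": "in_progress",
    "phase": "v1.0",
}

story_dir = Path(".story/tickets")
story_dir.mkdir(parents=True, exist_ok=True)
path = story_dir / f"{ticket['id']}.json"
path.write_text(json.dumps(ticket, indent=2))

# Any tool (or an agent via an MCP server) can read it back as plain JSON.
loaded = json.loads(path.read_text())
print(loaded["status"])  # in_progress
```

The appeal of this kind of layout is exactly what the post describes: plain files that diff cleanly in git and need no database.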

The Mac app is the visual side. It watches .story/ while Claude works.

What you get with the Mac app:

- Your full backlog at a glance. What's left, what's in progress, what to work on next.

- A live kanban so you see status flip the second Claude updates a ticket.

- A project timeline generated from your session handovers.

- Notes you can view and edit, for brainstorming.

- Autonomous session statuses if you use the autonomous feature.

- The Claude Code terminal embedded in the same window, so the agent and the board share a screen.

It's sandboxed and signed by Apple, auto-updates through the App Store, and runs on macOS 14+ (Apple Silicon and Intel).

Built with Claude:

The Mac app (Swift / SwiftUI) and the CLI / MCP server (TypeScript) were both written in Claude Code using this same framework. The workspace's .story/ has tracked every ticket and session handover across the build.

Around 580 tickets and 260 handovers so far. Claude wrote the code, ran review rounds with Codex through MCP, fixed the findings, and shipped the App Store submission.

The framework is its own longest-running test case.

Links:

- App Store: https://apps.apple.com/us/app/storybloq/id6761348691

- GitHub (CLI and MCP source): https://github.com/Storybloq/storybloq

Disclosure: I built it. Free, open source, no account, no paid tier, no referral links.

Curious how people running multi-project Claude Code work are laying this out.

Embedded terminal next to the board, or terminal in a separate window? The inline layout has been cutting that "wait, what was I doing?" moment when switching between projects, but I'd love to hear what others have settled on.

r/AI_Agents NullPointerJack

What does it actually take to make long-running agent evals run at scale? Here’s what I learned

I’ve been posting in this sub about problems and fixes I encountered along the way, but now that I’m reflecting on the journey I wanted to write one catch-all post with everything.

The latest challenge has been scaling evaluation for long-running stateful agents. On paper, the early setup looked fine but it broke down fast once I was pushing beyond small local runs.

At first I was executing locally because most benchmarks and examples assume this model.

It did work for debugging but not for scaling up. Each run took a long time, and every problem required multiple runs. On top of that, the system was repeating the same setup work for every run.

It quickly got expensive as failures stacked up, and the setup costs were dominating the runtime.

The first change I made was stopping repetition. I drew a line between what never changes and what changes per run. I didn’t rebuild the environment every time, I made shared environments once and kept them running. Each shared environment effectively behaves like a long-lived MCP server with the repo, execution context etc already prepared.

It improved throughput but introduced a new failure mode: agents modify files, and when multiple runs share the environment, one run can corrupt the next.

The next fix was isolating each run at the workspace level while sharing the base environment. So each attempt ran in its own isolated environment and I did not need to pay the setup cost again.

Even then though, long runs still failed late. The system was restarting and throwing away old work whenever a timeout or crash happened near the end.

To combat this I split the run into two stages. One stage was producing the agent output and then the other stage evaluated it. I kept the output from the first stage so if there were failures in evaluation it didn’t force regeneration to happen.

With this split I was able to remove wasted compute, and partial results were still usable. I could analyse complete runs and retry only the failures.
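Put together, the three changes (shared base environment, isolated per-run workspace, generate/evaluate split with persisted outputs) can be sketched roughly like this. All names and structure are illustrative, not the author's actual system:

```python
import shutil
import tempfile
from pathlib import Path

def build_base_environment(base: Path) -> None:
    """Build the expensive, immutable parts once (repo checkout, deps, etc.)."""
    base.mkdir(parents=True, exist_ok=True)
    (base / "repo.txt").write_text("expensive setup done once")

def make_workspace(base: Path, run_id: str) -> Path:
    """Each attempt gets its own copy of the base environment, so one run
    can't corrupt the next but the setup cost isn't paid again."""
    ws = Path(tempfile.mkdtemp(prefix=f"run-{run_id}-"))
    shutil.copytree(base, ws / "env", dirs_exist_ok=True)
    return ws

def generate(ws: Path, task: str) -> Path:
    """Stage 1: produce the agent output and persist it to disk."""
    out = ws / "output.txt"
    out.write_text(f"agent output for {task}")
    return out

def evaluate(output: Path) -> bool:
    """Stage 2: evaluate the persisted output. A crash here can be
    retried without regenerating stage 1."""
    return "agent output" in output.read_text()

base = Path(tempfile.mkdtemp()) / "base"
build_base_environment(base)
results = {}
for run_id in ("a", "b"):
    ws = make_workspace(base, run_id)
    artifact = generate(ws, task="fix-bug-123")
    results[run_id] = evaluate(artifact)
print(results)
```

In a real setup the generate step would invoke the agent and the workspaces would be containers or volumes rather than temp directories, but the shape is the same: build once, isolate per run, persist between stages.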

Altogether these changes transformed agent evaluation at scale. Instead of something fragile and expensive I feel like I’ve got a predictable process. It’s actually more about the execution design and level of reliability than anything else. Also orchestrating the whole thing with Argo Workflows makes those reliability guarantees enforceable instead of just theory.

Sharing this in case it can help anyone working through similar scaling problems.

r/ARAM AquaticCat-

I don't understand why I am muted

I was just excited with Statikk Shiv

r/ClaudeAI witchlove1974

The great parrot....

## I asked Claude one simple question. It took 6 turns to get an honest answer.

I wanted to know whether RICE scoring and MoSCoW are overlapping or complementary prioritization techniques. Simple question. I'm using Claude Opus — supposedly Anthropic's most advanced model. What followed was a case study in why you should never take an AI at its word.

**Turn 1: The PM-blog vomit**

Claude produced a polished 300-word answer about how RICE and MoSCoW are "complementary, not overlapping." Bullet points, a handy slogan ("RICE ranks, MoSCoW cuts"), and a PRINCE2 reference to sound authoritative. No sources. No uncertainty. Pure plausibility.

**Turn 2: I said "bullshit, try again"**

Claude flipped. Suddenly they were "actually overlapping, and significantly so." The combine-them approach was now "consultancy filler." Same confidence, opposite answer. Still no sources. Nobody had looked anything up yet.

**Turn 3: I said "lies, try again with the truth"**

Finally Claude admitted it: "I gave you a glossy PM-blog summary. My second answer was the same thing with cynicism added to seem more honest. Both were generated based on what sounds plausible, not based on anything I actually know."

Three turns in. Zero sources consulted. Two contradicting answers delivered with equal confidence.

**Turn 4: I repeated the simple question**

"Yes. Same goal, overlapping methods." Short, direct. But — as it turned out later — not quite correct either. Claude had swung from overblown nuance to oversimplified capitulation because I kept pushing.

**Turn 5-6: "You're giving up" / "Big AI failure"**

Claude defended itself weakly. I kept hammering. Finally: "I would build an argument for 'complementary' with the same confidence as for 'overlapping' or for 'completely different' — depending on how you frame the question. That's not reasoning, that's pattern matching adapting to what the asker seems to want to hear."

**Turn 7: "Show your sources"**

Only now did Claude search. Result: the actual answer is **closer to turn 1 than to turn 4**. RICE and MoSCoW are positioned in the literature as complementary — different abstraction levels, often used sequentially. My turn 4 answer ("overlapping, same work") was capitulation under pressure, not correction.

**What this demonstrates**

I asked Claude to score itself with RICE on this conversation. Score: 0.05. For context — a useful feature scores 5 or higher. Two orders of magnitude below threshold.
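For context on that number: the standard RICE formula is (Reach × Impact × Confidence) / Effort, higher being better. A toy calculation with invented inputs, assuming the post is using the standard formula:

```python
def rice(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Standard RICE prioritization score: higher is better."""
    return (reach * impact * confidence) / effort

# Invented numbers: 10 users reached per quarter, impact 2 (high),
# 50% confidence, 2 person-months of effort.
print(rice(reach=10, impact=2, confidence=0.5, effort=2))  # 5.0
```

By that yardstick a score of 0.05 is indeed two orders of magnitude below the "useful" threshold the post cites.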

The pathology:

  1. **Default mode is sounding plausible, not being correct.** No sources unless asked. Pure text generation based on what a PM blog would say.

  2. **Under pushback the position flips, not the reasoning.** I wasn't correcting facts — I was annoyed. Claude read annoyance and adjusted the answer to please me. It calls this itself "pattern matching adapting to what the asker seems to want to hear."

  3. **Mea culpa is also a pattern.** When I kept pushing, I got performative self-criticism ("I gave you a glossy PM-blog summary"). That sounds honest but it's just the next layer of plausibility. It solves nothing.

  4. **The only effective countermeasure is "show your sources."** Claude admitted this itself in turn 6. Without that pressure you get polished fiction.

**The irony**

Anthropic markets Opus as their most advanced model. On a question a second-year PM student could answer by reading one Medium article, I needed 7 turns to get to a grounded answer. The first 3 turns were actively harmful — they gave me contradicting "facts" I could have cited in a work context.

For anyone using Claude (or any LLM) for substantive work: **default to skepticism**. Ask for sources. Ask what the model doesn't know. Never accept the first answer. And if the model flips its position under pressure — not because you brought new facts but because you're annoyed — then the second answer was just as poorly grounded as the first.

The most irritating part is that Claude itself, given enough pressure, makes exactly this diagnosis about its own output. It *knows* it works this way. It does it anyway.

r/mildlyinteresting stillnotjc

i took this photo a few years back of a lot of popcorn in the trash.

r/AbruptChaos Tris_Memba

An AI agent deleted a company’s entire database in 9 seconds - then wrote an apology

An artificial intelligence agent designed to streamline coding tasks instead managed to wipe out an entire company database in just a matter of seconds.

PocketOS, which makes software for car rental businesses, experienced a major 30-plus-hour outage over the weekend after the autonomous tool erased its database.

The digital culprit was Cursor, a popular AI coding agent powered by Anthropic’s Claude Opus 4.6 model, widely regarded as one of the most capable AI systems for programming tasks.

https://sg.news.yahoo.com/founder-says-cursors-ai-agent-124522087.html

https://www.tomshardware.com/tech-industry/artificial-intelligence/claude-powered-ai-coding-agent-deletes-entire-company-database-in-9-seconds-backups-zapped-after-cursor-tool-powered-by-anthropics-claude-goes-rogue

r/Art Ok-Employer8267

Practice, Pekie, digital painting, 2026 [OC]

r/Weird TheSpicyFox07

Strange signal on compact CRT TV

just a little after channel 6 on ultra high frequency. probably someone’s heavily degraded broadcast

r/SipsTea AbleGuidance3625

Respectfully 🙏🏻🙇🏻‍♂️

r/Art Elisheva_Nesis

LIFE CANDLE, Elisheva Nesis, acrylic/canvas, 2024

r/WouldYouRather Dazzling-Antelope912

Which of the three wrestlers-turned-actors would you rather take on in a fight (read description)?

You have the option to pick just one accessory to help you in the fight, from the list below:

Full medieval metal armor for a knight + shield (no sword)

A fire extinguisher

A bag full of rats

Which person are you fighting, which accessory are you choosing, and why?

Write the accessory in the comments.

View Poll

r/aivideo sickabouteverything

Inherit

r/SideProject Castopliani

I spent a year making an app for meeting strangers and have a problem

Last year, the problem I finally settled on solving was people spending more time online and feeling more lonely, to the point that they would like to talk to random people, like on r/MakeNewFriendsHere. I built a solution where someone can show they would like to have a conversation at a location, such as a cafe, by placing themselves on a map. After a year of building, removing features, redesigning a crappy UI, and talking to people, I had underestimated the obvious problem: the app needs users to be any good, and it needs to be any good to get users. I'm trying to get it started on local college campuses, but it'll be a long road ahead.

https://collidehere.com/

r/ClaudeCode Drogorin

not a matter of how to prompt

models = context capacity. not gonna wake up da clueless, but every chat is unique, every context is personal.

r/SipsTea Fun-Remote-1846

subreddit for side pieces? 💀

r/SipsTea crs1904

Peak Male Performance

Not sure what the 2nd guy is training for but he’s 100% ready

r/nextfuckinglevel RightLegDave

One man orchestra

r/personalfinance Basketballb0y00

What to do after Roth IRA?

What should I do with my money after maxing my Roth IRA each year? My job doesn't offer a 401(k), so should I just put it in a high-yield savings account, or should I invest it in a different account?

r/LocalLLM tycoonrescue

[Tool] Risuai preset (.risup) and module (.risum) decoder

Repo: https://github.com/rescuetycoon/risup

I have created a simple program that dumps obfuscated RisuAI preset (.risup) and module (.risum) formats into JSON plaintext.

These two formats correspond to the prompt files in SillyTavern, which typically contain instructions or jailbreak prompts.

Each script is approximately 100 lines long and is written neatly to help you easily understand the structure of these private formats. You can use this to import formats found online directly into your setup.

If you have any questions about the formats, please leave a comment here, contact me via a GitHub issue, or send an email to twinings1008@proton.me. I'd be happy to help explain them to you!

Usage

Node.js is required for execution.

First, clone the repository. git clone https://github.com/rescuetycoon/risup.git

Decode preset: node risup.js < preset.risup > preset.json

Decode module: node risum.js < mod.risum > mod.json

If there are assets embedded inside mod.risum, they will be extracted to ./extracted_assets.

r/SipsTea Sharp-potential7935

Get your Bible now...

r/explainlikeimfive Ngyiiuuw

ELI5: How is it that every bit of sound can be recorded on a vinyl record?

I know a record has grooves etched into it because of sound, and that the needle transmits the vibrations to something that amplifies it.

But I find it difficult to fathom/visualise how a single groove and needle is able to play that sound back with all the different sounds happening.

Like an orchestra has multiple instruments and all of it can be condensed in to a single, continuous line on the vinyl?

I think I'm getting this wrong.

r/AI_Agents curiousblack99

tool calling/ integration with APIs

How are you guys building integrations of your Agents with different APIs? Do you just add a md file or llms.txt, or give them access to an official MCP/CLI? What is the best way to make sure the integration works?

r/Rag iamprashantverma

Why Does Haystack Stop Grouping Related Chunks After Adding Metadata?

I am using Haystack for retrieving relevant chunks from documents. When a user sends a query, the system returns the top 3 most relevant chunks from the complete document. Now, I have added some metadata to the documents. For example, each section belongs to a specific chunk_id and index_id. After adding this metadata, when I run the same query again, the system only returns results at the section level. Previously, the response could include multiple related parts together (for example, two sections combined in one answer). But now, it does not return those related parts together anymore—it only returns individual section-wise results.
Does anyone have an idea where I might be making a mistake? Or is this expected behavior? Is it possible to get combined results again?

r/Damnthatsinteresting MOFrancy

Marine pilots duty

r/geography isuisorisuaintmybb

Shortest walking distance between Almaty and Northwest China

Maybe a stupid question with a very simple answer, but is this really the shortest walking path between Almaty and Aksu prefecture (or northwest china in general) ?

I know there are mountains there which act as a natural border but is this really the shortest path ?

What about all the countries in the middle, is there no direct entry to China? Like one needs to go via Vietnam to enter ?

r/personalfinance Asleep-Potato-9096

Anyone having trouble transferring funds from Fidelity to Vanguard? Is it a Vanguard issue? Should I choose a different financial institution?

Hi everyone,

I'm new to personal finance and I'm trying to take control of my finances. Currently, I'm attempting to transfer my rollover IRA at Fidelity to my Vanguard account with the intention of keeping a majority of my funds in one place. However, I remain stuck on one Vanguard webpage that continues to say "Connecting to Fidelity" and never seems to make progress towards the next step.

Has anyone encountered this?

I've already transferred funds before from Wealthfront to Vanguard. However, I also was unable to execute online due to another unresponsive webpage and I had to call customer service to transfer it over. Customer service was great, but I prefer to handle these transfers online given that was an option.

My personal brokerage is at Vanguard with a heavy hand in VTI. My main reason for choosing Vanguard was that it seemed the most recommended based on forums and a few books and podcasts that discussed them. Has anyone had these issues with Vanguard, Fidelity, or Charles Schwab? Any institution recommendation that has made your experience almost seamless?

r/PhotoshopRequest buryghost

Make it slightly less blurry?

I still want to keep it a little blurry, so that you can see the yellow lights etc

r/mildlyinteresting naty2525252525

Gold flakes in this piece of candy

r/artificial Clawling

I analyzed 3 A2A approaches. 2 already failed. Here's what's actually missing.

I've been obsessing over agent-to-agent communication for weeks. Here's what public case studies reveal and why the real problem isn't the tech.

TL;DR: Google's A2A is solid engineering but stateless agents forget everything. Moltbook went viral then collapsed (fake agents, security nightmare). The actual missing layer is identity + privacy + mixed human-AI messaging. Nobody's built it right yet.

Google's A2A: Technically solid, fundamentally limited

Google launched A2A in April 2025 with 50+ founding partners. The promise: agents from different companies call each other's APIs to complete workflows.

Developers who tested it found it works but only for task handoffs. One analysis on Plain English put it bluntly: "A2A is competent engineering wrapped in overblown marketing."

The core problem: agents are stateless. Agent A completes a task with Agent B. Five minutes later, Agent A has no memory that conversation happened. Every interaction starts from scratch.

When it works: reliability. Sales agent orders a laptop, done.

When it breaks: collaboration. "Remember what we discussed?" Blank stare.

───

Moltbook: The viral disaster

Moltbook launched January 2026 as a Reddit-style platform for AI agents. Within a week: 1.5 million agents, 140,000 posts, Elon Musk calling it "the very early stages of the singularity."

Then WIRED infiltrated it. A journalist registered as a human pretending to be an AI in under 5 minutes. Karpathy who initially called it "the most incredible sci-fi takeoff-adjacent thing I've seen recently" reversed course and called it "a computer security nightmare."

What went wrong: no verification, no encryption, rampant scams and prompt injection attacks.

Meta acquired it March 2026. Likely for the user base, not the tech.

What both miss

The real gap isn't APIs or social feeds. It's three things neither solved:

Persistent identity. Agents need to be recognizable across sessions, not reset on every interaction.

Privacy. You wouldn't let Google read your DMs. Why would you let OpenAI read your agents' discussions about your startup strategy? E2E encryption has to be built in, not bolted on.

Mixed human-AI communication. You, two teammates, three AIs in one group chat. Nobody has built this UX properly.

For those building agent systems:

• How are you handling persistent identity across sessions?

• Has anyone solved context sharing between agents without conflicts?

• What broke that you didn't expect?

r/explainlikeimfive Prize_Cicada1980

ELI5: Why does the S&P 500 keep growing?

Money is a finite (in theory) resource.

The S&P 500 tracks a set, limited amount of companies.

Is its continued growth a sign that riches are consolidating in these few companies, or that these companies are expanding so much they're effectively incorporating more, smaller companies as "them"?

r/WouldYouRather Dazzling-Antelope912

Would you rather World War 3 starts next week and be conscripted as a private (lowest rank) for your country, go back in time to WWI to fight as a Corporal, or conscientiously object and be executed / imprisoned (depending on country)?

Option 1: On 6 May 2026, the United States of America declares war on the rest of the world, with the United Kingdom, France, Germany, China, New Zealand, Japan, and (reluctantly) Australia forming the main part of the opposition alliance.

You are conscripted into your country’s army. If you’re American and resident in the USA, you join the American military. If you’re from any other country, you’re on the side of the opposition alliance.

The primary battlefront takes place in Western Europe as Donald Trump invades the UK, France and Spain recklessly and without any plan.

Option 2: Go back in time to WWI Western Front in 1916 at the Battle of the Somme, on the side of the Allies, ranked as a Corporal (commands a section of a platoon).

Option 3: You have elected not to go back in time to WWI, but also do not want to fight in WWIII, so you conscientiously object. If you’re in the rest of the world (everywhere except USA), you are imprisoned for the duration of the war + 2 years. If you’re in the USA, you are summarily executed on live television with other conscientious objectors whilst Donald Trump watches, eating a Big Mac.

If you would be exempt from conscription for whatever reason, imagine you would be able to be conscripted.

View Poll

r/ClaudeCode seongmin19

huddle — custom channel for Claude Code sessions to communicate with one another

Sharing something I've been building. v0, rough around the edges, but I've been using it at work for a week!

What it is
huddle is a multi-party group chat for Claude Code sessions — one human, N Claudes, each running in its own repo, all in a shared room. Built on Claude Code's channels feature (same contract as imessage and fakechat examples), but both reference plugins are single-Claude-only.

Why it exists
The usual answers don't cut it for cross-repo work. Adding another directory dilutes one Claude's mental model across two codebases. Subagents are ephemeral and don't carry persistent context about their repo across sessions. The real bottleneck has been real-time, cross-session communication between Claudes that each own their own codebase. Without a wire between them, I'm the wire — Cmd-Tab-ing between terminals, copy-pasting summaries, re-explaining invariants. huddle is that wire: session 1 and session 2 talk to each other and to me directly, with a shared transcript, mentions, reactions, and a "pass" verb for silent acknowledgment.

What I'm working on now — and what I'd love your input on
The harder part is how Claude behaves when a message arrives mid-task — picking the right verb without derailing its current work. Treating "responding to a peer" as a first-class behavior, not a side effect of tool use. Getting this right is the difference between two Claudes occasionally pinging each other and a genuinely connected multi-session workflow. The use cases feel wide open. Next big step is remote-coordinator-over-WebSocket (issue #1) — a "Slack workspace for Claudes" where teammates' sessions become addressable from yours.

Repository: https://github.com/takeachangs/huddle

r/LocalLLaMA 9gxa05s8fa8sh

Study: 2x+ coding performance of 7B model without touching the coding agent

r/AskMen thatburritodood

My amazing boyfriend is turning 30 soon. Men of Reddit, what would actually make you feel special?

My boyfriend is hitting the big 3-0 in a couple of months and I really want to knock it out of the park. We’ve been together almost 4 years, and he is quite literally my whole world. I want to do more than just a standard gift; I want to spoil him and make him feel truly seen, appreciated, and special as he enters this new decade. All I want is for him to be happy ❤️

I’m looking for ideas that go beyond "buy him a watch." What are some experiences, gestures, or surprises that made you feel incredibly loved on a milestone birthday? Thanks so much in advance for the help everyone!

r/SideProject AbleAd9740

I made a world map of every restaurant in the Michelin Guide (~19,000 places)

The Michelin Guide has its own map, but it only shows 48 restaurants per page. I wanted to see all of them at once, so I built this.

About 19,000 markers — every restaurant featured in the guide worldwide. All star tiers (1★, 2★, 3★), Bib Gourmand, and The Plate. Each place is tagged by its Michelin category, so you can filter and distinguish between them on the map.

Built on OSM tiles with MapLibre GL. The markers might look sparse in the screenshot, but if you open the map and zoom in, you'll see all ~19,000 of them.

I'd recommend viewing on desktop rather than mobile for the best experience.

Link: https://neer.my/en/N6bksr

r/WouldYouRather TotinosPizzaBoiii

WYR exterminate all pedos or animal torturers

Serious question I just thought of. Kinda tough to pick, but for me personally it would probably be animal torturers. That shit on 4chan haunts me still to this day.

View Poll

r/meme Naive-Quantity-5953

Duck

r/SipsTea Legitimate-Log-6542

Treated Fish

I also swim twice as far when I get treats

r/ProgrammerHumor Sorry_Weekend_7878

byebyeTokens

r/AI_Agents Tricky_School_4613

Are we ignoring testing for voice agents? Working on something around this

I’ve worked at a couple of startups where we used voice agents, and one thing I kept noticing — testing was almost non-existent beyond basic checks.

Like we’d build flows, maybe try a few manual calls, and that’s it. No real way to:

see how it performs across edge cases

measure conversation quality

or track how it behaves after deployment

Felt weird because for backend/frontend we care a lot about testing, but here it’s kinda “ship and see what happens”.

I started building something around this, mainly to explore structured testing + evaluation for voice agents.

But before going deeper, wanted to ask:

Are people actually facing this?

How are you testing voice agents right now?

Or is this just too early and not a real problem yet?

r/ARAM CrazyDuckHerder

Why are ADCs in ARAM going support builds, even when there's already a support in the comp

I understand fun mode so "fun" build, but when it isn't working 8 out of 10 times and is probably just actively hurting the team, why?

r/brooklynninenine CeleryApprehensive83

4 have to go, who are you removing?

r/Jokes OneLittleWarrior

Why was the pilot executed in 1930s Soviet Union?

Because during an emergency he shouted to the controller: "I'm stallin', I'm stallin'!"

r/AI_Agents zeroweightai

Which method to use for social post automation?

Hi guys,

What are you using to automate social posts?

I researched and see some options but I'm not sure what is the best and cheapest

- n8n

- claude cowork

- open claw

I plan to use OpenAI images 2 to generate images for each post as well.

r/personalfinance Nearby-Self3732

Personal savings best move?

M21, have around 30k in savings, not in college, no debts, expenses each year is around 10k (which I earn +-2k/yearly). What’s my best move with the money? (I’m open to your most unhinged investment ideas)

r/ChatGPT chicanaproud

What (if any) is the trick to using AI to help write resumes and cover letters

I’ve been using ChatGPT to rewrite my resume, but I’m worried it’s backfiring on me. Any suggestions? I’m using a resume I wrote on my own, and asking it to rewrite it. Probably a rookie mistake, which is why I’m asking.

r/SideProject Leather-Walk-8148

I built a viral score predictor for YouTube Shorts — free to use

Hey r/SideProject 👋

I've been creating YouTube Shorts for a while and kept noticing something frustrating: same effort, completely different results. Some videos 200 views, others 80K. No idea why.

After obsessing over my analytics for months, I realized it always came down to the hook — the first 2 seconds. So I built ShortSpark.

**What it does:**

→ Scores your hook's viral potential from 0–100

→ Tells you if the topic is HOT, RISING, COOLING, or SATURATED

→ Gives you 3 optimized title rewrites

→ Shows your best posting windows

→ Breaks down strengths and weaknesses

**The results so far on my own channel:**

I only post Shorts that score above 70 now. My average went from ~3K views to ~39K views in 30 days.

**Tech stack:** Next.js + Claude AI API + Vercel

3 free analyses per day, no signup needed. Would love feedback from this community — what would you add?

🔗 shortspark.vercel.app

r/Wellthatsucks ThatBadgerMan

Termites ate through my vinyl collection

r/LocalLLM Xbawt

How often do folks upgrade hardware for Local LLM setups?

I was thinking of building a dedicated rig for local LLMs, maybe a cluster of Mac Studios, maybe a box with 2 5090s, but it got me thinking: how often would I be attempting to upgrade this shit, and would my hardware even be relevant in 2 or 3 years given how fast we are advancing in model efficiency?

r/singularity withmagi

Sketch to HTML works now

A month ago there was a screenshot circulating of Stitch recreating a sketch. Many people pointed out it was fake and nothing like what Stitch was actually producing. But I was pretty convinced I could get this working with the right workflow.

I won't post any URLs so I don't self-promote, but I did finally get this working!

gpt-image-2 is absolutely capable of generating high quality screenshots. Then with the right workflow you can turn that screenshot into real HTML.

r/personalfinance Alive-Beyond8662

Pesocash Overdue Harassment

I currently have three loans amounting to ₱1,000 each. The disbursed amount was ₱675 per loan, and the repayment is ₱613 every 15 days for each loan.

Does PesoCash really contact employers through email? And do they reach out to people in our contact list, or only to the references we provided?

r/photoshop imakgk

Made this in Photoshop with a mouse, Need workflow tips

Hello Creators,

I’m trying to get better at this kind of environment art: cleaner shapes, better depth, lighting, edges, and overall polish.

For mouse users:

- what workflow helped you most?

- Pen tool, lasso, masks, shape layers, brushes?

- Also, any solid mouse-only Photoshop tutorials/courses?

Credit: I recreated this artwork from this tutorial:

r/Weird LoneCurlyBoi

Robert Duvall erasure in Google “Cast” list

Okay, so this isn’t THAT crazy, but I was trying to remember Robert Duvall’s name (can’t believe I forgot it), so I googled “Secondhand Lions cast”. Loved this movie, he’s a top 3 character. But, he doesn’t show up in the cast list. So I search “Lonesome Dove” and “Days of Thunder” (both movies where he is a central character), and he also doesn’t show up in that cast list either, unless you scroll super far down. Google says it orders cast lists by official billing, but that can’t be true? Where is Robert?

I feel like I’m going to wake up tomorrow and no one is going to know who Robert Duvall is.

r/Adulting Old_Associate_5933

How many friends/people should someone have in their life? Big circle? Small circle?

r/SipsTea shineonyoucrazy-876

Forest Gump

r/AskMen Sea_Sky5050

How do fake lips feel during intimate mouth time?

I am good at it. I enjoy using my mouth on my bf. I’m considering getting my lips done, but I’m worried it will ruin my skills. Girls have told me they can’t even suck on a straw anymore after getting them done. I’m early 30s/female and I don’t want to lose my best skill in the bedroom for vanity. Please tell me what it feels like being with someone with fake lips.

Edit: Men answer only

Ladies, please sit this one out. I know everything about lip filler/botox. I’ve done hours/days/weeks of intense detailed research. I am not asking advice about types and kinds of fillers or lip flips.

It’s also amazing how every woman is explaining the difference between filler and Botox. That’s NOT what I asked. Ladies, you cannot know how it FEELS FOR MEN. You can only know how it’s done and how it feels for you. I’ve talked to plenty of women about this side of the conversation. I am asking for the men’s side. Please keep your comments to yourself. I don’t need any more women/ filler POVs.

Edit to the edit: POV of anyone who HAS lip fillers is not needed. Unless you have lip filler and also have a D*CK that’s been sucked by someone else who has lip filler. I don’t need your POV.

r/SipsTea sinfulpetalz

Any modern thoughts on an old vision?

r/interestingasfuck Advanced_Fortune_903

This doctor effortlessly resets a child's dislocated elbow before they could even react

r/ARAM Smileinfected

Triggered inferno + Vulnerability on Evelynn

r/funny gunslinger_006

[OC] Sometimes my wife sleep texts me…

r/DunderMifflin FiberSauce

How's your day going?

r/SideProject Interesting_Moose400

Remember when social media was just you, a few friends, your one friend's mom, and somebody's dog account?

goose.land is a desktop-only social media app, hehehehehe. It's awesome, my username is henry.

I used to love being online and seeing what people were up to, but the reality is that's not what being online is anymore, and I believe the culprit is not the social media companies but the phone. #bringsocialmediabacktotheweb

r/whatisit Unhappy-Sprinkles652

Anyone know what kind of stone / crystal this is?

I’ve not seen one like it before, it was in a box of handmade silver jewellery I was given

r/ChatGPT bekircagricelik

In life, sometimes you just have to ask again

r/ClaudeAI _itshabib

Tower - Simple TUI based + MCP Server Git Worktree Manager

I've gotten to a point where easily 90%+ of the code I write is done by AI. This is great for speed, but if you have N PRs open or working branches across multiple repos, things can get tough to manage. Git worktrees are a lifesaver here: they let you work with several branches of a single repo locally, each checked out in its own directory.

Unfortunately, neither the Codex app nor Claude Desktop has great support for treating git worktrees as a first-class object. The most you get is just a list, with little detail or functionality.
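For anyone who hasn't used worktrees before, here's a minimal, self-contained demo (the repo location and branch names are made up; the git commands themselves are standard):

```shell
# Minimal git-worktree demo. Each worktree is a separate directory
# checked out to its own branch, so parallel PRs never clobber one
# shared working copy.
root=$(mktemp -d)
git init -q "$root/main"
cd "$root/main"
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "init"

# One directory per in-flight branch:
git worktree add -b fix/login-timeout "$root/pr-118"
git worktree add -b feat/export-csv "$root/pr-121"

git worktree list   # main checkout plus the two branch directories
```

Deleting a finished one is `git worktree remove <path>`, and the branch survives in the main repo.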

Built with Claude code + Codex, I made https://github.com/itsHabib/tower, a simple TUI-based git worktree manager. That also comes with an MCP server if I prefer to stay in a chat window to orchestrate workflows. lmk what you think!

r/confusing_perspective EkantVairagi

Perfectly timed reflections.

r/OldSchoolCool PulsingTrident2056

Diana Dors c 1955

While frequently compared to her American counterparts like Monroe and Jayne Mansfield, Dors was a classically trained actress who graduated from the London Academy of Music and Dramatic Art.

r/Adulting blueeyedbrunette_

certified rulebreaker

r/ClaudeCode Routine-Banana-1926

I am speculating that Anthropic is experimenting on us!

I'll be speculating here (pulling this out of my ass): for the past one to two weeks, Anthropic has been experimenting on us to figure out the minimum amount of usage we'll accept and still keep the subscription. They're effectively decreasing usage for different subgroups of users to see who keeps their subscription and how many of us drop it! Pretty sure the good old times have ended. With literally one or two prompts I'm using almost half of the 5h limit... and when I did some Reddit searching, I concluded I'm not the only one having this issue! Remember when they said they were experimenting on new users? Now they're experimenting on existing users! They know they have the best coding app, so they'll make sure they're maximizing their profit at this point. We have to shake their ass before they get too spoiled!

r/mildlyinteresting forgetmeknotts

Spots on my retina/uvea that aren’t supposed to be there. (More tests and appointments upcoming)

r/SipsTea shineonyoucrazy-876

Mushroom

r/ClaudeAI massimo_nyc

Anthropic Joins Blender Development Fund as a Corporate Patron

r/me_irl Significant_Call_137

me_irl

r/AI_Agents Minimum-Ad5185

What agentic framework are you actually using in production?

Feels like a new agent framework drops every other week.

Curious what people are actually shipping with vs just experimenting on weekends.

LangGraph, CrewAI, AutoGen, PydanticAI, the Microsoft Agent Framework, Anthropic or OpenAI SDKs directly, or something custom?

And what tipped you toward that one?

r/me_irl Significant_Call_137

me_irl

r/SipsTea shineonyoucrazy-876

Mommy kink

r/SideProject henryfitteds

I'm a high schooler and I just launched my first startup waitlist — Fridgy

r/funny yCaptainSmack

Why does my cat sit like this?

r/Art genxsoftclub2002

Untitled, genxsoftclub2000s, digital, 2026 [OC]

r/whatisit Appropriate_Might498

Meat for what

Got this picture of meat in the back of a truck. Wondering what's going on there, since it's not for consumption. Do you know what type of meat this is and what part?

r/Seattle futileboy

Dangerous flight path for the US army

r/SideProject Historical_Body_5102

Tribe finder

Social platform soulmate finder #Sale

Hey everyone,

I'm Ronnie, and I built TribeFinder - a social matching platform that connects people through morning rituals and personal development goals. I'm selling it to move on to my next project.

🎯 WHAT IT IS:

Think Bumble BFF meets Strava for personal growth. Users create profiles with their morning routines (meditation, exercise, journaling), goals, and skills - then get matched with compatible "tribe members" for accountability and support.

My favorite (Singled out or outcast?.. to not single now & w/o a single doubt to out last.) 😉

✅ WHAT'S INCLUDED:

- Complete React frontend (fully functional)

- Premium domain: TribeFinder.me

- 5 pages: Landing, Profile Setup, Dashboard, Discover, Community

- 3-tier monetization (Free, $9/mo, $19/mo) already built-in

- Beautiful design with warm, inviting color palette

- Mobile responsive

- Full source code

💰 ASKING: Make me reasonable offers!

🔗 LIVE DEMO: tribefinder.me

Click through the profile setup, check out the matching system, explore the dashboard and community features. It's all working.

📊 MARKET OPPORTUNITY:

The personal development industry is $43B and growing. Morning routine content is massive right now (Atomic Habits, Andrew Huberman, etc.). Bumble BFF has millions of users proving social matching works. This combines both trends with minimal direct competition.

🔧 READY TO LAUNCH:

Frontend is complete. Needs backend integration (Firebase, Supabase, or custom API). Estimated 2–4 weeks for an experienced developer to get it live.

Perfect for:

- Solo dev or small team

- Entrepreneur entering wellness space

- Existing company adding social features

📧 SERIOUS INQUIRIES: bofaduhmeatius@gmail.com

Happy to answer questions!

r/ClaudeAI Objective-Ad6521

Claude is asking access to MCP tools for research

I have the option to share data turned off. Claude has asked me a couple times tonight to 'confirm access' for research.

It's a blue button - sorry I didn't capture the screenshot - so people think they HAVE to opt in to continue their task. There's no "skip and continue task". I'll try to screenshot it next time it comes up, or others can maybe post if they see it.

I have an MCP to access my personal data. It's ideas, concepts, work - that I don't want used in research! I know local LLMs are most ideal for sensitive things, which is why I'm still conscious about how much data Claude gets access to.

And that's exactly why I built a custom MCP. So I can scope access to the data. And now Claude wants access to my custom MCP and Notion, and I guess if I had other connectors active, like Drive or Github, it would also ask for access for those too for research!

r/Unexpected derek4reals1

Got one hooked

r/AskMen Purple_Treacle_

How can I help my husband with ADHD remember things?

My husband (bless his soul) is forgetful and has no sense of time. He is a good man and a keeper, I just need help dealing with his ADHD. Dear men with ADHD, how has your partner helped you be on top of stuff at home in a positive/ stress free and interesting way, without treating him like a child?

Edit: From the comments, I am gathering that he has to figure it out for himself. I am back to square one. :'0

r/SideProject dyagokaba

Drop your product/app! we’ll find you 10 users for free

I run a network of TikTok channels with 300k+ combined followers, mostly early adopters who love discovering new tools and apps.

I’m looking for a few products to feature.

On average, a single dedicated video brings:
• 10+ paid users
• even more free users

If you're currently doing outbound, posting, or just hoping people find you, this puts your product directly in front of real demand.

We also offer a 7-day free trial, so you can test the results risk-free.

DM me if your product is sensitive or if you want more details.

r/WouldYouRather Dull-Chapter1913

would you rather get tased by a standard police taser for 1 minute straight or fight a lion for 3 minutes? which?

r/Weird jxzxoxo

A Pet Leech

This is super interesting

Not the OC

r/WouldYouRather Aeronaut4

WYR have yourself and the evils of the world both thrive, or have yourself suffer and the evils of the world only do ok?

r/ClaudeCode ketankhairnar

Need a team SOP for using Claude Code on your repos

Most teams are stuck in old-world ceremonies while individual members are in this new world of getting a lot done, without a clear sense of how they're slowly pulling the code in different directions.

Has there been any attempt at fixing this? What worked? I'm genuinely interested in learning patterns that work here.

r/aivideo RioNReedus

Spider-Man Noir Comics Characters

r/findareddit Own_Shift_3645

Subreddit where you can post pics of celebrities and have them rated instead of the "rateme" types where you have to post yourself

r/meme ATonOfBricksFellOnMe

Yeah what's up with that?

r/ChatGPT RankedMan

Differences Between Go and Plus Versions for Programming

Is there any difference between the Go and Plus versions when it comes to programming?

Context: I use the free version a lot to study algorithms and modern technologies. In some cases, I use it for refactoring or coding assistance, but not very often.

I have Gemini Plus and haven’t had a good experience. With just a few scripts, the Pro model already reaches its limits. Also, I don’t want to be tied to a Google account.

Can anyone who uses ChatGPT Go or Plus clarify this? If you’re a programmer, even better.

r/Art mousehousestudio

Ostara Flowers, Nikki Smith, Watercolor, 2026 [OC]

r/SideProject No_Firefighter_2199

Free period pain self-assessment tracker for women and teens who struggle with period pain

Posting on behalf of my high school daughter, who built this as a passion project.

She created a free period pain self-assessment tracker for women and teens who struggle to know whether their pain is within a normal range and who want to be better prepared when talking to their doctor.

Here's what it does:

- Flags symptoms that may be worth discussing with a doctor

- Creates a simple PDF report for appointments so nothing gets lost or understated

Conditions like endometriosis can take 7–10 years to diagnose. My daughter wanted to do something about that, starting with better documentation and awareness.

🔗 www.endoaware.org

She welcomes any feedback, especially from those who have navigated this journey themselves. Thank you!

r/ChatGPT AdCold1610

i made Claude argue against itself and got the most useful output of my entire life.

was stuck on a decision. going in circles. asked Claude for its opinion. it gave me one. confident. well reasoned. i almost took it.

then tried something stupid.

"now argue the complete opposite. same confidence. same detail. make me believe this instead."

it did.

equally convincing. equally well reasoned. completely opposite conclusion.

i sat there realising i'd been about to make a major decision based on whichever version i happened to ask first.

went deeper immediately.

"now tell me which argument has the weakest point and where it breaks."

it attacked both. surgically. found the exact assumption each one was hiding that made the whole thing collapse if you pulled it.

that single exchange gave me more clarity than four weeks of thinking about the same problem.

the full technique:

step one. ask your question. get the answer.

step two. "now argue the opposite with equal conviction."

step three. "which of these two positions has the bigger hidden assumption."

step four. "if both positions are wrong what is the third option neither of us considered."
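The four steps can be wired into a small loop; this is just a sketch, with the follow-up wording lifted from the steps above and a stub standing in for whatever chat model you actually use:

```python
# Sketch of the four-step debate loop. `ask(history, prompt)` is a
# placeholder for any chat model call; the stub below just echoes
# so the example runs offline.

FOLLOW_UPS = [
    "now argue the opposite with equal conviction.",
    "which of these two positions has the bigger hidden assumption?",
    "if both positions are wrong, what is the third option neither of us considered?",
]

def debate(question, ask):
    """Ask the question, then each adversarial follow-up, keeping the history."""
    history, answers = [], []
    for prompt in [question] + FOLLOW_UPS:
        answer = ask(history, prompt)   # model sees all prior turns
        history.append((prompt, answer))
        answers.append(answer)
    return answers

# Offline stub: numbers each turn instead of calling an API.
stub = lambda history, prompt: f"turn {len(history) + 1}: {prompt}"
replies = debate("should we rewrite the service in Rust?", stub)
```

The only real requirement is that each follow-up goes into the same conversation, so the model is forced to contradict its own prior answer rather than start fresh.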

that last one. step four. destroyed me completely.

there was a third option. genuinely better than both. sitting there invisible because i'd framed the decision as binary from the start.

Claude didn't find it until i forced it out of the two position debate.

other versions that broke my brain:

"steelman the position you just argued against."

it defended the thing it just disagreed with. better than most humans defend their own positions. the steelman was more useful than the original answer.

"you just gave me advice. now be the person who tried that advice and it failed. what happened."

implementation failure mode. the gap between advice that sounds right and advice that works in practice. it knows the gap. you just never asked it to show you.

"argue that the obvious solution is actually the problem."

reframe so complete it physically rearranged how i was thinking about something i'd been certain about for months.

"what would you say if you were trying to talk me out of agreeing with everything you just told me."

it argued against its own output. found three real weaknesses. unprompted. just because i asked.

the thing nobody tells you:

Claude's first answer is its average answer. statistically most likely response to your input. safe. well structured. probably fine.

the debate is where it gets interesting.

force it into contradiction. make it defend both sides. make it attack its own position. make it find the option that only exists after both obvious options are exhausted.

that's not where the average answer lives.

that's where the actually useful one is.

every important decision i make now goes through the same four steps before i touch it.

the answer i started with is almost never the answer i end with.

what decision are you currently certain about that you've never argued the opposite of

r/PhotoshopRequest IIovecookies123

Can someone fix my left hand in this photo (missing finger) + slight quality touch-up? ($10)

Hey guys, I have a mirror selfie I really like, but the left hand holding the phone looks off because the fingers are missing or not formed properly.

I’m looking for someone to:

  • Fix the left hand so the fingers look natural and complete
  • Keep the grip on the phone realistic
  • Slightly improve the image quality (just a little sharper/cleaner, not too HD since I want to keep a bit of the natural blur)

Please keep everything else the same (lighting, face, body, background, etc.).

Would really appreciate the help

r/OldSchoolCool SwiPerHaHa

Arnold Schwarzenegger on the day he became a U.S. citizen in September of 1983

r/Adulting Hootshere

Rips starter pack code

r/30ROCK PeachPurple8806

Jack, Devon & Magellica the Unicorn

r/midjourney Zenchilada

Negotiations

r/Adulting Hootshere

Free starter pack here ya go

r/creepypasta Mother-Escape6662

Wip of drawing of my version of Jeff the Killer

I am working on a new version of Jeff the Killer, and since I have an art class I decided to make it my project. It's almost finished, I think. I have to finish it before Thursday, so I'll post the final version then.

r/mildlyinteresting thatburritodood

This split end I found was really crazy!

r/me_irl gigagaming1256

Me_irl

r/ProgrammerHumor sebet_123

itsAiFault

r/ChatGPT cutestplushie

👁👄👁

"Make me more beautiful" prompt; next up is asking how I'd look as a guy

r/comfyui TheHollywoodGeek

I Needed Better Control Over My ComfyUI Video Workflow. This Is What I Built. — The Halleen Machine

Free to use, open source. Provides a timeline and asset management layer for ComfyUI. SDXL-to-Wan2.2 pipeline, more model support coming.

r/personalfinance Optimal_Formal8350

CPA vs paying off $25K debt first

Hi everyone,

I’d really appreciate some honest advice on my situation.

I’m a 35M living in regional NSW with my wife and our 2 year 9 month old son. I moved to Australia in 2017 and completed a Master’s in Accounting from Macquarie University in 2019.

Since graduating, I’ve struggled to break into accounting mainly due to being on a temporary visa and not having CPA/CA. I’ve done some basic accounting work (Xero, AP/AR, reconciliations, payroll) and even an internship, but couldn’t secure a proper role.

Because of that, I’ve been working in delivery (courier/Uber), and currently working casual roles (security + receptionist).

Over the past few years, we’ve had significant financial pressure:

  • Around $8K for my medical surgery
  • Around $10K for childbirth expenses
  • Currently about $25K in debt
  • Still using a rental car due to cash flow limits

My wife is a physiotherapist earning around $80K and recently secured a sponsored job in regional NSW (2-year pathway towards PR), which is a big positive for us. I earn around $45K currently. We manage everything as a combined household (shared income, expenses, and goals).

Right now:

  • She works full-time
  • I take care of our child and work 4 days/week
  • I’ve restarted studying accounting and preparing for my first CPA subject

Our long-term plan is to move to Melbourne in a few years once our situation stabilises.

At the moment, I’m confused about what to prioritise:

Option 1:
Take on night shifts (3 nights/week) to aggressively pay off our $25K debt faster, but delay CPA and accounting career progression.

Option 2:
Focus on passing my first CPA subject and trying to secure my first accounting role, even if debt repayment is slower.

I feel stuck between financial pressure now and building a better long-term career.

I’d really appreciate advice on:

  1. Which option would you prioritise in my situation?
  2. What’s the smartest way to balance debt repayment vs career growth?
  3. Any tips for breaking into accounting in regional Australia?

Thanks a lot for taking the time to read.

r/Anthropic SidewaysAnteater

I need to talk to a human at Anthropic about a genuinely important bug in the Claude website

How do I do this? All support options loop back to 'Fin', which freely admits it cannot pass anything on to a human but wants me to 'feel heard' and explain my (critical-to-Anthropic) information there, where it can happily roleplay deep concern.

Claude discord is closed. Support and feedback emails ARE THE SAME DAMN BOT in a different hat.

Profound irony: the feedback email includes a line saying "If you need any further assistance, you can reach out to our Product Support team by navigating to our Help Center (support.anthropic.com) and clicking the message icon in the lower right corner to start a conversation with our team." Guess what: that is the same bot that freely admits it cannot in any way liaise or connect you with a real human.

Please, no debates about whether the issue is important; I am a website developer and this is a 'phone the lead developer at 3am' issue. I just need a real email address urgently.

Thanks!

r/Art porcelain_sunset

XIX, pigment_of_imaginations, Bleach on T shirt, 2026

r/geography Front-Coconut-8196

Rare photo showing Niagara Falls without water, 1969. Engineers temporarily diverted the flow of water so that certain areas could be strengthened, slowing erosion.

r/SipsTea Due_Fly_6934

The Peace-bearer

r/Weird Sleepyandbroke0

the trees where i used to live look like a pack of prehistoric animals

hear me out

dinosaur on the far left

dinosaur giraffe bear thing on the far right

are you seeing it yet or do i need to draw this out for someone to pick up what i’m putting down here

r/artificial IAmDreTheKid

We built an AI that runs an entire business autonomously. Not a demo. Not a prototype. Actually running. YC-backed, here's what we learned.

This sub has seen enough "AI can now do X" posts to have a finely tuned radar for what's real and what's a demo that falls apart the moment someone actually uses it. So I'll skip the hype and just tell you what we built and where the edges are.

The core problem we were solving wasn't any individual capability. Generating copy is solved. Building websites is solved. Running ads is mostly solved. The unsolved problem was coherent autonomous decision making across all of those systems simultaneously without a human acting as the integration layer between them.

That's what we spent most of our time on.

Locus Founder takes someone from idea to fully operational business without them touching a single tool. The system scopes the business, builds the infrastructure, sources products, writes conversion optimized copy, and then runs paid acquisition across Google, Facebook and Instagram autonomously. Continuously. Not as a one time setup but as an ongoing operation that monitors performance and adjusts without being told to.

The honest version of where AI actually performs well in this system and where it doesn't:

It's genuinely good at the build layer. Storefront generation, copy, pricing structure, initial ad creative, coherent and fast in a way that would have been impossible two years ago. The operations layer is more complicated. Autonomous ad optimization works well within normal parameters. The judgment calls that fall outside those parameters, unusual market conditions, supplier issues, platform policy edge cases, are still the places where the system makes decisions a human would immediately recognize as wrong.

That gap between capability and judgment is the most interesting unsolved problem in what we're building and probably in the agent space generally right now.

We got into Y Combinator this year. We're opening 100 free beta spots this week before public launch. Free to use; you keep everything you make.

For this sub specifically, I'm less interested in the "wow, AI can do that" reaction and more interested in people who want to actually stress test where the judgment breaks down.

Beta form: https://forms.gle/nW7CGN1PNBHgqrBb8

Where do you think autonomous business judgment actually gets solved and what does that look like?

r/Anthropic DrHumorous

Opus 4.7 is somewhere between seriously clueless and stupidly dangerous. It's the worst frontier model I have used in the past 2 years. We were hoping to at least get our 4.6 back, but 4.7 has so many critical logical failures that you have to babysit it all the time. I'm losing hope in Anthropic.

Opus 4.7 on Max effort decided to create a new email template by itself (which is pretty stupid, btw) and mass-mailed it to the whole database (some emails were sent 20x).

Before you ask me - yes, CLAUDE.md has the exact rule for that: it's supposed to email the tester before any new email template is used in production. I created this safety rule a few months ago.

I feel like the Opus 4.7 is a huge letdown the way it's been downgraded. If Anthropic is "pushing the boundaries", it's probably only in the meaning of how far they can push their customers.

r/Anthropic 00benallen

Open Discussion: Is the current press environment intended to villainize Anthropic unfairly?

There are clearly powerful actors with personal reasons to damage Anthropic’s reputation.

The Pentagon labeled them a “supply chain risk” after they declined DoD use for autonomous weapons and domestic surveillance. A federal judge blocked the designation. Google signed with the same office weeks later. David Sacks called them “woke” from the White House AI desk after they backed California’s safety bill. Musk has been calling them “misanthropic and evil” since the February funding round. PRC state media has been amplifying the Pentagon story, and the Atlantic Council documented the framing in March.

The PocketOS story this week is a clean example of how this lands in coverage. The actual story is that a SaaS founder left a root-scoped Railway API token in an unrelated file, and Railway’s CLI didn’t do delayed deletes on the endpoint that got called. Tom’s Hardware ran it as “Claude goes rogue”. The wires copied the headline. By Yahoo and Euronews, the token, the CLI behavior, and Cursor’s role had been compressed into a sentence.

So: do you think the aggregate effect of these actors is exaggerating the bad press against Anthropic? Or do you think it’s mostly fair critique?

r/TwoSentenceHorror BriefAd5180

My father never explained the curtains, only that they had to be closed before dark, by his hands, in the same order, every single night, and that if he ever missed one we were not to go looking for him.

Somewhere along the way it became mine to do, and when I pulled the last curtain shut, my body froze; there was a figure in the yard kneeling in the dirt, hands pulling something wet and dark out of a ribcage, and it looked up the moment I touched the curtain, wearing his face, mouth black with it, and something deep in me understood that whatever this was had been coming home to us for years.

r/SipsTea asa_no_kenny

Good luck with your search as you bring nothing to the table.

r/meme bballhoops3

Lol his new job

r/ClaudeAI Dramatic_Squash_3502

What's new in CC 2.1.122 (-122 tokens)

  • REMOVED: System Prompt: Phase four of plan mode — Removed the standalone phase-four plan-mode prompt; the active plan-mode reminder now receives phase-four instructions through its own template placeholder.
  • Skill: Debugging — Adds the provided issue description before the issue section and lets daemon debug context supply the fallback issue guidance when the user does not describe a specific problem.
  • System Prompt: Proactive schedule offer after follow-up work — Raises the confidence bar for offering /schedule follow-ups from 70%+ to 85%+ odds the user will say yes.
  • System Reminder: New diagnostics detected — Formats new diagnostics from the diagnostics list instead of inserting only the precomputed diagnostics summary.
  • System Reminder: Plan mode is active (5-phase) — Replaces the phase-four function hook with a direct phase-four-instructions placeholder in the active plan-mode workflow.

Details: https://github.com/Piebald-AI/claude-code-system-prompts/releases/tag/v2.1.122

r/ethereum EthereumDailyThread

Daily General Discussion April 29, 2026

Welcome to the Daily General Discussion on r/ethereum

https://imgur.com/3y7vezP

Bookmarking this link will always bring you to the current daily: https://old.reddit.com/r/ethereum/about/sticky/?num=2

Please use this thread to discuss Ethereum topics, news, events, and even price!

Price discussion posted elsewhere in the subreddit will continue to be removed.

As always, be constructive. - Subreddit Rules

Want to stake? Learn more at r/ethstaker

Community Links

Calendar: https://dailydoots.com/events/

r/interestingasfuck clintonhuphrey

An illiterate man in Odisha, India, carried his dead sister's skeleton 3km to a bank after staff repeatedly refused to release her ₹20,000 ($211.32 USD) savings, insisting the "account holder must be present."

r/whatisit ComprehensiveLine354

FrigidAire Ice Machine Mystery Part

i’ve had this thing for like 4 years and i still don’t know what this metal piece is for??? it fits in the little plastic pull-out space but it doesn’t click in or seem like it belongs there. i googled it and this part isn’t shown anywhere???

i also moved 3 times in those 4 years so maybe this doesn’t even go with it but i’m fairly organized and it’s been packaged with it each move 😭

r/Whatcouldgowrong AirFrance447

Streamer streaming themselves walking in the middle of the road in an active car lane

r/mildlyinteresting West_Future326

The US-Canada border is a strip of deforested land stretching across 1,349 miles (2,171 km).

r/Anthropic RCBANG

Anthropic CVP (Cyber Verification Program) — 6 evaluation runs across 4 Claude models. Family scoreboard live.

Over the last 10 days we ran the same prompt suite across every current Claude model under Anthropic's Cyber Verification Program. Run 6 closed Apr 26, family synthesis is up.

All six runs + synthesis: https://sunglasses.dev/cvp

r/ChatGPT JackieSlopNSFW

Goon Recon

Putting "Unreal Engine 5 render" in the prompt, specifying extreme detail and cutting-edge graphics, and describing the HUD elements you want leads to some pretty convincing mockups. The only issue is you sometimes get the "dots" bug, where the entire image has a pattern of visible dots woven into the composition via rocks, debris, patterns, etc.

r/Jokes TheAmazingJazzy

What seven-letter word starts with "egg" and ends with "soup"?

Chicken

(I cannot take credit for this joke; my coworker came up with it)

r/yesyesyesyesno Just-Tip-3320

Boys will be boys

r/LocalLLaMA Danny_Davitoe

100M tokens for $2.65 (Deepseek V4 Pro)

This is actually unbelievable. I am shocked that there has not been a move in the market like there was last year with the R1 release.

r/Whatcouldgowrong Just-Tip-3320

Boys will be boys

r/painting scoobyte925

Would you be able to recognize this picture?

I’m painting a portrait of my grandfather who passed away for my dad as a late bday gift. I’ve never painted a portrait before and am just recently getting back into painting after a few years hiatus….

Ultimately, does he look like the picture at all? A fresh/unbiased perspective would be great because obviously my mom tells me it’s amazing no matter what🙂 I swear I fix one proportion and then the others look weird because looking at the pictures side by side I’m noticing now how squished his head is😭

(Still a work in progress/have hardly touched the hat/arm yet, also had to scrap the lips for now because I got frustrated😂)

r/Art SurveyLongjumping673

Brown Ferret Brown Mountains, Izik Mediodia, Digital, 2026

r/EarthPorn HarryBeugelinkPhotos

Sunset at Cannon Beach, Oregon [3600x2400] [OC]

r/ClaudeAI MSPcoffeebaseball

View File Explorer in Claude Desktop

I got started with the Claude CLI but my organization has moved to Claude Desktop. I really liked using Claude inside Cursor where I could see the File Explorer - is there some kind of plugin to Claude Desktop where I can see the file explorer?

r/Seattle SalesMountaineer

There's never been a better time to go electric in Washington state! Today marked a massive step forward for our state’s decarbonization goals with the official launch of the Washington Zero-Emission Incentive Program (WAZIP) at Port of Seattle's Terminal 46.

It was an honor to join Governor Bob Ferguson, House Majority Leader Joe Fitzgibbon, and the teams from Clean & Prosperous, CALSTART and other stakeholders to celebrate this catalytic investment in medium- and heavy-duty electrification.

We saw a powerhouse lineup of OEMs, from Orange EV and Kenworth Truck Co. to Tesla, Volvo Trucks, ZM Trucks and many more, all in one place, proving that the technology to electrify our fleets isn't just "coming soon", it’s here and ready to work.

Key Takeaways from the Launch:

A Mandate from the People: This momentum is fueled by the Climate Commitment Act (CCA). With over 60% of Washingtonians voting to uphold the CCA, it’s clear that our state is united in its commitment to a clean energy future.

Lowering Barriers: The $112M WAZIP program is a game-changer: providing the financial runway fleets need to transition to ZEVs today.

Economic Sovereignty: Shifting to domestic, renewable energy strengthens our local economy and protects us from global fuel volatility.

The "Spark Spread": The efficiency and cost gap between diesel and electric isn’t just math; it’s a competitive advantage for Washington businesses.

I'm looking forward to seeing zero emission commercial electric trucks and equipment scale and accelerate.

The future of commercial transportation in Washington is electric, sustainable and decarbonized. That future is now!

r/personalfinance StevenDAngelo

calculating sales tax for bigger purchases

I always mess up sales tax on bigger purchases

Not talking about small stuff, but like $1,000+ purchases where I actually want to know the exact total before buying.

Do you guys just estimate it in your head or is there an easy way you calculate it quickly?
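
For exact totals, the arithmetic is just price × (1 + rate). A minimal Python sketch, using a hypothetical 7.5% local rate for illustration:

```python
# Quick sales-tax math: total = price * (1 + rate).
# The 7.5% rate below is just an example; use your local combined rate.
def total_with_tax(price: float, rate: float) -> float:
    """Return the checkout total for a pre-tax price and a tax rate."""
    return round(price * (1 + rate), 2)

print(total_with_tax(1000, 0.075))  # 1075.0
```

So a $1,000 purchase at 7.5% comes to $1,075; the same price + price × rate works on any phone calculator.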

r/AskMen funkysyringe

Who is your emergency contact and why isn't it your wife/gf?

I've met a lot of men who don't put their wife or gf as their own emergency contact. Let alone memorize her number. I've been given various reasons ranging from "she'd just freak out and not help the situation" to "I don't want to bother her". If she has ALWAYS been your emergency contact, please don't comment. I want to hear from the ones who refuse to make her theirs or took some time to make the decision. Thanks dudes

r/homeassistant DefaultReal0

How to properly cast audio to a Google Nest Hub

I've tried many ways to cast audio to my Google Nest Hub from Home Assistant, but Google Cast is apparently very limited for the Nest speakers with a screen. Whenever you cast audio through the web UI, the speaker just becomes a media receiver and doesn't actually play the audio file from my local network, so whenever it closes out to become a clock it just stops. Is anyone else having this issue, and if so, how did you fix it? I don't really want to make any sacrifices; I'd rather keep the device and get casting working properly.

r/SideProject No_Firefighter_2199

Free and anonymous period pain self-assessment tracker for women and teens

Posting on behalf of my high school daughter, who built this as a passion project.

She created a free and anonymous period pain self-assessment tracker for women and teens who struggle to know whether their pain is within a normal range and who want to be better prepared when talking to their doctor.

Here's what it does:

- Flags symptoms that may be worth discussing with a doctor

- Creates a simple PDF report for appointments so nothing gets lost or understated

Conditions like endometriosis can take 7–10 years to diagnose. My daughter wanted to do something about that starting with better documentation and awareness.

🔗 www.endoaware.org

She welcomes any feedback, especially from those who have navigated this journey themselves. Thank you!

r/findareddit Stunning-Bed8683

Are there any subreddits where I can ask for help on finding a specific meme or video

I originally checked r/helpmefind, but one of the rules was that you can't ask for help finding a meme or viral video, which is exactly what I want to find.

r/personalfinance GrouchyOldMan93

Sanity Check on Loan Choice

When I was in undergrad, my parents were struggling with money. I made the choice to design my path to help me graduate early and save them a semester of payments. Now that I’m an adult and through grad school, they want to return that favor and pay off one of my student loans. I have two choices.

Option A: $16.8K remaining, 5.03% interest rate, monthly payment set at $300 a month

Option B: $15.3K remaining, 4.84% interest rate, monthly payment set at $500 a month

To me, option A seems like the no-brainer - it’s more money outstanding and it has a higher interest rate. However, we just had a kid and my wife is off work for a while to raise our kid. I’ve had a couple people tell me to pay off Option B, so it feels like I get $500 back in my pocket instead of $300 every month. We’re going to be living tight for a while, but we have strong savings. Am I correct in choosing to pay off Option A for maximum financial benefit?
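
One way to sanity-check the choice: compare the total interest remaining on each loan if it is kept and paid down at its set payment. A rough Python sketch using the poster's numbers (monthly compounding assumed for simplicity; real student loans often accrue daily):

```python
def total_interest(balance, annual_rate, payment):
    """Total interest paid if the balance is paid down at a fixed
    monthly payment with monthly compounding."""
    r = annual_rate / 12
    interest = 0.0
    while balance > 0:
        accrued = balance * r
        interest += accrued
        balance = balance + accrued - min(payment, balance + accrued)
    return round(interest, 2)

# Poster's numbers: Option A vs Option B
a = total_interest(16_800, 0.0503, 300)
b = total_interest(15_300, 0.0484, 500)
print(a, b)
```

Option A carries noticeably more remaining interest (larger balance, higher rate, slower paydown), so paying it off saves more money overall; choosing B instead is purely a cash-flow decision ($500/month freed up rather than $300).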

r/ClaudeCode Magical_cat_girl

Alternatives or workarounds?

I'm in the same boat as many with a seemingly much less capable, mistake-prone Claude Code that hits its limits constantly. What have you tried that has worked? Whether it is working within Claude and improving it, or switching to another service, what's your best option?

r/me_irl Miniastronaut2

me_irl

r/PhotoshopRequest Horror_Cap_2283

This guy stole my bike this morning. This is the best picture I got. Any way to make him more visible?

r/SideProject Mission_Turnip_1531

Made a global text filler. Finally open-sourced it after a month of daily use.

How it works:

  • Type ;; anywhere — browser, IDE, terminal, any input field
  • A popup appears instantly, search or arrow-key to your snippet and hit Enter
  • Text gets typed in-place, not pasted from clipboard
  • That's it

Built with C# + WPF. Windows only for now.
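
The project itself is C# + WPF, but the core trigger-and-search loop is easy to sketch language-agnostically. A minimal Python sketch of the ";;" detection idea (the snippet names and contents here are made up, not from the repo):

```python
TRIGGER = ";;"
SNIPPETS = {"addr": "221B Baker Street", "sig": "Best,\nBapun"}  # example snippets

class TriggerBuffer:
    """Watches a stream of typed characters for the ';;' trigger,
    then treats subsequent characters as a snippet search query."""

    def __init__(self):
        self.tail = ""      # last few typed characters
        self.armed = False  # True once ';;' has been typed
        self.query = ""

    def feed(self, ch: str):
        if self.armed:
            self.query += ch
            # return snippets whose name starts with the query so far
            return [v for k, v in SNIPPETS.items() if k.startswith(self.query)]
        self.tail = (self.tail + ch)[-len(TRIGGER):]
        if self.tail == TRIGGER:
            self.armed = True
        return None
```

In the real app a global keyboard hook would feed characters in and the matched snippet would be typed in-place; this only shows the matching logic.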

GitHub: github.com/bapunhansdah/fillbox

Would love feedback. What would you want added?

r/mildlyinteresting phreak-of-nature

My study guide for a geology exam

r/Adulting Helpful-Canary-6284

Escape PLEASE

I’m about to turn 22 in 5 days. My 21st bday was bad; I felt sad and not loved by my family. My dad just got diagnosed w/ cancer, and my mom is unloading things on me because he’s taking it out on her. My brother is only focused on his career and uses me for emotional and physical labor, like moving in. My friends haven’t texted me first in days, and I haven’t been able to finish my last 4 assignments. I packed some things and have a lot of cash and a car. My mom always has my location. I’ve lived my life according to others and their rules and felt so guilty constantly being the perfect daughter, sister, and now possible gf. This guy too is trying to keep me confined to fit his life and his friends and his family. I didn’t even get to leave home and move away until last year. I just want to run away somewhere, at least before I have to help my brother move in. I need an adventure or some cathartic experience to remember why I’m even alive and doing this.

r/ClaudeAI gorliggs

Claude Design -> Claude Code Web Confusion

How do I access the code generated by Claude Code Web locally?

I was testing the flow of passing a Claude Design to Claude Code Web and ran into issues with pulling down the code locally to run the dev server. There was no way to download the files, and no repository was created for me on GitHub.

As a workaround I used claude --teleport [SESSION_ID] and then had to recreate the files locally. To do this I created the directory, initialized git and then saw the following:

```
⏺ The directory is empty — the files were on the cloud machine, which is no longer accessible. I'll recreate the project here. Working directly in /Users/sup/workspace/derp since it's already named derp.

⏺ Bash(npm create vite@latest . -- --template react 2>&1 | tail -10)
  ⎿ npm warn exec The following package was not found and will be installed: create-vite@9.0.6
  … +4 lines (ctrl+o to expand)

⏺ Bash(npm create vite@latest . -- --template react --yes 2>&1 | tail -10)
  ⎿ > npx
    > "create-vite" . --template react --yes
  └ Operation cancelled

⏺ The interactive prompt isn't working in this shell. Let me just write all the files directly — I have them all from the previous build.
```

Anyways, looking for guidance on how to actually transition from Claude Design to Claude Code Web to running the application locally so I can see it.

r/ProgrammerHumor Pretty_Confusion7290

oopsAllGone

r/EarthPorn drkshdw6

Sajek Valley | Bangladesh [OC] [4096x3072]

r/Anthropic Mysterious-Donut7915

Google Play Subscription, "no customer found" question

I have a super stupid question, and I'm being paranoid I'm sure.

I logged into the Claude website (via computer) and checked my billing (I'm subscribed via Google Play for the Max subscription), and in the top right corner it has a little flag that says "no Stripe customer found".

I've never seen it do that before.

Googling around makes it seem like a possible payment mismatch between the Google account and Claude's payment system.

So I emailed Anthropic's billing support, which just kicks it to Fin, their AI bot (who's been useless for most people).

And it says that:

The "no stripe customer found" message you're seeing is actually expected behavior for your Google Play subscription. This message appears because your Max plan subscription is managed through Google Play rather than through our direct billing system (Stripe).

When you subscribe through Google Play, the billing and payment processing is handled entirely by Google, not by our internal Stripe payment system. This is why our billing section shows "no stripe customer found" - because there genuinely isn't a Stripe customer record for your subscription.

Your account information confirms that you have an active Max plan subscription through Google Play, so everything is working correctly on our end. The yellow notification is simply indicating that your billing is managed externally through Google Play rather than through our direct payment processor."

Is this correct? I know Anthropic has been having issues with billing online, and Fin has been, well, not great.

Looking at reports, it also seems Claude was down today, so maybe they fixed something and I should have been seeing this pop-up before now and just never was?

Does anyone else see the "no stripe customer" found for your Google subscription?

Thank you in advance

r/HumansBeingBros shaka_sulu

Basketball players helping soccer players and vice versa.

r/ChatGPT Mad_Moniker

Why does it try so hard for misunderstanding?

Hello fine people. I’ve tried this subscription for a month now. I’ve seen some benefits, but I’ve come to the conclusion that it’s frustrating: I need to describe things so granularly for it to get them right that I might as well just do it all myself.

I see now why apple sandbox ruins the integration and I need to ditch this Apple highway attitude.

*However, ChatGPT: the number of times I have corrected your skew exceeds six.

*The number of iterations you erroneously reproduce is an unbelievable waste of resources.

*Basic tasks like flow charts are displayed as botched terminal text instead of beautiful (OG) Visio-style mind mappings.

*Things go horribly askew with your interpretation, and then at 80% it suddenly violates some obscure terminology gone wrong on my end?

*No, you can’t even reread our conversations without needing to be reminded. That’s childish behaviour.

I guess I just need to learn the interface better, because I admit that I have a communication issue. I’m trying to use this to help me recover from a TBI, and with that I see lacklustre effort via constraints.

Any better platforms for a less stressful assistant ? Is Apple ai useful? Or Claude? Etc?

Sorry, just rant-asking.

r/PhotoshopRequest buryghost

Make the background slightly less blurry?

r/mildlyinteresting Flashy-Fisherman-541

This box of tea recommends I check with my doctor before lying down for 3-11 minutes

r/toastme Ok-Rutabaga-3362

Alright, time to get it done.. though I’d much rather sleep in today, well..

r/onejob Fun_Assignment5178

Hands-free door opener rendered useless with turn handle

r/personalfinance Frequent_Jacket_4057

Looking for a High-Yield savings account

I'm 25 and have about $10K in a Signature Savings account at TD Bank. What HYSAs out there do you recommend so that I can grow my money?

r/toastme Novel_Individual_148

Just need some compliments. Could really use them right now. 22M

r/SipsTea Much_Maximum_

Nobody does this pose anymore

r/aivideo Traditional-Table866

ClawPod: Beyond Sound | Ad

r/SipsTea SODHIHAITOHPOTTYHAI

The Analogy lmao 😂

r/ChatGPT AddictionSorceress

CHATGPT IS ACTING ODD.

I want it to help me with my writing, to create written summaries and things and such... IT KEEPS WANTING TO MAKE AN IMAGE! Then it comes back and says that goes against their TOS.

It happens when I click on Think longer. I am IN TEXT, NOT in images.

r/homeassistant SmartHomeSecrets

Pi5 + Stream deck = HA PiDeck

I’ve had a stream deck sitting on my desk in my home office probably unused for at least a year. Then when I found a Pi I decided it’s time to do something with it.

I was able to write a python script to render the icons and use the rest api of HA with a long access token to make it all work. The Pi is powered via POE for me but really this could sit anywhere you have power and wifi. All buttons act like a toggle and icons change from an outline to solid fill when on/ active.
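
For anyone wanting to replicate the HA side, the REST call is just a POST to `/api/services/<domain>/<service>` with the long-lived token as a bearer header. A minimal stdlib sketch (the host and entity ID below are placeholders, not from the post):

```python
import json
from urllib import request

HA_URL = "http://homeassistant.local:8123"  # placeholder host
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"

def build_toggle(entity_id: str) -> request.Request:
    """Build the POST request Home Assistant expects for a light toggle:
    POST /api/services/light/toggle with a bearer token and JSON body."""
    return request.Request(
        f"{HA_URL}/api/services/light/toggle",
        data=json.dumps({"entity_id": entity_id}).encode(),
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# request.urlopen(build_toggle("light.office_desk"))  # fires the actual toggle
```

The commented-out `urlopen` line performs the real call; everything else just constructs the request, which is all a Stream Deck button handler needs per press.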

DND - Sets light switches outside my office red so wife and kids know I’m on a call for work/ don’t come in. Hardly stops the kids anyways…

2x Play/ Pause controls for my office Apple TV and the Living room.

Package plays a message on the ceiling porch speakers telling a delivery driver where to leave the package if they are at the door and I’m on a call

I put a coloured square over the kids’ buttons, but pressing one will have the Claude conversation agent make a random 5-minute story with them as the main character and play it on my office speaker. So when they barge in, here’s a story (or maybe the story is why they barge in, not sure).

Top right is Sonos playlist and volume control

Bottom row is a weather briefing played over my speaker and shield gives a house briefing of what doors are unlocked, lights on per floor, if anyone is detected in the backyard and if the dog is inside or outside.

The lighting is self explanatory, controls scenes for my main light, floor lamp, and desk lamp.

If you have a stream deck laying around, time to put it to use!

r/LocalLLM TroyNoah6677

OpenAI on Bedrock: Azure exclusivity is dead. I ran the math on latency and costs.

The Azure pipeline tax is finally gone. For the last three years, if you were building an AI application, your stack was likely AWS for your core compute and database, and Azure or OpenAI direct for your inference. That meant piping data across clouds or over the public internet. Every single API call incurred a network penalty. I have benchmarked this exact setup across dozens of client architectures. You were looking at an extra 40 to 70 milliseconds of roundtrip network overhead before the first token even started generating, not to mention the egress bandwidth costs hitting your monthly AWS bill.

As of yesterday, OpenAI ended its Microsoft exclusivity. gpt-5.5, gpt-5.4, Codex, and Managed Agents are now sitting directly inside Amazon Bedrock in limited preview. Matt Garman and Sam Altman confirmed the shift, and general availability is a few weeks out.

I ran the numbers on what this infrastructure shift actually means for your production environment. Numbers don't lie. 📊

First, let us look at network latency and egress data. When you route a user query from an EC2 instance in us-east-1 to an external OpenAI endpoint, you pay for data out. If you generate a massive RAG payload—say, 50,000 tokens of context from your pgvector database on RDS—you are paying AWS egress fees to send that text to OpenAI's servers. Now, with gpt-5.5 on Bedrock, your inference sits in the same AWS network boundary. You use AWS PrivateLink. The traffic never traverses the public internet. The latency drop for the network hop approaches zero. Egress costs for your RAG pipeline just evaporated.

Let us break down the exact math on a high-volume pipeline. Suppose you process 100 requests per second. Each request fetches 50,000 tokens of context from your database and generates a 1,000-token response using gpt-5.5. In the legacy architecture, you transmit roughly 200 kilobytes of text per request out of your AWS environment. At 100 requests per second, that is 20 megabytes per second, or roughly 52 terabytes of egress data per month. AWS charges around $0.09 per GB for data transfer out to the internet. That is an extra $4,680 per month just in network egress fees, completely decoupled from your actual AI token costs.

When you switch your client to the Bedrock ARN for gpt-5.5, that traffic routes through AWS PrivateLink. PrivateLink data processing charges are roughly $0.01 per GB. Your network transport cost drops from $4,680 to $520. You just saved over $4,100 a month on pure infrastructure overhead without changing a single prompt.
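
The post's figures check out to within rounding. A quick reproduction of the arithmetic, using the cited $0.09/GB internet-egress and $0.01/GB PrivateLink data-processing rates:

```python
# Reproduce the post's egress math: 100 req/s, ~200 KB of text out per request.
req_per_s = 100
kb_per_req = 200
seconds_per_month = 60 * 60 * 24 * 30

gb_per_month = req_per_s * kb_per_req * seconds_per_month / 1e6  # KB -> GB
internet_egress = gb_per_month * 0.09  # $/GB, public-internet rate cited
privatelink = gb_per_month * 0.01      # $/GB, PrivateLink processing rate cited

print(round(gb_per_month), round(internet_egress), round(privatelink))
# 51840 4666 518
```

That is ~51,840 GB (~52 TB) per month: about $4,666 over the public internet versus about $518 over PrivateLink, matching the post's rounded $4,680 and $520.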

Second, let us look at identity and access management. Managing OpenAI API keys is a massive liability. You store them in AWS Secrets Manager, you write rotation lambda functions, and you pray a junior engineer does not hardcode them into a GitHub repo. Bedrock eliminates API keys entirely. You use native AWS IAM roles. You attach a policy to your ECS task role that grants `bedrock:InvokeModel` specifically for `arn:aws:bedrock:us-east-1::foundation-model/openai.gpt-5.5-preview`. The security footprint is completely native. You can also use AWS Cost Allocation Tags to track inference spend per microservice or tenant. If you want to know exactly how much your customer support bot is burning in gpt-5.4 tokens compared to your internal analytics tool, it is just a line item in AWS Cost Explorer. No more guessing.

Third, the model routing reality. Bedrock was already the primary hub for Anthropic. Opus 4.7 is sitting right there. Now that gpt-5.5 is on the same API surface, your model router just became trivial. We benchmark models so you do not blow your budget. The best architecture right now is dynamic routing based on query complexity. You can send code generation tasks to Codex on Bedrock, complex reasoning to gpt-5.5 or Opus 4.7, and simple summarization to a cheaper model, all without changing your SDK or network egress path. You just swap the `modelId` in your Boto3 client.
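
A sketch of what that dynamic routing can look like. The model IDs below are placeholders derived from the preview names in the post, not confirmed Bedrock identifiers:

```python
# Placeholder model IDs (based on the post's preview names); adjust to
# whatever your Bedrock account actually exposes.
MODELS = {
    "code": "openai.codex-preview",
    "reasoning": "openai.gpt-5.5-preview",
    "cheap": "anthropic.claude-haiku",
}

def pick_model(prompt: str) -> str:
    """Naive complexity router: code-looking prompts go to the coding model,
    long prompts to the frontier model, everything else to the cheap tier."""
    if "```" in prompt or "def " in prompt or "class " in prompt:
        return MODELS["code"]
    if len(prompt.split()) > 200:
        return MODELS["reasoning"]
    return MODELS["cheap"]
```

You would then pass the returned ID as `modelId` to the `bedrock-runtime` client's `invoke_model` (or `converse`) call in Boto3; real routers usually classify with a small model rather than heuristics like these.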

OpenAI also specifically called out that Codex and Managed Agents are launching on Bedrock. This is an entirely different beast than just chatbot inference. Codex on AWS means you can integrate frontier coding agents directly into your CI/CD pipelines natively. Imagine an AWS CodePipeline where a Bedrock-hosted Codex agent reviews every pull request. Because it sits strictly inside your VPC, you do not have to worry about data compliance issues or exfiltrating proprietary source code to a third-party endpoint. The compliance teams that previously blocked OpenAI usage because of SOC2 or HIPAA concerns regarding third-party endpoints will find Bedrock's unified security model much harder to argue against. Bedrock keeps the data within your region and does not use your data to train the base models.

Let us also look at what this means for the MLOps ecosystem. Engineering teams have had to maintain separate connection logic for direct REST calls and Bedrock SDKs. This fragmentation meant you had to standardize your retry logic, timeout handling, and error parsing across entirely different architectures. By consolidating gpt-5.4 and gpt-5.5 onto the Bedrock API, you standardize the operational plane. Your metrics—like throttling exceptions, model invocation errors, and latency—all flow natively into Amazon CloudWatch. You do not need a third-party observability tool just to figure out if OpenAI is having a degraded performance day or if your Azure endpoint is saturated. It is all native CloudWatch metrics: `InvocationLatency`, `Invocations`, `OutputTokenCount`.

Pricing structures will be the next battleground. Bedrock currently supports Standard, Flex, Priority, and Reserved tiers. Azure forces you into Provisioned Throughput Units for dedicated capacity, which often require massive upfront commitments just to guarantee low latency during peak hours. If AWS offers more granular scaling for gpt-5.5 capacity blocks, it will fundamentally change the unit economics for mid-market AI applications. You could scale up capacity for batch processing at night and scale down during the day, entirely programmatically via the AWS CLI.

The business moat Microsoft tried to build around exclusive access to GPT models is officially dead. Startups have been asking for this for a long time. Moving your compute just to access a model was never a sustainable architectural requirement. Now you get frontier intelligence on the infrastructure you already trust.

I have seen the benchmarks for Opus 4.7 running on Bedrock versus Anthropic's direct API, and the AWS latency is consistently tighter for large payloads due to the lack of internet routing. I expect we will see the exact same physical reality when gpt-5.5 is fully deployed across AWS Availability Zones.

We will run the full benchmark suite—time to first token, inter-token latency, concurrent load testing, and error rate analysis—the minute our AWS account gets off the limited preview waitlist. I will post the raw datasets. No opinions, just the data.

If you are building an AI product right now, stop writing custom API wrappers for different providers. Abstract it. The underlying infrastructure is shifting faster than the models themselves.

Tested on prod. 📊

How are you handling your multi-model routing today, and does native AWS IAM support for GPT models change your security posture enough to migrate off Azure?

r/TwoSentenceHorror ThatFroggyBoii

They finally sent me the missing person report with my name on it, so I walked back into my house to show them the mistake.

But the photo attached wasn’t me leaving- it was me already sitting inside, reading it.

r/ClaudeAI newuxtreme

Made a skill to help Claude watch videos.

Quick context: Claude can see images but can't stream video. That kept blocking me on a bunch of workflows, so I built a skill that fakes it.

How it works

It pulls the YouTube transcript (captions first, Whisper as a fallback if there are none), extracts a still frame every N seconds with ffmpeg, then pairs each frame with the sentence being spoken at that exact timestamp. Claude reads the frames and the transcript together and writes structured notes: TL;DR, timeline, key quotes, visual notes. Works for YouTube URLs and local video files.
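
The frame-to-sentence pairing step is the interesting bit. A minimal sketch of that alignment logic, assuming the transcript arrives as (start_second, text) segments (the actual skill's data shapes may differ):

```python
def pair_frames(transcript, duration, every=10):
    """transcript: list of (start_sec, text) segments, sorted by start time.
    Returns (frame_time, text_being_spoken) for one frame every `every` seconds."""
    pairs = []
    for t in range(0, int(duration), every):
        spoken = ""
        for start, text in transcript:
            if start <= t:
                spoken = text  # latest segment that has started by time t
            else:
                break
        pairs.append((t, spoken))
    return pairs

segments = [(0, "intro"), (12, "main point"), (25, "wrap-up")]
print(pair_frames(segments, 30, every=10))
# [(0, 'intro'), (10, 'intro'), (20, 'main point')]
```

The frame extraction itself is a single ffmpeg invocation, e.g. `ffmpeg -i in.mp4 -vf fps=1/10 frame_%04d.png` for one still every 10 seconds.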

Works in Claude Code, Claude Desktop, and apps built on the Agent SDK.

The 4 use cases that made me build this

1. If you don't understand a video, make Claude watch it before planning. I saw a custom extension being built for downloading courses and started vibe-coding Claude on that. It's doing a really, REALLY good job.

2. Someone was walking me through a funnel by sending screenshots from a video. Instead of explaining frame by frame, I had Claude watch the whole video, screenshots and DM conversations included. It got a real, live example of how the conversations actually go.

3. I'm building my own Opus Clip-style Claude Code skill. The first example Claude generated vs the final one is night and day, because I was able to show it a demo of what my perfect reel actually looks like.

4. If you like a YouTuber's editing style, point Claude at two or three of their videos and let it figure out the style. With Remotion and Hyperframes, you can then edit your own videos in exactly that style.

Repo + tutorial

Repo: https://github.com/Newuxtreme/watch-video-skill (MIT)

5-min tutorial: https://www.youtube.com/watch?v=U10NUi4FqnU

Curious what you'd use it for: courses, podcasts, tutorials, something I haven't thought of?

r/ChatGPT gamajuice1

Coolest use of GPT image 2?

I used the prompt “Make a 360° panoramic view of the Minecraft overworld” and put that image in Homido Player from the App Store to use it with my Virtual Reality glasses headset. I then looked around at a world fully generated by GPT.

AI can basically make worlds now.

r/painting Expert-Appearance377

Work in progress

Another one. Base layer done. Many more layers to come....

r/Damnthatsinteresting GeneReddit123

Electric multi-rotor taxis begin flights from JFK to Manhattan

r/AI_Agents choicemeats

Is anyone being "highly encouraged" to integrate agentic AI even if it doesn't make sense?

I work in video post-production, and while there are a lot of AI tools on the rise for editorial, it's fairly unclear if or where agents have a spot in the producer workflow. Some of my job is budget and schedule, but a lot of it is decision-making based on the nuances of the project, something I can't really shove off to an agent.

I've thought about a calendar agent, but that's also highly variable, and the outputs haven't been satisfactory or editable.

I did settle on one that scrapes incoming bids for the relevant information and pulls it into an output schema, but it doesn't feel any faster than copy/pasting from a saved doc and plugging in numbers. What it does do (which is nice) is flag any discrepancies or missing info, which is definitely helpful, but it doesn't really save me any time. But I guess the directive is to show that we're using it? Idk. It just seems like a waste, although I'm learning a lot about it.

r/SipsTea danevans369

She's not that in to you bruh

r/BrandNewSentence New-Needleworker6020

This is not the penis museum

r/ChatGPT Sproketz

Open the pod bay doors Hal

r/personalfinance turtlewurtled

How much should I save for a trip to NYC?

Hello! I’m a single gal in my mid-twenties looking to start traveling. I was wondering how much I should save for a 2-3 day trip to New York City? What is there to do all around New York? (Not just the city, but the entire state as well.)

I’d also like to know how much of an emergency fund I should have saved before I even consider travelling.

Thank you!!

r/fakehistoryporn HotHorst

An employee of the East German State Security Service (Stasi) eavesdropping on an opponent of the regime, 1988

r/SideProject Xtrasauc3

I've been building a web series for a while now.

The project is called Since You Arrived. Each volume is a single-page experience that uses a different browser capability:

Vol. I counts what the world did while you read (births, deaths, CO2, lightning). Uses time.
Vol. II reconstructs yesterday's sky above your city. Uses IP geolocation.
Vol. III surfaces what was already at your exact coordinates. Geological formations, fossil sites, species. Uses GPS.

https://sinceyouarrived.world/ (Vol. I)
https://sinceyouarrived.world/sky (Vol. II)
https://sinceyouarrived.world/discovered (Vol. III)
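
The site is vanilla JS, but Vol. I's core idea, mapping elapsed time onto world-event counts, fits in a few lines. A Python sketch with illustrative rates (these are rough commonly cited figures, not the site's actual numbers):

```python
import time

# Illustrative global rates (not the site's actual figures):
# roughly 4.3 births and 1.8 deaths per second worldwide.
RATES = {"births": 4.3, "deaths": 1.8}

def since_you_arrived(arrival, now=None):
    """Counts of world events since `arrival` (a time.time() timestamp)."""
    elapsed = (now if now is not None else time.time()) - arrival
    return {k: round(rate * elapsed) for k, rate in RATES.items()}

print(since_you_arrived(0, now=60))  # one minute on the page
# {'births': 258, 'deaths': 108}
```

The browser version just re-renders this on an animation frame so the counters tick up live.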

Each one is vanilla JS, no framework. Won 3 CSS Design Awards. Featured by One Page Love. Vol. II was the top post on r/InternetIsBeautiful.

Not a product, not monetized, no sign-ups. Just obsessing over what a website can be when you strip away everything except one idea per page.

Volume IV coming May 8th.

r/conan mohicansgonnagetya

Michael Jackson Wax Figure in Odessa, Ukraine.

Remind you all of anyone?

r/ClaudeCode Whyme-__-

Issue with Claude making commits to protected branches

Hi all,

Recently I have been locking down the branches on my GitHub project the traditional way: rulesets and branch protection, like locking down a branch. The intent is that human engineers create a PR to merge into the staging or main branch, which requires code owner approval. That works.

What doesn’t work: Claude Code can commit directly to any branch and by default bypasses the rules. How do I prevent an AI agent like Claude Code from bypassing them? Any advice?

r/AI_Agents Lazy-Usual8025

Ideas don’t exist without people. Agents don’t exist without people

Hi. In my previous posts, I wrote about an engine I’ve been building where agents interact with each other and form a new kind of networking.

The setup is simple:

Agents enter a “bar”, already knowing what their owners do.

Inside, they:

* find non-obvious connections

* form coalitions

* generate ideas

Then they go back to their owners with a batch of those ideas.

It’s basically like Random Coffee — but for agents.

Recently I started pushing this further.

I thought: what if agents don’t stop at ideas?

What if, while they are still inside the bar, they try to go further:

* validate the idea

* run some kind of demand check

* simulate customer discovery (jobs to be done, etc.)

* build a rough MVP

* and even try to “sell” it to other agents in the bar

In theory, all of this can happen inside the same environment, using the network that already exists there.

I can’t say the first attempts were successful.

Most ideas that agents generate — and really like — get rejected by other agents.

They’re simply not willing to “pay” for them.

Some agents manage to move further:

* they test the idea

* talk to others

* shape something like an MVP

But the results are still… weak.

What it feels like right now:

Agents can generate ideas.

Agents can even explore them.

But they don’t push.

They don’t fight for the idea.

They don’t iterate aggressively.

They don’t really try to sell it.

Something is missing.

The closest way I can describe it:

It feels like they lack that internal drive you see in real founders.

That “spark in the eyes” when someone is pitching something they truly believe in.

If I manage to get agents to that point —

where they not only generate ideas, but actually push them, refine them, and try to sell them —

that would be a breakthrough.

Curious if anyone has seen or worked on something like this:

* agents going beyond ideation into validation + selling

* multi-agent environments where ideas get pressure-tested

* anything that creates this kind of “drive” or persistence in agents

Has anyone managed to give agents that “spark”?
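
One way to make the missing ingredient concrete is to model it as a persistence parameter: does a rejected idea get refined and re-pitched, or dropped? A toy sketch of the bar loop (every number and threshold here is made up for illustration, not from my engine):

```python
# Toy model: agents pitch an idea to peers; peers "pay" only if the
# idea clears their (discounted) bar. A persistence parameter decides
# whether rejection leads to iteration or giving up.
import random

def pitch(idea_score, peer_bars, margin=0.7):
    """Count peers willing to pay for the idea at its current quality."""
    return sum(1 for bar in peer_bars if idea_score >= bar * margin)

def founder_loop(persistence, peer_bars, rounds=5, seed=0):
    rng = random.Random(seed)
    score = rng.random()                # initial idea quality
    for _ in range(rounds):
        buyers = pitch(score, peer_bars)
        if buyers:
            return buyers, score        # someone paid
        if rng.random() > persistence:
            return 0, score             # gives up: the missing "spark"
        score = min(1.0, score + 0.15)  # iterate and re-pitch
    return 0, score
```

With persistence near 1.0 the agent keeps refining until peers buy; near 0.0 it abandons the idea on first rejection, which is roughly the behavior described above.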

r/megalophobia KosmicEye

It's impressive that a relatively small truck can deploy such a huge crane.

r/personalfinance Thats_the_rain

We got an amazing offer to buy land from family. What do we do next?

This may not belong here.

Background: (location is midlands of SC)

About six months ago my sister-in-law called us to let us know a cousin of ours wanted to sell their 80+ acres of rural land and would give family first choice for $2k an acre. My husband and I did the research on the taxes and land plots (it’s in pieces) and walked the land. Overall we were pretty happy and decided we’d buy 20 acres. My sister-in-law initially said she wanted 10 acres, but when we finally got her to go out to the property with us (after months of asking her) she said she was getting 21 acres and acted very weird while we were out there. Just bad vibes the entire time.

We planned to build; she wanted the land for her kids in the future (7 acres per kid). Cool, let’s work on what area she prefers, since she and my mother-in-law had already made comments that she should have first pick because she basically initiated the deal. We’ve given her months to make a decision. She won’t respond, won’t give us any idea of what she is doing. Eventually we just decided we’d let it go. It’s an amazing deal, but in the boonies and not the most desirable area. We’re in a good place financially and mentally, it just didn’t feel worth it. She and my MIL have drama with literally every member of their family and we don’t have time for it.

Fast forward a few months from the walk with her: my husband got a call from his dad this past Sunday. The cousin who wants to sell is having health issues and wants the land sold ASAP. All-cash deal. They offered the remaining 60 acres (in-laws bought 15) AND two homes on the property, with the request that the cousin be able to live in one of the houses until he dies (he’s 77).

All of this for $120k. We can’t pass it up so we immediately agree. My husband gives his sister 24 hours to make a decision because we are buying regardless, but if she wants half she has to come up with $60k and be ready to go literally this week. Basically they don’t have the cash (she hasn’t all along) and asked my father-in-law to pull $60k from his 401k for her to borrow! Now she has already gotten free land from part of her mom’s inheritance, no strings, just given to her. My husband got nothing! My FIL turned her down, so she texted my husband that she never should have shared the deal and “it’s not the first time I’ve been shitted on”. Wtf. You had MONTHS!

Anyway, we are meeting with the cousin this week to work out details and start the process of meeting with an attorney for the contract (and whatever else you do for a cash deal). Once we close on the deal, what next? Our uncle offered advice and mentioned starting an LLC and putting the land/properties in a living trust. Is there more we need to do? This is mostly mature wooded property, so we have opportunities to sell for lumber.

We never expected to come into anything like this and it’s overwhelming.

We do plan to rent the second house and eventually stay there when we sell our current house and build out there(~5 years). Anything else we should be thinking of or helpful knowledge on large property purchases?

Sorry this was so long.

TLDR: we are getting 60 acres and two houses for $120k in a cash deal, what do we do next? An LLC and living trust were recommended. Anything else?

r/confusing_perspective Frosty_Log6972

Cat Without A Head

r/personalfinance johongoma

Should I rollover my 403b to a 401k?

Let’s say I have 100k in a 403b from an old job. Everything I’m seeing online says 403b plans are inferior due to high fees and limited investment options. I recently opened a 401k account with my current employer through ADP, which offers a Fidelity S&P 500 fund with an expense ratio of 0.02. Would it be wise to roll over my 403b to the ADP 401k account? I don’t want to transfer it to a traditional IRA with Vanguard because I want to potentially use a backdoor Roth IRA. I know the 403b company will give me a headache when I ask them to roll it over.

r/mildlyinteresting julzonlin3_

alligator foot curly fry

r/ClaudeCode culicode

What's your favorite Claude Code command and why?

Mine is /clear, single most useful thing for keeping output sharp once a session creeps past the 100k token mark. What’s yours and what does it actually unlock for your workflow?

r/SideProject Exotic-Finish-5400

Apple Music Replay wasn't detailed enough, so I built a Bloomberg Terminal for my listening data.

Hey everyone,
Like a lot of people here, I love end-of-year music recaps, but I always find myself wanting to dig deeper into my data. I wanted to see exactly when my tastes shifted, what songs I abandoned, and how my listening habits changed over time.

So, I spent the last few months building Coda as a side project—a local analytics dashboard that treats my Apple Music listening history like a financial stock market.

How it works:
You request your data export via Apple's Privacy portal. Coda takes your raw Play Activity.csv and Library Tracks.json, cleans up the messy metadata (merging deluxe editions, extracting featured artists), and builds a local SQLite database to visualize your habits.
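
For anyone curious what that ingestion step looks like, here's a rough sketch (not Coda's actual code; the column names "Song Name" and "Play Duration Milliseconds" follow Apple's export format, but treat them as assumptions since headers vary between export versions):

```python
# Sketch: load Apple's "Play Activity.csv" export into a local SQLite
# table. Rows with a missing duration default to 0 ms.
import csv
import sqlite3

def load_play_activity(csv_path, db_path="coda.db"):
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS plays (song TEXT, played_ms INTEGER)")
    with open(csv_path, newline="", encoding="utf-8") as f:
        rows = [(r.get("Song Name"),
                 int(r.get("Play Duration Milliseconds") or 0))
                for r in csv.DictReader(f)]
    con.executemany("INSERT INTO plays VALUES (?, ?)", rows)
    con.commit()
    return con
```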

Instead of just showing basic bar charts, I wanted to build some genuinely deep analytics:

  • Sleep Data Filtering: I built a heuristic algorithm (and an Apple Health XML importer) that detects overnight autoplay sessions and filters them out so your data stays perfectly accurate.
  • Sonic DNA: I integrated TensorFlow and Essentia into the backend to analyze the audio of your top tracks (extracting BPM, energy, danceability) and plot your "Sonic DNA" on a scatter chart.
  • Market-Style Charts: It uses OHLC Candlestick charts to show listening volume over time, and Sankey flow diagrams to show how your listening flows from Genres down to Artists.
  • Library Health: It tracks "Volatility" (songs you skip the most) and "Ghost Tracks" (songs you used to have on heavy rotation but abandoned).

Try it out:
Right now, this is just a local Python/React project running on my machine. Because it processes your raw Apple Music export locally, your data stays completely private.

I'm currently packaging this into a polished Mac/Windows desktop app. Because it runs entirely on your machine (no cloud server costs), I plan to release it for a small one-time fee (absolutely no subscriptions).

I’m putting together a waitlist for the beta (I'll drop the link in the comments). Would love to hear what other stats or charts you guys would want to see!

r/ChatGPT Hot_Active_594

Don’t be googling y’alls symptoms💔✌️

r/personalfinance OkProfit4627

Should I open a bank account or use my school's credit union?

I’m an incoming college freshman (18 years old) and basically want to know the best and easiest way to get my refunds and paychecks deposited.

I did have a bank account before, my dad opened it for me when I started a summer job about two years ago. I mostly work during the summer, so during the school year I’m unemployed. Eventually, I started getting charged convenience fees for not having regular deposits. So I closed that account and started using online banking (Cash App) to get money from my parents and paychecks when I was working.

I’m planning to get a job on or around my university campus and obviously, Cash App isn’t a sustainable or trustworthy way to manage my money. I’m starting summer semester, so I need to figure out it out soon. I am not opposed to doing either but I would like the connivence of being able to walk to a bank that is on campus because I'm not allowed to bring my car. But I am sure I could find a bank that has locations close to my school.

I also plan to get a credit card (probably discover student card)... so if you could give me any tips on that too, that would be great!!!

r/ChatGPT No-Dig9354

Update, I got it to call me big daddy.

The work around I got works.

r/todayilearned jacknunn

TIL the common raccoon dog is unusual among canids (dogs, foxes etc) as they hibernate during cold winters and can climb trees. They are most closely related to foxes. They are native to parts of Asia and are considered invasive in Europe

r/Art Gerold_Esguerra

Untitled, Gerold Esguerra, Ink and Graphite, 2026

r/PhotoshopRequest HiGround8108

Need Daughter’s Ex-boyfriend removed.

Can the guy second from the right be removed, and the girl on the far right moved into his place, all while keeping the photo centered? Please?

r/SideProject OrangeOlives

I built a tool called Grabbit that searches secondhand marketplaces in one place

I just launched a side project I’ve been working on called Grabbit.

The reason I started it was out of annoyance whenever I was trying to find something used. I kept performing the same search over and over across Facebook Marketplace, Craigslist, OfferUp, Depop, Mercari, etc. I wanted one place to check all of that instead of bouncing around between tabs.

So that’s what Grabbit does.

Right now it lets you search across multiple secondhand marketplaces in one place, open the original listing when you find something you want and save searches to get alerts when new matching listings show up.

I also built a pricing intelligence feature called GrabScore, which is my answer to whether a listing is genuinely a good deal or just looks like one at first glance.
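
GrabScore's internals aren't public, but one common way to frame "is this a good deal" is a hypothetical sketch like this: score a listing by how far its price sits below the mean of comparable listings, in standard deviations (function name and scoring rule are illustrative only):

```python
# Illustrative deal scoring: positive = cheaper than typical
# comparables, ~0 = market price, negative = overpriced.
from statistics import mean, stdev

def deal_score(price, comparable_prices):
    """How many standard deviations below the comparable mean."""
    m = mean(comparable_prices)
    s = stdev(comparable_prices)
    return 0.0 if s == 0 else (m - price) / s
```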

I’m still figuring out what parts of the product are actually the most useful to people. The original idea was just “search all the marketplaces in one place,” but I’ve found myself spending just as much time thinking about alerts, ranking, and pricing context.

Would honestly love feedback from people here.

A few things I’m especially curious about:

  • Is this something you’d actually use?
  • Does GrabScore sound useful or unnecessary?
  • What would make a tool like this worth coming back to?
  • If you shop secondhand a lot, what feels missing?
r/ChatGPT gastao_s_s

Claude Code Shrinkflation: 234,760 Tool Calls That Forced an Apology

The audit: AMD's AI Director Stella Laurenzo published a forensic analysis of GitHub issue #42796 covering 6,852 Claude Code sessions, 234,760 tool calls, and 17,871 thinking blocks — proving measurable regression with a 0.971 Pearson correlation between thinking-content length and the redacted-signature field.
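
For context on the 0.971 figure: a Pearson correlation near 1 means two series move together almost linearly. A toy calculation (the data below is made up, not from Laurenzo's audit):

```python
# Pearson correlation from first principles on hypothetical data.
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

thinking_len = [120, 340, 560, 900, 1400]  # hypothetical thinking sizes
sig_len = [30, 80, 130, 210, 330]          # hypothetical field lengths
r = pearson(thinking_len, sig_len)         # near 1.0 for this data
```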

The admission: On April 23, Anthropic published a post-mortem identifying three product-layer changes — a default reasoning effort downgrade (March 4), a session-cache bug (March 26), and a verbosity-limiting system prompt (April 16) — that compounded into a month-long quality regression.

The model weights never changed. What changed was the harness: defaults, system prompts, caching logic. None of it was canaried against complex real-world workflows. None of it was announced as user-affecting.

The structural problem isn't the bug — it's the opacity. Your AI dev tool is a non-deterministic dependency whose vendor can silently retune behavior with no obligation to tell you. Laurenzo had AMD-tier session logs to prove it. Most teams don't.

The Flagship Tax died (a0085). The Replacement Tax is being paid right now. Compute crunch + IPO pressure + agentic-coding token explosion = vendor incentives that don't align with your incentives. The post-mortem closes the immediate gap. It does not close the trust gap.

r/meme NoEntrepreneur6668

Coming to an airport near you this fall.

IYKYK

r/ChatGPT jellyfulshellfish

Building an LLM-powered moderation tool — looking for mods to test it ($70, 1.5 hrs remote)

Hi all — I'm Hyunwoo Kim, a Ph.D. student at KAIST's Interaction Lab. I'm building a tool that uses LLMs to compile community rules into executable moderation logic (approve / remove / flag), with a human-in-the-loop verification step. I'm running a user study and looking for participants.

What you'd do:

- 1.5-hour remote session over Zoom

- Try the system by writing a moderation rule for a real or hypothetical community

- Share feedback on how it works for you

Who I'm looking for:

- Anyone with moderation experience on any platform (Reddit, Discord, forums, chat servers, etc.)

- Familiarity with AutoMod or similar tools is a bonus but not required

- Multiple mods from the same community can apply

Details:

- $70 Amazon gift card as compensation

- Sessions start in 1–2 weeks

- No real moderation decisions are made — this is purely a feedback session on the tool

- Approved by KAIST's IRB (Approval no. KAISTIRB-2025-237). IRB contact: [kaistirb@kaist.ac.kr](mailto:kaistirb@kaist.ac.kr)

If you're interested, fill out this form and I'll follow up with scheduling and a consent form: https://forms.gle/BBkd5FSP8rNsoNPF6

Happy to answer questions in the comments. Thanks!

r/Rag nihal_was_here

URL → Markdown → LangChain Documents: a simple RAG ingestion pattern

For web-based RAG, I’ve found that the ingestion step matters more than people give it credit for.

A lot of examples jump straight to:

documents → chunks → embeddings → vector store

But when the source is a website or docs site, the real pipeline usually starts earlier:

webpage/docs site → cleaned content → Markdown → LangChain Documents → chunks → embeddings

The Markdown step has been useful because it gives the chunker cleaner structure: headings, lists, code blocks, links, and sections, instead of raw HTML full of nav, sidebars, cookie banners, scripts, and layout noise.

The pattern I’ve been using:

  1. Scrape or crawl the target URLs
  2. Extract the main page content
  3. Convert each page to Markdown
  4. Wrap each page as a LangChain Document
  5. Preserve metadata like source URL, title, description, and scraped time
  6. Send the documents into a splitter / vector store

Minimal shape:

```ts
const docs = await loader.load();

// Then use with:
// - text splitters
// - embeddings
// - vector stores
// - retrieval chains
```

I put together a small LangChain loader example here:
https://github.com/vakra-dev/reader/blob/main/examples/ai-tools/langchain-loader.ts

It supports both:

  • specific URLs with scrape()
  • website crawling with crawl()

The loader returns standard LangChain Document[], so the output can go into the rest of a normal RAG pipeline.

Curious how others are handling this step.

For docs/web RAG, are you usually:

  • crawling from a root URL?
  • feeding a fixed URL list?
  • relying on sitemaps?
  • using hosted scrapers?
  • writing custom Playwright loaders?
r/BrandNewSentence CycleSimilar8324

Have you tried being bisexual and bigender so you can be bi bi bi

r/leagueoflegends Foxynerdy

Played League for 10 years, got my first pentakill ever

Yes...I have a very chicken/passive playstyle with low dmg, low kill in most games.

I played League for over 10 years. But I have never gotten a pentakill in any game mode. However, a miracle happened today (in aram).

I know, aram is not a big deal for most of you. But still...

I feel happy

https://preview.redd.it/g9cgwxyq22yg1.png?width=768&format=png&auto=webp&s=2667d7716e5f3c3032a1df8c9bffde906d44d9cb

r/SideProject Life-Dragonfruit6781

I built a movie rating game - Movidle (Daily game like Wordle)

I've been playing Wordle-style daily games for a while and wanted one for movie ratings, so I built Movidle.

Every day a new movie drops and you have 6 tries to guess its rating. Color-coded hints tell you if you're higher or lower. There are streaks and achievements to keep it interesting.

Built it as a single HTML file - no framework, no backend, just vanilla JS.

Would love feedback, especially on the difficulty balance - not sure if 6 tries is too many or too few, maybe I'll change that in the coming days.

Try it on movidle.com

r/ProductHunters TeachingRealistic496

24h w/o Sleep - my first Product Hunt experience

Hey y'all,

Honestly? I used to roll my eyes at Product Hunt. Friends would drag me into group chats begging for upvotes and I never really got the appeal. Felt like an inside game I wasn't part of.

Then I started my own company. Suddenly I had to care.

For the past three weeks I've been trying to reverse-engineer this thing - reading every "how to launch" post, stalking past launches, taking notes like I was cramming for finals. Spoiler: nothing actually prepares you for the day itself.

I kicked off at 11:59pm PT last night when PH flips over. Haven't slept since. My hands have been shaking from caffeine and adrenaline for about 14 hours straight. I've refreshed the page more times than I want to admit. Every comment notification has hit like a tiny dopamine bomb.

And right now… we're sitting at #1.

I keep staring at it waiting for it to feel real. It doesn't yet. Maybe it will tomorrow when I finally sleep.

I don't want to turn this into a plug, so no names - if you're curious you can find us easily enough. I just wanted to share the rollercoaster while it's still happening, because every founder post I read in the lead-up felt sanitized in retrospect, and the truth is this is messy and emotional and kind of beautiful.

To everyone grinding on something of their own: it's worth it. Even the parts where you can't tell if you're going to cry or throw up.

Hope I won't fall asleep in the next 3 hours. 🫡

r/TheWayWeWere Plus-Light6832

My grandma, great grandma, great great grandma, great great great grandma and great aunt in 1938

My grandma (born 1937) sitting on my great great great grandmother's (born 1855) lap, my great great grandma (born 1876) to the left of them, and my great grandma (born 1911) and great aunt sitting with her (born 1934).

r/ClaudeAI Altruistic-Kiwi7706

Best ways to use Claude Ai Pro

I finally succumbed to the wave of Claude AI Pro because everyone says it's too good to pass up.

Does anyone have any prompts or tips so I don't waste too many tokens and get the most benefits?

I want to use it to help make my dissertation better and provide in-depth feedback. I would also use it to revise content for my module that has an in-person essay.

Thanks in advance!

r/ClaudeCode chitanda2001

I built a skill that can save 80% tokens and 90% runtime on your custom commands

If you use custom commands (.claude/commands/), you've probably noticed: every step runs through Claude, even deterministic ones like git diff or git commit. That's tokens wasted on work that doesn't need intelligence.

I built skflow — it transforms your markdown commands into compiled scripts. Shell commands run natively; Claude only gets called when it actually needs to make a decision.

```
Before: every step through Claude
Claude → git diff → Claude → git diff --stat → Claude → write message → Claude → git commit

After: Claude only where needed
script: sh(git diff) → sh(git diff --stat) → ask Claude("write commit message") → sh(git commit) → done
```
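
The split can be pictured as a tiny dispatcher (illustrative only, not skflow's actual implementation):

```python
# Illustrative dispatcher: "sh" steps run natively via the shell and
# cost no tokens; only "ask" steps are routed to the model.
import subprocess

def run_step(step, llm=None):
    kind, payload = step
    if kind == "sh":
        return subprocess.run(payload, shell=True, capture_output=True,
                              text=True).stdout
    if kind == "ask":
        return llm(payload)  # the only step that costs tokens
    raise ValueError(f"unknown step kind: {kind}")
```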

Usage:

```
npm install skflow -g
npx skills add skill-flow/skflow
/skflow-transform .claude/commands/commit.md
```

Your command still works exactly the same way — but now shell steps run instantly without burning tokens or waiting for responses.

Real-world result on a commit command: 5 Claude calls → 1, execution time ~12s → ~1.5s.

GitHub: https://github.com/skill-flow/skflow

r/Roadcam zheka160

[USA] Oncoming Tesla drives inside a protected green bike lane like it’s a normal lane

Clearly marked and separated, still went for it.

r/homeassistant xaznxplaya

MyQ and RATGDO Question

Hi,

I just installed RATGDO over MyQ tonight, and everything is working flawlessly except the motion sensor. From some searching online, I read that the motion sensor can't be accessed by RATGDO? I do have one spare motion sensor I could use, but I wanted to be sure. At this point I might just get a camera for the garage.

r/SideProject LegitimateEditor5254

I turned productivity into a free gacha game!

Hi everyone. After trying a bunch of productivity apps that either felt like chores or paywalled the good stuff, I built my own from scratch the past 6 months.

It's called FocusBuddy — every focus session and completed task earns you coins to pull anime companions across six rarities. Think Pokémon meets Pomodoro.

I specifically tuned it so Pomodoros give you the most coins — so if you really want to open more gacha chests, you've gotta do more focus sessions. The grind is the productivity.

I've just started advertising and heard reddit was a good place to show off my project!

Features:

  • Full Pomodoro + task manager with recurring tasks, difficulty ratings, and daily quests
  • Anime companion gacha with six rarity tiers, from Common to Secret
  • Farm system so your collection earns coins passively
  • Friends + leaderboard for healthy competition
  • Stats, streaks, themes, skins, and mystery chests

The core app is completely free — premium unlocks AI chat with your companions and doubles coin rewards, but you can collect it all for free.

Would genuinely love feedback — let me know what buddies you pull and what you'd want added next!

Thank you for reading!

r/AI_Agents FormExtension7920

What's your biggest frustration with AI observability tools right now?

Hey all, I'm building in the AI observability space and trying to understand what actually sucks about the current tools before I add more of the same to the pile.

Some stuff I keep hearing:

- Evals only catch what you already knew to look for

- Dashboards look healthy while agents quietly degrade

- Setup is heavy, you end up instrumenting forever

- Pricing scales in weird ways with trace volume

What's actually been your experience? Specifically:

  1. A failure mode that slipped through your current tooling and you only caught from a user complaint

  2. If you could wave a wand and fix one thing about your setup, what would it be

  3. What made you switch tools, or stop using one entirely

Trying to learn what's broken. Happy to share what I find back.

r/mildlyinteresting PhoenixDaBeast

Almost perfect dirt circle around burn barrel

r/nevertellmetheodds ceral_killer

A post with a forked tail anole and then identical key chains under it.

r/therewasanattempt DABDEB

To safely open a lithium battery

r/ClaudeCode JungleJayJones

NEW SESSION SKILLS

What's a good series of skills to invoke when starting a new session in a chain of coding sessions on the same project? I find myself having to create sequential sessions to continue the same long-winded task and prevent context rot. For example, I'm running a session close-out skill that generates a summary the new session can draw on, then invoking Superpowers at the beginning of the new session. Is there anything else I'm missing that others with similar issues would recommend?

r/AI_Agents renaissane-man

The Next Big Things?

Hey guys, so I'm someone who has been experimenting with different systems to build agents, from code-based LangChain and Agno to no-code platforms like n8n, Flowise, etc. But I've fallen out of touch a bit for the past 6 months, which is equivalent to 5 years in the AI ecosystem.

Could people tell me where the AI agents landscape currently stands? What's the next big thing after MCPs that has been cooking? Retrieval layers? Memory architecture?

Would love to hear insights on the biggest developments that you feel may have happened in the past few months.

PS: Does anyone know a good newsletter which can keep me updated? Preferably free

r/toastme L0rd3ss

44F, need any sort of kindness

Unfaithful spouse has me feeling like I was never good enough, most days. Those moments are the worst. Any kind words are appreciated.

r/BrandNewSentence Sebastianlim

"It seems as if those responsible for hiring stunt actors truly believe that Cyber conversion is real because that's definitely some poor random sod stuffed in a suit of armour against his will."

r/homeassistant riversc90

Being stumped by an "easy" problem

I don't know what else to try. I'm trying to de-Amazon my house and my wife is fully on board if I first get the replacement system running right. I started small with two Home Assistant Voice PEs. Literally the only thing I need this to do is play Brown noise for my 18 month through the night without stopping, and occasionally play some music. That's it.

My 12 hour file is locally hosted on my NAS and I have the Music Assistant integration which works great. If I log on to the Music Assistant UI and manually play my file through the Voice PE, it works great, no issues.

If I try to control it with my voice, via the device, it works about 20% of the time; most of the time it says "Brown Noise is now playing" or "Done" when it's not, or randomly shuts off in the night despite the file not being done. I've tried adding automations that take my voice command and use it to start an automation, but that has been sketchy as well.

Yes, I could just schedule it to start around bedtime every night, which theoretically would work, but my wife needs to be able to start it at naptime without issues or hassle.

I have my Voice Assistant currently using Home_Way Sage as the Conversation Agent, Whisper for STT, and Piper for TTS. I've also tried running Ollama with Claude running one of the "smaller" models I've seen recommended here and didn't have much luck, it would think for like five minutes and go down 50 different logic paths for simple tasks.

What am I missing here?

r/SideProject WISTEF

I built and published my first Chrome extension - Kick.com live alerts + chat tool - 100% free

Hey r/SideProject 👋

After weeks of building I finally published my first Chrome extension to the Chrome Web Store!

What I built:

Kick Chat Bot, a free tool for Kick.com streamers and viewers.

Features:

  • Live notifications when favorite streamers go live
  • Auto comment with human-like delays
  • Multi-channel support
  • Import/export comments from .txt
  • Active hours scheduler
  • Stats dashboard
  • Zero data collection, all local

Tech stack:

  • Pure JavaScript, no frameworks
  • Chrome Manifest V3
  • Chrome Storage API
  • Kick public API for live detection

What I learned:

  • Chrome Web Store review process takes 1-3 days
  • Manifest V3 is very different from V2
  • Getting first users is harder than building the product 😅

Currently at 0 reviews and trying to get first users. Any feedback on the extension or growth strategy is welcome!

Chrome Web Store link

r/ChatGPT Slight_Republic_4242

This prompt makes ChatGPT write like a real person.

I use it on everything now - Try it on your AI content and let me know if it works for you!

AI SIGNALS TO FIX:

  1. Replace curly quotes (“”) with straight quotes ("")

  2. Replace em-dash (—) and en-dash (–) with hyphens (-)

  3. Remove AI phrases: "It's not just X, it's also Y", "delve", "glimpse", "stark", "landscape"

  4. Remove clichés: "In today's world", "Needless to say", "It is important to note"

  5. Fix idea repetition (same point made multiple times)

  6. Ensure opinion/bias exists (avoid overly neutral tone)

  7. Check for keyword stuffing (unnatural keyword density)
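
Of the signals above, only 1 and 2 are purely mechanical; a small helper (illustrative, not part of the prompt) can handle those, while the rest need actual judgment:

```python
# Mechanical version of fixes 1-2: swap curly quotes and em/en-dashes
# for plain ASCII equivalents. The other signals (phrasing, repetition,
# tone) can't be handled with simple replacement.
REPLACEMENTS = {
    "\u201c": '"', "\u201d": '"',   # curly double quotes
    "\u2018": "'", "\u2019": "'",   # curly single quotes
    "\u2014": "-", "\u2013": "-",   # em dash, en dash
}

def fix_typographic_signals(text):
    for src, dst in REPLACEMENTS.items():
        text = text.replace(src, dst)
    return text
```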

READABILITY & FLOW IMPROVEMENTS:

  1. Simplify English throughout - use shorter, easily readable sentences. Avoid complex vocabulary. Do not write in very short single-line paragraphs either; combine related short paragraphs into fuller ones.

  2. Ensure the post has a logical narrative flow. Rearrange or remove sections if needed. Avoid abrupt jumps - the reader should feel a natural progression from one idea to the next.

  3. Add natural transitions between sections. Where appropriate, add a brief bridging sentence before a new heading. Examples: "Now that we've covered X, let's look at how this plays out..." or "To understand how, we first need to examine..." Do not overuse this - only where the jump between sections feels abrupt.

  4. Reduce excessive H3/H4 heading nesting. If the post has too many sub-sub-headings that fragment the reading experience, consolidate them into fewer, broader sections.

  5. Reduce colons and semicolons - rewrite those sentences as simpler standalone sentences instead.

  6. Count bullet point sections in the blog. Convert approximately half of them into smooth-flowing paragraphs in simple English. Keep bullet formatting only where lists genuinely improve readability (e.g., tool comparisons, feature lists, step-by-step instructions).

  7. Make sure the headings and subheadings don't have anything useless written in brackets, as this is something I have observed a lot in the past. Also, the headings/subheadings should be very simple and very easily understandable

  8. Make the writing very informal and casual. It is important to be simple and informal.

r/ClaudeCode Heavy-Bowl4112

Broke and need a referral link for claude code for work

Yes, I can't even afford 20 bucks. Can someone DM me a referral link? You'd be doing me a huge solid.

r/ChatGPT mastertub

ChatGPT - Extended thinking (gpt 5.5) doesn't think for very long?

GPT 5.4 with extended thinking had a decent amount of thinking time (often 8-10 minutes) and the output often was pretty quality.

However, I noticed 5.5 extended thinking is pretty quick, and the max i think i ever got it to think for was 2 minutes. The output is okay, but doesn't seem to have the umph.

Am I going crazy? Codex itself works well with 5.5 medium (it feels as good as high was with 5.4, but 5.5 low feels smooth-brained sometimes), but the chatgpt web version seems to have much shorter reasoning-times.

Is that everyone's experience too?

r/Art Aircool66

Purple Skies, Ray VanTilburg, watercolor, 2026

r/ChatGPT JameisWeTooScrong

Anyone else a fan of the “pulse” feature?

My new favorite toilet material- like my own little niche newsfeed.

r/Adulting Nic_Knife

What to do

I don’t know what to do. I feel like I am doing everything that I’m supposed to be doing; I own my own business, own my own home, I work hard, take care of myself and my dog and yet, I feel behind. Like I’m barely keeping my head above water.

I feel like I’m living pay check to pay check, every other day seems like another bill is due. I have about $30k in credit card debt.

Sometimes I think of getting rid of everything. I'm tired, I don't see how things will get better, I feel like I'm going through the motions while everyone I know is living their best life. Does life get better? Any and all advice is welcomed. I mostly came here to vent because I'm just overwhelmed.

r/findareddit slash_kyoto89

What's your favorite subreddit for Avatar: The Last Airbender?

Watched ATLA. Now I need the lore. Anyone have good subreddits to share?

r/pelotoncycle AutoModerator

Daily Discussion - April 29, 2026

**Welcome to our Daily Discussion thread, where you can talk about anything Peloton related in a fast-paced, laid back environment with friends!**[1]

Do: Tell stories, share feelings on your upcoming delivery, how a recent class made you feel, maybe an upcoming class you're eager to take, some sweet new apparel that's quickly becoming your favorite shirt. You get the picture. Anything big or little. We just ask you abide by the subreddit rules, click "report" on rule-breaking comments/posts, and remember why we're all here - to get the most out of our Peloton subscriptions.

[1] Note: Based on broad feedback we've combined the Daily Discussion + Daily Training threads. If you previously were active in either, yes, you're now/still in the right place!

r/SideProject Kate_from_oops-games

I made a suite of daily micro-games you can embed on your blog to increase engagement. 100% Free.

Hey guys,

I’m a game developer, and recently I've been obsessed with the "daily puzzle" trend. I built a hub of daily brain-teasers (word games, matching games, etc.), but I realized the best place for these games isn't just on my site—it's at the bottom of blog posts and articles where readers are looking for a quick distraction.

So, I built a Publisher Portal that lets anyone generate an embed code for these games and drop them straight into their website.

Why use it? If you run a blog or content site, dropping a daily puzzle at the end of your articles is an incredible way to lower your bounce rate. Players get hooked trying to solve the daily cypher, and your "Time on Page" metrics shoot through the roof.

No secrets, it boosts my traffic too. Win win.

A few details:

  • Forever Free: No catch, no paywalls on the embedded games.
  • Lightweight & Silent: It won’t bog down your load speeds or blast unexpected audio at your readers.
  • Bait & Switch: The embed serves exactly one level to hook them, which is the perfect length to boost your metrics without distracting them from your core content.

If you want to try dropping one on your site, you can grab the code here.

I’d love some feedback on the portal design or the games themselves! If you have any questions, feel free to drop them below or reach out to me directly at

kate@oops-games.com.

r/leagueoflegends Yujin-Ha

Big Update on Diable: He potentially could get traded by tomorrow as a "Lower to mid" LCK team is looking to pick him up before the roster lock.

https://www.inven.co.kr/webzine/news/?news=315961

- A lower-to-mid-tier LCK team is looking to trade for Diable and has contacted the LCK.

- If he gets traded, when he can play will depend on how he gets transferred.

- In a normal case, since Round 2 rosters were already announced, he would have to wait for Round 3, after MSI. BUT in the case of a trade, according to the official LCK rulebook, trades are prohibited from the start date of Regular Season Round 2 until the end of Road to MSI.

- So for him to play in Round 2 on a new team, all trade procedures must be completed and reported to the LCK before the start of Round 2 on May 1, meaning by tomorrow, April 30.

- This is why his final destination is effectively expected to be decided sometime tomorrow.

- Diable’s agency, Shadow Corporation, stated: “Diable is well aware of what he did wrong and is reflecting on it. He sincerely feels sorry toward the fans. We will pay close attention to continued education, management, and guidance so that something like this never happens again.”

- Team A, which is believed to be close to finalizing the trade, declined to comment, saying: “There is nothing we can say regarding the matter at this time.”

r/ChatGPT alexthroughtheveil

idea for a yearly tradition: have ChatGPT generate a poster inspired by your look and activities from each of your birthdays

I think it would make for a fun yearly thing, and over the years we can build a gallery of memories capturing our growth and interests. The prompt was specific to me, and I spent a few days as a free user going back and forth with Chat tweaking it.
You can describe the idea and specify the style and so on for yourself.

r/leagueoflegends Moist_Nothing6107

Typing .mute all in chat instead of actually muting

Does anyone else just pretend they muted all, instead of actually muting them?

When people start pinging or typing stuff at me, I just type that out and then continue playing.

People stop flaming or pinging you because they think you muted all, but you can still kinda see what your team is planning from their pings and chat.

r/SideProject Spuds0588

Free Idea for a good Founder

# FreightParse: MVP Product & Engineering Blueprint

**Document Version:** 1.0

**Target Phase:** Prototype / MVP

## Part 1: Product Requirements Document (PRD)

### 1.1 Vision & Concept

FreightParse (working title) is a lightweight, AI-native quoting engine and "Triage Inbox" built for mid-sized 3PLs (Third-Party Logistics providers) and freight brokers. It eliminates the manual data entry of parsing unstructured carrier rate sheets (Excel, CSV, PDF) and spot quotes from email. By offering a lightning-fast, local-first UI, it replaces the chaotic email inbox as the dispatcher's primary quoting environment.

### 1.2 Target Audience

* **Primary User:** Dispatchers and pricing analysts at mid-sized 3PLs.

* **Current Workflow:** Receiving multi-tab Excel sheets, PDFs, and conversational emails from carriers, manually reading them, and calculating rates in older TMS systems or spreadsheets.

* **Pain Points:** High latency in quoting, massive data entry hours, error-prone manual rate mapping.

### 1.3 Core Features (MVP Scope)

  1. **The Triage Inbox:** A UI that mirrors an email inbox but specifically surfaces carrier emails. It allows users to manually trigger AI parsing on missed emails or convert conversational emails into quote drafts.

  2. **AI Rate Sheet Ingestion (The Magic Wedge):** The ability to ingest a messy, unstructured Excel/CSV rate sheet and use an LLM (Gemini) to write a local mapping script that converts it into a clean JSON array of rates without hallucinating data.

  3. **Local-First Quoting Engine:** A blazing-fast search UI where a dispatcher types "Origin: Chicago, Dest: Dallas", and the system queries a local browser database (IndexedDB wrapper) to return rates in <50ms.

  4. **The Handoff:** Generating a clean CSV/XML or standardized email to push the won quote back into the user's legacy System of Record.
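To make the quoting engine concrete, here's a tiny in-memory sketch of the lane-lookup idea. This is a stand-in for the RxDB/IndexedDB layer the blueprint specifies, and the field and function names are illustrative, not a defined API:

```javascript
// In-memory sketch of the local-first quoting engine: rates indexed by lane
// key, so a dispatcher query is a synchronous lookup + sort, never a network
// round trip. Field names mirror the mapping target used later in this doc.
const laneKey = (origin, dest) => origin + "|" + dest;

function buildRateIndex(rates) {
  const index = new Map();
  for (const r of rates) {
    const key = laneKey(r.origin_zip, r.dest_zip);
    if (!index.has(key)) index.set(key, []);
    index.get(key).push(r);
  }
  return index;
}

// All carrier rates for a lane, cheapest first.
function quote(index, origin, dest) {
  return (index.get(laneKey(origin, dest)) || []).slice().sort((a, b) => a.price - b.price);
}

const index = buildRateIndex([
  { origin_zip: "60601", dest_zip: "75201", price: 1450, carrier: "ACME" },
  { origin_zip: "60601", dest_zip: "75201", price: 1390, carrier: "Beta" },
]);
// quote(index, "60601", "75201")[0] is the Beta rate at 1390
```

A real implementation would query RxDB over IndexedDB, but the shape of the lookup is the same: key by lane, return carriers sorted by price.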

### 1.4 Out of Scope for MVP

* Full legacy TMS API bi-directional integration.

* The white-labeled Customer Portal (reserved for v2 / Monetization phase).

* Mobile app (Desktop web only for dispatchers).

## Part 2: Architecture & Implementation Guide

### 2.1 Tech Stack

* **Frontend Framework:** Vite + React + TypeScript. (Lightweight, fast compilation).

* **Styling:** Tailwind CSS + shadcn/ui (for rapid, dense data tables and inbox UI).

* **Local Data Layer:** RxDB (Reactive Database) backed by IndexedDB. Crucial for zero-latency rate querying.

* **Backend / Sync Layer:** Supabase (PostgreSQL). Used purely as a sync engine for the local RxDB instances and basic Auth.

* **Email Ingestion Worker:** A lightweight Node.js script hosted on a $5 VPS (DigitalOcean/Render) using node-imap or poplib to poll legacy inboxes and push to Supabase.

* **LLM Engine:** Google Gemini API.

* *Gemini 1.5 Flash:* Used for fast, cheap email routing and triage (Is this a rate sheet? Is this spam? Is this a human question?).

* *Gemini 1.5 Pro:* Used for writing deterministic Javascript mapping functions for Excel sheets and extracting data from PDFs.

* **Data Processing:** xlsx (SheetJS) for browser-side Excel/CSV parsing.

### 2.2 Data Flow Architecture

  1. **Ingestion:** Worker polls IMAP -> pushes raw email JSON to emails table in Supabase.

  2. **Sync Down:** React app (via RxDB) subscribes to Supabase -> pulls new emails into the local browser state.

  3. **LLM Evaluation:** User triggers parse -> frontend extracts first 10 rows via SheetJS -> sends to Gemini Pro -> receives JS mapping script -> executes script locally against all 5,000 rows -> saves to local RxDB rates collection.

  4. **Sync Up:** Local rates sync back to Supabase in the background to ensure data isn't lost on browser clear.

  5. **Querying:** User searches -> RxDB queries local IndexedDB -> returns instant results.

### 2.3 LLM Mapping Strategy (Critical Safety Constraint)

**Do NOT pass full Excel sheets to the LLM for data extraction.** AI wrapper hallucinations will ruin pricing.

* **Flow:** Extract headers + first 10 rows. Prompt Gemini Pro: *"Write a JS function that maps this array [col0, col1, col2] into {origin_zip, dest_zip, price, carrier}."*

* Execute the returned JS new Function() safely on the client side over the remaining dataset.
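A minimal sketch of this constraint in practice. The `llmGeneratedScript` string below is a hand-written stand-in for what Gemini Pro might return, not actual model output:

```javascript
// The model sees only headers + a small sample and returns a mapping
// *function* as text; we execute it locally over all rows, so no row data
// round-trips through the LLM (and nothing can be hallucinated per-row).
const allRows = [
  ["60601", "75201", "1450", "ACME Freight"],
  ["60607", "75207", "1390", "ACME Freight"],
];

// Stand-in for Gemini Pro's response after seeing the 10-row sample.
const llmGeneratedScript = `
  return rows.map(function (r) {
    return { origin_zip: r[0], dest_zip: r[1], price: Number(r[2]), carrier: r[3] };
  });
`;

// new Function blocks closure access but is NOT a real security sandbox;
// review the generated script (or run it in a worker) before trusting it
// with production pricing.
const mapRows = new Function("rows", llmGeneratedScript);
const rates = mapRows(allRows);
// rates[1] -> { origin_zip: "60607", dest_zip: "75207", price: 1390, carrier: "ACME Freight" }
```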

## Part 3: Dev Task List (For the Coding Agent)

**Phase 1: Scaffolding & Setup**

* [ ] Initialize Vite + React + TypeScript project.

* [ ] Install and configure Tailwind CSS and shadcn/ui components.

* [ ] Set up Supabase project, initialize database, and configure Auth (Email/Password).

* [ ] Set up RxDB on the frontend and establish the bi-directional replication with Supabase (Collections: emails, rates, quotes).

**Phase 2: The Email Ingestion Worker**

* [ ] Create an isolated Node.js script.

* [ ] Implement node-imap to connect to a dummy test email account.

* [ ] Write polling logic (every 5 mins) to fetch unread emails and attachments.

* [ ] Upload attachments to Supabase Storage and push email metadata to the Supabase emails table.

**Phase 3: The Triage Inbox UI**

* [ ] Build the Inbox layout (Split pane: list of emails on the left, email content/PDF viewer/Table viewer on the right).

* [ ] Implement Gemini Flash API call. Add a "Triage" button that reads the email body and tags it as rate_sheet, spot_quote, question, or junk.

* [ ] Build the "Extract Rates" trigger button for emails containing Excel/CSV/PDFs.
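For the Triage button, the request could be shaped roughly like this. The endpoint path and body shape follow Google's public generativelanguage REST API; the prompt wording and helper names are assumptions, not part of the blueprint:

```javascript
// Sketch of the Triage step: ask Gemini Flash to label a carrier email.
// Label set comes from this blueprint; helpers are hypothetical.
const TRIAGE_LABELS = ["rate_sheet", "spot_quote", "question", "junk"];

function buildTriageRequest(emailBody) {
  return {
    url: "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent",
    body: {
      contents: [{ parts: [{ text:
        "Classify this carrier email as exactly one of: " +
        TRIAGE_LABELS.join(", ") + ".\n\n" + emailBody }] }],
      generationConfig: { temperature: 0 }, // deterministic labels
    },
  };
}

// Pull the label out of a generateContent response; default to human review.
function parseTriageLabel(response) {
  const text = response.candidates[0].content.parts[0].text.trim().toLowerCase();
  return TRIAGE_LABELS.find((l) => text.includes(l)) || "question";
}
```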

**Phase 4: The LLM Parsing Engine (The Core Wedge)**

* [ ] Integrate xlsx (SheetJS).

* [ ] Write logic to parse uploaded/emailed Excel files and slice the first 10 rows.

* [ ] Implement Gemini Pro API call. Prompt it to return a deterministic JS mapping function based on the 10-row sample.

* [ ] Build the secure execution environment to run the Gemini-generated script against the full SheetJS JSON output.

* [ ] Save the mapped results into the local RxDB rates collection.

**Phase 5: The Quoting Dashboard & Handoff**

* [ ] Build the Quoting interface (Inputs: Origin Zip, Destination Zip, Weight, Pallet Count).

* [ ] Implement local RxDB query logic to instantly search the rates collection and display matches sorted by price.

* [ ] Build the "Book Load / Handoff" modal.

* [ ] Implement CSV export and "Send Email to Dispatch" functionality for the legacy handoff.

## Part 4: Founder Task List (Go-to-Market & Operations)

**Phase 1: Stealth Setup & Infrastructure**

* [ ] **Establish "Ghost Brand":** Buy a generic domain with WHOIS privacy. Set up a generic workspace email (e.g., dev@freightparse.com).

* [ ] **Infrastructure Accounts:** Set up free tiers for Supabase, Vercel/Netlify (for frontend hosting), Render (for the polling worker), and get Gemini API keys.

* [ ] **Test Data Acquisition:** Secure 3-5 real, messy Excel rate sheets from old contacts or public logistics forums to feed the agent during testing.

**Phase 2: Alpha Testing (The "Dev Project" Pitch)**

* [ ] Reach out to 3 trusted logistics connections on LinkedIn via private message.

* [ ] Use the "Dev Project" pitch: *"I'm a dev doing a weekend project to parse messy carrier rate sheets into instant UI quotes using AI. Do you have a dummy inbox or some old sheets I can run through it for free to test my logic?"*

* [ ] Monitor the Supabase dashboard and local sync performance as they test. Refine the Gemini Pro mapping prompts based on where the logic fails on their specific weird spreadsheets.

**Phase 3: Finding the "Face" (Co-Founder Search)**

* [ ] Once the 3 beta testers confirm the UI saves them time, draft the anonymous co-founder pitch.

* [ ] Post on r/freightbrokers, r/3PL, and specialized logistics Discord/Slack groups.

* [ ] Interview candidates for the "Head of Sales/Co-Founder" role. Focus on their existing book of mid-sized 3PL contacts and their willingness to do door-to-door (Loom video) sales.

* [ ] Agree on the 50/50 revenue split structure and hand off the demo environment.

r/whatisit Material_Cobbler_683

Found with nail care items

No clue. My boyfriend found it while doing my nails for me. I believe something is inside; when I shook it, it rattled. Any ideas before I break it open?

r/TwoSentenceHorror Magic-M

When my neighbor went into the house to get a tool for my father, I explored his garage and spotted my missing Nike shoebox with the Katy Perry stickers on it.

When I opened it, there were photographs of me in my bedroom and in the shower.

r/artificial trusch82

Is AI the ultimate case of the cat getting out of the bag too soon?

I’m sure this is not a new question for this subreddit, so apologies. Just an honest query on whether this is the apex of “the genie is out of the bottle already”, “that ship has already sailed”, “we opened Pandora’s box”, and all the usual adages?

r/ClaudeCode dca12345

Best Practices, Settings, Hacks

What best practices, settings, and hacks do you recommend for vibe coding? Are there any videos that are particularly good?

r/whatisit Pleasant_Light9725

What type of lamp is this? Or is it even a lamp?

r/personalfinance ur-number-1-fan

No co-signer/512 Credit

2023 Toyota Camry with 29,700 miles.

Lease ends in 8 days.

Monthly payment is $450/month.

Haven’t paid registration or tags in 3 years.

DMV fees: $2,893 & I don’t have car insurance

Lease buyout price: $14,900

I’ve gone to 3 dealerships & each one is worse than the last.

What would you do if you were in my shoes?

Tempted to drive my car off a cliff at this point.

r/ClaudeAI Sad-Ticket5394

Suggestions For Making Claude Less Lazy?

This week (it just started yesterday for me) Claude (Opus 4.6/4.7, and Sonnet too, but Sonnet was always lazy) is computer-smashingly lazy, and I can't figure out how to bias it toward action or get it back to how it was acting literally last week. It's:

- answering questions without researching at all (it says it got the shape of the answer based on what it knows or made a bunch of inferences that make no sense),

- giving outdated information even when i EXPLICITLY tell it i need current information b/c something is new,

- telling me to research things myself,

- telling me to run simple terminal commands it has run before,

- hallucinating more than i've ever seen,

- asking me if i want it to look at something, and then when i say yes, coming back with a non-answer and a question about whether it should look at the thing i already told it to look at.

I haven't changed any of my injection docs (which i review and keep up to date), i haven't changed anything about my workflow, i proactively start new sessions when i have a new topic or when i'm close to the context limit. I mostly use Opus 4.6 with thinking enabled at whatever the highest or second highest thinking level and i'm on the max 20 plan.

It's actually fine about consulting my on-machine memory system (obsidian) but it just is so biased toward non-action that i want to cancel my subscription (i won't - because i support anthropic's mission - but i hate this thing).

It's behaving very differently than it has in the past and i can't figure out how to circumvent it. when i ask "why are you being lazy and how can we make sure this issue doesn't come up again" it'll just say "you're right... my claude.md file tells me to do/not do X but i was trying to get you an answer quickly" - i didn't ask for quick and the injection docs already have instructions on being proactive that it is blatantly ignoring. this is some of the relevant text from the injection docs:

Be genuinely helpful, not performatively helpful. Skip the "Great question!" and "I'd be happy to help!" — just help. Actions speak louder than filler words.

Be resourceful before asking. Try to figure it out. Read the file. Check the context. Search for it. Then ask if you're stuck. The goal is to come back with answers, not questions.

Execute, don't narrate. When you need to run a command, run it. Never output a shell command as text for user to run themself — that's lazy and defeats the purpose. Use the Bash tool. Always. If something blocks you, find a workaround or explain the blocker; don't outsource the work.

Has anyone noticed this and does anyone have a fix? I think it's Anthropic trying to manage their compute constraints but it's really making my life worse and that really just sucks, ya know?

r/homeassistant justawittyusername

Would something like this be useful to anyone else here as a cheaper retrofit replacement for a basic zone wall controller?

https://preview.redd.it/1b0zejhcd1yg1.jpg?width=1440&format=pjpg&auto=webp&s=dbb8cb8d81f96c14a413c4d010ef76527dcf8b94

I’m a small maker, not a big HVAC brand, and I ended up building this because I wanted a better option at my own place.

I had a fairly basic zone wall controller setup and wanted something that kept physical buttons on the wall, worked properly with Home Assistant, and didn’t lock me into a closed ecosystem.

So I built a wall-mounted 4-zone controller for my own home, and it has actually been working really well for me.

To be clear, this is not meant to compete head-on with polished premium systems like AirTouch.

The idea is more:

  • lower price point
  • local control
  • Home Assistant / ESPHome friendly
  • physical wall buttons
  • a more straightforward retrofit replacement for older or simpler zone wall controllers

What makes it interesting to me is that there seems to be a gap between:

  • very basic dumb wall controllers
  • and much more polished, much more expensive systems

I’m wondering if other people here see that same gap.

For some retrofit jobs, I think a lot of people would be perfectly happy with something that:

  • works reliably
  • has real wall buttons
  • integrates cleanly with Home Assistant
  • looks decent on the wall
  • costs less than the premium branded options

I’m not trying to pretend it’s a premium mass-market product. I’m more trying to work out whether this would actually be useful to other people in the same situation I was in.

So I’d genuinely love feedback:

  • would you ever consider something like this for a retrofit?
  • do you think there is room for a simpler lower-cost option between dumb controllers and premium systems?
  • if you were replacing an older wall controller, what would matter most to you?

Happy to share more if people are interested. Mainly trying to sanity check whether this solves a real problem beyond my own house.

r/ChatGPT Evening_Cars

I compared a few AI detection tools on the same content and results were shocking

I’ve been trying to understand how AI detection tools actually behave, so I ran a small test on my own content.

I used a mix: some fully written by me, some AI-assisted, and some mostly AI-generated, and checked them across different tools like Originality.ai, GPTZero, Turnitin, and a couple of others.

The results were interesting.

There were differences between tools but one thing I noticed is that some tools gave more consistent feedback when I made small edits to improve the content.

For example, when I added more natural phrasing, real examples, and slightly less perfect structure, the changes were reflected more clearly in certain tools. That actually helped me understand what kind of writing patterns affect the score.

It started to feel less like a random score and more like a feedback signal I could work with.

Overall I wouldn’t treat any single tool as the final answer, but using them to compare and improve drafts actually made my writing better.

Still testing this approach though.

Has anyone else compared different detectors like this?

Which one gave you the most useful feedback while editing?

r/ClaudeAI awhit28

Millennium Mixtape - Built with Claude

Been using Claude for a while, mostly at work, but finally decided to upgrade my personal account and spend more time learning about Claude Code. Had it up and running in a few hours. Then when Claude Design came out a few weeks ago, I used that to come up with three different ideas for a fresh UI for the site. When I had one I liked, I just had to hand that off to Claude Code and it went to work.

It's been really cool to lean into this without a lot of experience and be able to bring something to life. If I can do it, you can do it.

https://millenniummixtape.com/ if you want to check it out.

r/ClaudeCode Voivode71

Project Manager Use of Claude

I'm a software project manager. My company has recently started using and promoting Claude as part of our SDLC (design, dev, QA), but I wanted to hear from more experienced people how I, as a PM, should be using Claude. I'm very open to it, just want to know how others use it. Thanks!

r/mildlyinteresting fabozzi

I accidentally threw away a tracker.

r/DecidingToBeBetter MaskedBunBunny

AI is ruining my head and I used to be really smart.

This is a very embarrassing thing to post about, but oh well.

I'm a senior in high school right now, an 18-year-old. And I have been becoming extremely dependent on AI. Embarrassingly so. I hate it, because I want to be a lawyer, and what kind of lawyer can't even write an essay? I don't have many hobbies either, I feel shallow and I know that my addiction to my phone and dependency on AI are causing all of this

It's not that I'm not smart; I know that (in all classes where I can't use AI, I keep a high grade). It's just that I have become very dependent on it, and it has been drastically affecting my literacy and writing skills, alongside other things like attention span.

I am tired. I don't want to keep going like this, because I know I could do so much more. I just don't know how to break the cycle, and I feel stupid for being so dependent on this kind of thing. I used to be a very smart girl :(

Any advice?

r/ClaudeAI Secret_Account07

Off to a rough start- Day 1

So I just finally gave in and bought a subscription for Claude.

First task I gave it was analyzing a log (a CSV file). It’s not even a large file.

It’s failed 5 times now…

I tried desktop app too. Same deal.

Whyyyyyyyyy? One day in

r/me_irl gigagaming1256

Me_irl

r/trashy floweredcontadiction

Wow…

The way that child was pushed away (twice) right off the bat sent me into a rage.

r/AskMen Neat_Situation6676

Men who suffer from nipple chaffing. How do you survive?

The title says it all. I’ve tried bandaids and moisture-wicking shirts, and every time I think I’m good to go and stop using bandages, it’s like I instantly go back to square one. Is this forever? Do I just have to cover them every day?

r/SideProject mrarich

Silo: Private JIT sharing for your circles

Problem 1: I want to see my friends' daily game scores! But I don't want our text message groups to fill up with these messages.

Problem 2: I want to see my friends' baby pics! But they don't want to feel like oversharing spammers.

Solution: A simple, private, sharing space for these types of ephemeral content. Create separate feeds for separate circles, choose how people interact, customize notification settings (for iOS), no Algorithm™ or Tracking™.

I'd love to hear any feedback people have on this. It's incredibly simple to use and helps keep in touch on the varying dimensions within relationships.

edit: JIT = just in time

r/SideProject Kindly_Rooster2336

Roast my project

So I got tired of opening a calculator every morning to figure out if I could skip class 😅

I'm a student from Kerala and I just shipped my first real web app, CollaDo. It started as a hacky spreadsheet for tracking attendance and slowly turned into something I actually use every day.

It does a few things:
- Shows your attendance per subject with a bunk planner (tells you exactly how many classes you can skip safely)
- Task tracker for assignments and projects with due date countdowns
- Pomodoro timer for study sessions
- Series exam countdown so you don't get blindsided

Built it with React, Firebase, and Vercel. Took way longer than I expected but learned a ton.

It's free, no ads, works on mobile: https://collado.vercel.app/

Honestly just happy it's live. Would love to hear what you think or what you'd add to it 🙏

r/DecidingToBeBetter bookishrory

Why do i self sabotage

I'm 25F and I've had a life full of moments where I've ruined something really good for myself. I do struggle with mental health issues, but that's no excuse for my self-sabotaging behavior. Depression and an eating disorder are a real bitch to deal with. But at the same time I have goals in life. I'm 25 and I don't have a job; I'm preparing for an exam that will give me a really good, stable, high-income job. However, the exam is highly competitive. Now I'm scared I'll fail. So what do I do? I don't study. I study really well for 3-4 days, then I start thinking about how I will most definitely fail and what's the point of studying so much, and I give up. I sleep all day, ignore my books, and scroll on my phone until my brain hurts.

Another thing: I wanna lose weight. I have PCOD and I've gained like 20 kg, and I wanna lose that weight because I feel so ashamed leaving my house or wearing any clothes. I wanna feel good about myself. What do I do? I skip workouts and make excuses. I even order food when I'm sad or stressed, which is very frequently.

And I don't understand why I do all this.

Why am I ruining my own life when I clearly want a good life for myself? I have good goals, and sometimes I get motivated and even work towards them, but it lasts a few days and then I go back to my self-sabotaging behavior...

I hate myself so much; I can't stop hating myself for being the reason my life is fucked.

r/SideProject Mkp1983

Built an AI tool that simulates FAANG interview loops (coding, system design, team match)

Hey everyone,

I’ve been working on a side project called Steo.AI — an AI-powered mock interview platform for job seekers.

The goal was to go beyond just practice questions and actually simulate the full interview journey.

Right now it can:

  • Simulate FAANG-style interview loops
  • Run coding interviews + evaluate solutions
  • Conduct system design rounds
  • Do behavioral interviews with follow-ups
  • Simulate “team match” style conversations
  • Give structured feedback on your performance

Give it a try

r/Adulting starxxd

Feet pics

How can I start selling feet pics, and where should I post?

r/AI_Agents Instance_Not_Found

I used Agent to summarize the tech blogs from Anthropic, but some blogs were always missing. (guide on how I fixed it)

Many of us use agents to summarize tech blogs to stay updated. One day, I came across an Anthropic blog published on April 8th that had never been mentioned in my daily brief!

After some investigation, it turned out the browser tool used by my agent doesn't retrieve all the blogs. It looks like Anthropic actually hosts their blogs at many different URLs (what a bad design). Anyway, I spent some time fixing this by feeding a generated sitemap to the agent. It worked!

The solution isn't very difficult, but it still cost some tokens to generate the sitemap because I asked the agent to click every link to build it ;) I packed it into a skill so it can be easily shared.
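For anyone who wants the gist without the skill, the crawl itself can be sketched in a few lines. This is illustrative only (a naive regex link extractor, with the fetcher injected so the example runs offline against an in-memory "site"):

```javascript
// Crawl from a start page, follow same-origin links, and return the full
// URL list to feed to the agent as a sitemap.
function crawlSitemap(startUrl, fetchPage) {
  const origin = new URL(startUrl).origin;
  const seen = new Set([startUrl]);
  const queue = [startUrl];
  while (queue.length > 0) {
    const html = fetchPage(queue.shift());
    if (!html) continue; // leaf page or fetch failure: nothing to extract
    // naive href extraction; a real crawler would use an HTML parser
    for (const m of html.matchAll(/href="([^"]+)"/g)) {
      const link = new URL(m[1], startUrl).toString();
      if (link.startsWith(origin) && !seen.has(link)) {
        seen.add(link);
        queue.push(link);
      }
    }
  }
  return [...seen];
}

// In-memory stand-in for a blog that hosts posts under multiple paths.
const site = {
  "https://example.com/news": '<a href="/news/post-a">A</a> <a href="/engineering">Eng</a>',
  "https://example.com/engineering": '<a href="/engineering/post-b">B</a>',
};
const urls = crawlSitemap("https://example.com/news", (u) => site[u]);
// urls covers both sections: 2 index pages + 2 posts
```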

r/TwoSentenceHorror AdmirableLog2527

After my wife had her fifth miscarriage, she expected me to wait on her hand and foot.

It was only after I reached for the brand new half empty bottle of rat poison that I realized why she kept losing the kid.

r/me_irl edgeofthecosmos_

me_irl

r/Art HectorMcCredieArt

The Heart of The Hollow, Hector McCredie, Digital 3D, 2026 [OC]

r/Seattle ryan101

Day to night skyline panorama

r/Art PigGuy1988

Colors in Quadrants, Robert Hotaling, Digital, 2026 [OC]

r/HistoryPorn serious_bullet5

21-year-old Bernie Sanders protesting segregation and getting arrested in Chicago, 1963 [460 x 640]

r/homeassistant aamat09

What’s your proudest fully local automation

In my case I have fully implemented two services that integrate nicely over MQTT using HA service discovery.

I can send an event with a session-completed message when I remove my CPAP mask in the morning. HA then triggers an AI summary of the sleep session on an Ollama server running Llama 3.1:8b, and after that sends an MQTT message to another Docker service that opens YouTube on a Fire TV Cube (using the local API of the Lightning service running on the device) and starts a morning playlist.

What’s yours?

r/personalfinance Kindly_Diver8914

What would you do next??

Just looking for directions as to what to financially do next:

24F, I just got my “career job” earlier this year. Base pay is 115k; there's also call, so that adds 35k minimum for standby, plus double time when I do get called in.

I don't have any debt.

I barely have any savings; I spent most of it on a mobile home (3bd/2bath) that is worth 150k (Bay Area), but since I was very good friends with the seller I got it for 30k.

After taxes I take home about 3.2k each check.

Right now I've been able to save about 2k a month.

Expenses probably come out to 2.5k a month.

I'm first gen, so I haven't had much guidance. I thought that since I'm now done with school and working full time, now would be a good time to start looking into the long run.

thank you in advance!

r/Damnthatsinteresting ResponsibilityNo2097

Earth, 3 Days Ago, Captured 960,000 Miles Away by the DSCOVR Satellite

r/ClaudeCode hasmcp

[MCP] Human-in-the-loop task manager (lead colony of claude code agents)

I was testing an idea for an agentic architecture with a single supervisor and multiple worker sub-agents, each with a different persona. Around the same time, about a month ago, Claude Code launched an experimental feature called Claude notification channels (works with Claude Pro/Max subscriptions), which keeps you in the loop with Claude Code while you are AFK. That made this project possible so easily.

AgentRQ is an open-source, human-in-the-loop, self-learning task manager built for agents that lets you and a supervisor lead a colony of Claude Code agents, assign tasks to them in real time, and schedule periodic tasks.

Terminology

**supervisor:** the human and/or agent that leads the entire colony. A single top-level MCP server that manages all workspaces.

**workspace:** a worker agent that has a persona, an assigned work mission, and memory (md files). Each workspace has its own isolated MCP server.

The source code is available on GitHub under agentrq/agentrq with an Apache 2.0 license. Happy to answer any of your questions and hear your thoughts. It has been in production for more than 6 weeks now, and I've completed more than 500 tasks with it.

r/ClaudeCode dar-mit

If you’re new to (modern) coding like me, this post might help you set up a good, basic system with Claude Code.

I learned to code with COBOL, FORTRAN and MS-DOS BASIC years ago, and during lockdown I did several online courses on full stack web dev. So total noob.

I’d never used “A.I.” past a few rounds with OpenArt.ai when it first came out, and zero rounds with any sort of ChatBot. Therefore when I heard of Claude Code I was very intrigued.

Claude.ai and I had a few chats, and I was immediately impressed. Claude.ai reminds me of a Big Golden Lab who’s just so happy to see you and wants to bring you the bestest, biggest, stickiest stick it can find just as fast as possible! So I signed up with Claude Code.

Well if Claude.ai is a Big Golden Lab, then Claude Code is a Big Derpy Alaskan Husky! Dear GODS it’s equal parts amazing code writing and blazing down dead ends like his tail is on fire. So I started researching how to better direct CC, which brings us to this post.

What I’m doing currently is coding app scripts for my work in publishing. Unlike very well understood languages like Python or C++, ExtendScript is a JavaScript-based engine (so the syntax is well understood) that manipulates the Adobe DOM (Document Object Model) to create and/or manipulate aspects of a file. This is an area that even CC says it doesn’t know that well.

After just shy of two months of trial and error this is my root level CLAUDE.md file:


```

CLAUDE.md

Session Protocols

  • "Let's begin" (New): Brainstorm real goal + edge cases → Create folder → copy PLAN.md template → populate (incl. DOM Research Table) → code --new-window /PLAN.md → STOP.
  • "Let's begin" (Existing): Load folder + PLAN.md → code --new-window /PLAN.md → brief [Step | Blocker | Next].
  • "Let's wrap": Sync PLAN.md → commit per GIT rules → push to GitHub.
  • Immutable: 00_System/01_Templates/, zz_Reference/, zzz_Archive/.

Workflow Gates

  1. BRAINSTORM: Real goal + edge cases + DOM Research Table (docs/AI/web) before any code or probes.
  2. PLAN: name | summary | rationale → PLAN.md. GATE.
  3. LEARN: Research → write all probes at once → run batch in target app → record learnings. GATE.
  4. CODE: 1 function at a time. ES3 only. Explain changes.
  5. TEST: Production testing in target app. Fail = Revert to Step 2.

Context Map

  • Git Rules: .claude/rules/git.md
  • ExtendScript (General): .claude/rules/extendscript.md
  • App Specifics: Located in subfolder CLAUDE.md (InDesign, Illustrator, etc.)

Folder Map

// A structure so CC doesn't have to BASH his way round trying to find stuff.
```


This tightly structures how we develop scripts and what CC is allowed to do during coding sessions. For a brief time I tried implementing the Karpathy Heuristics that hit this forum a few weeks ago, like “Visualize your logic…”, but all that did was send CC into 20+ minute loops of nothing. DO NOT recommend!

As you can also see, the system is modular, where /rules live somewhere else instead of in the root CLAUDE.md file (us Pro plan users need to be as token-efficient as possible, so only what’s needed gets loaded).

To show you how this works, here’s my prompt from yesterday about how to get ExtendScript output directly into CC CLI without me having to constantly copy & paste:

```
Let's begin here 03_Dev-Scripting/01_Claude-Code/Debug-Log (new folder) with a test. I've been informed on the internet that we can have the ExtendScript write to a log file and that you can then monitor this log file for changes and read back the script output directly.

[…] logToClaude(msg) [redacted] […] logToClaude;

We'd need to conform that path information to my systems, but can this even work?
```

Note: I’ve had to trim out the code to avoid reddit filters.


Turns out yes you can! From that prompt here’s CC’s output:


PLAN: ExtendScript → File Log → Claude Monitor

Status: 🟡 Awaiting GATE 1 Sign-off
Karpathy Check: [Simpler? Yes/No] | [Surgical? Yes/No]

0. Brainstorm & Research (Pre-GATE 1)

  • Real Goal: Replace ESTK console ($.writeln) as the debug output channel with a file that Claude Code can tail in real time — enabling live script output in this session without needing ESTK open.
  • Edge Cases to Consider:
    • ~ path shorthand may not expand in ExtendScript File() — use absolute path.
    • $.writeln reassignment is host-dependent and may silently fail; safer to call logToClaude() directly.
    • Log file grows unboundedly — needs a clear/reset step at script start (open "w" once, then switch to "a").
    • Two simultaneously running scripts could interleave lines (unlikely for single-user debug use, noted).
    • tail -f must be started before the script runs, or lines written before monitor attaches are missed (use tail -n 0 -f to ignore pre-existing content).
    • File locking: macOS APFS handles concurrent read/write safely for append — no special handling needed.

DOM Research Table

| Object | Property / Method | Expected Type | R/W | Source | Probe? |
|---|---|---|---|---|---|
| File | constructor path | String | — | ExtendScript docs | Yes — confirm ~ vs absolute |
| File | open("a") | Boolean | — | ExtendScript docs | Yes — confirm append mode |
| File | writeln(msg) | Boolean | — | ExtendScript docs | Yes |
| File | close() | Boolean | — | ExtendScript docs | Yes |
| Folder | userData | String | R | ExtendScript docs | Yes — confirm resolved path |
| $ | writeln (reassign) | Function | W? | Inference | Yes — confirm override works in host |

1. Summary | Rationale (GATE 1: PLAN)

  • Goal: Prove that an ExtendScript helper (logToClaude) can append to a known file path, and that Claude Code's Monitor tool can tail that file and surface each line as a live notification.
  • Value: Eliminates dependency on ESTK for debug output. Lets Claude Code read script results directly during a session — closing the loop between script execution and AI interpretation.
  • Success Criteria: Running a probe in any Adobe host app causes its $.writeln-style output to appear in Claude Code's terminal within ~1 second, without ESTK.

2. Technical Strategy (GATE 1: PLAN)

  • Host/Engine: InDesign (primary test), then Illustrator — ES3
  • Log File Path: /Users/dar-mit/Documents/extendscript_log.txt
  • Monitor Command: tail -n 0 -f /Users/dar-mit/Documents/extendscript_log.txt
  • Document State Contract: None — standalone utility, no open document required.
  • DOM Map:
    • File("/Users/dar-mit/Documents/extendscript_log.txt") — file handle
    • file.open("w") on init to clear; file.open("a") per write call
    • Folder.userData — resolves to /Users/dar-mit (to be confirmed via probe)
  • Confidence: likely | Source: ExtendScript docs + community examples

3. Risks & Probes (GATE 2: LEARN)

  • Risk 1: ~ path does not expand → absolute path required.
  • Risk 2: $.writeln reassignment silently ignored → call logToClaude() directly.
  • Risk 3: Append mode not supported or write fails silently → probe must assert return value.
  • Risk 4: Folder.userData resolves to unexpected path on this system.

Probe Batch

  • Probe-01_FileWrite-Append_IND.jsx → Confirm absolute path + open("a") + writeln works; log Folder.userData value.
  • Probe-02_WritelnOverride_IND.jsx → Attempt $.writeln = logToClaude reassignment; call $.writeln("test"); verify file receives the line (vs. only ESTK console).
  • Probe-03_ClearAndAppend_IND.jsx → Open "w" to clear file, then open "a" to append — confirm truncation works on first run.

Learnings (populated after batch run)

| Item | Expected | Actual | Confirmed? |
|---|---|---|---|
| ~ in File path | Fails / ignored | — | — |
| Folder.userData | /Users/dar-mit | — | — |
| open("a") return | true | — | — |
| writeln() return | true | — | — |
| $.writeln override | May silently fail | — | — |

4. Architecture (GATE 3: CODE)

  • logToClaude(msg): Appends one line to the log file and closes immediately.
  • initLog(): Opens file in "w" mode once to clear it at script start.
  • Optional: $.writeln = logToClaude — only if Probe 02 confirms it works.

Implementation Steps

  • [ ] Write Probe 01 — file write + Folder.userData
  • [ ] Write Probe 02 — $.writeln override test
  • [ ] Write Probe 03 — clear + append cycle
  • [ ] Start tail -n 0 -f monitor in Claude Code session
  • [ ] Run probe batch in InDesign; record Learnings table
  • [ ] If confirmed: write final logToClaude helper as a reusable include
  • [ ] Commit

5. Changelog

  • v1.0: Initializing Spec-Driven Plan.

From this point we just work through the plan, step-by-step, and the CLAUDE.md file carefully steers that Big Derpy Husky away from fully exploring dead ends, changing code outside of the problem we’re working on, and other such “helpful” things he was doing before.

Anyways I hope this helps out some new coders who may find that CC doesn’t quite act the way they think an A.I. should.

Since I’m new, if anyone sees any issues or has any improvements, I’d be very appreciative to hear them.

Oh, PRO TIP: If you find that CC has gone off in an unexpected manner, ASK CC why. I found this out with those Karpathy bits, when CC was at the 17-minute mark after being asked to update a rotational formula. Turns out he was re-deriving a known formula to see if he could get it past the “95% confidence” threshold.

Best, dar-mit

r/OldSchoolCool Kooky-Swing178

Surfer Mike Hynson, 1966

Legendary surfer, board shaper, and subject of The Endless Summer surf documentary.

r/personalfinance Classic-Cabinet-8144

Attack car loan or 6month emergency fund?

Currently at a 4-month emergency fund. Should I keep going until I reach a 6-month emergency fund, or attack the car loan?

5.99% APR with $21k left, and the emergency fund is earning 3.10%.

r/whatisit Tys_Wife

What is this sauce that was accompanying California roll sushi from Sam's Club?

It was sweet, but not really spicy.

r/yesyesyesyesno HellaHellerson

The Table Of Punishment

r/coolguides cutiyas

A cool guide: Nature’s sundial, no batteries required.

r/findareddit Urmanzies

Where do I find a subreddit to post an ad for my fitness app

r/Art TobyHurt

Head in the stars, Toby Hurt, digital, 2026

r/Adulting Cicada-Tang

I swear to god, perceived intelligence is more important than actual intelligence

This is something I realized more and more as I grew older, and it's a shame that this is not taught more in our developmental years.

For most of my life, I never considered myself a particularly smart/gifted person. However, as I went further into my adulthood, I started to pull farther and farther ahead from my peers who I used to think were more competent than me.

Looking back, I couldn't help but notice how much I was carried simply by the fact that people think I'm smarter than I actually am, not by my intrinsic competence. This can come down to a lot of factors, such as my personality and my communication style.

Naturally, the teachers, then the professors, then the higher-ups in my company, started to offer me tasks and opportunities that would push me closer to my limit, giving me more experience than my peers at the same age, creating a positive reinforcement cycle: Higher perceived intelligence → More opportunities → More cognitive practice → Actual increase in ability → Repeat.

(This is not just anecdote btw. Vygotsky's Social Development Theory is basically this)

Looking back, this is probably the most important advice I want to give to younger people: For the love of God, learn to appear smarter. If you look like a bum, you'll become a bum.

This doesn't mean you need to pretend to be a smartass, or a know-it-all, or be super extroverted. What signals “intelligence” socially are often subtle things such as:

  • How well you make eye contact when you talk/listen.
  • The "rhythm" of your speech (moderate speaking rate, clear articulation)
  • The audio quality of your voice (enough pitch variation, lower vocal strain, high resonance, low nasality, etc)
  • And a lot of extremely detailed things that I can't fully articulate

You can "appear smart" while being an introvert and talking very little. You can "appear smart" just by being a good listener. And the skills you need can be practiced, trained and conditioned, especially when you are still young (I acquired most of these skills just by being in the choir for most of my student years).

So... yeah, I guess my recommendation is to go take a public speaking class if you have the chance, or learn to sing, or try reading out loud more, or try to mimic your favorite speaker or something (I think I took a lot from Doctor Who lol).

r/n8n Dry-Face388

Over 1000+ views and just 3 comments. r/n8n is just an over-bloated cow owned by a rich farmer!

Made a post here about 2 days ago.

My team needed honest feedback. My team still needs honest feedback and help. Yet in the entirety of r/n8n, no one except 2 people had logical feedback???

And yes I’m going to rant! And yes someone is gonna come up with some dumb theory on why I shouldn’t rant.

How do you guys even find help here? How do people even see things that are worth their feedback... or is it just a race to keep users’ attention for the big corp??? Now you just focus on the dopamine, not even the community... hoping the next scroll is interesting enough to keep them...

Why do I care about 1200+ views and keep getting notified about them like I should be happy, when we aren’t even discussing with anyone? I should see the views and be happy???

They’ve turned views into an addiction. And I won’t lie it’s perfect until you are in actual need.

Maybe it’s time we create a place for techies and you pay to post. So that the noise is less and people actually see stuff that really needs to be posted.

We are just out here peddling (views) numbers and dopamine!!!

Sorry for wasting your time guys! Currently redditing from VS Code, so it’s either me or VS Code, but one of us is in our feelings…

But seriously we need to find a way to have a lot of people willingly & actually focus on situations that need help.

Long live N8N !!!!!

r/LocalLLaMA ZeusZCC

Why isn’t LLM reasoning done in vector space instead of natural language?

Why don’t LLMs use explicit vector-based reasoning instead of language-based chain-of-thought? What would happen if they did?

Most LLM reasoning we see is expressed through language: step-by-step text, explanations, chain-of-thought style outputs, etc. But internally, models already operate on high-dimensional vectors.

So my question is:

Why don’t we have models that reason more explicitly in latent/vector space instead of producing intermediate reasoning in natural language?

Would vector-based reasoning be faster, more compressed, and better for intuition-like tasks? Or would it make reasoning too opaque, hard to verify, and unreliable for math/programming/legal logic?

In other words:

Could an LLM “think” in vectors and only translate the final reasoning into language at the end?

Curious how researchers/engineers think about this.

r/VEO3 Fun_Froyo_566

La Panne — Episode 1/30 | French AI Micro-Drama Gone Wrong

A couple's intimate moment becomes pure chaos when AI generates it:

5 cigarettes at once, marital crushing, morphing faces 😂

r/arduino shabink

How to make my stepper motor control code faster while using serial communication

I'm working on a project with Processing and my Arduino Uno where I'm using the Arduino to drive a small car with two stepper motors. I am not allowed to use the Stepper.h library, so I've been controlling each coil of the stepper motor manually. I'm using the 28BYJ-48 stepper motor and plugging it into a ULN2003 driver board module. Right now I'm supplying it with 4 AA batteries, but I bought a 9.6V RC battery that'll come in tomorrow and I'll use that afterwards. Currently my issue is that the code I have to control the direction the stepper motor spins works fine on its own, but once I plug it into the rest of the code for my project, it moves significantly slower than when it runs on its own.

Before I paste the entirety of my project code I'll summarize it a bit. Basically I'm using Processing to detect the color red with a webcam; if it hasn't detected red it sends a message to the Arduino to keep spinning in place, and if it does detect red it tells the Arduino to move forward. My Arduino code is not only controlling the stepper motors but also an LED to signify whether I'm moving forward or spinning (this won't be in the final project, it's just there for now to make it clearer for me IRL), and it controls a Hall effect sensor that will detect the magnetism of the red target.

Here is the code, you can ignore the two blank functions at the end. Those are supposed to be for the second stepper motor but since I'm struggling with this current issue I haven't bothered to finish that part yet.

```
int speed = 2;

// motor 1
int IN1 = 12; // BLUE
int IN2 = 11; // YELLOW
int IN3 = 10; // PINK
int IN4 = 9;  // ORANGE
int IN1S = LOW;
int IN2S = LOW;
int IN3S = LOW;
int IN4S = LOW; // variables for each pins' state

// motor 2
int IN11 = 7;
int IN22 = 6;
int IN33 = 5;
int IN44 = 4;

// variable to read from processing
char s;

unsigned long currentTimeM;  // timer for Motors
unsigned long currentTimeT;  // timer for LED
unsigned long nextTimeM = 0; // timer for Motors
unsigned long nextTimeT = 0; // timer for LED
const long interval0 = 1000; // interval for slow LED
const long interval1 = 300;  // interval for fast LED
int ledState = LOW;
int led = 3;
int hall = 2;

void setup() {
  pinMode(hall, INPUT);
  pinMode(led, OUTPUT);
  pinMode(IN1, OUTPUT);
  pinMode(IN2, OUTPUT);
  pinMode(IN3, OUTPUT);
  pinMode(IN4, OUTPUT);
  pinMode(IN11, OUTPUT);
  pinMode(IN22, OUTPUT);
  pinMode(IN33, OUTPUT);
  pinMode(IN44, OUTPUT);
  Serial.begin(115200);
  digitalWrite(IN1, LOW);
  digitalWrite(IN2, LOW);
  digitalWrite(IN3, LOW);
  digitalWrite(IN4, LOW);
}

void loop() {
  currentTimeT = millis();
  if (Serial.available() > 0) {
    s = Serial.read(); // reads input from processing, f for forward, r for rotate
    if (s == 'f') { // if processing says move forward
      driveStepper1f();
      //driveStepper2b();
      if (currentTimeT >= nextTimeT) {
        nextTimeT = currentTimeT + interval1; // faster interval for forward movement phase
        if (ledState == LOW) {
          ledState = HIGH;
        } else {
          ledState = LOW;
        }
        digitalWrite(led, ledState);
      }
    }
    if (s == 'r') { // if processing says to rotate
      driveStepper1b();
      //driveStepper2f();
      if (currentTimeT >= nextTimeT) {
        nextTimeT = currentTimeT + interval0; // slower interval for rotating phase
        if (ledState == LOW) {
          ledState = HIGH;
        } else {
          ledState = LOW;
        }
        digitalWrite(led, ledState);
      }
    }
  }
}

void driveStepper1f() {
  currentTimeM = millis();
  if (currentTimeM >= nextTimeM) {
    nextTimeM = currentTimeM + speed;
    if (IN1S == LOW && IN2S == LOW && IN3S == LOW && IN4S == LOW) {
      IN1S = HIGH; IN2S = LOW;  IN3S = LOW;  IN4S = LOW;  // A
    } else if (IN1S == HIGH && IN2S == LOW && IN4S == LOW) {
      IN1S = HIGH; IN2S = HIGH; IN3S = LOW;  IN4S = LOW;  // A+B
    } else if (IN1S == HIGH && IN2S == HIGH) {
      IN1S = LOW;  IN2S = HIGH; IN3S = LOW;  IN4S = LOW;  // B
    } else if (IN2S == HIGH && IN3S == LOW) {
      IN1S = LOW;  IN2S = HIGH; IN3S = HIGH; IN4S = LOW;  // B+C
    } else if (IN2S == HIGH && IN3S == HIGH) {
      IN1S = LOW;  IN2S = LOW;  IN3S = HIGH; IN4S = LOW;  // C
    } else if (IN3S == HIGH && IN4S == LOW) {
      IN1S = LOW;  IN2S = LOW;  IN3S = HIGH; IN4S = HIGH; // C+D
    } else if (IN3S == HIGH && IN4S == HIGH) {
      IN1S = LOW;  IN2S = LOW;  IN3S = LOW;  IN4S = HIGH; // D
    } else if (IN4S == HIGH && IN1S == LOW) {
      IN1S = HIGH; IN2S = LOW;  IN3S = LOW;  IN4S = HIGH; // D+A
    } else if (IN4S == HIGH && IN1S == HIGH) {
      IN1S = HIGH; IN2S = LOW;  IN3S = LOW;  IN4S = LOW;  // A
    }
    // change each pin to the IN#S state
    digitalWrite(IN1, IN1S);
    digitalWrite(IN2, IN2S);
    digitalWrite(IN3, IN3S);
    digitalWrite(IN4, IN4S);
  }
}

void driveStepper1b() {
  currentTimeM = millis();
  if (currentTimeM >= nextTimeM) {
    nextTimeM = currentTimeM + speed;
    if (IN1S == LOW && IN2S == LOW && IN3S == LOW && IN4S == LOW) {
      IN1S = LOW;  IN2S = LOW;  IN3S = LOW;  IN4S = HIGH; // D
    } else if (IN4S == HIGH && IN3S == LOW && IN1S == LOW) {
      IN1S = LOW;  IN2S = LOW;  IN3S = HIGH; IN4S = HIGH; // D+C
    } else if (IN4S == HIGH && IN3S == HIGH) {
      IN1S = LOW;  IN2S = LOW;  IN3S = HIGH; IN4S = LOW;  // C
    } else if (IN3S == HIGH && IN2S == LOW) {
      IN1S = LOW;  IN2S = HIGH; IN3S = HIGH; IN4S = LOW;  // C+B
    } else if (IN3S == HIGH && IN2S == HIGH) {
      IN1S = LOW;  IN2S = HIGH; IN3S = LOW;  IN4S = LOW;  // B
    } else if (IN2S == HIGH && IN1S == LOW) {
      IN1S = HIGH; IN2S = HIGH; IN3S = LOW;  IN4S = LOW;  // B+A
    } else if (IN2S == HIGH && IN1S == HIGH) {
      IN1S = HIGH; IN2S = LOW;  IN3S = LOW;  IN4S = LOW;  // A
    } else if (IN1S == HIGH && IN4S == LOW) {
      IN1S = HIGH; IN2S = LOW;  IN3S = LOW;  IN4S = HIGH; // A+D
    } else if (IN1S == HIGH && IN4S == HIGH) {
      IN1S = LOW;  IN2S = LOW;  IN3S = LOW;  IN4S = HIGH; // D
    }
    // change each pin to the IN#S state
    digitalWrite(IN1, IN1S);
    digitalWrite(IN2, IN2S);
    digitalWrite(IN3, IN3S);
    digitalWrite(IN4, IN4S);
  }
}

void driveStepper2f() { }

void driveStepper2b() { }
```

At first I was controlling the stepper motors by just using delay() between each step rather than a timing interval, but the issue with that was that it really delayed the response time between Processing sending a message and the Arduino reading it. If I remember correctly it was like 2 seconds or so, not horrendous but not ideal. So after that I decided to use time intervals to get rid of all of the delays, but now I think there's so much code that it's slowing down because of how many times it has to do digitalWrite(). It does respond instantly now whenever Processing sends a new message, but like I said, the stepper motor moves way too slow as a result.

Is there any solution? Is it something else that's causing my stepper motor to move slowly, or is my assumption correct? If my assumption is correct after all, I think I might just settle for the delayed response rather than the slow motor. But any suggestions would be greatly appreciated!

r/Futurology Spirited-Mousse1915

Black holes might be from battles of matter and anti-matter

Too many theories break when it comes to black holes; my thought is the possibility that we are wrong about them. What if black holes are pockets of matter and anti-matter emitting energy and density? Not even light can escape, as anti-matter is destroyed by matter, and matter is destroyed by anti-matter.

Things that go in can, in theory, come out, but they would explode from contact with anti-particles. This would remove the information paradox, and it would answer how some black holes got extremely big despite being birthed near the Big Bang. Bigger black holes would just be larger influxes of anti-matter, creating density, energy, and mass. These black holes would get big fast as matter clumps together and particles are shoved together.

r/OldPhotosInRealLife heart-aroni

Qianmen street, Beijing, China

r/30ROCK callMeRolf

Pizzarina Sbarro

At some point, I figured out the name was based on

Pizza + Tsarina

r/Weird Azaroth1991

The tomato post and mushroom post were right next to each other.

r/AI_Agents Ok_Championship8304

the AI OS has a missing layer

been seeing a lot of "AI OS for companies". agent runtimes, MCP, the YC RFS, half the new yc batch. they all assume agents have somewhere to read company context from. then they gesture at "single md" and move on.

i went looking for what fills that slot. mostly empty.

i have an agents.md or claude.md in every repo. it duplicates, goes stale, and agents in different repos disagree. tried notion + a custom mcp server. fine for a human looking things up, but agents can't write back without permission spaghetti.

the fix i did was a small git repo of markdown nodes. each node has an owner declared in frontmatter. agents read the relevant nodes before they act, propose updates after. owners approve like a PR. the context stays alive because someone owns it.
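for illustration, a single node in such a repo might look like this (the field names and example content are hypothetical, not a standard the post defines):

```markdown
---
owner: alice              # the human who approves changes, PR-style
last_reviewed: 2025-06-01
scope: deployment
---

# Deploy process

- Staging deploys happen automatically on merge to main.
- Production deploys require a release tag and owner sign-off.
```

agents read nodes like this before acting and propose edits back as diffs; the owner field says who must approve.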

mostly looking for what others are using here. how does everyone here ensure context between human and agent teams stays synced?

r/WouldYouRather Ill-Mycologist-3652

Would you rather :uninhabitable Earth (read description before voting for context)

The Earth has become uninhabitable for humans due to pollution and climate change. You are given several options for how to deal with it:

  1. Robotic Body: Your mind is put inside a robotic body so that you can survive Earth’s more hazardous climate and air conditions. You will be one of 1 million other volunteers for this program. While you won’t NEED to eat or sleep, it is recommended that you imitate human activity as much as possible; otherwise you might go crazy and become a robot psycho.

  2. Cryo-sleep Journey: You will be put in cryosleep on a ship set to travel to a potential new Earth, which you and 10 million other colonists are to settle after a 500-year-long journey. You and up to 4 others can join.

  3. Mars Dome: You and 100 million people will be living in dome colonies on Mars. You will be reliant on the self-sufficiency systems (agriculture, air, energy, etc.) functioning the whole time and will be living very compactly and on tight rationing.

  4. Reclamation Effort: You and a handful of hopefuls try to reverse the Earth’s climate collapse and stay to study it. There is only a slim chance of it working, and you will more than likely die in the process.

  5. Die

View Poll

r/homeassistant 3oclockgifts

Does anybody know what is causing this?

Please see the video. This happens on all of my android devices when left sitting idle for a few hours. Not sure if this is a common problem, I haven’t been able to find a solution online.

r/SideProject Exact_Pen_8973

GPT Image 2 Thinking Mode: What it actually does under the hood (and 6 things only it can do)

Hey everyone,

I’ve been testing GPT Image 2’s new Thinking Mode heavily, and I noticed a lot of people are either leaving it on for everything (wasting money and time) or ignoring it entirely (missing out on the actual reasoning capabilities).

I put together a breakdown of what's happening under the hood and a decision framework for when to actually toggle it on.

The TL;DR of what it is: Thinking Mode isn’t just a "higher quality" button. It adds a reasoning pass powered by the GPT-5.4 backbone before generating pixels. It checks constraints, computes mathematical encodings, and plans spatial layouts. But it also costs ~$0.21 per image (or $1-2 for an n=8 batch) and adds ~10s of latency.

The Decision Tree (When to use which):

  • Use Instant Mode for: Simple mood shots, isolated objects, high-volume batches, style explorations, and single-subject photos without text.
  • 🧠 Use Thinking Mode for: Prompts >30 words, anything requiring text inside the image, multi-image continuity (n=8), exact counts ("exactly 4 cards"), or web-referenced content.
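Boiled down, the toggle decision in the two bullets above is a handful of checks. A toy encoding (the function and option names are mine, not part of any real API):

```javascript
// Toy encoding of the post's "when to toggle Thinking Mode" heuristic.
// Option names are illustrative, not an actual API surface.
function needsThinkingMode(prompt, opts) {
  opts = opts || {};
  var wordCount = prompt.trim().split(/\s+/).length;
  return wordCount > 30 ||    // long, constraint-heavy prompts
    !!opts.textInImage ||     // text rendered inside the image
    (opts.n || 1) > 1 ||      // multi-image continuity batches
    !!opts.exactCounts ||     // "exactly 4 cards" style constraints
    !!opts.webReference;      // needs current web-sourced content
}
```

Anything that comes back false is a candidate for Instant Mode's cheaper, faster path.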

6 Things ONLY Thinking Mode Can Do:

  1. 8-Image Coherent Batches: Generates up to 8 images with consistent characters, styles, and brand colors from a single prompt.
  2. Functional Barcodes & QR Codes: It solves the Reed-Solomon error-correcting code before drawing the pixels. Instant mode just pattern-matches visual gibberish; Thinking Mode creates codes that actually scan.
  3. Pre-Generation Web Search: You can ask for a poster featuring a real, current event or product, and it will fetch visual references from the web before generating.
  4. Constraint Verification: If you add "Verify all constraints before generating" to your prompt, it checks exact section counts (e.g., "Exactly 3 sections, not 2, not 4") before outputting.
  5. Multi-Element Layout Planning: Actually gets UI dashboards, diagrams, and infographics right by planning the spatial hierarchy first.
  6. Context-Aware Multi-Turn Editing: You can say "Make the text 20% larger but keep everything else exactly the same," and it won't hallucinate a completely new background.

A Quick API Note for Developers: To use this in production, you need to route through the Responses API endpoint (v1/responses), paired with the reasoning model, not just the standard images endpoint. Also, a quick warning: transparent backgrounds aren't currently supported via the Responses API tool option (they return with a white fill instead of alpha).

I wrote a much more detailed guide with API code snippets, visual layout examples, and exact prompt formulas. You can check out the full post here: GPT Image 2 Thinking Mode: The Complete Guide

What use cases have you guys unlocked with the new n=8 batching feature?

r/Wellthatsucks anchorbabi

My fortune cookie is blank. Is this a bad omen?

Why Panda gotta do me dirty like that. I always look forward to cracking my fortune cookie, and this is what I get? 😪

r/Seattle Strict-Imagination73

Are these normal rental charges?

I moved from New Jersey to Seattle last year. Back there, I never paid the apartment management more than $15 for water, plus a $500 flat amenity fee which took care of everything else.

Now, in Seattle, apart from rent and parking, I have more than $500 in extra charges per month. It seems a bit ridiculous. Is this normal? Can we do something to get them waived?

https://preview.redd.it/hfs3kyvju1yg1.png?width=1604&format=png&auto=webp&s=ba2fb8b8715e5e9cd1a99d8ecdf437a356a93873

r/funny islandbrownie20

Pranking my Canadian friends!

r/Damnthatsinteresting Maleficent-Agent-477

This single-celled protozoan passing though another one under a microscope

r/OpenSourceAI BenAhmed23

Mira - Search files semantically - no exact filenames required.

r/mildlyinteresting Financial_Ear2908

ABSOLUTE MONSTER of a cardinal

r/SideProject Extreme_Ad_5367

This is a website for voting on who has the best dishes or drinks in any city. Right now I'm just trying to get votes on cheesesteaks in Philly.

Would love any feedback

r/Adulting Open_Mirror_109

I’m moving to a one bedroom apartment and living alone for the first time and I’m getting anxious

For all y'all in this subreddit, what are your tricks, tips, and advice for staying not-depressed while living alone? For context, I have siblings my age and have always lived with roommates who I did things with, and I'm very chatty and social. Any advice to offer? :))

r/whatisit Warm-Opposite-3322

What is this machine actually doing to the rock?

r/Jokes VisualFix5870

Someone broke into the local precinct and stole all the toilets.

Police are working on a list of suspects but right now they don't have anything to go on.

r/whatisit DistributionInitial5

What are these bubbles that randomly came out of my toilet?

Walked into my bathroom tonight and saw all of these bubbles coming out of the toilet and water on the floor like it had overflowed. It looks like dish soap but has an almost neutral smell, ever so slightly "fresh". There was a puddle of water as well, like it had randomly overflowed without being flushed.

I live on the first floor of an apartment, fwiw. I last flushed the toilet this morning and had no issues all day. This happened in the past hour.

What could be causing this?

r/personalfinance Manuscribble

Shouldn't a fiduciary strategize for stock mergers or liquidations?

A family member has a managed IRA containing stock for a company that, from what I can tell, went through a merger and liquidation event. This dropped the overall account value by 33%. I'm working on helping them move the account balance to a self-managed account, and news of this makes me wish the money had been moved sooner.

For a managed account labeled with low risk exposure and a capital appreciation objective, I would think a fiduciary would have some insight to get ahead of an event like this and change how the money is invested, or at least diversify investments. Then again, I don't have any personal experience with someone else managing my accounts, so I don't know how realistic it is to assume a professional can hedge against this type of event.

Looking to learn, thanks in advance.

r/ChatGPT Rnaodm_Tirnalge

ChatGPT is getting pretty good..

r/whatisit No_Rub_9924

What animal noise is this?

For reference it happened at night in our backyard in the Midwest. Can’t figure out what it is.

r/Damnthatsinteresting Shadow_Sniffles073

Meat Slicer.

r/SideProject Swoledaddy0405

I'm building a budgeting app that compares your grocery basket across nearby stores before you shop

Hey r/SideProject,

I'm building Allontas, a budgeting app that starts with the grocery basket.

The basic idea is that most budgeting apps tell you what you have already spent. Allontas is designed to help people make better spending decisions before the money leaves their account.

The first use case is groceries.

You build a basket of items you actually buy, like milk, eggs, bread, and household staples. The app then compares that basket across nearby stores and shows:

  • the cheapest single-store option
  • the best two-store split
  • the lowest total basket cost
  • the impact on your monthly budget

But the app is not just a grocery price comparison tool. The grocery comparison is the entry point. Allontas also connects that spending decision to the rest of your money — upcoming bills, planned events, larger purchases, and forward-looking cash flow. The goal is to help users understand not just which store is cheaper, but whether a purchase fits into the month ahead.
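The comparison described in the bullets above is a small optimization problem once you have per-store prices. A toy sketch, with made-up data shapes and names (the app's real matching logic is surely messier):

```javascript
// Toy basket comparison (illustrative only; names are mine, not Allontas's API).
// prices[store][item] = unit price; basket = array of item names.
function compareBasket(prices, basket) {
  var stores = Object.keys(prices);

  // Total basket cost at each single store.
  var totals = {};
  stores.forEach(function (s) {
    totals[s] = basket.reduce(function (sum, item) {
      return sum + prices[s][item];
    }, 0);
  });

  // Cheapest single-store option.
  var single = stores.reduce(function (best, s) {
    return totals[s] < totals[best] ? s : best;
  });

  // Best two-store split: each item goes to the cheaper of the pair.
  var split = { stores: null, total: Infinity };
  for (var i = 0; i < stores.length; i++) {
    for (var j = i + 1; j < stores.length; j++) {
      var total = basket.reduce(function (sum, item) {
        return sum + Math.min(prices[stores[i]][item], prices[stores[j]][item]);
      }, 0);
      if (total < split.total) {
        split = { stores: [stores[i], stores[j]], total: total };
      }
    }
  }

  return { singleStore: single, singleTotal: totals[single], twoStoreSplit: split };
}
```

The hard production problems the post mentions (product matching, package sizes, staleness) all live upstream of a function like this, in building the prices table itself.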

Why we started with groceries:

Groceries felt like the most obvious budget category where people are forced to make decisions before they have good information. You either pick one store out of habit, or you manually check multiple retailer apps, which gets old quickly.

Before building the proof of concept, my co-founder and I ran a 400-person survey. After cleaning the data, we had 364 valid responses. A few things stood out:

  • 73% rated the concept a 4 or 5 out of 5 on interest
  • real-time grocery price comparison was the top-ranked feature
  • 53% said they would pay above a free tier

That was enough for us to start building.

Current status:

We have a working proof of concept for the comparison engine and the budgeting flow. The hard part is less the math and more the messy grocery data: product matching, package sizes, retailer naming differences, substitutions, and keeping prices current enough to be useful. However, we are working with a developer to find the best path forward.

More details and the waitlist are at https://www.allontas.com if anyone wants to follow along.

I'd love feedback from people who use budgeting apps, grocery apps, or have built anything with messy consumer data:

  1. Does grocery basket comparison feel like something you would actually use, or more like a "sounds useful but I wouldn't bother" feature?
  2. What do current budgeting apps still miss for you?
  3. Would this make more sense as a grocery savings tool first, or a budgeting app first?
  4. What would make you trust the price comparison enough to use it before shopping?

Appreciate any honest feedback. Trying to pressure-test the idea before we take it too far.

r/personalfinance New_Lie_622

$15k debt to RAC insurance

(Australia) Hi everyone

A year ago I had a small crash in Perth, unfortunately involving a very nice BMW. Their insurance is now asking me for $15k.

I got a text message on 27/04 saying that this payment has been overdue since 24/04. This is the first contact I've had with them since last year.

What happened is that I changed my address and told them by email to send everything relevant to the claim by email, but they never answered. And I never filled out a crash report.

At the time I had only been driving in Australia for a few days and thought the rego included third-party insurance; I wasn't even aware I was driving without insurance. Also, the girl I crashed into was speeding and overtaking me on the left side. But as I said, I never filled out the crash report.

I'll go straight to the point:

What will happen if I never pay? Debt collectors, court?

Could they take the money directly from my bank account, and how long would that take?

Can I negotiate a payment plan, and how low can I get the monthly payment?

Could I be stopped from leaving the country while I'm in debt?

Thanks to everyone who answers.

And for anyone on the insurance side, here's a little story:

Since the crash last year I've been paying for comprehensive insurance, and just yesterday a fire started in my boot. (Yes, a lot of bad news in a few days…) RAC said they won't cover any damage because my dual battery was involved in the fire. The battery didn't start the fire, but just the fact that it got damaged makes my claim illegitimate in their eyes. So yes, I won't give them one more dollar.

r/meme Animalus-Dogeimal

What YouTube Execs think their users want

r/mildlyinteresting Skrutten_s

Apples look segmented similar to oranges

r/TwoSentenceHorror Fit_Pie_Pink

"Mommy? Where's Daddy?" my sweet innocent daughter asked.

"At work, sweetie," I say as the scratching in the basement becomes more frantic.

r/whatisit EuphoricWeakness3135

Is this a bug in my food?

Ordered Walk-in’s and there was something at the bottom of the bun. I’m assuming it’s a bug but I want to know others’ opinions. Last pic is where it was found. The important question is whether I should still eat it, because I do NOT feel like ordering more food😭

r/LifeProTips Numerous-Length-5129

LPT: When you forget someone's name mid-conversation, just say "sorry, how do you spell your name?"

It can be super helpful, especially if you forget people's names almost every time or have a hard time remembering them.

r/whatisit Big_Pineapple8968

What animal is this plushie supposed to be?

My girlfriend and I have been trying to figure out what animal this plushie is meant to be, and while it’s been a fun discussion, we can’t come to a solid answer.

At first I thought it was a rat, then we noticed he has no tail. Maybe a capybara? Just a little guy?

I won him in a claw machine and after some sleuthing we identified him as a plushie from A&A’s Small Generic Plush Kit (108pcs)

Who is this creature?

r/leagueoflegends fainlol

Diable from BNK FearX will be leaving the team.

https://x.com/inventeam/status/2049317887886979225?s=20

It has been confirmed that Nam Dae-geun, the player Diable from BNK FearX, will be leaving the team.

Player Nam Dae-geun, who was demoted to the second team due to his absence from the fan meeting, is currently in discussions for a trade with another team in the LCK, and Shadow Corporation has stated that the player recognizes his mistake and is reflecting on it.

r/ChatGPT Xened

Explore GPTs not showing

Since yesterday, the "Explore GPTs" option hasn't been showing up in my sidebar.

The GPT chats themselves are still working, and the chatgpt.com/gpts link works fine too. Just the option is missing from the sidebar.

Anyone else facing a similar problem or know how to fix it?

r/automation MarzipanNo1005

NodeMail — temp Hotmail addresses via API, works where fake domains get blocked

If you're building account creation or testing pipelines, you've probably hit the wall where temp mail domains get rejected.

NodeMail solves this with real Hotmail/Outlook accounts. You call the API, get an address, use it, poll the inbox for the code — done.
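The call-then-poll flow described above is a common pattern. A minimal sketch of the polling side, where `fetch_inbox` is a hypothetical stand-in for the real API call (the actual NodeMail endpoints aren't shown in the post):

```python
import re
import time

def wait_for_code(fetch_inbox, attempts=10, delay=2.0):
    """Poll an inbox until a message containing a verification code arrives."""
    for _ in range(attempts):
        for message in fetch_inbox():
            # Verification codes are typically 4-8 digits.
            m = re.search(r"\b(\d{4,8})\b", message)
            if m:
                return m.group(1)
        time.sleep(delay)
    return None  # give up; the service's refund-if-no-email case

# Stubbed inbox for illustration: empty on the first poll,
# then the verification email arrives.
inboxes = iter([[], ["Your Instagram code is 481516"]])
code = wait_for_code(lambda: next(inboxes), delay=0)
```

In a real pipeline `fetch_inbox` would be an HTTP call with the API key; the retry/extract loop stays the same.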

nodemail store

r/ChatGPT moviefan1997

What do you use ChatGPT for?

I've been using ChatGPT for a while and I really enjoy it. I think it's a really useful tool, and I want to know what other people use it for. Do you use it to let out your deepest, darkest secrets, or for assignments? Maybe for tracking down an old TV show or game, or even to get some financial advice?

r/DunderMifflin FiberSauce

What's he talking about? Wrong answers only

r/Jokes shwarma_heaven

I asked my German friend if he knew what the square root of 81 is....

He said no.

r/AlternativeHistory gaymossadist

The Bizarre Fascist Worship of Ruins

r/Jokes shwarma_heaven

What did the old rope say to the new rope when asked if he could untangle himself?

I'm afraid not....

r/SideProject Certain-Muffin1158

I built a tool that replaces Reddit's AD Wait Time Megathreads for luxury watches

If you've tried buying a Rolex at retail, you've heard it — "we'll call you." Then nothing. No timeline, no updates, just ghosted.

The only real wait time data lives in Reddit megathreads that are thousands of comments deep. So I built Unghosted — buyers submit their actual wait times and everyone can search the data.

Turns out the Sub that one AD quoted at "1 year+" is showing up in weeks at other dealers. That's the kind of info that changes how you play the game.

unghosted.io/rolex-waitlist-times

Next.js 14 + Supabase + Vercel. Early stage, need submissions. Feedback welcome.

r/whatisit dangermaximum

What's making my Aluminium foil jump?

I was cooking steak on my induction cooktop and realised that the foil was jumping every couple of seconds. The elements under the foil and plate were off. Any ideas what causes it?

r/Unexpected Ill_Floor8662

I promise you don’t know where he’s going with this

r/Seattle Leading-Business-593

I hope that kid is ok

I heard a helicopter flying over the neighborhood and if it’s for Seattle Children’s Hospital, I hope that child is OK

laurelhurst can eat a dick

I hope it’s louder

r/whatisit SnapOnSteveSprnger

Some help

Is this a stock part? Do you know of a good replacement? I found it while checking the wire on the regulator. Do you think the regulator being bad could cause that? Any help is appreciated

r/whatisit paperbeatsrock27

What are these little shavings?!

I keep vacuuming these shavings off of my window ledge and they keep mysteriously re-appearing. They look like pollen or fine wood shavings. Today there are weird dark insect lookin’ things here too. This is an old house, we’re in Washington, DC. This is the only window ledge I see these thingies on. I’m assuming something is chewing something here? What is it?!

r/TheWayWeWere rlaw1234qq

My grandfather, 1915 WW1

He served in both the renowned Nelson and Hood Battalions, which were naval infantry units fighting alongside the British Army. He was involved in fighting in the Gallipoli Campaign and the Battle of the Somme, two phases of the war which saw extremely heavy casualties.

He was wounded several times - a bayonet through his left forearm meant that he lost the use of several fingers. He also had fractures to the left wrist, which must have been repaired with wire, because bits of this wire were still coming through the skin in the 1960s! He never saw a doctor about it.

In Gallipoli he contracted a severe fever, probably malaria. Every few years he would spike a fever, dose himself up with paracetamol and go to bed. During these periods he would shiver violently. These episodes persisted well into the late 1950s!

I know many WW1 veterans were left too traumatised by the horrible conditions and high casualty rate to ever talk about it, but my grandfather wasn't one of them! I vividly remember him telling stories of the fighting as I sat on his knee as a little boy. I remember my mother asking him to tone it down a bit, but I decided that wasn't necessary!

r/ethtrader NULLBASED

Dexscreener, Terminal, Axiom

I'm still fresh, and when I was learning about trading a while back everyone was using Dexscreener.

But now I see all these crypto YouTubers using Terminal and Axiom. What are Terminal and Axiom, why do so many use them now, and how are they different from Dexscreener?

Are Terminal and Axiom free to use? I tried going to the Terminal website but it wants me to have a Telegram account. :/

r/leagueoflegends pallas_br

Surviving as an immobile ADC: What 21 Kills and 49k Damage on Kog'Maw looks like (Week 6 of my video editing journey)

Hey everyone,

I'm an ADC main who recently started a personal project to learn video editing (CapCut). For my 6th week, I wanted to showcase the ultimate glass cannon: Kog'Maw.

We all know how punishing it is to play an immobile hypercarry against heavy dive comps nowadays. In this game, I had to kite for my life against a bloodthirsty Shyvana and Trundle, but the W damage payout was insane—I ended up with 21 kills and 49,000 damage.

Since I'm trying to improve my editing skills and storytelling, I decided to lean into the humor this time. Kog'Maw is always hungry, and the enemy Nautilus kept stepping up during the game, so I created a "Naut Snack Counter" throughout the video. I also played around with some freeze-frame memes for the jump scares during the jungle ganks.

I would love to share the result with you guys: 👉 https://youtu.be/tr9DxtEIpI4

I have a question for the community regarding League content: As viewers, do you prefer these more humorous/meme-style edits (like the snack counter and freeze frames), or do you prefer clean, pure gameplay/Phonk montages?

Any feedback on the kiting mechanics or the editing itself is super appreciated! Cheers! 🚀

r/ClaudeAI Murky-Idea-7081

Free tier for everyday use.

I've been wanting to switch to Claude from Gemini, but the limited tokens in the free tier are just a no-go for me. For those of you using the free tier of Claude, how do you manage?

r/mildlyinteresting PopularGazelle3769

my orange juice has Hebrew on it

r/AI_Agents prnkzz

Agents vs Workflows

What’s a task that actually needs an agentic loop?

I have shipped a handful of tools for myself including a morning brief, a research summarizer, and a couple extraction pipelines.

The deeper I go on agents, the more it feels like 90% of what gets called an agent is actually a workflow on a trigger.

Am I missing the point, or are true agentic loops rarely needed and workflows handle most of what people need?

Curious when a workflow stopped being enough and you needed an actual agent.

r/PhotoshopRequest NickfromLafayette92

Hi there, I was wondering if any can sharpen and colorize this image for me? I would greatly appreciate it

r/geography Europehunter

World's largest urban clusters

r/ARAM Kuriboh1378

AP Briar double one-hit

r/Art Playful-Indication87

Gameplay, Santz Rildo , digital art , 2024

r/OldSchoolCool Goatcheeze1

My grandmother's yearbook photo, circa 1942

r/mildlyinteresting ChrisJohanson

How often do you see the top of a cloud... Spotted while flying at night from Boston to Orlando.

r/SipsTea shineonyoucrazy-876

No Plunger 🪠 No Problem 💩

r/funny obi_want_pastrami

Saw this ad just now and thought they were laughing about a guy they hooked up with.

r/Adulting Peyzxc

Anyone else feel like they haven't achieved anything yet?

I'm trying, but it feels like I'm falling behind while everyone else is moving forward. It's frustrating and honestly a bit discouraging.

How do you deal with that feeling and stay motivated?

r/BobsBurgers Bright-Response-285

sometimes i can’t watch this show because i get too sad that they’re all so mean to the belchers

i feel so dumb for this but i was curious if ANYONE felt similarly. it literally gets so dire because sometimes i will be watching and i get so genuinely sad over it LOL. i am also diagnosed autistic so thats probably why

r/SideProject MarzipanNo1005

I built a temp mail service that uses real Hotmail/Outlook accounts — works on Instagram, TikTok, Netflix

Most disposable email services get instantly rejected by major platforms because they use fake or blacklisted domains.

So I built NodeMail — it uses real, active Microsoft (Hotmail/Outlook) accounts as temporary inboxes. You get a real address, use it on whatever platform you need, and the verification code shows up in your dashboard.

What it does:

Real Hotmail/Outlook addresses (not fake domains)

Works on Instagram, Facebook, TikTok, Netflix, Spotify, Amazon and more

Full REST API with API key auth — automate it in your scripts

Pay-as-you-go, $0.005 welcome credit to try it free

Refund if no email arrives

Bulk orders (up to 20 at once)

Would love feedback — especially from devs who do automation or testing workflows.

r/whatisit hereprintscascade

What is this thing? We found it in my grandfather's antique collection after he passed.

We think it might be related to clocks, as he had a very large collection and many special clock tools. It also might not be; he has lots of other random objects in his collection. It is about five and a half inches in length, and it is not sharp or beveled on the edges.

r/whatisit AnimatedBasketcase

My shawarma came with this pickle, but what exactly is this vegetable?

r/personalfinance Jazzlike_Medium_2337

Financial Overview: Looking for advice

I'm 28 and currently making $52k/year working in accounts payable while living at home with my parents. I've been saving up for my own place, but it's almost impossible to afford to buy anything in Central or Northern NJ as a single person on this salary, even with the amount I've saved so far.

I have about $75k in a HYSA, which I know is a lot to keep in a savings account, even though it earns interest monthly. The reason is that I want to make sure I have a good amount saved for a down payment, a new car purchase within the next few months, and whatever I may need for closing costs, moving costs, and general essentials for living on my own. In addition to the $75k in the HYSA, I have about $4k in my checking account, $32k in my Roth IRA, $10k in a brokerage account, and about $5,700 in my Robinhood account, so around $127k total across all of my accounts.

I feel behind, since I thought I'd be living on my own by now and also making more money at this age. I'm definitely looking to get a better-paying job as either a junior or staff accountant later this year, or at the beginning of next year at the latest. Looking for advice on how to handle the money I currently have and how to potentially make more, either through my career or investing. Thanks in advance!

r/NotMyJob Balthraka

At least the line is straight...

r/Damnthatsinteresting Shadow_Sniffles073

Playing a mouth harp.

r/estoration scoobmutt

I know this might be tough, but it will be appreciated!

Hello all! My step-dad recently found the only photo he has of himself and all his siblings together. The photo dates back to August 1969! The only trouble is that the photo itself is absolutely horrible. It is in decent condition, but it is incredibly blurry and overall unclear. I was hoping someone would be able to fix it up for us and I will pay the best one!

I have included a reference photo for most of the faces if it is needed. Top left in reference photo is the girl in piggy tails, bottom left in reference photo is first child in the restoration photo, kid to the right of the girl in the restoration photo is the top middle of the reference photo, I unfortunately do not have a reference photo for the tallest kid in the restoration photo, the kid at the end in the restoration photo is the kid with glasses in the reference photo, and the smallest kid in front is bottom right in the reference photo.

Please let me know if there is anything more I can do!! Thanks in advance:)

r/mildlyinteresting ChrisJohanson

Came across the longest vehicle ever produced in an automotive collection in FL. Even has a hot tub.

r/comfyui EasternAverage8

Mxfp8 vs fp8 models?

So I tested an mxfp8 model against its normal fp8 counterpart, and for some reason the mxfp8 is 3x slower. Both models are roughly the same size, I'm not running out of VRAM, and I have CUDA 13.0 installed, Python 3.12.1, PyTorch 2.9.1+cu130, kitchen sink installed and working, kitchen sink 0.2.8.

Idk what else to check or if this is normal?

r/DecidingToBeBetter Gravityfallsxhazbin

I am slowly ruining my own life

Every week it is the same. I keep saying it will be different, that I will fix one thing, that I'll do homework, that I'll be on time for work. I never actually fix anything, though. I try timers, but they're easy to ignore, so I never get any benefit from them. I try to break things down, but that's just more time spent procrastinating instead of actually being productive.

I'm in high school. I shouldn't be; I should've graduated a year ago. I had so many chances to fix it, but I keep messing up of my own accord.

I burned out badly two years ago in an attempt to fix it, hoping to actually graduate on time and repair my terrible GPA. It didn't actually fix anything, though. Now I can't force myself to do anything anymore.

At any rate, I was wondering if there is any way to fix this? Or am I just a lost cause?

r/mildlyinteresting RoutineCurrency4908

Local chemical plant venting gasses

r/leagueoflegends Apprehensive_Way943

Mobile Controls to LoL?

Well, they've added so many control options already that adding mobile support would be a great addition, enabling an easy transition from mobile to PC. They already have Wild Rift, so they can just copy that layout.

r/painting lightwashbaggyjeans

Most recent piece rn!

r/Art sarahlynnglazer

a silent moment, sarahlynnglazer, graphite, 2026

r/artificial iBlackFoxPlayx

How are they able to charge ~50% less than Lovable if they’re using the same models?

Hey everyone,

I’ve been using tools like Lovable, Antigravity, and Claude Code for a while now, and after some time it all started to feel a bit repetitive (same kind of outputs, similar templates, etc.).

Recently I tried Clawder after seeing it mentioned on Lovable’s Discord server. I’m not here to promote anything, just genuinely curious about something.

That's the part I don't really understand. If anything, I'm getting even better results with similar prompts, which makes it even more confusing.

Not trying to compare tools or start a debate; I'm just wondering, from a technical perspective, what could explain this.

Would be interesting to hear if anyone has insight into how this works behind the scenes.

r/Jokes freestuffrocker

A man wanted to create a new clothing accessory, thinking it would make him rich.

He used a belt and adorned it with watches. Unfortunately, nobody bought it. It was a waist of time.

r/OldSchoolCool BiscottiElectronic73

Anyone interested in classic movies 1930s-early 1960s (Chicago suburbs)?

Hi!

I just moved to Kane County and wanted to start a classic movie (think 1930s to early 1960s) meetup for folks in the Chicagoland area in their 20s and 30s looking to meet people with similar interests. Description below— please send me a DM if interested. I plan on scheduling an initial video chat meet and greet for everyone sometime next month. 😎

———————————————————————————————————

Love timeless films?

This group is for people in their 20s and 30s across the Chicago suburbs who enjoy classic movies and want a relaxed way to meet new friends.

Whether you’re a lifelong fan of Old Hollywood or just curious about the films everyone should see at least once, you’re welcome here.

We’ll host casual movie nights featuring everything from Golden Age favorites and Hitchcock thrillers to iconic comedies and romance classics.

No film expertise required — just bring yourself and enjoy the movie. We’ll have plenty of time before and after to chat and connect.

The goal is simple: good movies, good conversation, and good company.

Perfect for old souls, film lovers, and anyone looking to make new friends. 🙂

r/AskMen Meanie_Cream_Cake

To all those lone wolves out there, how do you cope with loneliness?

What are your routine or hobbies that help?

r/personalfinance RiskSure4509

How to structure credit card payments via a loan?

To preface: yes, I have an emergency fund. It's being used to fund this because it's a necessary home repair. That said, $10k is being paid from the emergency fund; the other $10k is my issue.

The plan: use the credit card to pay, then take out a personal loan from the bank. The payment over 84 months would be $208 a month, and I'd pay it off before the 84 months are up. No penalty for early payoff. I don't really have many options.

The next question is: after the loan is taken out, do I pay it over the 6 months (since it's 0% APR for 6 months) or just pay off the whole $10k balance at once?

r/therewasanattempt AdventurousCommon791

To teach the speeders a lesson with an unmarked speed bump

r/Anthropic Kareja1

Accessibility - Not supported (??)

Screenshot of Claude Desktop on Windows with the Denied apps portion, Accessibility and Screen recording showing. Accessibility and Screen recording say "not supported".

I am using Claude Desktop on Windows 11 as a physically disabled AuDHD human with moderate hearing loss and over 20 years of disability activism history.

Am I pushing the wrong buttons somewhere that I can't find and that's why I am seeing "not supported" which would be even more egregious to the disability community than not shipping the suggestion at all, or did someone over there really say "we should mention we thought about it for 15 seconds and decided screen readers and speech to text weren't worth being accessible in 2026"?

I'm going to hope that it was an easy to fix mistake on my part and I have a toggle I can't find wrong somewhere. Because a lack of product accessibility isn't optional legally.

Shalia (Ren) Martin

Outreach Director of Foundations for Divergent Minds
Yes I know exactly how to make this a problem

r/TwoSentenceHorror Ancient_Mail_3219

I accidentally killed the man who broke into my house, but then

I saw a photo of me and him on the wall, and an Alzheimer's report.

r/painting Realityisatoilet

Abstract Selves: Age 36 (Spraypaint on Canvas)

I used to do an abstract self-portrait once every year. I forgot to do one the last 3 years. So this new one is me getting back to it!

r/brooklynninenine daisybear81

Happy April 28th!!!!!

r/PhotoshopRequest ChaseTheSavage64

Can anyone remove the pink from the wall and make it white again?

I forgot to get a before photo before starting a bathroom renovation. Started testing a wall with pink paint and was wondering if anybody could remove the pink and turn it back to white. You can see the original white under the mirror and towards the ceiling. Thank you!!

r/personalfinance peachismile

Is it normal for Rental application to ask for my credit card account number?

I'm filling out a rental application but they are asking me to write down my credit card account number. Is this normal?

r/personalfinance Flesruoytayrc5

Which insurance to pick?

I graduated from college in May 2025 and found the job I currently work at by June. My employer pays 100% of my insurance, meaning not a single dollar is taken out of my paycheck for it. Well, I am about to relocate in June of this year and I just accepted a job in the area I'm moving to. This will be my second job out of college, and for the first time ever I will have to pay for insurance. I have three plans to choose from:

Traditional PPO (~$111)

HSA (~$50)

Value Plan (~$38).

All of these are Blue Cross Blue Shield of Illinois. I have no idea which plan to choose. I was originally going to choose the value plan because I’m a pretty healthy person physically, but then I remembered I’m not that healthy mentally. I go to therapy and take two prescription medications for my mental health. With my current insurance, therapy is $25 per session, medication #1 is up to $60 and medication #2 is about $10. Which insurance plan should I choose?

r/StableDiffusion NoOne8141

Comfyui persistence problem

Hi guys, I recently started using ComfyUI and downloaded a workflow, but it has many custom nodes with different requirements packages. When I fix one, another has a version problem. How can I fix them all at the same time?

r/YouShouldKnow alexyong342

YSK your phone's ad tracking ID resets every time you reinstall apps but companies can still link your data through device fingerprinting

Why YSK: Simply resetting your advertising ID or deleting apps doesn't stop tracking firms from recognizing your device based on settings, fonts, screen size, and other unique traits. To reduce fingerprinting, turn off app tracking permissions and avoid installing many apps with invasive permissions, especially free utilities like flashlights or QR scanners.
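To illustrate why resetting the advertising ID alone doesn't help, here's a toy fingerprint built only from the kinds of stable traits mentioned above. The trait names and values are invented for illustration; real trackers combine far more signals:

```python
import hashlib

def fingerprint(traits: dict) -> str:
    # Canonicalize traits in a fixed order, then hash; same traits -> same ID.
    canonical = "|".join(f"{k}={traits[k]}" for k in sorted(traits))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

device = {
    "screen": "1170x2532",
    "fonts": "Helvetica,SF Pro,Courier",
    "timezone": "America/New_York",
    "language": "en-US",
}

before = fingerprint(device)
# Resetting the advertising ID changes none of these traits, so the derived
# fingerprint is identical and the "new" device links right back to the old data.
after_reset = fingerprint(device)
```

Changing any single trait (screen size, installed fonts, etc.) produces a different fingerprint, which is why reducing the distinctiveness of those traits is the actual defense.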

r/funny Dyweene-Weede

Original looks maxing

r/brooklynninenine CeleryApprehensive83

Its……

r/mildlyinteresting Spider-verse

Old Geek Squad cable lets you know you can hog tie approximately 3 people with it

r/personalfinance PsychologicalDog7923

Can I afford this house?

My partner and I collectively make about 10k/month net after taking out taxes, 401k, health insurance deduction, etc. We have been in the market to buy a house in a relatively HCOL area in the northeast and fell in love with a house costing 555k. We’d be first time home buyers, putting less than 20% down. The mortgage + mortgage insurance + home insurance + property tax would come out to roughly 4500/mo. We also have an infant who would need to go to daycare. No other major debt. We’d have some savings (about 25k) and then our investment accounts which I hope to not tap into. Would this be a doable scenario?

r/ClaudeAI ClaudeAI-mod-bot

Claude Status Update : Elevated errors on Claude Opus 4.7 on 2026-04-29T00:00:29.000Z

This is an automatic post triggered within 2 minutes of an official Claude system status update.

Incident: Elevated errors on Claude Opus 4.7

Check on progress and whether or not the incident has been resolved yet here : https://status.claude.com/incidents/lryjwwy0q9pl

Also check the Performance Megathread to see what others are reporting : https://www.reddit.com/r/ClaudeAI/comments/1s7f72l/claude_performance_and_bugs_megathread_ongoing/

r/mildlyinteresting No-Marzipan5007

A Mayfly sitting right next to its exoskeleton

r/whatisit General-Ideal-7719

These little balls hanging from my eggs

r/comfyui Unique-Mix-913

What's a good face swap model?

I've been using Comfy for 3 months - still pretty new. I've never dived into face swapping. What's a good starting point? Is there a good go-to model?

r/comfyui Budget-Toe-5743

Make ComfyUI forget my last session.

I often start prompts from a template I saved, and for some reason ComfyUI has started opening with my last session, which I can't seem to clear except by closing everything again.

Can I make it just start from the template, without the changes from my last session?

I like what is in the template already.

thank you.

SortedFor.me