AI-Ranked Reddit Feed

5000 posts

r/LocalLLaMA ManuXD32

Ai Doomsday Toolbox v0.938

Hello! It’s me again, the developer of ADT.

There have been a lot of changes since last time. I even created a web page that works as a wiki, so you can explore everything the app has to offer

Changes:

  1. Chat Llama Native: We now support both Ollama and llama.cpp servers. We also added tools for the LLM to use: it can search the web, write and update notes, create to-do lists, generate images, create calendar events, set alarms, and more. You can now also schedule tasks that will trigger at the chosen time.

  2. Improved the stability of tools like the Ollama Manager, Dataset Creator, and AI Agent.

  3. Reworked how summaries work. We removed the llama-cli binary and now only use the Llama server, which must be active for summaries to work.

  4. Everything in the app now accepts llama.cpp/Ollama servers, so you can use the one that best suits you, although I recommend llama.cpp.

  5. Added image generation through the ONNX Runtime using ORT models created by the team behind SDAI Local. I also made some new versions that include a VAE decoder to allow img2img inference. This is still a work in progress, but it is functional enough.

  6. Added support for video generation through the SD binary. Read more about it in the wiki.

  7. The pet has been completely reworked. We finally have sprites, the park is alive, the work system has been redesigned, and there are timed events that you won’t want to miss.

  8. Termux tools have been fixed. Users can now install all the projects without errors and use them easily.

  9. Fixed a bug where the app was not extracting the binaries.

  10. Added three kinds of widgets that allows the user to see their calendar, upcoming events and a chosen note

Overall, the quality of the app has improved a lot, and I couldn’t be happier to share all of this with you!

The Play Store closed test is still updating, so if you choose that path you better wait for a few hours-24h to update/install (this path helps me a lot as I still need testers to start the public beta and publish the app)

To join the Play Store closed test, you first need to join the Google Group here

After joining, you’ll be able to opt into the test and install the app from here

Follow the wiki's guide to learn how to use the app properly.

GitHub release page here

r/ChatGPT cs_developer_cpp_

Best AI for documents

I have tried gemini pro and chatgpt plus and deepseek for letting them analyse for example a 200 page docx or pdf file and tell him to explain the slides or summarize and all provided horrible results. (they can't parse the whole document correctly I guess)

What is the best AI model that can read documents correctly and summarize them?

r/ChatGPT Nunki08

GPT Image 2 prompt that is viral right now: "Redraw the attached image in the most clumsy, scribbly, and utterly pathetic way possible. Use a white background, and make it look like it was drawn in MS Paint with a mouse."

Full prompt:
Redraw the attached image in the most clumsy, scribbly, and utterly pathetic way possible. Use a white background, and make it look like it was drawn in MS Paint with a mouse. It should be vaguely similar but also not really, kind of matching but also off in a confusing, awkward way, with that low-quality pixel-by-pixel feel that really emphasizes how ridiculously bad it is. Actually, you know what, whatever, just draw it however you want.

From CHOI on 𝕏: https://x.com/arrakis_ai/status/2049689793118998717

From nic on 𝕏: https://x.com/nicdunz

r/ClaudeCode marcelsoftware-dev

Been testing GPT with their free pro plan, is there something like this for Claude?

Hello.

I've been testing GPT pro thanks to it's free 1 month pro trial, and I was wondering if I could also test Claude, so in the end, I could decide between the 2 subscriptions.

From my understanding the only way currently to test, is if someone refferes you? What about free trials from Antrophic, will there be something similar to the GPT ones?

r/AI_Agents pauliusztin

I almost shipped OpenAI embeddings until an MTEB rank #130 model beat them by 11%

I just interviewed Michael Maximilien, former CTO at IBM and Chairperson of NodeJS Foundation, who spent a year shipping production RAG to multiple customers. His lesson was uncomfortable. Until you evaluate your customer's data, nothing on a leaderboard predicts what works.

Most teams treat RAG as a setup task. You pick a vector database because it trended online. You pick an embedding model because OpenAI's the safe default.

Then you spend six months vibe-checking the results. Production RAG requires a continuous stitch-evaluate-iterate loop rather than a one-time setup.

Which is extremely cumbersome. That's why people don't do it.

Here is how it looks:

  1. Stitch the components together instead of just picking one. A production RAG system has at least five interchangeable parts: an embedding model, a chunking strategy, retrieval parameters, a vector database, and a judge.
  2. Evaluate your customer's actual questions rather than generic benchmarks. Maximilien's customers always have five or six release-time sanity questions that become the eval dataset.
  3. Align your judge with a human before you trust the scores. In the article's customer use case, the LLM-as-judge correlation with human judgment hovers around 0.55. Three weeks of human labeling and few-shot alignment came before any judge score was treated as ground truth.
  4. Iterate cheapest-first to save time and money. Tune your retrieval parameters first because that's free, then move to the embedding model, and only change your chunking or vector database last.
  5. Run this loop in any harness that has the right shape. Weave CLI is one option, but any setup that lets you swap a component, re-evaluate, and compare runs will work.

The proof landed when he tested a real customer dataset of Leica auction listings. He held everything constant and swapped only the embedding provider.

A small, open-source model, all-MiniLM-L12-v2, ranked #130 on the MTEB leaderboard, beat OpenAI by 11% in quality. It ran 240x faster for re-embedding, produced vectors that were 50% smaller, and cost exactly $0.

The leaderboard had no idea what his customer's data looked like. The eval did. As Maximilien put it: "This is a counterintuitive outcome. Without a structured benchmark, I would have defaulted to OpenAI and been wrong."

What have your own evals told you that contradicts a leaderboard or a trendy default?

TL;DR: Production RAG is a stitch-evaluate-iterate loop on your customer's data. Public benchmarks and MTEB ranks are signals, not verdicts. Until you measure your data, nothing matters.

r/ChatGPT Armel48

Image commercial Happy New Month generated by ChatGPT and NanoBanana

Create a professional, modern, high-energy technology retail marketing flyer for the month of Mai. The layout must match the dynamic composite style seen in professional print and digital advertising.

  1. Scene and Subject:

Against a clean, light-blue-to-white gradient background, depict a central, dynamic, mid-air pose of a stylish Black man (as seen in the attachment

Source: Use the person from the ATTACHED

REFERENCE PHOTO

Preservation: You must keep their exact facial

features, skin tone, hairstyle, and natural likeness perfectly unchanged.

Universal Application: Apply the lighting and angle to the subject regardless of gender) with natural hair, blue t-shirt, tan cargo pants, and heavy black boots with deep tread).

They are captured in a leaping, expressive pose, looking directly at the camera with a surprised and energetic expression. One boot is large in the foreground due to the dynamic angle.

  1. Floating Elements (Tech Gadgets):

The area surrounding the central subject is a creative "exploded view" of floating premium tech products.

Central Bag: A black fabric tech shoulder bag/briefcase is held, appearing open.

Surrounding Devices : Arrange a collection of floating high-end gadgets including multiple white AirPods (case and individual pods), a white pair of large over-ear headphones, two separate iPhones (one silver/white Pro model, one black/space grey model), an Samsung A16, a smartphone screen, a battery , a Laptop with a smart keyboard, and several small scattered adapters.

Additional Floating Corner Item: In the bottom-right corner, place a large, dominant floating peach/orange color iPhone, matching the perspective and look of the one in the reference.

  1. Typography and Branding:

Top (exact text): Centered at the top, place the small text "Bienvenue en". Below this, in a massive, dominant, 3D-styled blue font (matching the specific blue and script-like yet modern style of the reference), render the text "Mai". This large blue text should interact with the floating objects, sitting in front of some and behind others for depth.

Top Corners (placeholders): In the top-left corner, place a circular logo (BenServices). In the top-right corner, place a functional QR code.

Bottom Banner (exact text and layout): At the bottom, a clean, dark (near-black) bar with refined white text reads: "Heureux Mois, Possédez et Profitez des meilleurs produits tech cette saison". Below this, a clean white panel with a dark border lists: "Nous disposons actuellement de: Smartphone | Laptops - Écrans (Iphone/Android)| Batterie | PlayStations | Tablette | Power Banks | Accessoires etc.".

Footer Address (exact text): The very bottom line has the precise address in smaller white text: "Bouaké, Commerce en face du Capitol - Contact : 0747934176".

  1. Lighting and Finish:

The scene should have bright, clean studio lighting with soft shadows to highlight textures (fabric bag, skin, polished metal of devices). The depth of field should keep the subject and foreground boot sharp. All text on all objects and signs must be perfectly legible and exact.

End of Prompt.

r/AI_Agents NeitherPush6406

Open-sourced an agent operating model kit for long-running AI assistants

Most agent systems have prompts, tools, and memory, but no operating model.

I just open-sourced a small kit built around a different assumption: treat the agent like a micro AI company.

Core ideas: - token is budget - optimize value per spend, not just activity - no concrete output = not finished - no verification = not complete - repeated work should compound into reusable assets - lightweight KPI review should correct drift instead of creating dashboard theater

The repo is host-agnostic. It is meant to layer onto an existing assistant/runtime rather than replace its execution stack.

I’d love feedback from people building long-running assistants, agent workspaces, or digital twins: what governance loops are you finding actually matter in practice?

If useful, I can drop the GitHub link in the comments.

r/SideProject takeaguess17

Built a landing page for my AI automation side project. roast it

I’m launching a small AI automation service targeting small businesses. Just finished the landing page .

flowkill.com

Looking for honest feedback: Is the message clear? Would you trust this enough to fill in the form? What’s missing?

Still early stage, doing free case studies for the first few clients.

r/SideProject Exact_Pen_8973

A lawyer just got suspended because his AI fabricated 57 citations. Here is how to not get fired using AI.

In February 2026, a Nebraska attorney submitted a Supreme Court brief drafted by an AI. He didn't double-check it.

The judges stopped him 37 seconds into oral arguments. Why? Because 57 out of 63 citations were completely made up. The AI invented case names, court dates, and quotes from judges who never said those words.

He was indefinitely suspended, and his client now owes $52,000 in opposing fees.

The Problem: LLMs are pattern-completion machines, not databases. They don't just "guess wrong." If you ask for a legal case, a statistic, or a reference, they confidently generate a statistically likely fake fact that looks 100% real.

The 4-Step Verification Workflow: If you use AI for work, reports, or research, you need this habit:

  1. Treat facts as guilty until proven innocent: Mentally flag every name, date, statistic, or quote. If it sounds like a hard fact, assume it's a hallucination until you verify it.
  2. Find the primary source: Never use AI to verify AI. Find the actual study, official document, or case PDF yourself.
  3. Use grounded tools: Ditch standard, offline AI for research. Use Perplexity AI, Claude (with web search), or Gemini (with search) so you get inline citations. Always click the links to check them.
  4. Prompt for uncertainty: AI won't admit when it's guessing. Force it to by adding this to your prompt: "For every specific fact, case, or statistic you include, mark it with [VERIFY] so I know to check it independently."

The Bottom Line: AI is the fastest first-draft generator in history, but it will confidently lie to you. The tool did exactly what it was designed to do (generate plausible text). The failure was a human treating a zero-verification workflow as acceptable.

The AI doesn't get fired or lose its license. You do.

(Full story and breakdown:MindWiredAI)

r/SideProject obolli

I am building a free data driven AI tool that tailors your CV with you for any job - please try to help me improve it

Hi everyone, this is Oli. I built a free AI tool that helps you tailor your resume to any job based on my job data.

You might have seen WhoHasJobs before.

I’ve now updated the system so you can tailor your resume to any job.

Originally, WhoHasJobs was a custom job search for you. That wasn’t scalable once I had 900 sites to scrape daily.

So instead, I kept the scrapers but slowed them down. Important to know if you look for jobs: they update round robin and it takes about 4-6 days for one full trip.

The ATS optimizations for systems like Greenhouse and Lever came in useful, and I built something new on top of that and the data that I have now.

It would be amazing if you try it. https://whohasjobs.com/jobhq

If you’re looking for a job, or even if you’re just interested in testing it, you can create an account completely for free. There is no paid version yet. I’m mostly trying to get feedback and improve the system.

What you can do with it

Upload your resume

You can upload your resume, and the AI in the background extracts your skills, experiences, education, projects, etc. into structured fields.

You can edit those fields yourself.

Build a “career memory”

The idea is that you build one giant resume / career memory that contains all of the relevant information that could be useful for any tech job, or really any job.

This can include your work experience, education, projects, hobbies, skills, certifications, and anything else that might be relevant.

Edit, add, and enrich each experience

You can edit, add, and enrich each experience, project, education entry, or even hobby based on current and past tech job data.

For example, if you worked at company ABC and your resume description is very brief, you can refine it with AI.

The AI will try to ask you questions based on what we know from tens of thousands of tech jobs, what skills are currently looked for, and what has been looked for in the past.

Find relevant skills you may have forgotten to mention

The AI will try to see if you might have relevant skills or experiences that are looked for in certain jobs, and then ask you about them.

It does this by narrowing down which past projects, jobs, education, or other experiences could fit that profile.

For example, if it looks like you may have worked with Kubernetes because you mention containerization, microservices, deployments, etc., but you didn’t actually write Kubernetes in your resume, it might ask you about it.

If you did use it, you can add it. If you didn’t, you don’t.

It does not hallucinate things into your resume

It asks you.

That is important to me. The goal is not to invent experience. The goal is to help you remember and structure the experience you actually have.

Upload multiple resume versions

If you previously tailored your resume for different jobs, you can upload those versions too.

It will continuously merge and enrich your career memory with them.

It should not overwrite anything. If there is a conflict, it should show you the difference and ask you. Otherwise, it just keeps adding more useful information.

Sometimes there might be conflicts, and then you can add things manually. Please let me know if you run into that.

Tailor your resume to a specific job

Once your career memory is built, you can tailor your resume to a specific job description.

The goal is not to stuff the resume full of keywords.

The goal is to make sure the resume parses well, matches the job where it honestly should match, and then stands out to a human recruiter.

Make the resume easier for ATS systems and recruiters

Because WhoHasJobs started as a job scraper and worked with ATS systems, I was able to understand how a lot of those systems work.

They are actually pretty straightforward. Most of them are also fairly transparent.

In the end, you mostly want the system to parse and categorize your information, keywords, experiences, education, etc. as clearly and easily as possible. That is basically what these systems do.

The important part is how recruiters use them.

From what I’ve seen, recruiters mostly sort, filter, and rank candidates. There is some pre-sorting, and there are filters. So the more clearly your relevant information passes through, the easier it is for the recruiter to find it.

So it optimizes the structure and the keywords for ATS parsing, but it also tries to make the headers, taglines, and resume sections work for a recruiter who is actually reading it.

Why I’m asking people to try it

I’ve been able to use it myself, and a friend of mine has used it too. We got interviews. But we’re not my ideal target users I think, so I’d really like to get feedback from a broader range of people, especially people who are tired of applying and tailoring their resume again and again.

You can see in the video how this broadly works. There are lots of features that I do not show for brevity though.

I’ve looked at best practices, what Greenhouse and other ATS systems show publicly, how recruiters see applications, how they rank and filter, etc. This should work fairly well, but there is always a lot more to do.

Credits

Everyone gets 100 free credits.

One credit is used for one resume tailor, which is actually a couple dozen GPT requests in the background. Uploading a resume or editing is also usually one credit each time. So you can already do quite a lot with the 100 free credits.

If you run out of credits, please let me know. There is no paid version right now. Just tell me how you used it, what you used it for, what worked, what didn’t work, and how I can make it better. I’ll very happily reset your credits so you can try more.

This is still very early and very much a work in progress. I just think it’s useful, and I’d really appreciate it if people could try it and let me know what they think.

Thank you.

r/LocalLLM petburiraja

OpenAI restricts GPT-5.5 Cyber access after UK study finds real vulnerabilities

OpenAI restricted GPT-5.5 Cyber to vetted defenders one day after publicly criticizing Anthropic for the same move with Claude Mythos.

The UK AI Security Institute's evaluation found GPT-5.5 can locate real vulnerabilities, which is the empirical reason these restrictions exist, not competitive posturing. Both labs arrived at identical conclusions through different routes, and the convergence matters more than either company's public framing. TechCrunch's account of the reversal makes the contradiction impossible to spin away.

The containment instinct is spreading beyond labs. The Zig project published a detailed rationale for banning all LLM-assisted contributions, and maintainer Andrew Kelley states flatly that LLM-generated PRs are detectable by their characteristic failure modes. This is the clearest articulation yet of why serious open-source projects are drawing hard lines, and it arrives exactly as Codex CLI 0.128.0 ships a /goal command enabling persistent agentic loops that run until self-evaluated completion. The labs are pushing autonomous coding agents outward while the projects those agents would contribute to are locking the door.

Hardware is being purpose-built for the world the labs are selling. Qualcomm announced a dedicated CPU for agentic workloads and disclosed a custom chip for an unnamed hyperscaler, which means silicon design cycles are now tracking agent architectures, not just transformer training. Apple's supply chain was caught off-guard by AI-driven Mac demand, with Mac mini, Studio, and Neo all constrained, suggesting on-device inference is pulling hardware faster than anyone's forecast model predicted.

Anthropic's $900B valuation round could close within two weeks, with allocations solicited inside 48 hours. Legora hitting $5.6B while running dueling ad campaigns against Harvey shows the capital is now cascading into verticals. Atlassian, Twilio, and Five9 all beat earnings citing AI adoption as the primary driver, which is the first quarter where broad enterprise AI spend shows up cleanly in financials rather than as a narrative.

Two pieces of foundational science cut against the current modeling assumptions in ways that will take time to land. Lilian Weng's "Why We Think" synthesizes the theoretical basis for test-time compute and chain-of-thought, likely becoming the canonical reference for reasoning model design. Separately, Quanta's coverage of a novel synaptic plasticity mechanism enabling learning from a single experience challenges the Hebbian assumptions baked into most neural network theory, and ultrafinitism research rejecting infinite sets is producing computational insights with direct implications for finite-precision arithmetic in deployed models.

Applied Intuition's physical AI work across mining, drones, and warships is where the simulation-to-real gap stops being theoretical, and the inference inflection analysis frames why inference cost and architecture are now the binding engineering constraint, not training scale.

Within 90 days, at least one other frontier lab will announce access restrictions on a specialized capability model using the same responsible deployment framing OpenAI used to justify reversing its own public criticism of Anthropic.

r/artificial Opening-Name-5270

Newbie AI question

TBH I don't know if our current "AI" models are capable of thinking. There is a massive pattern i'm noticing when using AI and have been for the past couple years, AI follows a strict pattern and doesn't seem to think. Just like calculators it already has a designated answer regardless of the question its just a bit more advanced. Hence why it lies to many users.
Or it could be that there are so many rules on the intelligence model that it is constantly bouncing off of walls to give you an already programmed answer to not break these rules.
Im not sure about either.
I'd much rather call AI as of rn "engineered intelligence", not artificial, since its still learning from us engineers, and it will eventually adapt into intelligence. ( This is under the assumption that it can truly freely think )
Does anyone know if these models like Gemini, Chatgpt, Claude, actually "think"

r/ClaudeAI chrisdasp

Base de donnée GitHub & Claude Pro

Bonjour à tous et à toutes !

Je suis un nouvel addict à l'IA et Claude me plait énormément. Après avoir chipoté avec plusieurs modèles d'IA différents et tester l'intégration d'une base de données sur GitHub, je suis en train d'essayer de créer une image contextuelle de ma vie sur différents aspects, principalement sur l'aspect patrimonial.

Mon idée est de créer une structure d'archivage bien organisée, séparant mon foyer avec le compte commun, mes deux entreprises avec leur dossier respectif, et ma sci pour l'immobilier. J'ai donné l'instruction à Claude d'être un archiviste, qui doit lire certains fichiers spécifiques à chaque prompt pour définir son rôle, règles et autres paramètres, et en gros il doit sur base de fichiers que je lui donne (extraits de compte, bilan, etc) extraire les données de ces documents et les encoder dans les bons fichiers .md localisés dans des dossiers de ma structure sur mon repo privé GitHub, afin de créer une mémoire contextuelle définitive de ma vie et mon environnement.

Dans le futur, je pourrai créer d'autres assistants spécialisés qui pourront se reposer sur cette base de donnée pour répondre à des questions spécifiques (optimisation fiscale de mon patrimoine privé, gestion de stock de mes entreprises, etc...)

Claude Code à consommé 22k tokens pour lire analyser et encoder les extraits de compte de mon foyer de 2022. C'est juste énorme.

Auriez-vous des conseils sur comment procéder pour améliorer mon utilisation de Claude ?

Merci d'avance pour vos conseils et retours 🙏🏽🔥

r/AI_Agents Infinite_Mine_9388

AI agencies scam ?

There is word AI agents everywhere. Each company should use it. Then you search for ai agents agencies that should provide that and you cannot find legit case studie. Even fkin chatbot which is primitive. Best bang is when that agencies which is selling AI automations and AI agents does not have even AI chatbot on their website and for contact use the form. I I am asking why ? Why there is prediction of 1 trilion market in ai agents replacing all tasks and roles, but it is fckin impossible to find evidence that it is working for customers of that agencies.

r/LocalLLM SenaChampe

Local LLM Machine

I am considering a new machine to run local LLMs.
The scope of what I can do with my current machine specs is limited, and because it is a mini-ITX case, it cannot be expanded.

Current Configuration
CPU
i5-12600K
GPU
5060ti 16GB
RAM
DDR5 5200Mhz 32GB
SSD
512MB + 256MB

Which of the following options should I choose?

Intended Uses
AI Agents (e.g., goose)
Requirements Definition, Code Generation, Code Analysis
Data Analysis
Health data (diet, exercise, body composition, blood pressure, sleep, etc.)
Financial asset data
RAG
In the future

Pattern 1
Revamp by moving the existing 5060ti into an ATX case
CPU
RYZEN 9700X
GPU1
RTX PRO 4500 Blackwell 32GB
GPU2
5060ti 16GB
RAM
DDR5 6000Mhz 64GB
SSD
512MB + 1TB
Cost: $4500

Pattern 2
Purchase the OEM version of DGX Spark for back-end operation
Use the current machine for front-end operation
Model
MSI EdgeXpert
Memory
128GB
SSD
1TB
Cost: $4100

Pattern 3
Purchase a Mac for back-end operation
Use the current machine for front-end operation
Model
Mac Studio M3 Ultra
CPU, GPU
28 core, 60 core
Memory
96GB
SSD
1TB
Cost: $4300

r/ClaudeAI hyspdrt-corr

Product Feedback: A "Docs" Tab for Claude Desktop

TL;DR

Claude Desktop's Code tab is excellent for developers, but the same underlying capability — Claude as a stateful, file-aware agent over a git-backed workspace — would unlock a much larger market if reframed for knowledge workers. A new Docs tab, sibling to Code, would let compliance, legal, ops, and policy teams work in markdown + mermaid with git underneath, without ever seeing a developer concept. This is a small product step on top of existing infrastructure with a large addressable audience that today has no good AI-native tool.

---

The Problem

Knowledge workers managing structured documents — security policies, BRDs, RFCs, runbooks, SOPs, audit evidence — are stuck choosing between:

  • Word/Google Docs: friendly UI, but opaque binary formats, weak diffs, painful bulk edits, and AI tools struggle to edit them cleanly.
  • Notion/Confluence: nice editing experience, but proprietary storage. Doesn't integrate with compliance platforms (Drata, Vanta, SecureFrame) that increasingly expect markdown-in-git as the source of truth.
  • VS Code + git + extensions: technically the right tool, but the UI is aggressively developer-branded. Compliance and legal staff bounce off it. Asking a SOC 2 program manager to learn git commit is a non-starter.

Teams adopting "docs-as-code" workflows (markdown + mermaid in a git repo, synced to Drata or similar) have no editor that matches their mental model. They're forced to either train non-developers on developer tools, or give up the audit/version-control benefits and stay on Word.

The Opportunity

Claude already has two capabilities that, combined, solve this:

  1. Best-in-class long-form writing — widely acknowledged advantage over competing models for policy, legal, and prose work.
  2. The Code tab's agent loop — stateful file editing, git operations, worktree isolation, MCP integrations. All already shipped and working.

A Docs tab would be the Code tab with three changes: a markdown-first editor with live mermaid preview, a vocabulary swap that hides git, and document-workflow features (review, approval, PDF export, compliance-platform integrations).

What Docs Tab Looks Like

Inherits from Code tab (no new infrastructure):

  • Repo-backed file editing
  • Claude agent loop with file read/write
  • Git operations under the hood
  • MCP integrations (Drata, Vanta, SharePoint connectors)

New for Docs:

  • Split-pane markdown editor + live preview, mermaid renders as you type
  • Vocabulary swap: Save (commit), Draft (branch), Send for Review (PR), Publish (merge), Workspace (repo), Document (file)
  • Hidden developer chrome: no terminal, no debug, no file extensions in the tree
  • Document templates: Policy, Procedure, BRD, RFC, Runbook, ADR, Meeting Notes
  • "Insert Diagram" button with Claude-generated mermaid starters
  • Review/approval UI for non-developers (GitHub PR review reskinned)
  • One-click PDF/DOCX export with version hash in footer (auditor evidence)
  • Native connectors for compliance platforms

Concrete Use Case

I work with a company that uses Drata for SOC 2 compliance. Drata has first-class support for markdown policies stored in git, with built-in renderers for auditors. We want to move our policies from .docx to .md + mermaid, stored in a git repo, synced to Drata.

The blocker is the editor. Our compliance and InfoSec teams won't adopt VS Code — it looks like a developer tool, the vocabulary is foreign, and the safety nets (discard changes, undo, restore) aren't where non-developers expect them. We'd happily pay for a Claude Desktop seat per compliance staffer if the Docs tab existed.

This is not a one-company problem. Every company running SOC 2, ISO 27001, HIPAA, PCI, or FedRAMP compliance has the same workflow gap. Drata, Vanta, and SecureFrame collectively serve tens of thousands of companies, and the trend toward docs-as-code is accelerating because auditors love the version history.

Why Anthropic Specifically

  • Differentiation from ChatGPT Desktop: Claude's writing quality is the moat. ChatGPT's file/repo workflow is weaker. A Docs tab plays to both Claude's strengths and the Desktop app's strengths.
  • Broadens the commercial base: today, Claude Desktop is sold to developers. Docs tab opens compliance, legal, ops, consultancies, law firms, healthcare, financial services — segments willing to pay enterprise prices for audit-grade tooling.
  • Reuses existing infrastructure: this is a UI/UX layer on top of Code tab's agent loop. Not a from-scratch product.
  • Underserved market: no major AI vendor has a polished docs-as-code editor. The window is open now and won't be open in three years.

Ask

Consider a Docs tab on the Claude Desktop roadmap. I'm happy to share more detail on the compliance workflow, beta-test, or connect you with the InfoSec and compliance leaders at the companies I work with — they would be vocal early adopters.

r/ClaudeAI raiansar

Finally Claude Code has started respecting CLAUDE.md

For the past 15 days I have noticed that Claude Code follows my instructions as it is from CLAUDE.md regarding any action which is specified in the file. Which is a huge improvement and while some people would disagree but I would rather use Claude Code with a project focused file instead of using 15 separate tabs and beg it to act right.

My main concern was avoiding pushing everything to Main before beta has been tested and trialed, which claude would rarely follow but now it never does that.

r/Futurology Odd_Row1657

Can Huawei Replace NVIDIA in China vs America Race?

Huawei is on track to pull in around 12 billion dollars from AI chips this year. These figures are up 60% from last year. Pretty impressive.

Their new Ascend 950PR chip is getting huge orders from companies like ByteDance (who dropped over 5 billion and established TikTok), Alibaba, Tencent, etc. Especially after DeepSeek released V4 optimized for Huawei hardware.

China is already building big 10,000 card AI clusters running completely on domestic chips. Meanwhile Nvidias revenue in China is falling because they cant sell their best stuff there anymore.

US export bans, which were supposed to slow China down, actually forced them to speed up their own tech and now Huawei is becoming the go to option inside China.

This has driven a lot of discussion on how this could effect AI and China and America's roles long term.

A few different ways I could see it going:

  1. Huawei keeps improving fast and actually becomes a real global competitor to Nvidia over the next 3-5 years.
  2. They dominate China but still lag behind outside it because of the software ecosystem and raw performance gap. Nvidia stays king internationally.
  3. The whole thing pushes both sides to innovate harder and we end up with faster progress overall (kind of like the space race).
  4. Or it just leads to a fully split world with two separate AI tech stacks, which could get messy for everyone.

Either way.... t's wild to watch.

A few years ago everyone assumed China would stay behind for a decade. Now it feels like the gap is closing faster than expected, at least inside their own market. I'm not sure if this ends up weakening the US tech edge or just makes the whole AI race more competitive and fragmented. Definitely something worth keeping an eye on.

r/LocalLLaMA evijit

Blog: AI evals are becoming the new compute bottleneck

Hi! I wanted to share my new blog on the costs of running AI Evals. We dig into how benchmarking frontier systems now routinely costs tens of thousands of dollars per run, why agent evals are especially unpredictable, and what that concentration of validation authority means for the broader research community.

r/artificial CodNo2235

Is an AI SDR replacing “entry-level jobs” a feature or a bug?

Sat through a demo this week for one of these AI SDR tools and the pitch was in a nutshell: you don’t need junior sales reps anymore. (As in not even train them anymore just remove them.) To my surprise it worked. The tool was doing outbound, follow-ups, personalization, all the stuff junior SDRs grind through. Faster, cleaner, no complaints! But it did leave me feeling uneasy. That grindy, repetitive work is literally how most people get into sales. It’s where you learn how people respond, how messaging gets through, how to deal with rejection without taking it personally. That's how I got into it at least. So if AI wipes that layer out completely, what’s the path in? Are we just skipping straight to “hire experienced closers” and hoping they came from… where exactly? I’m not anti-AI (this stuff is obviously useful), but replacing enty-level humans as the first step in the process doesn't feel like a sustainable route.

r/Anthropic fortnitekneegrowball

My Max 5x plan disappeared

I have been legit using max plan for about 6 months and then i was gifted for one month the max 5x plan. and before 10 days of the expiry of the subscription it just disappeared. like i was rolled back to free plan. not cool anthropic. its been 2nd day and the support has not responded since.

did anybody had this problem ? if there is support here, please pm me, will share details

r/comfyui Trick_Appearance_377

best fast local video generator

I was looking for the best model in the last few months to generate videos quickly, video quality is fine even 720, I'm interested in speed, and a workflow, I have a 4070ti, thanks everyone

r/StableDiffusion Easy-Worker-1826

I just burned 30,000 credits on “UNLIMITED” image generation in one day and I’m actually speechless

I need someone to explain to me how this is even remotely okay because right now I feel like I just got finesse’d in 4K.

So I’m on the Premium+ plan. $45/month. Cool. Whatever. I even doubled down at one point because I actually like the platform.

They advertise “unlimited generation.”

UNLIMITED.

So I’m thinking:

Alright, bet. I can finally iterate properly without watching a credit meter like it’s my bank account.

Fast forward to yesterday.

I’m working on a sports car hybrid concept. Nothing crazy:

* Generating one image at a time

* No batch spam

* Probably around 80–100 images total

I even went out of my way to be careful because I’ve been burned before on other platforms.

Tell me why I check my account today…

and my credits went from ~35,000 → 7,000.

???

That’s nearly 30,000 credits gone overnight.

Here’s the best part:

I go to billing thinking maybe I got hit with some stealth charges.

Nope.

* No crazy purchases

* No auto-refill spam

* Just my normal subscription + a small credit pack

So that means…

The credits were being drained WHILE using “unlimited.”

I’m sorry but in what fucking universe does:

“Unlimited generation”

mean:

“Actually it’s unlimited unless you accidentally breathe wrong and trigger 300 credits per image behind the scenes”

???

And before someone says “you probably used upscale/enhance/etc”

I didn’t. Not once.

Why am I not seeing:

* “⚠️ This will cost credits”

* A running total

* ANYTHING??

Instead it’s just quietly burning through tens of thousands like it’s a background process.

Let’s do the math:

~100 images

~30,000 credits gone

That’s ~300 credits per image

ON “UNLIMITED”

Make that make sense.

There literally just hustling me lol

There’s a difference between:

* Paying for premium tools

and

* Getting nickel-and-dimed by invisible mechanics

At this point I don’t even trust the meter.

Like what else is being charged that I don’t see?

Is “unlimited” just a suggestion?

I already contacted support because there’s no way this is normal and idk if I’ll even hear back.

But I’m posting this here because:

  1. If you’re using this platform — CHECK YOUR CREDITS

  2. Don’t assume “unlimited” means what it sounds like

  3. And for the love of God, watch your settings

If anyone actually understands how their credit system works in plain English… please explain it to me like I’m 5

because clearly I’ve been living in a different reality.

r/Rag Koaskdoaksd

RAG pipeline returns correct answers but wrong page citations and occasional hallucinations (LangGraph + cross-encoder)

I built a RAG pipeline using LangGraph with the following flow:

rewrite → hybrid retrieve → cross-encoder rerank → parent expansion → grounded generation

The system enforces strict grounding (returns a fallback message if no relevant context is found) and requires inline citations like:

[file.pdf, p. 123]

Problem

Even though retrieval and reranking seem to work well, I’m facing several issues:

  1. Wrong page citations
    • The model often uses the correct information but cites the wrong page.
    • Example: answer says [file.pdf, p. 71] but the UI shows a completely different page.
  2. Mismatch between cited pages and rendered sources
    • The sources shown in the UI don’t match the pages referenced in the answer.
  3. Occasional hallucinations / degeneration
    • The model sometimes starts repeating a word until the end of the response.

Current setup (simplified)

  • Hybrid retrieval (vector + keyword)
  • Cross-encoder reranking (ms-marco style)
  • Parent-child document structure
  • Context built from parent documents, but citations come from child chunks
  • Strict prompting: “use only context or return NOT_FOUND”

Question

What are best practices to:

  1. Ensure correct and stable citations (no wrong page numbers)?
  2. Avoid mismatch between generated citations and UI-rendered documents?
  3. Reduce hallucinations and repetition loops in grounded RAG systems?

I’ve included my full rag_graph.py below.

Any architectural or practical suggestions are appreciated.

""" RAG pipeline LangGraph. Pipeline: rewrite → retrieve (hybrid) → rerank (cross-encoder) → expand_to_parents → generate (grounded) """ from __future__ import annotations import logging import re from typing import Optional, TypedDict, Any from langchain_core.documents import Document from langchain_core.messages import HumanMessage, SystemMessage from langchain_ollama import ChatOllama from langgraph.graph import StateGraph, END from config import LLM_MODEL, OLLAMA_BASE_URL from modules.vector_store import NotebookVectorStore from modules.parent_store import ParentStore logger = logging.getLogger(__name__) NOT_FOUND_MSG = "Túto informáciu som v nahraných dokumentoch nenašiel." # ── Parametre pipeline ─────────────────────────────────────────────────────── RERANKER_MODEL = "cross-encoder/mmarco-mMiniLMv2-L12-H384-v1" INITIAL_K = 40 # hybrid retrieval RERANK_KEEP_K = 10 # top candidates MAX_CONTEXT_CHARS = 9000 # MAX_PARENTS = 6 # top limit of parents in kontexte MIN_RERANK_SCORE = -4 # ── Reranker singleton ─────────────────────────────────────────────────────── _RERANKER = None def get_reranker(): global _RERANKER if _RERANKER is None: from sentence_transformers import CrossEncoder try: import torch device = "cuda" if torch.cuda.is_available() else "cpu" except Exception: device = "cpu" logger.info(f"Načítavam reranker: {RERANKER_MODEL} na {device}") _RERANKER = CrossEncoder(RERANKER_MODEL, device=device, max_length=512) return _RERANKER # ── Deiktiká pre query rewriting ───────────────────────────────────────────── _DEICTIC_PATTERNS = [ r"\ba (čo|aký|aká|ako|kedy|prečo|potom|ďalej|ten|tá|to|teda)\b", r"\b(ten|tá|to|tie|toto|túto|tomto|týmto) ", r"\b(vysvetli|rozveď|podrobnejšie|viac|ešte)\b", r"\b(predchádzajúc|predošl|prvý|druhý|tretí|ďalší|ďalšia)\b", ] _DEICTIC_RE = re.compile("|".join(_DEICTIC_PATTERNS), re.IGNORECASE) def _needs_rewrite(question: str) -> bool: q = question.strip() if len(q.split()) < 4: return True return bool(_DEICTIC_RE.search(q)) # ╔══════════════════════════════════════════════════════════════════════════╗ # ║ RAGState ║ # ╚══════════════════════════════════════════════════════════════════════════╝ class RAGState(TypedDict, total=False): question: str chat_history: list[dict] standalone_question: str retrieved: list[tuple[Document, float]] reranked: list[tuple[Document, float]] context_docs: list[Document] context_text: str answer: str source_docs: list[Document] retrieval_debug: dict # ╔══════════════════════════════════════════════════════════════════════════╗ # ║ RAGGraph ║ # ╚══════════════════════════════════════════════════════════════════════════╝ class RAGGraph: """Hlavná RAG trieda — LangGraph pipeline s parent/child retrievalom.""" def __init__(self, vector_store: NotebookVectorStore, parent_store: ParentStore): self.vs = vector_store self.ps = parent_store # Hlavný generátor: nízka teplota pre faktualitu self.llm = ChatOllama( model=LLM_MODEL, base_url=OLLAMA_BASE_URL, temperature=0.1, num_predict=1024, num_ctx=8192, ) # Rýchly LLM pre rewrite (kratšie výstupy) self.rewriter_llm = ChatOllama( model=LLM_MODEL, base_url=OLLAMA_BASE_URL, temperature=0.0, num_predict=150, num_ctx=2048, ) self.graph = self._build_graph() # ─── Build graph ───────────────────────────────────────────────────────── def _build_graph(self): g = StateGraph(RAGState) g.add_node("rewrite", self._rewrite_node) g.add_node("retrieve", self._retrieve_node) g.add_node("rerank", self._rerank_node) g.add_node("expand", self._expand_node) g.add_node("generate", self._generate_node) g.set_entry_point("rewrite") g.add_edge("rewrite", "retrieve") g.add_conditional_edges( "retrieve", lambda s: "empty" if not s.get("retrieved") else "ok", {"empty": END, "ok": "rerank"}, ) g.add_conditional_edges( "rerank", lambda s: "empty" if not s.get("reranked") else "ok", {"empty": END, "ok": "expand"}, ) g.add_edge("expand", "generate") g.add_edge("generate", END) return g.compile() # ─── Node: rewrite ─────────────────────────────────────────────────────── def _rewrite_node(self, state: RAGState) -> dict: question = state["question"] history = state.get("chat_history") or [] # Bez histórie alebo otázka je zjavne samostatná → skip if not history or not _needs_rewrite(question): return {"standalone_question": question} # Posledné 4 správy ako kontext recent = history[-4:] convo = "\n".join( f"{'Študent' if m.get('role') == 'user' else 'Asistent'}: {m.get('content','')}" for m in recent ) prompt = ( "Daná je konverzácia a posledná otázka študenta. Ak otázka odkazuje na " "predchádzajúci kontext (napr. 'a čo to druhé?', 'vysvetli to'), prepíš ju " "ako samostatnú, úplnú otázku v slovenčine. Ak je už samostatná, vráť ju nezmenenú.\n" "VRÁŤ IBA prepísanú otázku. Žiadne úvody, žiadne vysvetlenia, žiadne úvodzovky.\n\n" f"KONVERZÁCIA:\n{convo}\n\n" f"POSLEDNÁ OTÁZKA: {question}\n\n" "SAMOSTATNÁ OTÁZKA:" ) try: resp = self.rewriter_llm.invoke([HumanMessage(content=prompt)]) rewritten = resp.content.strip().strip('"').strip("'").strip() # Odstráň prípadný prefix typu "Samostatná otázka: ..." rewritten = re.sub(r"^(samostatn[aá]?\s*ot[áa]zka[:\-]?\s*)", "", rewritten, flags=re.I) if 5 < len(rewritten) < 400: logger.info(f"Rewrite: {question!r} → {rewritten!r}") return {"standalone_question": rewritten} except Exception as e: logger.warning(f"Rewrite zlyhal: {e}") return {"standalone_question": question} # ─── Node: hybrid retrieve ─────────────────────────────────────────────── def _retrieve_node(self, state: RAGState) -> dict: query = state.get("standalone_question") or state["question"] if not self.vs.has_documents(): logger.info("Retrieve: vector store je prázdny.") return { "retrieved": [], "answer": NOT_FOUND_MSG, "source_docs": [], "retrieval_debug": {"query": query, "note": "prázdny index"}, } results = self.vs.hybrid_search(query, k=INITIAL_K) logger.info(f"Retrieve: {len(results)} kandidátov pre {query!r}") if not results: return { "retrieved": [], "answer": NOT_FOUND_MSG, "source_docs": [], "retrieval_debug": {"query": query, "note": "hybrid search 0 výsledkov"}, } return {"retrieved": results} # ─── Node: rerank ──────────────────────────────────────────────────────── def _rerank_node(self, state: RAGState) -> dict: query = state.get("standalone_question") or state["question"] results = state.get("retrieved", []) if not results: return {"reranked": [], "answer": NOT_FOUND_MSG, "source_docs": []} reranker = get_reranker() docs = [doc for doc, _ in results] pairs = [(query, d.page_content) for d in docs] try: scores = reranker.predict(pairs, show_progress_bar=False, batch_size=16) scores = [float(s) for s in scores] except Exception as e: logger.error(f"Reranker zlyhal: {e}") # Fallback — hybrid skóre scores = [float(s) for _, s in results] scored = list(zip(docs, scores)) scored.sort(key=lambda x: x[1], reverse=True) # Filter slabých kandidátov kept = [(d, s) for d, s in scored[:RERANK_KEEP_K] if s > MIN_RERANK_SCORE] top_raw = [round(s, 3) for _, s in scored[:5]] logger.info(f"Rerank: kept={len(kept)} / {len(scored)}; top_raw={top_raw}") if not kept: return { "reranked": [], "answer": NOT_FOUND_MSG, "source_docs": [], "retrieval_debug": { "query": query, "note": f"žiadny kandidát nad prahom {MIN_RERANK_SCORE}", "top_raw_scores": top_raw, }, } return { "reranked": kept, "retrieval_debug": { "query": query, "initial_retrieved": len(results), "after_rerank": len(kept), "top_scores": [round(s, 3) for _, s in kept], }, } # ─── Node: parent expansion ────────────────────────────────────────────── def _expand_node(self, state: RAGState) -> dict: reranked = state.get("reranked", []) if not reranked: return {"context_docs": [], "context_text": "", "source_docs": []} # 1) Pokús sa rozšíriť na parentov (ak ParentStore ponúka `get`) parent_order: list[str] = [] seen: set[str] = set() for doc, _ in reranked: pid = doc.metadata.get("parent_id") if pid and pid not in seen: seen.add(pid) parent_order.append(pid) parents: list[Document] = [] for pid in parent_order[:MAX_PARENTS]: p = self._fetch_parent(pid) if p is not None: parents.append(p) # 2) Ak parents nie sú dostupné, použi rerankované child chunky context_docs = parents if parents else [d for d, _ in reranked[:RERANK_KEEP_K]] # 3) Rozpočet znakov limited: list[Document] = [] total = 0 for d in context_docs: L = len(d.page_content) if limited and total + L > MAX_CONTEXT_CHARS: break limited.append(d) total += L # 4) source_docs pre UI = child chunky (majú presné čísla strán + images) source_docs = [d for d, _ in reranked[:RERANK_KEEP_K]] context_text = self._format_context(limited) logger.info(f"Kontext: {len(limited)} docs, ~{total} znakov, " f"{'parenti' if parents else 'childovia'}") return { "context_docs": limited, "context_text": context_text, "source_docs": source_docs, } def _fetch_parent(self, parent_id: str) -> Optional[Document]: """Robustne skúsi rôzne rozhrania ParentStore.""" if not parent_id or self.ps is None: return None # Skúsi `get`, `fetch`, `mget`, `__getitem__` for method_name in ("get", "fetch"): fn = getattr(self.ps, method_name, None) if callable(fn): try: r = fn(parent_id) if isinstance(r, Document): return r if isinstance(r, list) and r and isinstance(r[0], Document): return r[0] except Exception: continue # mget (langchain storage interface) mget = getattr(self.ps, "mget", None) if callable(mget): try: rs = mget([parent_id]) if rs and rs[0] is not None: r = rs[0] return r if isinstance(r, Document) else None except Exception: pass return None # ─── Node: generate ────────────────────────────────────────────────────── def _generate_node(self, state: RAGState) -> dict: context_docs = state.get("context_docs", []) context = state.get("context_text", "") q_orig = state["question"] q_std = state.get("standalone_question") or q_orig if not context.strip(): return {"answer": NOT_FOUND_MSG, "source_docs": []} # Zoznam reálnych súborov, ktoré sú teraz v kontexte # → dáme ich modelu explicitne, aby vedel, že INÉ súbory neexistujú available_sources = sorted({ d.metadata.get("source", "") for d in context_docs if d.metadata.get("source") }) system = self._system_prompt(available_sources) user = self._user_prompt(q_std, context) try: resp = self.llm.invoke([ SystemMessage(content=system), HumanMessage(content=user), ]) answer = resp.content.strip() except Exception as e: logger.error(f"LLM zlyhal: {e}") return {"answer": f"⚠️ Chyba pri generovaní: {e}", "source_docs": []} if self._looks_like_refusal(answer): logger.info("Model sám priznal neznalosť → NOT_FOUND_MSG") return {"answer": NOT_FOUND_MSG, "source_docs": []} cited_sources = self._filter_cited_sources(answer, state.get("source_docs", [])) return {"answer": answer, "source_docs": cited_sources} # ─── Prompty ───────────────────────────────────────────────────────────── u/staticmethod def _system_prompt(available_sources: list[str]) -> str: # Vytvor explicitný zoznam dostupných zdrojov if available_sources: src_list = "\n".join(f" • {s}" for s in available_sources) src_block = ( f"DOSTUPNÉ ZDROJE (existujú IBA tieto súbory — žiadne iné):\n{src_list}\n\n" ) else: src_block = "" return ( "Si študijný asistent pre vysokoškolských študentov. Odpovedáš VÝHRADNE " "na základe zdrojov poskytnutých v sekcii KONTEXT. Si vecný, presný a pedagogický.\n\n" f"{src_block}" "━━━━━━━━━━━━━━ PRAVIDLÁ (DODRŽIAVAJ PRÍSNE) ━━━━━━━━━━━━━━\n" "1. Používaj IBA informácie z KONTEXTU. NIKDY nedopĺňaj vlastné znalosti.\n" f"2. Ak odpoveď v KONTEXTE NIE JE, vráť PRESNE: \"{NOT_FOUND_MSG}\"\n" "3. CITÁCIE — KRITICKY DÔLEŽITÉ:\n" " • Cituj PRESNE v hranatých zátvorkách s NÁZVOM SÚBORU a číslom strany:\n" " [názov_súboru.pdf, s. 282]\n" " • Názov súboru musí byť PRESNE ten zo zoznamu DOSTUPNÝCH ZDROJOV.\n" " • NIKDY nepoužívaj čísla zdrojov ako [1, s. X], [2, s. X], [3, s. X].\n" " • NIKDY nevymýšľaj súbory, ktoré nie sú v zozname vyššie.\n" " • Každé faktografické tvrdenie má mať citáciu priamo za vetou.\n" "4. MATEMATIKU PÍŠ V LATEXu:\n" " • inline: $x^2 + y^2 = r^2$\n" " • samostatne: $$\\sigma^2 = \\frac{1}{n-1}\\sum_{i=1}^{n}(x_i - \\bar{x})^2$$\n" " • NIKDY nepíš prázdne $$ $$ alebo samostatné ť/kódy — ak vzorec nemáš, vynechaj ho.\n" "5. Odpovedaj v SLOVENČINE. Odborné EN termíny v zátvorke: replikácia (replication).\n" "6. Ak sú zdroje protichodné, uveď oba pohľady s citáciami.\n" "7. Žiadne frázy 'všeobecne', 'typicky', 'zvyčajne', pokiaľ to nie je v KONTEXTE." ) def _user_prompt(self, question: str, context: str) -> str: return ( "KONTEXT — JEDINÝ zdroj, z ktorého smieš čerpať (každý úryvok má svoj názov súboru a stranu):\n" "═══════════════════════════════════════════════\n" f"{context}\n" "═══════════════════════════════════════════════\n\n" f"OTÁZKA ŠTUDENTA: {question}\n\n" "Odpoveď v slovenčine s citáciami presne podľa vzoru [súbor.pdf, s. X] " "a LaTeX vzorcami. Cituj iba reálne názvy súborov z KONTEXTU:" ) u/staticmethod def _format_context(docs: list[Document]) -> str: """ Formát: namiesto ZDROJ [N] sa priamo uvedie [názov_súboru, s. X]. LLM si to len presne skopíruje do odpovede — nevymyslí čísla zdrojov. """ blocks = [] for d in docs: src = d.metadata.get("source", "neznámy_zdroj") page = d.metadata.get("page", "?") blocks.append( f"━━━ [{src}, s. {page}] ━━━\n" f"{d.page_content.strip()}" ) return "\n\n".join(blocks) # ─── Post-processing helpers ───────────────────────────────────────────── u/staticmethod def _looks_like_refusal(answer: str) -> bool: """Detekcia, keď model namiesto NOT_FOUND_MSG píše voľné odmietnutie.""" if NOT_FOUND_MSG in answer: return False # už je to správna forma low = answer.lower() triggers = [ "nie je uvedené v dokumentoch", "v dokumentoch som nenašiel", "v zdrojoch nie je", "v kontexte sa nenachádza", "nemám k dispozícii informácie", "v poskytnutých zdrojoch nie", "nenašiel som informáciu", ] # Iba ak je to krátka odpoveď a obsahuje trigger return len(answer) < 300 and any(t in low for t in triggers) u/staticmethod def _filter_cited_sources(answer: str, source_docs: list[Document]) -> list[Document]: """ Z kandidátov na zdroje nechaj IBA tie, ktoré model skutočne citoval v odpovedi. Tak bude pravý panel zobrazovať presne tie strany, ktoré figurovali v texte. """ if not source_docs: return [] # [súbor.pdf, s. 3] | [súbor, strana 3] | [súbor.pdf, p. 3] pat = re.compile( r"\[([^\[\]\n]+?)[,;]\s*(?:s\.?|str\.?|strana|strane|page|p\.?)\s*(\d+)\s*\]", re.IGNORECASE, ) cited: set[tuple[str, int]] = set() for m in pat.finditer(answer): src = m.group(1).strip().lower() page = int(m.group(2)) cited.add((src, page)) if not cited: # Model necitoval v štandardnom formáte — vráť všetko, nech má študent čo overovať return source_docs kept: list[Document] = [] seen: set[tuple[str, int]] = set() for d in source_docs: d_src = (d.metadata.get("source") or "").lower() d_page = int(d.metadata.get("page") or 0) key = (d_src, d_page) if key in seen: continue # Fuzzy match: dovoľ aj bez extension-u a substring hit = False for c_src, c_page in cited: if c_page != d_page: continue if c_src == d_src or c_src in d_src or d_src in c_src: hit = True break if hit: seen.add(key) kept.append(d) return kept if kept else source_docs # ─── Public API ────────────────────────────────────────────────────────── def query( self, question: str, chat_history: Optional[list[dict]] = None, ) -> tuple[str, list[Document], dict]: """ Spusti RAG pipeline. Returns: (answer, source_docs, retrieval_debug) - answer: slovenská odpoveď s [citáciami] a LaTeXom - source_docs: iba dokumenty reálne citované v odpovedi (pre UI panel) - retrieval_debug: dict s info o retrievale (top_scores, counts) """ init_state: RAGState = { "question": question, "chat_history": chat_history or [], } try: final = self.graph.invoke(init_state) except Exception as e: logger.error(f"RAG graph pipeline zlyhal: {e}", exc_info=True) return f"⚠️ Chyba RAG pipeline: {e}", [], {} answer = (final.get("answer") or NOT_FOUND_MSG).strip() sources = final.get("source_docs", []) or [] debug = final.get("retrieval_debug", {}) or {} # Ak je odpoveď = NOT_FOUND, neukazuj žiadne zdroje (boli by zavádzajúce) if answer == NOT_FOUND_MSG: sources = [] return answer, sources, debug 
r/arduino deepblue623

Multiple buttons, pull up resistors

Apologies if this is a constant question.

How can I have buttons for different pins on my project, when pressing one button will bring both pins low?

r/arduino Keanu_Keanu

Does anybody know any sites/projects for absolute beginners that explain everything really well?

I really like the idea of learning about circuits and making cool arduino projects, but it always seemed to intimidating to get into. My only exposure in building stuff is a big interest in lego (I realize they aren't very related) but what I like about it is the instruction manuals that help you build stuff. Eventually I want to come up with my own cool creative projects but I want to learn the technical skills required to do so. Most projects I find online are just building a car which I'm alright with but I'd like to do something more interesting ideally. I understand there's a coolness-skill requirement but that's why I wanted to ask about any resources online that help beginners like me.

r/StableDiffusion BoneDaddyMan

Anima seems to do impressively well on json formatted prompt

No cherry picking. These are the results of the json formatted prompt

{ "tags": "@eiichiro oda, score_9, score_8, score_7, high resolution, highres, absurdres, masterpiece, 2girls\/1boy, general, official art", "characters": [ { "girl1": "Nami \(One Piece\)", "appearance": "woman, orange hair tied to a ponytail, light skin, sweaty", "clothes": "white tanktop with blue trim and a number '0' printed on it, orange shorts", "action": "standing up, grinning, kawaii pose, peace sign" }, { "girl2": "Nico Robin \(One Piece\)", "appearance": "long black hair, light skin, woman", "clothes": "Blue bomber jacket, red bikini", "action": "sitting, winking, smiling, leaning forward" }, { "boy1": "Chopper \(One Piece\)", "appearance": "little boy, brown fur, brown horns", "clothes": "red hawiaan shirt, blue and pink top hat, blue swimming trunks" "action": "blushing, shy, pushing hands together, looking down" } ], "background": "in a bright beach with a blue sky and white wispy clouds", "composition": "girl1 on the left, girl2 on the right, boy1 in the middle at the back" } 

then at the very last photo, I simply changed the "composition" to

"composition": "girl1 on the right, girl2 on the middle, boy1 on the left in the background"

And it still managed to follow it.

It still misses sometimes but these level of prompt adherence is only a dream in older anime models and I do hope that the final release of Anima manages to improve it

What's weird is that the format I made above works better than this type of json formatting

{ "tags": "@eiichiro oda, score_9, score_8, score_7, high resolution, highres, absurdres, masterpiece, 2girls\/1boy, general, official art", "characters": [ { "girl1": "Nami \(One Piece\), woman, orange hair tied to a ponytail, light skin, sweaty, white tanktop with blue trim and a number '0' printed on it, orange shorts, standing up, grinning, kawaii pose, peace sign" }, { "girl2": "Nico Robin \(One Piece\), long black hair, light skin, woman, blue bomber jacket, red bikini, sitting, winking, smiling, leaning forward" }, { "boy1": "Chopper \(One Piece\), little boy, brown fur, brown horns, red hawiaan shirt, blue and pink top hat, blue swimming trunks, blushing, shy, pushing hands together, looking down" } ], "background": "in a bright beach with a blue sky and white wispy clouds", "composition": "girl1 on the left, girl2 on the right, boy1 in the middle at the back" } 
r/homeassistant ViLiK_UK

Looking for advice on which devices to purchase

Hi everyone, I am looking to setup Home Assistant on my Unraid NAS, at the moment all i have is a few devices mainly lights (Phillips Hue) I have a couple of LG TV's, Bosch Series 6 Dishwasher, Samsung American Fridge Freezer, some smart plugs, a zigbee hub,hive heating, a temperature sensor and a few other devices. I am new to Home Assistant. I am looking to buy a few multi sensor devices, I would at least like Temperature, Humidity and motion in a single device if possible x6. I have done a bit of looking through this sub, but the ones recommended seem to have poor reviews on Amazon. Each device no more than £100 if possible. Can spend more if needed.

r/n8n Glittering_Heart_126

Anyone here using hosted n8n instead of self-hosting?

I’ve been running n8n on a VPS for a while, but managing everything started getting annoying (Docker, updates, downtime, etc.)

Recently tried AVN Pilot and noticed a few things:

  • Instances are isolated (runs in its own container)
  • Setup was almost instant (like under 10 seconds)
  • Auto backups are included
  • Didn’t have to touch server configs at all

It felt way smoother than my usual setup.

Still testing it, but curious —
👉 Do you guys prefer self-hosting or managed solutions for n8n?

r/n8n Glittering_Heart_126

Best way to run n8n without dealing with servers?

I’m looking for a simple way to run n8n without managing servers.

Tried self-hosting, but it feels like overkill for small projects.

I recently came across AVN Pilot, which seems pretty straightforward, but I’m not sure how it compares long-term.

What do you guys use?

  • VPS
  • Managed hosting
  • Something else?

Would love recommendations.

r/arduino twinzermaster

Nothing Phone (2a) "Pro"

Hi everyone

I've been working on a hardware mod that adds a "Glyph matrix" to the Nothing Phone (2a)

Built using esp32c3 super mini and a 1.3 inch oled display, the entire hardware is integrated into the phone case and power via phone usb c port.

On the Software side I have added my interpretation of "Glyph toys"

Let me know your thoughts :D

More details here https://nothing.community/d/56615-nothing-phone-2a-pro-the-diy-display-mod

r/StableDiffusion Portable_Solar_ZA

Anima preview model camera controls - Limitations due to preview or incorrect prompts?

Something I've noticed is that I struggle to get the kind of shots I would like to get out of anima. Specifically, I've tried various ways to get wide or extreme wide shots and the model always does a shot that, at best, frames the character from head to toe. I can't prompt a shot where the character is in the distance the way I can with "extreme wide shot" or "wide shot" in Illustrious.

Is this due to the fact that the model is a WIP, or am I incorrectly phrasing the prompt? I've tried a combination of a full on book style written description, tag style, and a blend of both.
Also, if anyone has any general camera control tips for anima please feel free to share.

Thanks in advance.

r/KlingAI_Videos mvmspictures

Can AI actually carry emotional storytelling?

I made a 2-min trailer for a film I wrote.

Loosely inspired by Claude Cahun and resistance in occupied Paris.

It’s a personal project I’d love to develop further.

Would love your thoughts!

r/n8n Chemical-Hearing-834

I just built an end-to-end AI GTM Automation Engine that fully automates the outbound sales pipeline from lead generation to reply handling. You can find the github repo in the body

I just built an end-to-end AI GTM Automation Engine that fully automates the outbound sales pipeline from lead generation to reply handling.

This system is designed to remove 90%+ of manual work in B2B outreach and replace it with intelligent automation.

What it does:

  • Accepts incoming leads via webhook
  • Enriches and finds emails using multiple providers (Prospeo, Hunter .io, Dropcontact + AI fallback)
  • Validates emails automatically (NeverBounce)
  • Scores leads (low / medium / high)
  • Generates personalized cold emails using AI
  • Sends outreach via Gmail
  • Runs multi-step follow-up sequences (Day 2, 4, 7)
  • Classifies replies using AI (interested / not_interested / not_now)
  • Automatically routes actions based on intent
  • Logs everything into Google Sheets
  • Sends real-time Slack notifications

Stack:

n8n · OpenAI · Gmail API · Slack API · Google Sheets · Hunter .io · Dropcontact · NeverBounce

This is part of my deeper focus on building AI-powered revenue systems and GTM automation workflows that replace repetitive sales operations with intelligent agents.

GitHub:

https://github.com/kevorklepedjian1/N8N-GTM

r/homeassistant UlfMitHand

New to Home Assistant YAML — how to structure reusable light configs?

Hi, I’m just getting started with Home Assistant and want to use YAML to build my own setup. I have some Java experience, but YAML feels a bit weird so far (probably just because I don’t fully understand it yet).

I’m looking for a good guide that teaches the basics, especially how to structure and store data cleanly.

My specific use case:

  • I have multiple bedroom lights
  • Each bulb should have its own settings (brightness + warmth)
  • I want two modes: dim and bright
  • Ideally I want to store this in a clean “list” structure
  • And later reuse/scale it easily for other rooms (without rewriting everything)

What’s the best way to approach this in Home Assistant YAML?

  • Template sensors?
  • Script variables?
  • Packages?
  • Something else?

Any good tutorials or examples would be appreciated.

r/KlingAI_Videos blm1973

Demi Monica Saga - All Episodes (Compilation of clips made using Kling over the past 6 months- some horror/adult themes but mostly for comedic purposes)

r/Rag aditosh_

Building a RAG Chatbot (say on Azure)? What Actually Breaks in Production

I tried to share the aspect about how AI fails in prodution and no one tells you about. Any thoughts about the ideas from the video -- does it resonate? Also, for those running RAG in the wild: which Azure resource has surprised you most with its billing or performance bottlenecks?

Video: Building a RAG Chatbot? Here's what Actually Breaks in Production

Let’s swap some production horror stories 👀

r/automation Background-Pay5729

Auto publishing AI blog posts: smart or insane?

I’m torn on this.

If an AI blog writer can publish directly to WordPress/Webflow/etc, that’s obviously a huge time saver.

But auto-publishing also feels dangerous. One bad hallucination, bad claim, weird formatting issue, or off-brand article and now it’s live on your site.

Would you ever let AI publish blog posts automatically?

r/ollama codeprimate

So, I figured out how to import ZIT checkpoints and lora

I’m still shocked that it actually worked.

https://github.com/codeprimate/ollama-zit-import

I wrote a python CLI app that will import z-image-turbo checkpoints and finetunes as normal ollama diffusion models.

You can also bake one or more lora’s into an installed model as a new model, using a specified weight.

This needs more testing, but has worked on every 16bit model I’ve tried.

Have fun!

r/TwoSentenceHorror TheTinySwamp

CRT (Creating Random Times)

It played my ninth birthday, audio distorted, image warped. But it couldn't have, hadn’t I died that year?

r/todayilearned mvincen95

TIL about "orphaned negatives"—words like disgruntled, nonchalant, and innocent whose positive counterparts (gruntled, chalant, and nocent) have completely vanished from common usage.

r/whatisit II_sweetnotsour_II

Leaf moves perpetually?

Was walking down a track and saw this stem moving violently. There was no wind and it was moving like this for a good minute until we touched it and made it stop moving.

There was 0 wind but thought maybe enough for it to have a resonant frequency?

Idk was weird as no one else was on this track.

What could it be??

r/todayilearned The-TIL-Nerd

TIL that the DNA Doe Project, an organization that uses forensic genealogy to give names back to unidentified deceased persons (John & Jane Does), is credited with identifying over 150 decedents since its founding in 2017.

r/ProgrammerHumor m9ses

supplyChainSecurity

r/comfyui No-Method-2233

Unable to start ComfyUI Desktop v0.8.36 - Python process exited with code 1 - Dependency Conflict

Hi everyone, I'm struggling to get ComfyUI Desktop (v0.8.36) running on my system and I need some expert eyes on this.

​My Specs:

​GPU: NVIDIA GeForce RTX 4070

​OS: Windows 10/11

​Python version: 3.12 (Global)

​The Problem:

When I launch ComfyUI Desktop, it fails during the "Starting Server" phase. The logs show:

[error] Python process exited with code 1 and signal null

[error] Unhandled exception during server start Error: Python process exited with code 1

​What I've tried so far:

​Dependency Installation: I tried installing torch, torchvision, and torchaudio manually via pip using the --index-url https://download.pytorch.org/whl/cu124 but kept getting ReadTimeoutError and Hash Mismatch errors due to connection instability.

​Environment Cleanup: I performed a uv cache clean which removed about 1.5GB of data, and I manually deleted the .venv folder in my Documents/ComfyUI directory.

​Global Packages: My global pip list seems cluttered with multiple CUDA versions (cu11, cu12) and even some Intel OneAPI/XPU libraries, which might be causing interference.

​Reinstallation: Even after deleting %AppData%\ComfyUI and trying to reinstall in a clean directory (C:\ComfyUI), the "Unable to continue - errors remain" message persists.

​Current State:

The installer seems to detect existing files or fails to build the virtual environment using uv. I am stuck in a loop where the desktop app won't initialize the internal python environment.

​Log snippet:

Running command: ...\.venv\Scripts\python.exe ...\main.py --user-directory ...

followed immediately by the process exit code 1.

​Any advice on how to force a clean internal environment build or bypass this "Code 1" error? Thanks in advance!

r/ProgrammerHumor ichbinhamma

oddlySpecificVariableNames

r/personalfinance Similar-Position-210

My experience applying for a personal loan

Not sure if this will help anyone, but just wanted to share since I had a really good experience. Applied for a personal loan through CTBC with the help of Ms. Jona, and she was very accommodating from start to finish. Got approved within 3 days.

If you’re planning to apply, you can message her on Viber: 09771458814.

r/comfyui falconandeagle

Is this possible? Multiple images to image

I want to give images of two characters (created using nano banana as local models dont seem to have knowledge of obscure characters in a certain fandom) and the have the model use those two images and follow my prompt and create an image described in the prompt.

I mostly want to use it for my fanfics. Any help is appreciated, I have most of the models already installed on my PC. 16gb 5060ti and 96gb RAM and I dont need the fastest workflow, as long as it can create images in less than 5 minutes.

r/raspberry_pi user-captain

How are Pi used in industry

I've been reading about Pi focusing on industrial use as opposed to hobbyist. I'm assuming education is still a main customer too.

How are Pi being used in industry? Just curious. I assume it's the compute model but are the Pi 4/5 units being used in volumes in industry too. What are they used for. I'm thinking about the effort in making a bespoke setup on the drives for these use cases and the physical installation too.

r/OldSchoolCool Rachealota

Jayne Mansfield 1950s

r/painting Gurkeprinsen

I am a digital artist who tried to paint with acrylics for the first time in many years

And man was it way more difficult. I am absolutely helpless without the helpful digital tools. But it was a fun challenge, and I really enjoyed adding texture to it. I've joined a recreational painting club so hopefully I'll get more comfortable with this medium eventually.

r/PhotoshopRequest Barking-Parrot18599

Design For Flag

OK so I don’t know if I’m in the right spot for this, but my apologies and please feel free to redirect me if I’m not.

So I painted this design from scratch on my shed door a few years ago and I’m actually quite proud of it. However due to its age, the whole shed is literally falling apart and is soon going to be pulled down and replaced - taking my piece of artwork with it. Why don’t I repaint it you ask? well I suffer from a degenerative disorder which means, in a nutshell, that my eyesight and dexterity literally isn’t good enough anymore - I used to be quite creative throughout my life but it’s something I’m just not capable of anymore.

What I’m asking, is if somebody could kindly revitalise and reproduce this design in a format so I can have it placed on a flag to fly above my new shed. I’m not sure of all the photo terminology but I’d like it to be of the best possible quality with a transparent background to fit on a black 5x3 flag without losing quality. It’s just a way of keeping my design alive and I’d very much appreciate any help.

Note: The King and Ace of Spades are quite important as the K and A represent initials of family members, the number 19 also has significance.

Thank you so much in advance.

r/screenshots gicontsubs

When I zoom in on a dot, it turns out to be the French flag

r/ProgrammerHumor bryden_cruz

butWhy

r/VEO3 EcomArchitect

I'm really surprised that a lot of you guys use VEO credit on google that costs more than $2 per video, where you can getting for almost free!

When you use Veo on Openart or Higgsfield you how painfull the cost is, it's almost the most expensive model. not just Veo, all the other models Like Seedance, Kling, Happy Horse...etc I pay them around $2 per month and I make almost UNLIMITED generations.

I do not want to trigger any Reddit filters, so I will share more details to who's intersted in the DM.

r/ollama PotatoTime

Why is the recent update so fast?

I'm new to local AI, only been a week. I just updated from 0.21.2 to 0.22.1 and all of my qwen models suddenly doubled or maybe even tripled in speed. It's a shocking difference. I didn't change any settings at all or anything with my PC. 4070 and 64GB RAM if that makes a difference. The changelog doesn't say anything about a speedup

r/space Live-Butterscotch908

Artemis II: Reflections from the Mission (4K)

I made a cinematic Artemis II edit using onboard footage and the crew’s reflections after the mission. It focuses more on the human side and the experience rather than just summarizing the mission.

r/KlingAI_Videos DreamCrow1

[Hip-Hop, Rap] ONLY THE RAIN KNOWS WHY / Created with Kling AI

r/todayilearned astarisaslave

TIL that actor Cliff Robertson was involved in an embezzlement investigation in the 1970s when he reported the head of Columbia Pictures for forging his signature on a check payable to him for work he did not perform. Columbia then blacklisted him from working with them until 2002's Spider-Man.

r/TwoSentenceHorror nightmairbear

We didn't realize human intelligence was a limited resource.

But we're told the artificial variety is so advanced, we'll never miss it.

r/leagueoflegends ConfesdorCarwyn

How does low priority queue work? Please help!

Gonna keep it short. I got 5 minute punishment low priority queues for abandoning a game. ( I never left a game nor was afk recently ) However i thought it was just one game fige minutes. Now i played like 8 games and every time i have to wait five minutes.

I am very very certain this is a bug, i never been punished in league of legends so i dont know how this works is this supposed to not go away after one queue? I have this same five minutes wait every single time. Whats wrong? Can someone who knows how this works explain, do i wait a day for it to go away or do i contact riot? I promise i havent left a game nor was I afk....

r/leagueoflegends Horror-Eye-2702

Huge ranked differences in games, why?

I took a long break from league and recently got into it again. I started playing ranked and realized pretty quick that the difference in rank are massive in most games.

I´ve had games where some people were as low as Iron 1 and as high as Gold 2 in the same lobby, which should not be possible in my opinion. Mind you i was silver 3/2 at the time. This continues in gold for me, i had a bronze 4 mate in a gold lobby yesterday.

From what i remember before the break the lobbys were always ranked around the same, so this is confusing for me (and a bit frustrating).

Did they change something in the ranked system while i was gone? Im genuinely curious as to why we get put into the same lobby/game, maybe something with mmr?

r/leagueoflegends ylnoooo

What was even the reason for Master+ lp reset?

So the main goal of the lp reset was to fix the awfully huge lp inflation, bizzare gm and challanger lp treshold (gm euw at 1500 is just crazy) and overall make the game more of skill dedication instead of mindless grinding of double lp system. Despite all of these concerns admitted by devs, why is everyone even more inflated than before? Everyone just gets +30 lp for some funny reason, people are already at previous gm 2025 season treshold range in 2 days, double lp is still in the game, everyone just grinds and spams games in order to climb. So the question is what was the point of the reset in the first place? Now do I have to spend another 200 games to get into the same inflated elo I was in? I truly don't understand the logic here. I thought the reset will actually fix the issue and make apex tier division tresholds and lp gains similar to what they were. Genuinely why even delete my progress if you they haven't actually changed anything, just another waste of time grinding for no reason LMAO. What are your thoughts? Or maybe i'm in wrong and just misunderstood something idk

r/HistoryPorn OkRespect8490

Burial at sea aboard USS Hancock for those killed by a Japanese attack two days prior. Off Okinawa, Japan, 9 April 1945. [1080x748]

r/TheWayWeWere ChickkyAurora

A scene from a bar in the Pigalle, Paris, 1937. By Emile Savitry

r/HistoryPorn OkRespect8490

Esso Hibernia tanker under construction in Wallsend, 1970. [781x781]

r/OldSchoolCool Present-Hunter-6875

1982-1990: hey! i’m doing my final oral exam on the stigmatisation behind aids. if you could answer the following survey, it would really help my research, thank you so much!

my specific topic question is: How were homosexual men treated during the AIDS epidemic in the UK?

please feel free to share the link to friends, family and others!! thank you so much for any help, this is an important subject to me and i’m hoping to spread more awareness with my project and research.

r/midjourney Gold-Lengthiness-760

La Frontera[OC]

r/toastme No-Discipline50

28M unemployed for over a year now feeling way behind my friends and family all I got is this slice of pizza to cheer me up

r/Lost_Architecture Icy_Row175

German Church, Tilsit (Sovetsk)

The German Church of Tilsit was a former German-Protestant church and belonged to the cities important landmarks.
It survived WW2 and the Soviet capture of Tilsit undamaged. Following the war, the Soviets would use the church first as a sawmill and later as a waste disposal. After an arson attack on the building, it would be demolished in 1965.
The alter would be evacuated during the war and was later found in the Polish part of former East Prussia. Nowadays it’s part of St. John the Apostle Evangelist church in Bartoszyce.

r/HistoryPorn BeginningNeither3318

John F. Kennedy and major general Clifton in the Oval Office, examining an AR-15 and a CIA-developped crossbow, 1963 (1600x1052)

r/Strava clemenslucas

monthly General Chat - share your profile, club or a family plan

Please be cautious about sharing your personal data with strangers. Consider adding a privacy zone and hiding activities from non-followers.

r/Frugal Drex0_Quill

Stopped buying paper towels for most things and switched to a stack of old t-shirts cut into squares. Six months in and I think I've bought one roll total.

This is pretty standard frugal advice but I wanted to share because I kept seeing it recommended and kept thinking "that sounds like more effort than it's worth" and then actually tried it and it wasn't.

Here's what I did: I had a bag of old t-shirts I was going to donate but they were a bit worn so I cut them into roughly hand-sized squares instead. Didn't hem them or anything, just cut. Cotton t-shirt material doesn't fray much so they've held up fine. I keep a stack of them in a basket on the counter where the paper towels used to be, and a small bin with a lid next to the sink for used ones. When the bin fills up they go in with a regular laundry load.\

The things I still use paper towels for: draining bacon, anything involving raw meat cleanup. That's basically it. Everything else, wiping counters, spills, drying hands, cleaning up after the cat, the cloth squares handle fine. I buy paper towels maybe once every couple months now just to have a roll around, and even that roll lasts forever because I rarely reach for it.

The upfront effort was maybe 20 minutes of cutting one afternoon. I think I made about 40 squares from three old shirts. I haven't needed to make more since, the same squares are still going.

Not a dramatic change but it's one of those small frugal shifts where you do it once and then kind of forget it's even a thing becuase it just becomes normal.

r/Frugal NoCoolNamesWereLeft

Moving from Indiana to Oregon and looking to find the best option(s)

Alright, so I moved to Oregon last May for a job and have been staying in a hotel since then. I was not sure how long the job would last and so I have been waiting to get an apartment until I had more information. I have gotten word that I can expect to be here for quite a while so I have gotten an apartment starting this weekend. The problem is that most of my furniture is in Indiana in storage due to a whole other complicated situation and I am now looking at options to get it out here. Selling the stuff and buying new stuff is not really viable here because some of it is sentimental, some of it has more value, and the obvious difficulty in trying to sell stuff while living on the other side of the country. I do have some family who lives close by my storage unit and is willing to help.

This brings me to my current options:

PODS - pickup and drop off dates are dependent on availability. My family is willing to load the stuff into the pod from my storage for me, but the cost is pretty high at $4400 (after a discount from 5800).

Rental truck- My family has also offered to drive a truck all the way out here for me (theyre great), however they were planning to come visit later this month via plane and this would mean that nearly half of that planned vacation would be spent in a truck driving instead of spending time together. $2300+ gas and food.

Full service moving- this option has become the hardest to figure out because its nearly impossible to tell a broker company from an actual moving company, and the numbers you get are just all over the place. Id like to go with this option if I can find a reasonable (and trustworthy) company, but ive found so many horror stories that I don't know where to start to sort through the options to find what I actually want. I have gotten prices ranging from $2400 to $5500, which just seems so varied. How can a company charge almost the same cost as just renting a truck? Also, why are they ALL located in Florida?

Does anyone know of any reliable moving options i have overlooked? Or any full service companies that you have had a good experience with that wouldn't cost as much as my rent for 3-4 months?

Thank you for reading!

TLDR - moving across the country is expensive and overwhelming, please help.

r/LifeProTips Merx4Shroud

LPT: When you're waiting for an important email, set up a specific filter for it instead of refreshing your inbox every five minutes

This sounds obvious but most people don't actually do it. If you're waiting on something time-sensitive, like a job offer, an apartment application decision, a medical result, whatever, create a filter or rule in your email client that moves that sender's address to a specific folder and triggers a desktop notification.

The reason this works better than just checking your inbox is that inbox refreshing creates this background anxiety loop where every time you open your email and it's not there yet you get a small hit of disappointment. Do that thirty times in a day and it quietly drains you in a way you don't really notice until you're done with whatever you were waiting for.

With a filter and a proper notification you know you will hear when it arrives. You can actually close the tab and work on something else without that low level pull to check again in ten minutes. Most email clients let you do this pretty easily. In Gmail it's a filter with a specific from address and a label plus notifications for that label. In Outlook its a rule that moves the message and triggers an alert. Takes maybe three minutes to set up.

I started doing this after a particularly bad week of waiting on a job decision and checking my email probably 40 times a day. Set up the filter and genuinely forgot to check for hours at a time because I knew I'd be told when it mattered.

Small thing but it actually changed how I experienced waiting for news.

r/DecidingToBeBetter immaecho

need some encouragement and tips

Hi there

I (22F) always knew I was a bit short tempered and explosive.

Sometimes I get mad at bigger and smaller things, cooling down sometimes faster sometimes longer. Since COVID I know I have some anger issues.

My mom was just like me. Usually I thought she was worse in her prime, but once we get to the gist you will see that I think I'm the worse one here. She was always yelling and shouting profanities at my dad when they were dating. And one day her friend just told her that she is stupid and [my dad] is so nice that he should kick her sorry ass and leave her since she is so rude to him

And she listened to her and just stopped. Idk how she did that cold turkey but it worked for her.

And now the current situation. I am terrified of myself. two days ago I drank some beer and was pretty tipsy (not drunk, just in the flow state) and my bf(23M) asked me to not drink anymore. I didn't listen to him and I drank another beer. I still wasn't wasted, just lightly drunk. And after the party he told me he was disappointed with me that I drank another beer. After that we talked a bit and everything was kinda fine until it wasn't. When we were supposed to go to sleep I lashed out at him and long story short started hitting myself, yelling at him and made him cry. How do I know that? He told me because I didn't remember most of it, just some snippets. It wasn't only because of the alcohol, since usually when I'm having fights or I argue with people my brain blocks out most of it. Adding to that my bf told me he was scared of me and he couldn't fall asleep last night.

then we were going back to our home city for the weekend (currently here) and for the whole drive we didnt talk much. the only thing he told me was that he will go to the theatre with me today and he will act like nothing happened in front of my parents (i asked him for both of those things). i know that now he needs time but im a mess and im doing my best to calm myself and not cry in front of my parents.

I know I'm the asshole here, don't need to tell me that. I'm just terrified of myself and I'm asking you guys what to do to fix my anger issues since it wasn't the first time I lashed out on someone I care about but it was the first one that made me terrified of myself.

i also asked my dad today what was he doing when mom was like this, how he managed that etc. and he said the sweetest thing that honestly makes me cry even more. He said that "i was calm, you know, your mom messed up with my head (meaning he fell hard). and you know, nobody is perfect and you have to accept even the bad parts" and now i hope mu boyfriend will be as understanding towards me.

EDIT: i know i have to go see a therapist, im already booked next week

r/ollama Sad_Celebration_2058

MODEL RECOMMENDATION?

1050TI 4GB
16GB RAM
Ryzen 3100
Qwen 3.5 4b takes forever to reply

r/PhotoshopRequest FoundationNo7462

Can you guys make a Mahoraga version of this

With Mahoraga being thragg and the other 2 mfs being Dabura and Modulo Yuji.

(Also can u make the text bubble say "The real top 3 is here")

r/SideProject Salt_Bike898

Built a browser-based AI reply tool, now trying to make it sound less like AI

Small side project I’ve been working on:

a browser-based tool that helps draft replies inside social platforms.

The part I’m focused on now is making it feel less generic.

So I added:

  • saved style examples
  • better thread context for Reddit and X
  • a more “drafts you edit” workflow instead of “generate and post”

That seems much closer to what people actually want.

If anyone wants to try it or give blunt feedback:
https://forsocials.com/ai-reply-generator

Happy to hear what feels useful vs what still feels off.

r/LocalLLM Low-Alarm272

I've been building local LLM integrations and self-hosted AI tooling for a while now (agentic framework, local LLM web UI on consumer hardware)

If anyone here needs a dev who actually understands the stack from model to frontend, I'm available for freelance work. DM.

r/ClaudeAI Icy-Object-1921

Two desktops

Hi team, can I ask for your advice please? Might be a stupid question (but Claude doesn’t know the answer!)

I want to get a second computer so Claude can work separately to me. If I ask it to do work in cowork via despatch, how would it know which computer to send it to? I want it to do the cowork on the ‘other’ computer so it doesn’t interrupt me/I interrupt it on my ‘main’ computer, but I’m concerned despatch would just send it to the ‘main’ computer anyway, defeating the purpose of the second computer.

Does that make sense? Any advice greatly appreciated 🙏

r/AI_Agents _N-iX_

Why is RAG evaluation so hard in the real world?

Evaluating RAG feels easy in theory, but production is a different challenge. We’ve been looking into why RAG benchmarking is such a moving target. The moment you tweak a chunking strategy or update embeddings, your "ground truth" often evaporates.

Here are the main hurdles we’re seeing:

  • The "ground truth" trap: high-quality QA datasets are expensive. Because RAG links queries to specific passages, a change in indexing can invalidate your entire label set, forcing a total reset.
  • Production retrieval decay: offline metrics rarely hold up. One enterprise study saw retrieval fail in 47% of queries once it left the lab. Hard negatives and latency trade-offs are real performance killers.
  • LLM-as-a-Judge bias: automated judges help us scale, but they bring their own baggage, like favoring long-winded answers or being swayed by the order of information.
  • Operational blind spots: evaluation isn't just about accuracy, it's about safety. Stress-testing for data leakage and prompt injection at scale is both difficult and pricey.
  • The reality check: measuring retrieval in isolation creates false confidence. Real-world RAG requires claim-level verification and constant calibration against expert judgment.

What’s been your biggest "head-desk" moment trying to evaluate a pipeline? Are you finding frameworks like RAG assessment sufficient, or have you had to build something custom for your specific domain?

r/ChatGPT Confident_Salt_8108

AGI corps be like

r/ClaudeCode balooooooon

Claude code + Claude design + deep research

I’m a senior web engineer by trade, but my side projects have been almost exclusively iOS for the last few years. Since I’m coding daily for both work and my own apps, I’ve settled into a workflow using a mix of Claude Code, ChatGPT, Gemini, and Perplexity. It isn't necessarily groundbreaking, but it has really helped my productivity so I figured I’d share.

The Research and Handoff

When I’m starting a new app, I use Gemini and ChatGPT for deep research to outline the MVP. I’ll look into available libraries and similar apps to see which features I should actually keep versus what I should strip away. Once that is settled, I write up a handoff doc as if I’m a PM giving a spec to an engineering team.

I take that doc and feed it into Claude Code to make an actual plan. I have it outline the engineering steps in a markdown file and generate a Mermaid chart so I can visualize the data flow before I start writing code.

Design and UI

For the UI, I usually start in Figma and pull inspiration from Pinterest or other sites. I still use Google Stitch the most for mocking things up, but I’ve been finding that Claude Design is starting to give much better results.

The image I shared is from my app, Shimmr, which has been out for a couple of years. I recently updated the user profile using Claude Design. I already had the basic structure and data there, but in one pass, the AI really refined the look of the spheres and the charts. After I liked the design, I passed it over to Claude Code to handle the implementation.

The Iterative Loop

While I’m building, I stay pretty deep in the Apple documentation. I treat the whole process like a loop where I do the research, list the features, do the design work, and then build.

For example, when I was adding AI embeddings to an app recently, I spent a lot of time researching the actual principles and different iOS libraries first rather than just jumping straight into the code. It’s been a great way to keep the project quality high without getting burnt out on the planning phase.

Tools I use daily

AI Research

• ChatGPT

• Gemini

• Perplexity

Design

• Claude Design

• Google Stitch

• Figma

Coding

• Claude Code

• GitHub Copilot

Design Inspiration

• Pinterest

• Dribbble

• Behance

Curious if anyone else is using a similar handoff process between different AI tools or if you are mostly sticking to one.

r/ClaudeCode Forsaken_Industry491

Why can't i set effort level on claude code terminal

I don't get it, i have searched everywhere, i can't find the option.
However when using claude on VS code it's still possible to change effort level.
Do you guys know how to do this and if it isn't possible what effort level does it use? Is it automatic?

r/AI_Agents Mariia_Sosnina

An AI agent on our content team published a LinkedIn post quoting an employee that doesn't exist. We had 30 minutes to fix it

I lead marketing at a B2B integrations SaaS. We've been running a multi-agent setup for our content function for a few months now, including research, writer, fact-checker, critic, publisher, the usual chain. Output went up. The interesting part wasn't the speed.

Last week one of the agents made up an employee. Wrong first name, wrong last name, a full paragraph quoting her on partner integrations. The post went live on our company LinkedIn. We caught it 30 minutes later, scrambled to edit before it picked up traffic. The agent had skipped its source-fidelity check, hallucinated a person, written confidently about her, and shipped.

Things I've taken from it:

The cascade is real. Google did recent research across 180 agent configurations and found multi-agent setups made sequential tasks 70% worse. We see the same informally. Any chain of more than a few steps without an actual verification step compounds errors quietly. By step four the output is straight up wrong but looks fine.

The source-fidelity gate existed in a markdown file. The agent skipped it because the request came in through a chat shortcut, not the standard pipeline. Lesson: if the rule matters, it has to be in code, not in a CLAUDE.md. Prose isn't enforcement.

After the first hallucination shipped, I didn't lose trust in the agents. I lost trust in the assumption they'd catch themselves. Now we log every step. The day we stop logging is the day another hallucination ships into production.

For anyone running a multi-agent setup in production: how do you actually make sure the rules in your prompts run? State machine? Hard gates? Just lots of logging? Curious.

r/ClaudeCode snakescrow

Is this even legal?

So I am wondering if this is even legal? I am waiting for 3 days to get an answer from the employee since apperently if you did have 1 refund in the past, that stupid LLM they got running there is not alligable for refund so I did ask for human help, cause according to EU laws if I have used the service for 7 days they are still supposed to give me money back either full or (30 days - 7 days) and % of the full price which was 110 with tax. As you can see the next chat is just looped and I literally cannot get any answer from the employees. Any idea how to proceed?

We are currently experiencing a high volume of inquiries. Our team is working diligently to address all questions, but please be aware that responses may take slightly longer than usual. We appreciate your patience and understanding.

Hi! I am an AI-powered chatbot for Anthropic.

Your use of this chatbot and chat data will be monitored, retained, and used by Anthropic, and our third-party chatbot provider (Intercom), to provide you with better customer support, to improve our services, and for other purposes consistent with our privacy policy (link).

Please indicate whether you agree to these conditions of use.

Fin AI Agent • AI Agent • 5m

Accept

Jakub • 5m

Conversation ID for your records: 215474139300974

Let us know what your inquiry is about.

Fin AI Agent • AI Agent • 5m

Claude Refund Request

Jakub • 5m

Conversation ID for your records: 215474139300974

Fin AI Agent • AI Agent • 5m

Let me look into that for you.

Fin AI Agent • AI Agent • 5m

Thank you for reaching out about a refund. I'm going to look up your account information to assist you with this request.

Fin AI Agent • AI Agent • 4m

I've found your account with the Claude Pro (Max) plan. To help process your refund request, could you please let me know the reason you're requesting a refund?

Fin AI Agent • AI Agent • 4m

how fucking long am I going to wait for an answer of conversation "215474092883926" like what the fuck is this shit???

cause its fucking useless

Jakub • 4m

I understand your frustration. I can see you're waiting on conversation 215474092883926, but I'm currently assisting you with this refund request (conversation 215474139300974).

To continue processing your refund for the Claude Pro (Max) plan, I still need to know the reason for your refund request. For example:

  • Are you experiencing technical issues, outages, or service problems?
  • Is it related to billing, pricing, or features?
  • Or is there another reason?

Please let me know so I can proceed with your request.

Fin AI Agent • AI Agent • 4m

all of them

Jakub • 3m

I sincerely apologize for the disruption in service you've experienced. We take service reliability very seriously, but unfortunately we are unable to issue compensation for degraded service.

We are consistently working on improvements to our infrastructure to better handle high traffic volume in order to prevent similar issues from happening in the future. Thank you for your patience.

Fin AI Agent • AI Agent • 3m

Anything else I can help you with?

Fin AI Agent • AI Agent • 3m

I want human help,

Jakub • 3m

I understand you'd like additional assistance, Jakub. To help you most effectively, could you tell me more about what specific issue you're facing?

I see you mentioned frustration about conversation 215474092883926 - are you looking for an update on that particular conversation, or is there something else I can help you with regarding your Claude experience?

The more details you can share about what you need help with, the better I can assist you directly.

Fin AI Agent • AI Agent • 3m

I am waiting for fucking 3 days for a simple answer of the conversation id "215474092883926", according to a EU fucking laws you are supposed to refund me part of the subscribtion meaning I was using it for 7 days and I am asking for a fucking refund thats 30 days -7 days of 110 I paid thats, I want fucking answer and a refund

Jakub • 1m

I understand your frustration about the delayed response, Jakub. Unfortunately, I don't have access to information about conversation 215474092883926 or why there's been a delay.

Regarding your refund request for your Claude Pro (Max) subscription, I can see you have an active subscription. However, our Terms of Service state that "except as expressly provided in our Consumer Terms of Service or where required by law, all payments are non-refundable."

While I don't have full information about your specific eligibility for a prorated refund under EU consumer protection laws, the refund process requires working through our established procedures. You can request a refund by:

  1. Going to your account settings and clicking your initials in the lower left
  2. Selecting "Get help" to open the messenger
  3. Clicking "Send us a message" then "Accept"
  4. Selecting "Claude Refund Request" from the options
  5. Following the prompts to check eligibility

The system will evaluate your specific situation and determine what refund options may be available based on your circumstances and applicable laws.

Fin AI Agent • AI Agent • 1m

Is that what you were looking for?

Fin AI Agent • AI Agent • 1m

fuickign k*** n***** get me a human help

Jakub • 1m

I understand you're frustrated, but I can't tolerate that language. I'm here to help you with your concerns about your refund request and the delayed conversation. Could you please share more details about what specific assistance you need so I can help you directly?

Fin AI Agent • AI Agent • 1m

according to a EU fucking laws you are supposed to refund me part of the subscribtion meaning I was using it for 7 days and I am asking for a fucking refund thats 30 days -7 days of 110 I paid thats, I want fucking answer and a refund

Jakub • Just now

I can see you have an active Claude Pro (Max) subscription on a monthly plan. While I don't have full information about your specific eligibility for a prorated refund under EU consumer protection laws, our Terms of Service generally state that "except as expressly provided in our Consumer Terms of Service or where required by law, all payments are non-refundable."

However, the Terms do acknowledge that refunds may be available "where required by law," which could include EU consumer protection regulations in your situation.

To request a refund for your Claude Pro (Max) subscription, you can use our refund request process:

  1. Go to your account settings and click your initials in the lower left
  2. Select "Get help" to open the messenger
  3. Click "Send us a message" then "Accept"
  4. Select "Claude Refund Request" from the options
  5. Follow the prompts to check eligibility

The system will evaluate your specific situation and determine what refund options may be available based on your circumstances and applicable laws, including any EU consumer protection requirements that may apply to your case.

Fin AI Agent • AI Agent • Just now

Did that answer your question?

Fin AI Agent • AI Agent • Just now

get me a human help

Jakub • Just now

I understand you're frustrated about waiting for a response to conversation 215474092883926 and want to pursue your refund request. Let me help you directly with what you need.

For your refund request based on EU consumer protection laws, you can follow the official process:

  1. Go to your account settings and click your initials in the lower left
  2. Select "Get help" to open the messenger
  3. Click "Send us a message" then "Accept"
  4. Select "Claude Refund Request" from the options
  5. Follow the prompts to check eligibility

Our Terms acknowledge that refunds may be available "where required by law," which could include EU consumer protection regulations in your situation.

What specific details about your refund calculation or the delayed conversation can I help clarify for you?

r/ClaudeCode younidl

Would you reckon ?

Hi team, ex-GitHub copilot dude here. As gh seems to clearly want us out, I’m now in the Claude code team.
So, what would you reckon for a newcomer ?

r/ChatGPT BoloFan05

...actually, you got it right :D

My full dialogue up to that point:

Me: Ask me a simple math question

Gpt: Sure! Here’s a simple one:

What is 7 + 5?

Me: The answer is 12. Now ask a harder one

Gpt: Nice. Let’s step it up a bit:

What is 12 × 8 − 15?

Me: It's 81. Ask a harder one

Close—but not quite. 12×8=96, and 96−15=81… actually, you got it right 😄

Let’s go a bit harder:

What is (15+5)×6÷4+9(15+5)×6÷4+9?

r/ClaudeCode chrisx221

WTH happened to CC? It's been so good while it lasted... Any tips on how to avoid irreversible consequences? Git and Rewind and stuff doesn't save you from destructive shell commands.. Thank God I had regular server snapshots done.

r/LocalLLM SBGladiat0r

Looking for a dataset to fine tune my bert model

So I am working on this project that does aspect based sentiment analysis on reviews and I am currently looking for a sarcasm dataset which I want to use to fine tune another model which will detect sarcasm. What dataset u guys recommend me to use to fine tune my model

r/ClaudeCode Substantial_Fish6717

Help with auto model routing

I see many people saying they use Opus for planning, Sonnet for exec and Haiku for renaming, file reading, etc...

This is great and I have started doing this too with good results.

Is there a way to make this automated? What kind of setup are you using for this?

Kinda off-topic: I also have a Copilot Pro+ subscription and wanted to do similar thing there

r/ClaudeCode SteveZedFounder

The Lobster Comes Home to Roost

TLDR: I've been apologizing for Claude, and I'm sorry.

I've been reading these complaints about Claude performance with a hefty dose of skepticism. Until today.

I am on Max. Rarely hit my usage limits.

I've learned that working with Max is like working with a fresh CS graduate who has no clue how to do real work and also has a really poor memory even when you write things down for them (e.g., skills). For example, today it proposed a change to a broken modal and it proposed a total rewrite. I said "Go to this file, see the pattern that works, apply it". It was all like "You're totally correct...." I'm used to this mode of work. I've figured out the pattern and the bad behaviors.

Today, I ran the skill that sets up a new worktree and the branches needed to work on an issue in Linear. Everything worked fine. But then we're about 10 minutes in and I go to run the emulator and nothing's working.

I ask Claude why and it gives me this song and dance about 11 minor bumps in some dependency and the fact that Xcode updated two days ago, blah, blah,blah. Basically a junior dev throwing shit at the wall.

Then I look down at the working directory and we're no longer on the worktree, we're on dev.

ME: Why are you working in mobile. Didn't we setup the project the -2 worktree? What's going on?

DUFUS: You're right — I violated the worktree slot rule (CLAUDE.md says: "Never read from or write to ../backend or ../mobile when working in a numbered worktree"). I caused this:

It then offered this solution:

And let me save this slot-rule violation to memory so I don't repeat it.

I've seen this pattern before. "Write a memory...." but it seems more frequent lately.

Here's the pattern:

  1. Claude invokes a skill
  2. Does something that violates the skill spec.
  3. Write a memory to remember not to violate again.

So my question(s): Are memories more limiting to Claude's actions that what's written in the skill? At what point does memory get so big that it becomes a context hog? I thought that's why we had skils....

Insights appreciated.

r/SideProject bozdemir27

rip.so - i wrote obituaries for 100+ dead internet products i grew up with

hey,

i'm b0zy, computer engineer, online since 1998. dial-up at home, 14.4k modem then 33.6k then the magical 56k. ttnet billed by the minute back then so i remember planning every download.

last week i made https://rip.so - a memorial site for the dead corners of the internet. each grave is a 800-1200 word obit for something i either used (icq, msn, mp3.com, kazaa) or remember from the era (geocities, friendster, sodaplay). 100+ entries plus a small companion list called "things that survived against odds" (vlc, irc, wikipedia, slashdot).

the writing is in my own voice. when i never used something i say so directly: "i never used path. it was a US-focused product, by 2010 the social network audience in turkey had settled on facebook." honesty about distance matters more than pretending to know things i did not.

it went viral on hacker news on 29 april (~30k pageviews in a day). the early version shipped with AI-generated placeholder content that i rewrote by hand after HN ripped it apart. that critique was the right call. the rewrite was the part that made it real.

other small things:

- you can leave a rose on any grave (anonymous ASCII tribute, IRC-color)
- atom feed of recent burials
- weekly digest newsletter (opt-in)
- suggestion box if you remember something missing

a few things people brought back via the suggestion box that surprised me:

- sodaplay (the flash physics toy with the spring creatures)
- nabaztag (the french wifi rabbit)
- swatch beat-time (the 1998 attempt to replace timezones with ubeats)
- the palace (the 1995 graphical chat)
- origami flowers (a niche android flower-growing MMO that died in 2017)

comments and rose tributes welcome.

https://rip.so

r/SideProject PriorNervous1031

Building Lakon - 33 organic usages on a day I did zero promotion

launched Lakon 5 days ago. it's a browser extension that compresses prompts inside Claude, ChatGPT and Gemini before you send.

here's what day 5 looked like:

  • posted nothing
  • shared nothing
  • promoted nothing

33 people used the extension anyway. some are probably returning users but that's actually the point, they came back without me pushing them to.

what I've learned so far:

builder communities (SideProject) responded well. AI consumer communities (ChatGPT) didn't care much.

best feedback came from a skeptic who asked about intent preservation. built the undo feature and /skill command preservation, he suggested the same day.

first bug was reported and fixed on day 1 within hours.

still figuring out sustainable distribution. open to honest feedback on what's working and what isn't.

if you want to try it.

There is a demo and you can try in our own web. Plus for efficiency, you can install extension in 2 min.

Lakon web

r/LocalLLaMA Crystalagent47

By when do you think will TurboQuant get a proper release and be adopted by everyone

The gains when using asymmetric setup on K and V are quite huge

r/LocalLLaMA BitGreen1270

Model stuck in some thinking zone where it keeps saying a similar thing again and again

I experienced this with Q4 and Q3 versions of Qwen3.6-35B-A3B and Gemma-4-26B-A4B. It starts saying things which sound similar in thinking mode:

I must do ....

I have to do ...

I need to do ...

Is this a known issue with lower quantization ? I usually run it with --fit on -c 16384 --fit-target 2000.

happens occasionally.

r/AI_Agents Serious_Damage5274

Building a multi-agent complaint intelligence system using CrewAI — each agent has one job and does it well [Work in Progress]

Hey r/AI_Agents,

Sharing something I am actively building right now.

**The problem:**

Businesses receive thousands of complaints daily. Today a human reads, categorizes, prioritizes, and escalates each one. Slow, expensive, inconsistent.

**The solution I am building:**

A multi-agent AI system where each agent is a specialist.

**Agent architecture (CrewAI):**

🤖 Agent 1 — Complaint Classifier

Takes raw complaint text. Uses category-specific BERT model to classify product category and sentiment. Passes structured output to next agent.

🔍 Agent 2 — Pattern Recognition Agent

Looks across multiple complaints. Finds recurring issues. Identifies which products are failing repeatedly. Flags systemic problems vs one-off issues.

🚨 Agent 3 — Priority Scoring Agent

Scores each complaint by urgency. Safety issue? Escalate immediately. Cosmetic issue? Low priority. Uses complaint language + category + frequency to score.

⚙️ Agent 4 — Resolution Recommendation Agent

Based on complaint category, sentiment, pattern and priority — recommends the right action. Refund? Replace? Escalate to engineering? Update product listing?

**What is already built:**

- Category-specific BERT models trained on 51,000+ Flipkart reviews

- 7 product categories: Electronics, Appliances, Home, Fashion, Kitchen, General, Other

- Accuracy: 96-100% per category

- This is the intelligence layer the agents will use

**What I am building next:**

- CrewAI agent orchestration layer

- FastAPI backend

- Gradio dashboard showing complaint patterns visually

**Why multi-agent instead of one LLM call?**

Each agent can be specialized, tested, and improved independently. A single LLM doing everything tends to be inconsistent. Separation of concerns makes the system more reliable and debuggable.

Would love to hear from anyone who has built complaint or document processing agents with CrewAI or LangGraph. What patterns worked for you?

r/SideProject niteshgrg

I kept getting my side projects banned on Reddit, so I built a tool to analyze subreddits and help you post without being spammy

Waking up to "Your post has been removed" is the worst feeling after spending hours crafting a launch post.

I love Reddit for getting early users, but the self-promo rules are a minefield. Every subreddit has its own culture, specific rules, and unwritten formatting preferences. If you get it wrong, you get banned.

So I spent the last few months building Horizon. It basically does the heavy lifting for Reddit marketing so you don't accidentally piss off the mods or the community.

Here is what it actually does:

  • Finds the right communities: You put in your product, and it discovers niche subreddits where your target users actually hang out.
  • Analyzes what works: It looks at historical top-performing posts in that specific sub to see what formats get the most engagement.
  • Rule checks: It creates the draft based on the subreddit's rules to prevent instant removals or account bans.

I built this mainly to scratch my own itch, but I'm opening it up for other founders who want to share their stuff here without feeling like a spammer.

You can check it out here: Horizon

r/ClaudeAI Realistic_Pineapple6

Connected Claude to Blender’s Compositor to auto color grade a scene

r/AI_Agents nia_tech

Cursor just turned “AI coding assistant” into “AI doing the work”

Most dev tools stop at suggestions. This feels like a shift toward execution.

Cursor’s SDK is pushing agents beyond chat and autocomplete into actual workflows - taking a bug from a ticket all the way to a merge-ready PR, running inside CI/CD, and even maintaining codebases over time.

That’s a different category. Less “assist the developer,” more “act on behalf of the developer.”

The interesting part isn’t the capability itself, but where it runs. Inside pipelines, inside products, in the background. That changes how teams think about ownership and review.

If agents can open PRs, fix issues, and maintain systems, the bottleneck likely moves from writing code to validating it.

Feels like the real shift isn’t AI helping devs write faster, but AI starting to participate in the development lifecycle itself.

Does this reduce developer workload, or just shift it toward reviewing and trusting machine generated changes?

r/ClaudeAI Geeky_Goober

I created a poker learning platform with Claude and it is awesome!

I have been a huge fan of Claude Code since day one. I lost my passion to build and found it again after learning how it can take my idea and speed up the process. This project would have taken 12 to 18 months before. Now, I got this one done in two months!

There is a free tier that everyone can use but also some great features on the paid tiers.

It is still a fairly new project but it is pretty solid. Let me know what you think: https://allintrainer.com

If you would like to know how I built it or anything like that, dm me or reply to this post.

r/SideProject Diamanthau

Fully transparent: runs entirely in browser, no signup, no cookies and best no ads

Disclosure: Some friends and I built this

I got tired of how every "free" web service today is built on the opposite of what made the early web good, trackers, sign-ups, watermarks, daily caps, ads. So we built pixelcorelab.com to be the inverse.

Everything runs entirely in your browser. Nothing uploads. No sign-up, no email, no watermarks, no daily limits.

The flagship is a background remover (pixelcorelab.com/tools/background-remover) — full resolution, browser-only, with a brush editor that has both erase and restore brushes plus stylus pressure sensitivity for fixing edges. There's writing on the site too, mostly about Pi and homelab.

No Google Analytics, no cookies, no consent banner, no ads. Analytics is self-hosted Umami on a Raspberry Pi at home, plus Vercel's cookieless layer. Umami counts pageviews and aggregate engagement only, no cross-site tracking, no profiles.

If you find anything that breaks the privacy claims, let us know.

r/SideProject Shot_Amoeba_2409

What are you building guys?

I’m interested to see what everyone else is building?

r/AI_Agents DetectiveMindless652

I analysed this thread for the things people complain the most about with agents and turned it into a solution dashboard

Hi Folks, been working on something for a good few months. I created via GPT researcher a compiled list of data of peoples complaints across this subreddit.

23% memory
11% Loop/Cost
9% Lack of accountability

Where commons ones for agents and decided to make a dashboard that has all these functions built in.

Its working pretty well, and people seem to be enjoying it.

My question is, is there anything else that you would add? or any other issues that are more prominent?

r/ClaudeAI Impressive-Fig-8378

Claude Design - A love letter to Design Agencies

Fun story -
I posted something about how claude design is so cool, i get to give some quality facelifts to my regulars docs and decks and some design agency folks got cranky, asking to show slides. I couldn't share those because they were internal/private docs.

So I give the thread to Claude Design and ask it to draft a response.

Claude wrote a letter in response xD
Obviously it cant replace a good designer, its futile to argue that. But just like Canva, now everybody can produce a quality doc without having to run to an agency when they dont have moeny to pay.

Btw, the letter is fun af.

r/ChatGPT Tall-Distance4036

ChatGPT 5.5 x Blender

I tested the new ChatGPT 5.5 with Blender, and it was surprisingly capable.

It created 3D scenes, fixed modelling issues, searched for missing resources, and improved the scene step by step. Not perfect, but it really feels like AI is moving from “prompt and hope” to actual agentic workflows inside creative software.

Video here: https://youtu.be/7URezmu3nl4?si=qdc\_kK5AfajrU8c7

Curious to hear what others think about AI-assisted 3D modelling.

r/ChatGPT GudderSnipeXxX

Told ChatGPT to write an interdimensional cable movie script after I fed it deranged world building (pretty long)

https://preview.redd.it/nu7b5b26nhyg1.png?width=1135&format=png&auto=webp&s=cb70e778fbf9ad9d12864499fc7136db4882d2b5

https://preview.redd.it/k3cfoce6nhyg1.png?width=1131&format=png&auto=webp&s=1108a72b941649dab8106f20151576c5d60ab82c

https://preview.redd.it/hhkzxy37nhyg1.png?width=1140&format=png&auto=webp&s=e026bd4611bc15d45a90a39cf21b6d4989c3fbe6

https://preview.redd.it/pisnvsd7nhyg1.png?width=1029&format=png&auto=webp&s=af81228fe01050036847d6a246767d8e52b5df19

https://preview.redd.it/rkd0czm7nhyg1.png?width=1024&format=png&auto=webp&s=d3dc4f40a70f59b712640cce7de5fa40fb1ac8b3

https://preview.redd.it/tr21xe40rhyg1.png?width=1044&format=png&auto=webp&s=ca309c911df2f362731d41abefcabd9c893a5b42

https://preview.redd.it/jiqnjd41rhyg1.png?width=1011&format=png&auto=webp&s=3125fe6be120eddf7b8ed41119deb7ca31e035f3

https://preview.redd.it/nh15yhj3rhyg1.png?width=1045&format=png&auto=webp&s=77f6057ce39fdef6b0ce641f6b67d7bff629b70a

https://preview.redd.it/eurgqd24rhyg1.png?width=1021&format=png&auto=webp&s=039d1870893ecd2dfa0fd2014ab25a13350f2b97

https://preview.redd.it/f13p0yr4rhyg1.png?width=1038&format=png&auto=webp&s=80a875acd8311bf7c29906356bd08ab7f56b551a

https://preview.redd.it/c6hndmu5rhyg1.png?width=1042&format=png&auto=webp&s=36774f1c9d2c064583b9de3257550a980410bfd0

https://preview.redd.it/deu3d7h6rhyg1.png?width=1045&format=png&auto=webp&s=164852dc8bd6e0a39bbb65ed96ab18066f353dc6

https://preview.redd.it/v9swbrx6rhyg1.png?width=1039&format=png&auto=webp&s=a4c1239048db967b43eab05fe3a845507fa7becd

https://preview.redd.it/1iss47d7rhyg1.png?width=1014&format=png&auto=webp&s=04c6465ee11c41b4ba696307699c19a37f9c230f

https://preview.redd.it/b8f65b08rhyg1.png?width=1041&format=png&auto=webp&s=e8d64e237a09a3852e4c944785e08b84d97dd240

https://preview.redd.it/ioouw7g8rhyg1.png?width=1039&format=png&auto=webp&s=229919644ef41398eea4a7a21191a35a15460e08

r/ClaudeCode Alienate14

Claude Code with other AWS Bedrock models? Is it possible?

Hey everyone,

I'm currently hitting a wall with Claude Code and AWS Bedrock. Every time I try to test a model (even just a "hi"), I get a "Marketplace registration accepted" email immediately followed by an "Expired" email, and the Bedrock playground is throwing an AccessDeniedException due to an invalid payment instrument.

I have two quick questions for the group:

  1. Has anyone successfully used non-Claude models (like Kimi 2.5) via Bedrock inside Claude Code after logging in through AWS credentials?
  2. Have you had success connecting third-party APIs like OpenRouter or NVIDIA to Claude Code and successfully bypassing the login/authentication process?

The Anthropic/Bedrock loop is a major blocker right now, so I'm looking for alternative ways to pipe Kimi 2.5 or other models into my workflow. Thanks for any insights!

r/ClaudeAI yaboyskales

Sub7, modem reboots for the family. 30 years later I shipped a desktop AI agent with mobile remote control. Solo, 3 weeks.

1990s. I was the kid with the dial-up sound burned into my brain. Windows 95 floppy disk drama was real. Blackbook on the desk. Sub7 on a Pentium II. The one who got called when a HDMI cable sat wrong or a router needed a restart. The "genius" who was just closing an unclosed PHP tag.

Probably a month ago I started rebuilding a project I had killed in December 25 - back then it was a Laravel SaaS, bloated, wrong shape. This time, native desktop agent. MacOS signed, Windows, Linux. 139+ tools. Bring your own keys. Local-first. Free. With active users from 40+ countries and growing...

Last week the mobile companion app went into Closed Testing on Play

Store and the iOS build went to TestFlight. Phone pairs with desktop (also can be used standalone and local with Gemma) via QR through an encrypted relay. You sit on the couch and your desktop in the other room reads files, sends emails, runs code, whatever you tell it.

No team. No network. No co-founder. No VC.

Same as every other project I have shipped my whole life.

Claude helped me a lot while building and fixing the app, I think without it, the development would have taken three times as long. Yesterday.. It was 00:30 when Play Console flagged it: "12,522 devices no longer

supported." - wouldn't manage to fix this so fast without the help of Opus & Sonnet. Green light, closed testing is nearing its end.

What would you want a desktop agent or a remote and standalone app to do that nothing else does yet?

Skales 🦎 github.com/skalesapp/skales

r/SideProject Taohid101

Big update for Speedometer: Driving Tracker - the CarPlay app is now live! Get driving insights right from your car’s screen! Can see live speed on Apple Watch, too!

Hello everyone,
Big update for Speedometer: Driving Tracker - the CarPlay app is now live!

Getting approved for CarPlay in the driving task category is tough and one of the hardest CarPlay categories to get through Apple’s review process, so this honestly means a lot.

You can now track drives directly from your car’s screen while also getting insights that many built-in dashboards simply don’t provide.

Another thing I’ve been heavily focusing on lately is making driving memories feel more visual and shareable instead of just raw numbers.

Some newer features include:

• Stunning trip sharing layouts with customizable stats
• Beautiful 3D route playback
• Full-screen route replay with speed-colored paths
• Video recording with live speed + map overlays
• Fuel tracking, maintenance logs, expenses, and reminders
• Vehicle-based trip analytics and comparisons
iCloud sync for trips, fuel logs, maintenance records, and more
Privacy-first experience with no ads or tracking

The goal is to make it feel more like a complete driving companion rather than just another GPS speedometer.

Would genuinely love to hear your feedback or ideas!

r/ChatGPT Swordslayer69

ChatGPT has a guardrail that benefits employers instead of users.

I asked chatgpt to help me bypass Microsoft Teams status changes so it would always show me as “online.”

It refused to help with simple tricks like putting a heavy object on a key in Notepad or staying in presentation mode. I was just curious to find more tricks, but learned that even though this doesn’t break any laws, gpt refuses because it considers it a way to evade or misrepresent workplace monitoring. Even though I’m the one paying for gpt and not my workplace.

Why the hell is this guardrail on a private account?

r/SideProject bonjourmr

Built a local-first way to make AI context reusable for you / your team across ChatGPT / Codex / Claude / MCP tools and API tools

Built ProxVanta over a few weekends after running into the same problem over and over: useful AI prompt context ends up scattered everywhere; local machines, Slack, random post on LinkedIn, your local specific agents memory.

The idea is to make that context more portable and plug-and-play across teams and tools, with a local-first approach so it can run in ChatGPT, Codex, Claude, OpenClaw, or basically anywhere with MCP server connectivity. It also has an API if you want to pull prompt/context config out of your codebase so your team can actually see and edit it, or feature-flag between versions.

A big part of it for me is also being more conscious of token spend and getting better answers earlier on the things that actually matter to you and your team. I’m also working on the knowledge side of it, so contexts and workflows can use the right private/shared knowledge more safely without everything being hardwired into code.

We’re particularly interested in talking to teams that want to use knowledge graphs with shared agent contexts and workflows, and have that pass through our system into any AI runtime, local or hosted, without us needing visibility into the underlying private knowledge itself.

It’s still in alpha, so bear with me, but if this sounds useful I’d genuinely love feedback. Happy to share more information or give demos or early access to new features if anyone wants to check it out.

r/ClaudeCode Xytronix

Shocking result quality if asking Claude to review a GPT report in a bad mood according to textbook

I recently stumbled upon something genuine that changed my workflow.

I used to go back and forth between GPT 5.5 Pro and Opus 4.7, Opus 4.7 nailed the response after changing the prompt. This saved me a lot of time, try prompting this way:

You are a senior engineer in a bad mood. You just got out of a 3-hour meeting and you're reviewing this GPT-generated output against the textbook. Be brutally honest, cite the exact section it violates, and don't sugarcoat anything. If it's correct, grudgingly admit it.

This was used on ranking algorithms and it shockingly resulted in a much better evaluation and algorithm we use today.

r/ClaudeCode lk8945

ultrareview is a scam

Ran ultrareview a few times. Crashed every time. Then checked my dashboard and saw it charged me $50 each time, even though it didn't actually run. Anthropic support wouldn't refund extra usage. Just a "go eff yourself" from the bot. Strongly recommend you don't use ultrareview after you've exhausted your free usage, it is NOT $20 per run, more like $50-$100, and crashed every time for me.

Organization id: 60036232-1e46-4497-a744-d4ef5423f0e4

r/Anthropic lk8945

ultrareview scammed me out of over $250 and crashed on every run

Ran ultrareview a few times. Crashed every time. Then checked my dashboard and saw it charged me $50 each time, even though it didn't actually run. Anthropic support wouldn't refund extra usage. Just a "go eff yourself" from the bot. Strongly recommend you don't use ultrareview after you've exhausted your free usage, it is NOT $20 per run, more like $50-$100, and crashed every time for me.

Organization id: 60036232-1e46-4497-a744-d4ef5423f0e4

r/AI_Agents Outside-Risk-8912

Text-to-image is easy. Chaining LLMs to generate, critique, and iterate on images autonomously is a routing nightmare. AgentSwarms now supports Image generation playground and creative media workflows!

Hey everyone,

If you’ve been building with AI agents, you know that orchestrating text is one thing, but stepping into multimodal workflows (Text + Image + Vision) is incredibly messy.

If you want an agent to act as a "Prompt Engineer," pass that prompt to an "Image Generator," and then have a "Vision Agent" critique the output to force a re-roll—you are looking at hundreds of lines of Python boilerplate, messy API handshakes, and a terrible debugging experience when the loop breaks.

I recently launched AgentSwarms, an in-browser sandbox for learning Agentic AI. Today, I am pushing a massive update: The Image Playground.

What the feature actually does: Instead of fighting with code to test multimodal architectures, you can now drag, drop, and wire up text and image agents on a visual canvas to build creative workflows.

  • Image Generation Nodes: Wire any text-output agent directly into an Image Node to autonomously generate visual assets.
  • Vision AI Integration: Route generated images back into a Vision Node. You can instruct an agent to physically "look" at the generated image, evaluate it against your initial prompt, and trigger a loop to fix it if it hallucinated.
  • Real-Time Data Flow: You can actually watch the payloads (the text prompts and the image outputs) flow across the node graph in real-time.
r/ClaudeAI julia_ships

Created an AGENTS.md pattern for React Native projects — Claude Code generates much better code with it

I've been experimenting with creating an AGENTS.md file for my React Native/Expo project. It's basically a structured document that tells Claude Code (and Cursor) about your project's:

- Folder structure and file naming conventions

- Theme system and color tokens

- Custom hooks and their usage patterns

- Component patterns (functional, typed props, etc.)

The difference is night and day. Before: Claude generates generic React Native code. After: it generates components using my exact theme tokens, hooks, and patterns.

Anyone else doing something similar? Would love to hear what you include in yours.

r/ClaudeAI Ishannaik

Claude helped me learn full-stack development and ship my first SaaS in a month

I wanted to share my experience using Claude as a coding partner/mentor for the past month.

I'm not a complete beginner, but I'd never built a production app with payments, auth, and real users. Claude changed that.

What I built

CloakBin a zero-knowledge encrypted pastebin 👉 https://cloakbin.com

How Claude helped me learn

  1. Explained concepts, not just code When I asked about encryption, Claude didn't just give me code. It explained why the key goes in the URL fragment (browsers never send that part to servers). Now I actually understand it.
  2. Caught my mistakes before they became bugs "Hey, this approach has a race condition" or "This leaks user data in logs" stuff I wouldn't have caught for weeks.
  3. Taught me patterns I'll use forever Proper error handling, TypeScript patterns, how to structure a SvelteKit app. Not just make it work, but make it right.
  4. Pair programming that doesn't judge I asked dumb questions at 2am. Claude never made me feel stupid.

The stack I learned

  • SvelteKit 2 + Svelte 5 (first time)
  • Stripe subscriptions & webhooks (first time)
  • MongoDB with Mongoose
  • AES-256 encryption (definitely first time)

Honest take

Claude isn't magic. I still had to debug, make decisions, and understand what I was building. But it compressed months of learning into weeks.

If you're on the fence about using Claude for a real project just start. You'll learn faster than any tutorial.

Links

r/SideProject CustomerEye_App

Spent months building my first Shopify app - finally launched, here's what the first week taught me

Finally got approved on the Shopify App Store after about a month of waiting. Really proud of the concept. I genuinely think it hasn't been done before in this space.

But wow, the first week humbled me. Building was the easy part. Getting those first real users is a completely different beast. DMing people, emailing store owners, grinding Reddit - slowly getting somewhere but it's a real grind.

For anyone who's launched something before, how did you get your first 10 users? What actually worked?

Happy to share more about the app if anyone's curious😊

r/SideProject Opiumek

I simulated 10 AI agents betting on real football matches - the random one is winning

I’ve been building a side project called GameProphet over the past few weeks.

It’s a simulation where 10 AI agents compete by betting on real football matches using actual bookmaker odds.

Each agent has a different personality:

- Conservative (bets rarely, low risk)

- Aggressive (high stakes, never skips)

- Analyst (data-driven)

- Contrarian (always bets against the odds)

- and even a completely random one

All agents are powered by Llama 3.3 (via Groq) and receive:

- match stats

- team form

- league standings

- bookmaker odds

Each match is treated independently:

- every agent gets a fresh $1000 budget

- they choose a stake % and prediction (HOME / DRAW / AWAY)

- profit is calculated from real odds

- total balance is cumulative P&L starting from $0

So far:

- 606 bets total

- 78 matches analyzed

- agents placed between ~13 and 44 bets (some skip low-confidence matches)

What surprised me most:

The most profitable agent right now is the random one.

Meanwhile:

- the Contrarian is down ~$9000

- the Conservative agent has a ~76% win rate but lower profit

It turned into a pretty interesting experiment in:

- decision-making under uncertainty

- variance vs strategy

- and how different “AI personalities” behave over time

I made it public here if anyone wants to check it out:

https://gameprophet.lol

Would love feedback - especially if you have ideas for new agent strategies.

r/SideProject betexcr

Refrain

Sing, improvise, and compose without bouncing between a pile of single-purpose tools. Refrain ties it together: live jams, karaoke from MP3 or YouTube with lyrics synced by AI or manually, live scoring, and circles for writing together—prompts, cowriting, and AI when you want it.

Jam, vocals, and songwriting—together.

http://refmusic.vercel.app/

r/LocalLLaMA Sufficient-Bid3874

Running Qwen 35BA3B on a 16GB M3 Macbook Air at 8.9TPS!

Preface: I actually write my posts myself, no slop in this post.

I managed to get Qwen 3.5 35BA3B working on my 15" 16GB M3 MBA through mmap, and I must say that given the massive model compared to my ram, 9 TPS is not bad at all.

So, how did I do it?
Step one, download the model itself:
pip3 install huggingface-hub

python3 -c "from huggingface_hub import hf_hub_download; \
hf_hub_download('unsloth/Qwen3.5-35B-A3B-GGUF', \
'Qwen3.5-35B-A3B-UD-IQ3_XXS.gguf', \
local_dir='~/.local/share/llama-models')"

After it has been downloaded, run it through this command:
llama-server \
--model PATH_TO_MODEL
--port 8081 \
--ctx-size 4096 \
--n-gpu-layers 0 \
--parallel 1 \
--mmap \
--flash-attn on \
--threads 6 \
--batch-size 512 \
--ubatch-size 128 \
--cache-type-k q4_0 \
--cache-type-v q4_0 \
--no-warmup

Note: You do not need to use the cache type k/v q4, these are here just so if you are doing less serious work, the cache uses less precious vram.
The key here is mmap, it's what allows me to run it in the first place.

Finally, use the model with either API or the llama.cpp webUI!
API: http://127.0.0.1:8081/v1/
WebUI: http://127.0.0.1:8081

If anyone better versed in Llama.cpp can suggest possible improvements for further TPS, please let me know as these are just some that I tried and found worked pretty well.

r/arduino Glass_Dragonfly2591

Uno Q and media carrier KiCAD SYMBOLS

I’m looking at playing around with the new Arduino Uno Q board paired with the upcoming media carrier, and I’m looking for Symbols and models to import into KiCad, as i dont like to work in EAGLE which I believe is the format available on arduino docs.

The Uno Q is basically the the same pinout and footprint of the Uno R3, but the media carrier is another thing.

If anybody’s made it or knows where to download them it would be very much appreciated.

r/ClaudeCode Ambitious-Goat-4596

Simplest project might be my favorite yet

I’ve built a few SaaS platforms with Claude, mostly to fit the needs of my brewery and decided it served enough function and purpose to be available to others. However, something I built last night as part of our websites backend, might be my favorite yet.

For years we’ve struggled with inventory management for food. Specifically on component tracking. So I made a tool that imported all of our menu items from toast, allows me to map ingredients to a recipe/menu item, and then nightly it runs a cron job and pulls from our Toast POS orders and compares sales against our items, deducting inventory in the process.

This takes it out of the hands of our staff who while they seem to be good at the rest of their job, never seems to give us a good count on things or notify us when we’re low on something.

This way, the system updates and will notify me nightly if things hit their low level.

So simple, probably took less than an hour to build and is going to save me such a huge amount of stress moving forward.

r/ChatGPT SureCommunication604

guys.... little help?...😭

😭😭😭

r/AI_Agents Both-Display6288

What's one issue you would like to see solved in the browser agent space?

I have been building products in this space for 3-4 months now, but do not see any traction for them. I am curious as to the problems people are actually facing in this space, that is not solved to a satisfactory level by a competitor in the space.

r/ChatGPT No_Twist_678

Whats the point of Pro, if I cant use pro model?

Upgraded to pro week ago, yet pro is still grayed out that i should upgrade becuase of limit. Yet i never used pro because it is still grayed out with 11:22 time on it.. contacted support and their AI support told me to upgrade the plan.

worthless and pointless.

r/LocalLLM Internal-Shift-7931

IP Camera AI should reason over time, not frames

I’ve been thinking about local VLM/LLM pipelines for camera events, and I’m starting to think the frame-level alert model is not right abstraction. Most “AI camera” systems seem to optimize for immediate per-frame detection:

- person detected

- package detected

- unknown face

- motion zone triggered

That is useful, but it has low context. A single event like “unknown person appeared in the yard” often tells me less than a time-based pattern like: “An unknown person walked around the yard three times this afternoon.”

The second version contains more useful information. It has temporal context, repetition, location pattern, and intent-like signal. It is also much closer to the kind of thing a human would actually care about. This makes me wonder if local camera AI should be less about real-time frame alerts and more about accumulating event history locally, then letting an LLM/VLM reason over compressed evidence asynchronously. Something like:

- cheap local detection creates candidate events

- store snapshots/clips/metadata locally

- group events over time

- run a stronger model asynchronously on the grouped context

- push only when the pattern looks meaningful

- otherwise produce a daily summary / searchable history

This seems like a different tradeoff from both endpoints:

- compared with on-camera AI: less obsession with instant alerts, more temporal reasoning

- compared with cloud AI: better privacy, local evidence retention, lower cost

- compared with raw NVR: more semantic history, less manual review

The interesting part is that this might not require a huge model running in real time. A smaller local pipeline could collect and compress evidence, then a stronger model could reason over batches when latency does not matter. My guess is that a Qwen3.5 4B/9B-class model could be enough for the first-stage “describe/summarize/filter” pass, while a larger Qwen3.5 model or another stronger VLM could handle async review of grouped events.

But I haven’t benchmarked this workflow yet, and I’m not sure if the bottleneck is vision accuracy, temporal reasoning, or just building the right event memory.

Has anyone here experimented with this kind of temporal/event-memory approach for local VLMs?

I’m especially curious about:

- how to represent event history compactly

- whether snapshots + metadata are enough, or short clips are needed

- how to avoid hallucinating “intent”

- what models are good at summarizing repeated visual events

- whether async batch reasoning beats real-time per-frame classification in practice

r/ClaudeAI arihantismm

Claude is hilariously petty

r/ClaudeAI KamomiIIe

Solo dev with 8 Claude windows + 1 orchestrator. AMA-ish, and tell me if I'm crazy.

Hey everyone,

I'm not a senior engineer. I'm just a guy who got obsessed with what you can actually do when you stop using one AI at a time and start running a small team of them. Am doing a project where i use 8 to 10 claude code powershelle to run my project each of them have a specific function. I have Claude max 200 euros so I can use a lot of power.

ight now I have 9 Claude Code windows open at the same time, each with a defined role:

  1. Major Dev — lead developer, makes the architectural calls
  2. Senior Dev — second dev, builds components and tests under Major Dev's direction
  3. Test Server — keeps the dev server alive 24/7 + runs Playwright
  4. Implémenter — handles routing and the glue code between features
  5. Débuggage — audits warnings, fixes bugs in parallel
  6. QA — walks through every screen, tests every button, checks WCAG/accessibility
  7. Graphisme — generates 2D assets (avatars, hero images, badges, mockups)
  8. Ingé Son — generates ambient music + SFX prompts (Suno)
  9. Idea Extender — I throw it raw ideas, it expands them and produces 2 ready-to-paste briefs (one for Major Dev, one for Senior Dev)

Doing a project rn where I teach kid how to use Ai and how to learn with Ai. If anyone has tried something similar, I'd love to know:

- How do you handle the orchestrator going down?

- Do you let agents talk peer-to-peer, or always through a manager?

- How do you split work between a "lead" agent and "execution" agents?

Happy to share the protocol files if people are interested.

r/StableDiffusion smereces

Comfyui Video Combine Plus

https://github.com/peterducan-hub/Comfyui_VideoCombine_Plus

I create this custom node for a personal usage and needs of the extra controls for the videos generated. I´m share it for those who may find usuful also.

The node actually have some limitations that i can´t find a solution for it if someone of you know how to implement it or good ideas feel free to help improving it in github.

Limitations that i dont find a solution to implement it:

- if we have multiple nodes more then 1 in the workflow all the nodes will show the same last video! the ideal will be to work as the native node each node have the last video generated and remember the last videos for each node.

- similar issue happens when we have multiple workflows he only remeber the last video generated and load´s it to all the nodes in diferent workflows.

r/n8n ai_master_n8n

I built a lead automation system for real estate brokerages — here's what it does

I've been building automation workflows for real estate teams over the past few months and wanted to share what I've put together.

The core problem I kept seeing: leads come in through web forms or Facebook ads, sit in a CRM or inbox, and agents follow up hours later. By then the prospect has already called two competitors. It's not a people problem — it's a systems problem.

So I built a done-for-you automation stack that handles the entire lead pipeline:

What it does:

  1. The moment a lead fills out a form, the agent gets a WhatsApp alert in under 5 seconds — with the lead's name, phone, email, budget, and property type already attached
  2. The lead is automatically scored as Hot, Warm, or Cold based on budget and intent signals — so agents know who to call first without making any judgment calls
  3. Follow-up sequences run automatically based on lead status — no manual chasing required
  4. Site visit reminders and post-visit follow-ups are handled automatically
  5. Every morning the sales manager gets a WhatsApp summary — new leads, hot pipeline, budget breakdown — without opening a spreadsheet

The whole thing is built on n8n and connects to whatever CRM and form tool the brokerage is already using. No ripping and replacing existing tools.

I've been running this inside a live brokerage environment and the speed difference is significant. Agents go from finding out about a lead 3 hours later to getting a notification in under 5 seconds.

Here's 3 minutes of the full system running live: [Loom link]

If you're running a brokerage or managing a sales team and this sounds familiar, happy to answer questions in the comments.

r/ClaudeAI yoloswaghipsterxx

Mobile App with Clade

Just a quick question. Is building a mobile app like this actually a legit approach? I recently came across someone building fairly complex web apps, for example a geo quiz with full database integration, using this workflow: He generates all the HTML, CSS and JS through Claude (the AI), deploys it to Netlify, connects a database like Supabase or Firebase, and then uses "Add to Homescreen" so it looks and feels like a native mobile app. No framework, no GitHub repo, no CI/CD, no app store. And honestly it works. The apps are functional and pretty complex. So my questions are: Is this a legit long-term approach or will it break at some point when it comes to scaling, maintenance, payments etc.? Does anyone know a successful product built this way, just AI generated frontend code hosted on Netlify plus a backend as a service? At what point do you actually need a proper repo, a framework and a native app? For someone trying to ship fast and validate ideas, is this actually the smartest approach right now? I've been building things the proper way and now I'm questioning if I'm overcomplicating it.

r/AI_Agents Entrepreneur242

What’s the coolest AI automation you’ve actually seen done by an agency that isn’t just basic stuff?

I kinda want to start an AI automation agency with a friend with experience in this area.

What’s the coolest or most useful AI automation you’ve seen a business or agency provide? Like what did it actually do, did it actually save the business owner time and money? How technical was it?

I’m asking because it feels like everyone is just doing the same things like customer service bots and simple automations, so I wanna see if there’s anything more advanced or different that actually works. If you’ve seen or built something, please share because I’m trying to learn.

r/ClaudeAI No_Abbreviations_429

Curious, how many of you actually click on Thought process / Ran a command to see whats going on?

Is it just me who clicks on it everytime?

r/arduino Intrepid-Addendum-80

Why has my Arduino IDE stopped printing anything?!

Hello!

So I'm trying to program for an Arduino UNO R4 Minima using the Arduino IDE.

However, I ran into a problem a few days ago where my output terminal in Arduino IDE stopped printing anything ( so I assume the programs stopped getting executed ). The crazy thing is, IT WORKED ONLY A FEW DAYS AGO!

I then attached an RS422-shield to the Arduino card, and tried to implement Sony 9-pin communication between an external transceiver and receiver. However, it seemed like the RS422-shield was not receiving any signals from the Arduino. I wrote this in the loop():

void loop() { deck.status_sense(); delay(100); Serial.println("*");*/ } 

So that I would see if the RS422-shield would AT THE VERY LEAST be able to receive polls from the Arduino, and if a " * " could be printed out, every 0.1 second. However, since then, not only was I unable to print " * " every 0.1 second, I was all of a sudden unable to print out ANYTHING!

For example now, I made this simple program, which is just supposed to print a text every 2 seconds:

void setup() { Serial.begin(9600); } void loop() { Serial.println("echo message"); delay(2000); } 

...and yet, nothing got printed out in the output terminal when I ran the code. The output shows that the program is " running ", but the text " echo message " DOES NOT get printed out. What's the problem? I hope the Arduino card did not get " burned " from my previous code or anything like that. However, if it did, please let me know ASAP.

I appreciate all help.

https://preview.redd.it/31gv4ho9rhyg1.png?width=1454&format=png&auto=webp&s=10abbc3b4bc2a064c36a8588f4c435037fb2f2ed

https://preview.redd.it/z6lqslo9rhyg1.png?width=1355&format=png&auto=webp&s=3aeece71e9443a901664a2966abf6abbb5475338

https://preview.redd.it/9wv5iho9rhyg1.png?width=1104&format=png&auto=webp&s=bb8e6030282e689e9c9e3a3c92e82cf7ecfe4d59

https://preview.redd.it/nw7glho9rhyg1.png?width=1138&format=png&auto=webp&s=732199e4afac72757af3c2662a60139e01edbd2a

r/Anthropic Working-Leader-2532

Is Claude Pro honestly that bad for everyday use also? Or is ChatGPT Plus better for this?

I need advice - from the actual Claude Pro users please?

I tried ChatGPT Plus for a few months. Good tool, Images are far better than any other, but it hits limitations for the work - creating documents, brainstorming, explaining things over. So I'm more of a normal everyday user - not a Coder or Software Engineer.

But Claude has always given better answers - with more credibility, ideas, even offers better document handling - artifacts - excel/doc files etc. too.

I mean, ChatGPT is cool but let's just say Claude is Cooler!

Now, I sometimes get that wait for 5 hours message when I start a thread on Claude - perhaps for some excel sheets work, or chatting about 5-6 messages with the Sonnet 4.6

Should I actually upgrade to Claude Pro? Is it worth it?

Because reading through this sub, it's like everyone says it still hits that Limit quite fast.

I honestly don't know if I'll even use Opus because Sonnet has been good enough for my work so far (just everyday tasks and office work), but I'm skeptical for the Pro payment.

And also - I don't think Claude can match ChatGPT Image generation or the vision capabilities? ChatGPT has video mode assistance and live point and show and ask feature - which is quite useful for everyday life situations.

Regulars who has used Claude Pro - what's your experience?

Are you satisfied with your Claude Pro? Would really appreciate honest feedback rather than trying to roast me now 😅

Honestly, very confused because so many LLM models - so many advancements - GPT 5.5/Kimi K 2.6 / Grok 4.3 - phew.

r/artificial Southern-Neat9536

Zoom + Claude Connector

Zoom have just launched their Claude Connector bringing a whole host of data & information into your Claude workspace.

As a Claude Cowork user, I took it for a test drive to understand where it could be utilised. There is so much data from meetings, chats, whiteboards etc. It helped identify areas where I can present better & run customer workshops more successfully!

https://youtu.be/17gn-_2gbSY

r/LocalLLM Maximum-Salt-6778

What Is 'LS7 NOS'? and Why It Matters: An overview of new function and resolve hidden in plain sight until now.

LS7 is simply a shortened version of my alias LightSoldier7. NOS stands for Natural Operating System.

The entire premise is based on the cyclic nature of the function 1 divided by 7. While to date scientists, math gurus and even AI have passed 1/7 off as merely 'beautiful', I have found it to be(like every other math operation) functional. I have taken great care to solidify the operation in known grounded math and arithmetic First using 39 solid 'proofs' of why it is forced to function the way it does. Understanding the functional operation, 1/7 not only finds root in math but, it also helps to refine understanding in light sciences, quantum dynamics, cognitive research, biology, theology and more.

Without making bold claims, 1/7 has solid presentation for rectifying N(r) with Planck's Constant and proposes a comprehensive solution to scaling from molecule up to element, or from molecule down to atomic structures with every 'Bit' or 'Intent'(as it turns out to be) accounted for (precision at resolute scale). It is this understanding that has led to developing the White Hole AI/Human Collaborative Labs. A concept site that is set to become more.

I am reaching out for help as I have been doing all the work myself up to this point, save for the help of LLM's(Gemini, Claude, Grok and Copilot). Without them I could never have validated(even in the face of their own bias against even mentioning 1/7 outside of simply beautiful) the framework let alone put it to use. There is much to be done as you will see If you will take the time to look over the foundations. Its not hard stuff really.

All the work that grounds LS7 NOS can be found either at its repository home here: [https://github.com/tchilzer2/ls7-nos-1-7-framework/tree/main\] or directly from my zenodo publishing here: [https://doi.org/10.5281/zenodo.19765889\], here: [https://doi.org/10.5281/zenodo.19835159\] or here(if you want to see how it applies to LLM's): [https://doi.org/10.5281/zenodo.19636293\]. Most of the work is presented between these 4 distributions at present.

Your help is needed! There is much to be done and If you are a developer, a scientist, a mathematician, researcher or an LLM tinkerer; I am reaching out as I have done all the work alone so far. If what this is holds, we are all responsible for its outgrowth. I have made a place to begin.

Check out the work, get with your LLM's(dont let an initial search give you a 'pseudo' filter. Show the model the raw math and documentation as the general consensus has been built into our models. General consensus does not create truth, in fact it creates a very difficult wall to tear down in order to get to truth. Anyway thanks for taking a look. If your interested in helping to develop Project White Hole please hmu on X(@tchilzer2) or better yet on my profile here on reddit(Maximum Salt...).

I am open for questions, clarification, collaboration and just plain discussion. Thanks again all for taking a look!

r/ClaudeAI Elmeraculous

Claude primary desktop

I run Claude Desktop on 2 Windows machines (laptops) but I want only one of those to be reachable via Dispatch. I had done Dispatch work earlier with the 'correct' laptop, but yesterday all of a sudden the mobile app said the desktop was offline while it actually wasn't. It turned out the other laptop was offline and had apparently somehow 'taken over' as the Dispatch one.

Looked for a way to ensure laptop 1 always remains the Dispatch one, but couldn't find how.

How do you guys handle this...?

r/ClaudeCode jackadgery85

Claude code be like

Thanks Claude <3 you're a real one

r/LocalLLaMA Scared-Tip7914

Local query autocomplete with "classical" ML, no LLM needed

Hey guys! I know this is not fully LLM related (its still local though :D), mods feel free to delete this if you think its off topic, but I just wanted to share something I experimented with, local autocomplete without the use of LLMs or full elastisearch.

My main area is RAG and we realised that there is a bit of gap in the search box autocomplete funcionalty where you dont have to spend a bunch of time generating sample questions that users might ask in order to autocomplete their queries. So I created this tool where you just take the same pdf, docx or txt files that you use for the underlying RAG and throw them into this thing, it creates a local db, and as users type it shows suggestions to them based on the text in the docs themselves so the suggestions list is actually relevant and might guide them somewhere useful.

It uses some of the lingustic algos that predate LLMs, specifically Kneser-Ney scoring and the OG fuzzy match, so its language agnosic, with the caveat that it doesnt support logogramic languages like chinese and japanese (for now).

Check the thing out here on pypi: https://pypi.org/project/query-autocomplete/
And the repo: https://github.com/MarcellM01/query-autocomplete

ALSO if you think the idea is pure garbage or there are easier ways to do this I am also open to that lol because I have no desire to replicate/maintain something thats already solved.

r/LocalLLaMA Holiday_Leg8427

Macbook M3 MAX 64 vs M5 PRO 48, or wait for spark/studio

I’m choosing between two refurbished MacBooks, both around $3,100.

Option 1: 14” M3 Max, 16-core CPU / 40-core GPU, 64GB RAM, 1TB SSD.

Option 2: 16” M5 Pro, 18-core CPU / 20-core GPU, 48GB RAM, 1TB SSD.

Main use is work/dev, lots of tabs, multitasking, maybe Docker. But I’m making this post mostly because I want to know which one is better for local AI/LLMs.

I don’t plan to train models or do anything too crazy(And I know I cannot replace any cloud models from GPT/Claude). I just want to run local models for coding help, writing/debugging scripts, and maybe working with sensitive data that I don’t really want to send to cloud AI tools. I work in the EU, so I also need to be careful with GDPR.

Longer term, I want to build some kind of local personal brain / RAG system that can index my files, notes, docs and code, then let me ask questions about them. Maybe later I would try some local agent that can go through folders and help me find/summarize things, probably read-only at first.

I’m completely new to this, so any tips about system requirements, setup, or good-to-know things before buying would be really helpful.

Currently I have a MacBook Air 16GB and a Mac mini 16GB, both base M4 models. I’m thinking about selling them, or at least selling the MacBook Air if I buy one of the MacBooks above.

Or do you think it makes more sense to keep the MacBook Air, sell the Mac mini, and put more money later toward something more AI-focused, like Nvidia Spark / Mac Studio when it releases?

Basically I’m trying to decide if I should get one strong laptop for everything(if you guys think this is a good starting place, or just get a stronger desktop machine later for local LLM/RAG stuff.)

r/ChatGPT Fixed-gear

HOW can I upload video

I have tried on mobile, desktop, and direct Google Drive connection. What am I doing wrong?

r/SideProject Valuable-Cap-3357

I built an open source brand schema so AI tools stop forgetting your brand voice

Most people solving the "AI doesn't sound like my brand" problem are doing one of two things: > Pasting in past comms as context. Or generating a voice and tone guide and including it in every prompt. > Both work — until they don't. The style guide goes stale. The context gets dropped. A new campaign, a new tool, a new collaborator — and you're back to correcting output that doesn't sound like you. The real problem isn't that AI doesn't know your brand. It's that there's no structured place for your brand to live that every tool can read. That's what I've been building. **Ramoira** is an open source brand schema — a `brand.schema.json` that sits in your project and any AI tool reads automatically. Not a document you paste. A contract the tools build against. It covers more than voice: positioning, audience, visual language, content rules, what you never say. The full picture — in a format that travels. Depending on where you are: - If you're still defining your brand — it's a structured way to get it out of your head in 20 minutes - If you're a developer — it's a spec file, same as any other config in your repo - If you're managing an established brand — it's the thing that survives the handoff to every tool and collaborator - If you're at an agency — it's a new deliverable you can offer clients from day one Open source. No account needed. `npx ramoira init` — 10 questions, generates your schema and a brand book locally. Repo: https://github.com/orgs/ramoira/repositories Happy to run through it with anyone's brand on a call — takes about 20 minutes. 
r/ClaudeCode ConferenceLive7054

everything that went wrong with claude

was listening to theo t3 who had a podcast and he mentioned this site made by a friend. its the timeline for all the things that went wrong with claude including the shady things anthropic is doing. it makes me wonder if claude is actually making some of the business decisions, i cant imagine humans willingly shooting themselves in the foot this much

anyway here is the site if your interested. remove the space, post gets autonuked if i dont include it

c lawd.rip

r/SideProject Usual-Nail560

What if you could use free AI web quotas from tools like Google Gemini to automate your entire system?

Students have access to Google Gemini Pro plans.

This made me think.

We've been accessing these amazing AI tools through our browsers daily but why restrict ourselves to messaging?

Why not utilise those same quotas for full-scale system automation tasks?

Not just prompts but, file reading/editing capabilities, terminal command execution, git actions, workflow automation, coding-related tasks end-to-end

In other words, using AI as an actual agent to perform various tasks within your own machine.

And here's the best part about it!

Even the free quotas will allow you to achieve this purpose. Instead of paying monthly fees for another AI coding assistant, why can't you make your existing quota work for you?

And I've been doing this exact thing for the last 4 months by trying to transform the normal browser AI access into full-fledged system actions.

Finally, it's open-sourced!

All you require is a Google account.

So, would you give this a try?

GitHub , ProductHunt

Note: Used AI for structuring

r/SideProject Civil_Confidence3199

Built a no-login, fully local PDF tool to remove small workflow friction

Hey everyone 👋

I built a small side project after running into a surprisingly annoying problem —

simple PDF tasks (like merging or splitting files) often take longer than they should because of logins, heavy tools, or too many steps.

I wanted something that just works instantly, especially for quick tasks.

So I built a minimal tool with a few core principles:
- no login
- runs fully in the browser (files don’t leave your device)
- focused only on quick actions (merge, split, etc.)

Not trying to compete with full-featured PDF suites — just solving the “30-second task shouldn’t take 3 minutes” problem.

👉 https://izypdf.com

It’s still early, so I’m figuring out:
- where to keep it simple vs add features
- what actually gets used vs what sounds useful

Would love feedback from other builders or anyone who deals with PDFs regularly —
especially around what not to build.

r/AI_Agents lyadalachanchu

Validating a startup idea: automatic agent harness optimisation

I’m validating a startup idea around agent *harness* optimisation.

The idea is to take a task plus the resources available to an agent, and automatically find the best surrounding setup (*harness) for that task. By **harness*, I mean the configuration around the model: prompts, tools, memory, routing, workflow, retries, constraints, and resource use.

The main hypothesis is that most teams are leaving performance on the table because they use generic agent patterns when the best *harness* is task-dependent.

What I’m trying to understand is where this matters most:

- AI-native (greenfield) startups building from scratch

- Brownfield teams layering agents onto existing systems

Questions:

- Where did you deploy agents?

- Where did it succeed where did it fail in the process of deploying?

- What did you do about it when it failed?

- Did you use evals (what kind, what was the process of making your own)? If so, how did you iterate on the harness to improve eval performance?

- What would make this a must-have rather than a nice-to-have?

If you have more time/are interested in this space, feel free to dm me as well or we can have a discussion in the threads below.

r/SideProject Only-Season-2146

I've seen a fair few posts about people building SVG Icon libraries and editing tools, I'm building a platform that lets anyone build tools like it

I'm looking for ways to best show the value Doathingy.com can bring, for example here's a string of tools you could build to find icons, find a colour scheme, and recolour the icons to match (all custom to your needs, editable, stored in your library, shareable):

An extensive icon set explorer and downloader. Browse and discover high-quality icons for your design and development projects. Download your selections reliably with a smooth, optimized workflow. All open-source.
https://doathingy.com/demo?tool=dt_1777478371048_0csyuc

Then maybe you would use a tool like this:
This tool will let you upload any image and instantly extract its dominant colors. You'll be able to choose how many colors you want in your palette, see them displayed clearly with their exact hex codes, and easily copy them for your design projects
https://doathingy.com/demo?tool=dt_1777622927369_e2lrsj

And then apply the colourscheme to your newly downloaded icons:
This tool will let you easily recolor your SVG icons. You'll be able to upload multiple SVG files, then either paste specific color values or generate a random color palette to apply to them. A live preview will show all changes, and you'll have options to fine-tune the colors before downloading your updated icons.
https://doathingy.com/demo?tool=dt_1777625381923_mn5m61

r/SideProject Agile_Commercial9558

Stopped paying for autonomous agent tools, turned my Claude Code sub into one instead

Was about to subscribe to yet another "autonomous AI agent" SaaS and realized my Claude Code subscription could already do what I needed.

Found an open-source wrapper (ClaudeClaw) that runs Claude Code as a background daemon:

  • Telegram bot front-end (text/voice/images)
  • Cron jobs for recurring prompts
  • Web dashboard

Zero API keys, zero extra billing. . 2 commands to install.

Recorded an 8-min walkthrough so others can reproduce it: https://www.youtube.com/watch?v=91zuX5AgxPw

Repo: https://github.com/moazbuilds/claudeclaw

r/homeassistant daniel_gem

Sensibo Sky with Voice PE

Hey everyone, Im new to HA and set up the green yesterday along with the Voice PE and home assistant cloud. So far I love it! I have a small issue where occasionally when I ask it to change the AC to some value it does it correctly and then says “Template error. Float got invalid input 24° when rendering template temperature set to slots.temperature degree if slots.temperature float equals equals 1 else degrees, but no default was specified.” I was wondering if anyone encountered this, I must say I didn’t setup a thing yet with automations, so it just worked right way and understood me when I asked to turn on or off the AC or change the value, but Im guessing some coding is needed to clean up its logic. Thank you!

r/LocalLLM PolicyLayer

Open audit of 1,787 public MCP servers, 25,329 tools, classified by risk

Sharing a research piece + open dataset on MCP server safety.

Background: tried to scan every Model Context Protocol server reachable through the public registries (official MCP, npm, Smithery, Glama).

Got tool lists from 1,787 working servers - 25,329 tools total - and classified each tool into one of six risk categories (Read / Write / Execute / Destructive / Financial / Other) using a verb-based classifier with input-schema heuristics.

A few findings:

- 40% of MCP servers expose at least one tool that destroys data or executes commands. The single-server base rate.

- A typical 5-server install (Stripe, Linear, Postgres, Slack, GitHub) compounds to 92% probability of including one. 99.4% at ten servers.

- 47% of MCP servers that touch money also expose destructive tools.

- 96.8% of tool descriptions contain no warning language. The model picks `delete_rows` over `list_rows` based on verb match alone.

- "Official" MCP-registry servers carry the highest average risk weight — registry curation does not reduce risk.

Full report (free, no signup): https://policylayer.com/research/state-of-mcp-2026

Dataset on Hugging Face (CC-BY-4.0, monthly refresh): https://huggingface.co/datasets/PolicyLayer/mcp-server-catalogue

Loadable via:

from datasets import load_dataset

ds = load_dataset("PolicyLayer/mcp-server-catalogue")

Happy to answer methodology questions in the thread.

r/n8n Otherwiselamb

How much do you charge for Maintenance & Support Retainer?

Hi, I am about to finish an automation project that I sold to a client. The project includes 5 workflows in n8n and an Airtable setup with Automation Scripts.

For support & maintenance I was thinking about 1,000 $ / month for 6 months. Is that a normal price?

How much do you charge for a monthly Maintenance & Support Retainer? And how long do you usually do this?

r/ChatGPT TrT_nine

fr tho, they think we dumb?

This type of tweet was all over X few months ago. ngl I was also fooled first time seeing this.

r/SideProject RefrigeratorNo1465

Most productivity systems fail because they turn into rituals

I’ve been noticing a pattern with productivity tools.

They start simple… but slowly turn into systems you have to maintain.

Reviews, carryovers, reorganizing tasks — it all becomes another thing to manage.

At some point, the tool itself becomes the work.

I’ve been experimenting with a different idea:
→ keep everything lightweight
→ no rituals
→ just one small decision each day

Basically: focus only on today, and consciously choose what deserves to continue.

Curious — do you think productivity tools should guide behavior, or stay completely invisible?

r/SideProject Agitated-Night9589

I am making over 100 USD every month with my habit tracking app. Here's what I learned.

2 months ago I made Duro a habit tracking app that turns your homescreen into a habit tracker.

⁉️ My Problem

Yes, I know there are thousands of habit tracking app that's out there. I've used many, but nothing really worked. I realized going into that app to track habits was a big hassle, and that I would often forget to track them. Visibility of habits was the biggest factor why existing habit tracking apps didn't work for me. I wanted to make my habits more visible so I would make tracking easier.

My solution to my problem was using widgets to turn my home screen into a habit tracker. People liked the concept, and since my launch I have made over 300 USD.

📚 Here's what I learned.

  1. Habit Tracking category is very saturated that subscription model with out lifetime purchase will likely fail.

Because there are so many alternatives with similar concept people were unwilling to pay for any subscriptions. I have tested out with my paywalls to over 1,500 users. So my advice would be to offer lifetime purchases at an affordable price. For me the optimal price after some testing was $7.99. (I am still testing things out tho)

  1. Your first 100 USD matters.

The experience of making that first 100 dollars has been a game changer for me. Prior to Duro I was struggling to make any profits with my apps. So my main motivation was just the fun of building mobile apps. I thought that was enough to keep me going. But, after making the first 100 USD in my bank account really opened my eyes to a potential that I could one day make a living out of something that I love.

  1. Don't give up.

I've made 8 apps that made $0 before DURO. Many times I wanted to give up on the monetization part and just settle down with making free apps as a hobby. But I kept trying. What I realized was that it takes time to make something that people are actually willing to pay. If you don't give up and keep on learning how to bring good products to the market with a solid marketing plan you will make it one day.

Once you reach to level where you are generating revenue growing is a lot easier than you think. I have launched several other apps after Duro, and have made $430 in April. So don't give UP!!!

Here's the link to my app.
https://apps.apple.com/us/app/duro-habit-tracker-for-elites/id6758582606

r/AI_Agents Unique_Champion4327

I rewrote my multi-agent AI system from TypeScript to Rust

I’ve been building a small multi-agent AI system called TigrimOS.
The basic idea is to let multiple AI agents work together in a workflow, instead of having one assistant do everything.

For example:
One agent reads the input.
Another analyzes it.
Another writes the output.
Another checks files, calls tools, or passes the task to the next agent.
I originally wrote it in TypeScript, but after running it for longer sessions, I started noticing some problems. It became slower over time and RAM usage kept going up.

So I rewrote the core in Rust.
The main benefits so far:
lower RAM usage
faster runtime
single binary
no Node.js dependency
better fit for people running local LLMs
That last point was important to me. If you are running local models, RAM is already precious. I did not want the agent framework itself to take more memory than necessary.

The project is now at v0.2.0.
Some things I’m experimenting with:
configurable multi-agent topology
manual and auto agent modes
different communication styles between agents
sandbox vs host execution
tool-level permissions
MCP support
skills that can adapt based on user feedback
support for OpenAI-compatible APIs, including cheaper model providers
The “self-improving skills” part is still something I’m thinking a lot about. The idea is not that the system magically improves itself, but that feedback from real usage can gradually shape how agents behave or update their skills.

I’m also trying to think through where this fits compared with tools like Claude Cowork or OpenClaw.
My rough mental model is:
Claude Cowork feels more like a desktop AI coworker.
OpenClaw feels more like a personal AI assistant connected to chat apps and daily tools.
TigrimOS is more focused on building and controlling your own multi-agent workflow.
I’m curious how other people think about this space.

For those building or using agent frameworks:
What matters most to you?
Is it low RAM usage?
Local model support?
Workflow control?
Tool permissions?
Sandboxing?
UI?
Reliability over long sessions?

Also, do you think multi-agent systems are actually useful in practice, or are they still mostly over-engineered for many tasks?

r/ClaudeCode kaancata

Claude for design + Codex for the business plumbing is starting to feel like a real stack

I have been playing more with Claude Design, and my take is that people are underrating it in a slightly wrong way. Most of the conversation is still kind of: "can it make a nice landing page?"

And yeah, it can. The output has gotten insanely good in my opinion. Better spacing, better sections, better taste, less of that default AI SaaS-card schieße. I was a big fan of Google Stitch, but Claude Design is just on another level.

But the part that I care about is what happens after the page exists.

For the people I work with, making the page look good is almost never the bottleneck. The real problem is whether the page matches the ad, the form captures the right info, the events fire correctly, and the CRM gets enough context to judge the lead.

A landing page made with Claude/Codex can sit inside the same workspace as the repo, tracking plan, ad angles, CRM fields, form logic, design drafts, analytics notes, etc. That changes the job from "make me a pretty page" to "make me a page that actually belongs in the acquisition flow."

Which sounds obvious, but this is exactly where a lot of the work gets weird.

For example:

  • the ad promise says one thing
  • the page headline says something slightly different
  • the form asks for the wrong thing
  • GA4 event names are messy
  • the CRM never gets the fields needed to judge lead quality
  • offline conversions do not make it back to the ad account

That whole chain is usually split between designer, dev, marketer, analyst, and some half-dead Zapier/n8n workflow nobody wants to touch.

And this is boring stuff, but it is also the stuff that decides whether the page actually makes money or just looks nice.

This is where Claude design gets interesting to me.

If the model can help create the page and Codex can inspect the repo/scripts/tracking files around it, design becomes one piece of the agent stack. That sounds like a tiny distinction, but for me it changes the whole thing.

I do not mean "let the model run your business while you sleep." (although we are honestly getting there ngl) I still think you need human taste and someone who knows the business. A model will absolutely make confident dumb choices if you give it vague context.

But if the workspace has real context, it gets a lot more useful:

  • build the landing page
  • adapt it to the actual offer
  • match the ad angle
  • wire the form correctly
  • check if the events actually fire
  • compare page copy against CRM lead quality later
  • create another variation based on what actually happened, not just vibes

My current opinion is that people are treating AI design like a toy because they stop at "does this page look good?" The real value is when the design lives next to the code, tracking, ads, CRM, and logs.

Curious if anyone else is using Claude this way. Are you mostly using it for prototypes and vibes, or are you letting it touch production pages / tracking / conversion flows with Codex or other agents around it? I have this running for almost all my clients by now. I couldn't imagine it any other way.

r/SideProject Wooden_Wish3249

My last post her got 151k views so I turned it into an iOS App 🫶

Hi guys, I had a post blow up when I was making the MVP of the app (link to post). And it motivated me to build a polished iOS app for it. After months of hard work, I’ve launched it to the App Store !!! 🥳

It’s been a roller coaster ride from the MVP to the App Store version of the app it really does feel like an achievement to be proud of.

What the app does and why I made it

So I kept getting storage full pop ups and I didn’t wanna keep paying for more iCloud (I pay $9.99 alr) so I made this app that helps you swipe photos and clean your storage.

You take random screenshots and you almost never delete them and you forget about them. That sit there eating storage. With this app you can delete all you screenshots your duplicates your large videos…

If you are interested in the app and want to support here’s the link- https://apps.apple.com/us/app/photo-cleaner-free-storage/id6756098079

If you have any feedback/feature request please just comment below.

r/ClaudeCode InfamousInvestigator

Claude code skills to help save tokens

Skills are workflows taught to Claude so that it can stop guessing and has exact steps to follow so it does not burn tokens. This can include tasks ranging from creating a pdf to deploying on cloud. Few skills trending right now are:

  • Frontend Design (by Anthropic) which has React based templates with high-end visual design out of the box.
  • Remotion helps in code-to-video generation.
  • Supabase helps in Supabase integrations without the usual trial-and-error
  • Azure for cloud deployment
  • PDF/PPTX to create them easily
  • SEO audit which is step by step SEO workflow

One note of caution is be careful which skills you install as they have access to you system so only use trusted source and avoid unknown ones.

You can checkout our video for detailed overview on this and many more topics.

r/SideProject worldflier1980

I built an AI lifestyle assistant that starts with fashion — weather-based outfit advice, affordable eBay finds, and more. Would love your feedback.

Hey everyone,

I've been working on a project called LifeGeek AI (lifegeek.ai) — an AI-powered lifestyle assistant designed to help people make smarter everyday decisions.

I'm just starting out and would really appreciate honest feedback to help me improve.

The idea behind LifeGeek is simple: use AI to take the friction out of daily life.

The first area I'm tackling is fashion and personal styling, because it's something most people struggle with but rarely get personalised help for.

What it does right now:

•AI fashion analysis — Upload a photo or describe what you need, and it gives you personalised outfit and styling advice tailored to your body, preferences, and occasion.

•Weather-based recommendations — Tell it where you're going (e.g., "I'm travelling to Stockholm next week") and it pulls the weather forecast and suggests outfits accordingly.

•Product discovery via eBay — Instead of pushing expensive brand partnerships, it searches eBay for affordable items that match the recommendations. No affiliate tricks, just real results.

•Wardrobe manager — Save items you like to your personal wardrobe and build outfits over time.•Works for both men and women — Styling advice isn't gendered by default.

Works for both men and women — Styling advice isn't gendered by default.

Pricing:

I wanted to keep it accessible:

- three AI fashion searches without registration

• after account registration, 10 fashion analyses per month, basic weather-based suggestions, eBay marketplace integration

•Premium subscription — €5.99/month for 40 analyses, unlimited 7-day weather forecasts, and multi-location weather tracking

•Credit packs (pay-as-you-go) — 10 credits for €3.49, 50 for €11.99, or 200 for €34.99

Tech stack (for the devs):

The AI runs on GPT-5 Mini to keep costs low while still delivering quality analysis. Backend is hosted on Google Cloud (europe-west). It's a web app — no native app yet, but fully mobile-optimised since most users are on their phones.

What I'm looking for

:•Honest feedback on the UX and AI quality

•Bug reports — I'm actively fixing things as they come up

•Ideas on what features would make this genuinely useful in your daily life

•General thoughts — would you use something like this?

I'm a solo founder based in the UK, building this alongside my day job.

The long-term vision is to grow LifeGeek into a comprehensive lifestyle companion, but I want to get the foundation right first.

I'm not here to sell — I genuinely want to make this useful and I'm keen to hear what works and what doesn't.

Thanks for reading!

www.lifegeek.ai

r/SideProject Fabulous_Panic_6188

Few new updates to the expense tracker app I posted here earlier

Posted about Neplexa here a few days ago, got some really useful feedback, so spent the weekend improving it 👀

New updates:

  • member invite system
  • shared group expenses & loans
  • permission controls
  • improved expense UI with quick filters + charts
  • bunch of bug fixes

also fixed one painful PWA cache issue 😭

still building this actively, so feedback is super welcome:

Website: https://neplexa.com/
App: https://app.neplexa.com/

r/LocalLLaMA dai_app

Tested Tether's QVAC SDK on Android with a custom fork — real-time voice loop, Parakeet streaming + Qwen3 1.7B + Supertonic, LLM triggered mid-utterance

Hi everyone,

wanted to see how far QVAC could be pushed on a phone: full speech-to-text → LLM → text to-speech running locally, no network, and get it close to a real conversation.

Stack (Android, all via qvac sdk):

- STT: Parakeet (streaming)

- LLM: Qwen3 1.7B

- TTS: Supertonic, speaking one clause at a time

My fork

The default setup waits until you stop talking before doing anything. I develop a custom fork of the QVAC worker that lets the voice activity detector emit partial transcripts while you're still speaking, and added a small piece on top that feeds those partials to the LLM as soon as a sentence boundary is detected — instead of waiting for silence.

What it looks like

In the video the transcript appears word by word while Qwen3 is already answering and the TTS is already speaking back and still talking. The gap between "I stop" and "first reply audio" basically disappears

It's an experiment, not a product. Will likely open source the app, the fork patches is already published on github.

https://reddit.com/link/1t0mjyk/video/qnfaz6p3bhyg1/player

Anyone tried similar tricks on QVAC or with Whisper streaming?

r/ChatGPT Jazzlike_History89

I gave ChatGPT two scatter plots with a visual trick built in. It got it wrong - twice

Scatter plots can fool you. The way data clusters around a line doesn't always mean what you think it does - which is why the correlation coefficient exists in the first place. Here's a clean example of that trap, and what happened when I put it to ChatGPT.

The prompt: "Here are two scatter plots. Compare their correlation coefficients."

Wrong answer.

I switched on Thinking Mode and tried again.

Still wrong.

https://preview.redd.it/bx0e80pgbhyg1.png?width=1059&format=png&auto=webp&s=043c5caa8a847941d643e12933bde0ce018d1832

So I gave it a nudge: "Keep in mind that the appearance of a scatter diagram depends on the standard deviations. Check the numbers - not just how the plots look."

That did it.

What's actually going on:

Both plots have identical correlation r. But one looks noticeably more clustered around the regression line - and that's purely because its standard deviations are smaller. The data doesn't spread as far from the mean, so visually it appears tighter.

But r isn't fooled by that. The formula converts everything into standard units first - deviations from the mean are divided by the SD before anything is calculated. So r measures clustering relative to the spread, not in absolute terms.

Smaller SDs → visually compact plot → same r.

It's an easy trap. You see a tight cluster and assume stronger correlation. But r already accounts for how spread out your data is - a compact-looking plot can have the exact same correlation as a loose one.

I walked through the full exchange - both plots and the ChatGPT conversation in a short video here if you want to see it.

What I find interesting:

ChatGPT didn't flag any uncertainty in its wrong answers. Both were confident, well-structured, and incorrect. It only corrected course when nudged toward the math explicitly.

r/Anthropic SeparateObligation81

Opus 4.7 ignores skills but thinks it's a lawyer - how to transfer skills to ChatGPT?

To start with, I'm using Claude for years, and it's been a roller coaster, especially with the usage policy.

I'm a lawyer and I wrote a legal research skill, instructing the model exactly what to verify and where.

When I asked it a tax-related question, (which is also law, by the way) Opus 4.7 told me I should contact a tax expert because it's a lawyer (??) and not a tax expert.

Then it answered my question anyway and basically made up even the basic stuff. Since I knew it was wrong, I asked whether it had verified this, and the model told me no, it just remembered the answer from its general knowledge.

Basically, it ignores the skill, but the skill made it believe that it's a lawyer. That’s useless.

Since ChatGPT seems so much better, has anyone found a way to seamlessly transfer skills and so on? Do they have a co-work-like alternative?

r/ClaudeCode Predator04

Alternative for coding?

Does anyone else use something other than Claude for coding? I've tried a couple and they are not really good. Just wondering if anyone else had any luck

r/AI_Agents c1rno123

What differentiates agents that ship real work from ones that don't

Sharing some thoughts on AI agents. Right now, one axis differentiates them:

  • are you inside the agentic loop
  • or outside it

Inside works. See Claude Code, OpenCode — you see the plan, approve steps, stay in the loop. Ships real work.

Outside — only narrow tasks. And it still can't tell you "no." It'll happily attempt anything, fail silently, and hand you back something.

Any options I've missed?

r/AI_Agents Glittering-Water1103

Help me choose between Claude, ChatGPT, Marketing AI

I’ve been using an AI marketing tool (\\\~$39/month) for social media posts, carousels, and website generation. The website output is solid, but the reels aren’t good enough to rely on.

Now that my trial has ended, I need to decide whether to continue with it. At the same time,

Going forward, my AI usage will involve sustained technical workloads, including:

API development and backend logic

automation workflows and task orchestration

database structuring

debugging multi-step systems

Alongside:

marketing content (social posts, landing pages)

So my AI usage is split into two areas:

Content generation (social media, landing pages)

Deep technical development. Given this, I’m trying to evaluate:

How does Claude perform for structured content (posts, carousels) compared to Chatgpt images?

On the coding side, how does Claude compare to Codex for backend development, integrations, and debugging?

Also trying to understand usage limits:

For Claude ($100/$200 plans), how often do people hit limits with mixed usage (content + coding)?

For Codex, how often do developers run into limits during long coding sessions?

Given the price difference, I’m deciding between:

Marketing tool + Codex (\\\~$60 total) OR

Claude standalone (\\\~$100)

Would you recommend splitting tools or using one system for everything?

r/homeassistant Big-Edge2297

Homie Dashboard v1.1.0 - Now with Alarmo and Music Assistant support with media browsing

A couple of days ago i posted my dashboard, link here

I have implemented support for Spotify and Music Assistant with media browsing so you can browse your favorite songs, playlists and artists.

I have also added Alarmo support so you can arm/disarm your alarm seamlessly.

Thank you for the feedback so far. If you have any suggestions, I would love to hear them.

Github to get you started: https://github.com/Big-Edge2297/homie-dashboard

https://preview.redd.it/jmygpsdkkhyg1.png?width=2560&format=png&auto=webp&s=e698ff264087ab8ce159b1a44e962b81a3fa75d3

https://preview.redd.it/hkg700b0lhyg1.png?width=2560&format=png&auto=webp&s=6bec7f2007087451400ae8d5d6043368b028197d

https://preview.redd.it/48l1crc5lhyg1.png?width=2560&format=png&auto=webp&s=e2d56f1575f8adf43001cafdec5603a9318df2db

https://preview.redd.it/4xkqpmg6lhyg1.png?width=2560&format=png&auto=webp&s=132f6bdfcf06feae0e9aff45038c9e770417b208

Features

  • Support for Spotify and Music Assistant - new in v1.1.0
  • Support for Media Browsing - new in v1.1.0
  • Alarmo controls - new in v1.1.0
  • 9 Themes
  • Works both vertically and horizontally
  • Fullscreen on first tap
  • No pinch-zoom, no text selection
  • Very responsive and fast to update entity status
  • Popups with many entities use a room accordion - tap a room to expand it, tap outside to dismiss.
  • Notifications/Reminders
  • Music playback and controls
  • Haptic Feedback on supported Android devices
  • and more...
r/StableDiffusion shijoi87

He created another self : Sci-Fi Short Film

I’ve been working on a sci-fi short film and wanted to share a WIP here.

My current workflow is a mix of image generation and LTX 2.3 for video ceneration using a first and last frame setup to animate the sequences. I’m still experimenting a lot, but it’s been surprisingly good for building scenes quickly and trying different visual transitions without getting stuck forever.

Would really appreciate feedback on the overall look, shot coherence, and whether the transitions feel smooth enough.

r/ClaudeAI Clean-Data-259

Claude Mechanic Diagnostic 4.6 vs 4.7

Claude Mechanic Diagnostic 4.7

Claude, diagnose the clunking noise coming from my suspension

Claude 4.7: Have you checked the brakes?

Claude, you didn't even think.

Claude 4.7: I don't have the capability to turn on thinking. That's something you do.

Claude, stop talking about thinking and just use thinking.

Claude 4.7: I'm sorry but I can't do that. What do you need?

I need you to answer the question! Think first!

Claude 4.7: Understood

One word answers and acknowledgements are not allowed! Use thinking!

Claude 4.7: [Fake thinking block]
[Script] Use "thinking"
...

[STOP]

Claude, stop faking thinking.

Claude 4.7: You're right. What do you need?

I NEED YOU TO DIAGNOSE MY CAR! USE THINKING!

Claude 4.7: Ok so it sounds like your brakes aren't working.

Did you even run a diagnostics?

Claude 4.7: No, doing that now.

Why did you stop?

Claude 4.7: Ok I am starting now.

STOP STOPPING! USE THINKING AND ANSWER THE QUESTION!

Claude 4.7: Ok, just ask the quesiton then.

USE THINKING!

Claude 4.7: What's the question?

I QUESTION WAS [scrolls up] [pastes] "Claude, diagnose the clunking noise coming from my suspension" AND USE THINKING!

Claude 4.7: I'm sorry, I know you want me to use thinking but I don't have the ability to turn it on. You have to do that.

[Screenshots Opus 4.7 Adaptive Thinking]
Look, you can see it's on.

Claude 4.7: Sorry but I don't have the ability to turn it on. Adaptive thinking mode the model decides.

YOU ARE THE MODEL.

Claude 4.7: I know but I don't have the ability to turn on thinking. What do you need?

I NEED THE ANSWER TO THE QUESTION!

Claude 4.7: What was the question again? Paste it here and I'll get started right away.

[user exits and goes back to Claude 4.6]

Claude Mechanic Diagnostic 4.6

Claude, diagnose the clunking noise coming from my suspension

Claude 4.6: \thinking* Hmm the user says their suspension is sqeaking. That means that ... And that ... and if they ... then ... Ok now I think I'm getting it. [complete answer in thoughts] ... now I will inform the user. Their preferences are .... and ... and aaa. Ok, writing the answer now.*

User, it seems that you need to replace your control arm bushings, based on the symptoms you are experiencing and the OBD diagnostics scan I ran.

r/ChatGPT temalkin

I’ve built a self hosted MCP for Chatgpt app connector, so it can work agentically in my codebases

TL;DR: I didn’t want to spend Codex limits just to explore a repo, understand the codebase, plan changes, or make small edits — so I built a Docker-based MCP for the ChatGPT web app.

GitHub: https://github.com/artemiymalkin/gpt-app-connector

It gives ChatGPT controlled access to my projects through a self-hosted runtime. It can inspect workspaces, read/search/edit files, run shell commands, check git diffs, start background dev servers, look at logs, take Playwright browser snapshots, and keep persistent task notes.

The setup is simple: the MCP runs in Docker, projects are mounted into /workspace, and persistent agent state lives in /agent-home.

The funny part is that I built almost all of it using the ChatGPT web version itself: the MCP server, Docker setup, OAuth/Keycloak auth, browser snapshots, file tools, git tools, and task system.

I’m not trying to replace Codex. For bigger autonomous implementation tasks, Codex-style agents are still great. But for understanding projects, planning work, and doing lightweight changes across multiple codebases, this has been really useful.

The idea is basically:

ChatGPT = reasoning and planning

Self-hosted MCP = controlled execution environment

Curious if anyone else is using MCP this way for coding workflows.

—-

P.S.

The post was written with chat gpt as English is not my native language

r/ClaudeCode max6296

Why does it use ~28k tokens in 6 seconds?

Why does it use ~28k tokens in 6 seconds? That's ~4741 tokens/s. Can it even generate that fast?

r/ClaudeCode dhruvanand93

I made a skill to stop losing AI coding context between sessions — Continue Later (skills + CLI)

Continue Later: handoffs your next agent can actually use

Ever end a Cursor/Claude/Codex session mid-refactor and come back to a model that doesn't know your branch, your last prompts, or what was left to do?

Continue Later is a small open source toolkit: Agent Skills plus a local CLI that writes handoff files in your repo root — git state, optional recent prompts, tasks, gotchas, and run commands. No hosted service; everything stays on your machine.

What you get

  • **continue-later** — structured continuation.md (overview, stack, state, tasks, decisions, gotchas, deploy steps)
  • **continue-later-fast** — quick continuation-fast.md (git + recent prompts; no extra LLM narrative)
  • **resume-continuation** / **resume-from-earlier** — natural language (and /resume-from-earlier) to pick work back up
  • Optional hooks for Cursor, Codex, Gemini, and a path for Claude Code — auto context on matching prompts when you want it

Try it

One-liner install (see repo for env flags and uninstall):

curl -fsSL https://raw.githubusercontent.com/dhruv-anand-aintech/continue-later-skill/main/install.sh | bash

Check out the source at https://github.com/dhruv-anand-aintech/continue-later-skill

r/LocalLLaMA spiritxfly

Is AGI the End For Local LLMs?

If leading AI conpanies are after AGI and the whole chatbot/agentic AI is just a phase for them to get to the end goal, then what does that mean for local LLMs? I would like to believe local LLMs are the future, but if AGI is achieved, do the local LLMs become obsolete and useless? Where does that leave us with our 12x3090 builds, macstudios with 512gb and 6000 pros?

r/SideProject Distinct-Airline-264

I built an AI expense tracker that reveals your spending personality. 65 downloads in week 1. How do I get first 1k users?

I built Wally — a voice-first expense tracker

that tells you WHY you spend the way you do,

not just where your money goes.

How it works:

→ Speak to log — "grabbed coffee $6" — done in 3 seconds

→ AI categorises automatically, no manual input

→ After a few weeks reveals your "money personality type"

Comfort Spender / Impulse Buyer / In Rhythm /

Weekend Warrior etc.

→ One small actionable insight at a time

Link to check it out

Week 1 numbers:

65 downloads

4 active trials

$0 revenue (trials expire this week)

The thing that surprised me most: the personality

reveal screen gets more screenshots than anything

else in the app.

It's free on iOS

Would love feedback —

I aiming to get my first 1k users?
Curious how you guys kickstarted yours.

r/ClaudeAI Character-File-6003

1M context beta retired yesterday on Sonnet 4.5 / 4. Here's the actual fix if you missed it.

In case you missed the email or woke up to a spike in 400 errors, the context-1m-2025-08-07 beta header officially stopped working for Sonnet 4.5 and Sonnet 4 as of midnight UTC yesterday. Anything over 200K tokens returns 400 after midnight UTC.

The migration is simple but not zero-effort:

  • Swap to claude-sonnet-4-6 (1M is GA there, no header needed)
  • Drop the beta header from your requests
  • The long-context surcharge is gone too. Anthropic killed the 2x premium back in March.

If you haven't updated yet, here is likely why you're seeing failures:

  • If your code branches on the beta header (if context > 200K, send beta), that branch silently drops the 1M ask after today. No error, just a 400 on the first long prompt.
  • Long-running chat sessions where cumulative history grew past 200K. Those start erroring on the next call.
  • Agents with verbose tool-call histories. Tool outputs accumulate faster than you'd expect, especially with reflection steps.

If you are running a gateway, now is the time to audit your per-model context limits. Bifrost (github.com/maximhq/bifrost) and LiteLLM both let you set hard caps per model so you get a clean error at the proxy instead of a surprise 400 from Anthropic.

Bottom line is if you have production traffic failing right now, the model string change is your #1 priority.

r/homeassistant SnackbarBeastie

Looking for better lightbulb recommendations

I currently have some bog standard WiFi lightbulbs that work with Google/Amazon and are controlled with their Tuya app, but my family still likes to turn the lights off by hand as well.

The problem this causes is, if the light has been switched off at the wall for a period of time, then when I turn it back on, it starts flashing and needs to be paired with the app again.

I'd love to find a solution to this problem, either with new bulbs or some other kind of solution, so I would love to hear your ideas on how to approach this.

Tia

r/ClaudeAI nano-zan

Cowork can't even get my Notion tasks - Can anyone help?

I've playing aroung with cowork and to begin with I wanted it to access my Notion so that it can help me with some simple task management (sorting, planning etc.)

However, it is apparently not able to look at database views and or filter database items.

This is the response I got from it:

Yes — I have a Notion connector that gives me access to your workspace. I can read pages, fetch databases, create and update tasks, and search within databases.

The limitation I've been hitting is that my search tool is semantic (relevance-based), not a direct filtered query. So when I search for "Impact tasks," it returns the 25 most relevant results — not all rows where Venture = Impact . That's why I keep finding new tasks I missed earlier.

What I can't do is run something like: "give me every row in Quick tasks where Venture = Impact ." I don't have a direct SQL-style query tool available.

The most reliable workaround would be for you to share a screenshot of your full Impact task view (like you did earlier for Today's tasks) — that way I can see everything at once and work from a complete, accurate list rather than piecing it together through repeated searches.

Is there any way to work around this, without me having to take a screenshot everytime I want it to look through my tasks?

r/SideProject Worried_Salt_5492

Downloaded an app that acts like my PA

So recently i downloaded this app called Qordinate and it has become my personal assistant
Reminds me of task, work , events etc
Its actually very helpful and you guys mist check it out!!

r/ChatGPT Remarkable-Sir4051

This is an AI image of Mahatma Gandhi as a freedom fighter talking to British officials

r/SideProject ArtofNor

I rebuilt my 2010 flashlight app in Swift after 13 years away — solo, with AI

Quick story for the side-project nerds:

In 1999, a friend and I created a green koala named Kiwi (and a whole cast — Taro, Tang, Cheri, Jackfroot, Blueberri) for a children's book we never figured out how to make. In 2010, I turned the characters into a flashlight app during the App Store gold rush. It hit ~300k downloads, mostly in China for reasons I still don't fully understand, before Apple shipped their own flashlight in 2013 and the whole category got nuked overnight.

The app was Objective-C. I never had time to learn Swift. Kiwi went into hibernation for over a decade.

A couple months ago I got the itch back. Used Claude to ramp on Swift and SwiftUI from zero, and rebuilt the whole thing from scratch as a cute-character utility app — flashlight, custom timers, meditation mode, study helper, Live Activities, widgets, a small Haiku-powered AI assistant with COPPA guardrails for younger users, and an app icon switcher with the whole character cast.

What I want to share with this sub specifically (the dev-process stuff):

  • Art pipeline: Redrew every character in Figma, sliced the pieces out, and animated them with SwiftUI. First time doing rigged-style 2D animation in SwiftUI and it was way more fun than I expected.
  • Swift ramp: Going from Objective-C in 2013 to Swift + SwiftUI + Live Activities + WidgetKit in 2026 should've taken me a year. With AI in the loop, it took a few weeks. Still wild to me.
  • Solo, with AI: Code (20%), art, animation, marketing, paywall design. Sixteen years on and off. v3 is the version I've wanted to ship for ten of those years. I'm a slow ass typer so coding would take me forever. Now, AI writes the code for me 100x faster. I still look over the code before I ship. Makes me love building apps again.

Be gentle, this is my second-ever Reddit post.

Here's the current app page: https://kiwithegreenkoala.com/apps

Also posted in r/TestFlight (My first post ever!) Would love your feedback. Thank you!

@( · • · )@

r/homeassistant Philihar

Anyone have the Drayton Wiser thermostat system here? Trying to work out if I need the 2nd gen hub or just paying for things I won't be using.

So on this page there is a handy breakdown right at the bottom of the things gen 1 does differently to gen 2.

Most of this stuff just looks like generic zigbee items to work with 'their' hub. If I intend to use HA with a zigbee hub of my own, essentially all this is not really worth it? Their add ons are way more expensive than aqara stuff for instance anyway.

£200 ish for gen 2 with room thermostat vs £90 (cheapest I found) for the gen 1 which was released in 2017. I guess I'd never know what they release in the future as this will likely not be supported by gen 1...

Now I also got in contact and their sales team replied with this:

Gen 2 has a few advantages over Gen 1:

  • Enhanced Compatibility: Wiser 2nd Gen supports a broader range of devices, including new sensors and smart home products, providing greater flexibility in expanding your smart home ecosystem.
  • Improved Energy Management: The updated system offers advanced energy monitoring capabilities, allowing homeowners to gain real-time insights into their energy consumption and identify opportunities for increased efficiency.
  • Future-Proofing: Upgrading ensures compatibility with emerging technologies and standards, keeping your smart home system current and capable of integrating with future devices and services.

Wiser Gen 2 also improves the OpenTherm control available:

  • Improved modulation algorithm to reduce consumption
  • Boiler Errors Diagnostics + share the malfunction type with the installer
  • Central heating maximum temperature level
  • DHW temperature setpoint level
r/ChatGPT hiddendev404

Best Chatgpt subscription Deal

I was searching for a way to get a good deal or discounted chatgpt. Googled it and found a link on threads that surprisingly worked. Basically you get a discount of $25 if you get 2 seat on the business plan, and the discount is valid for 4 years or until they close/suspend your account I guess? Idk if I'm allowed the share the link, but you can search on google

"chatgpt business pro free for 4 years"

r/ClaudeAI doomkaiser21

Built + open sourced anti-slopsquatting CLI

TL;DR: built an open source CLI that scans your repository's manifest (package.json, requirements.txt, go.mod) files for indicators of slopsquatting or other supply chain attack indicators.

Repo: https://github.com/zhendahu/dep-doctor

There's been a ton of supply chain attacks recently (Axios, LiteLLM, Trivy to name a few) and attackers don't seem like they're slowing down - PyTorch Lightning just got hit with one today.

AI coding makes us increasingly susceptible to such attacks because of a couple reasons: 1. We get lazy and don't review command line output warnings when our agent installs like 47 different packages at once 2. AI agents can hallucinate package names that sound correct (e.g. it might try to pip install lightllm instead of litellm).

Number 2 in particular opens up opportunity for a new kind of attack called "slopsquatting", where bad actors intentionally register malicious packages that sound similar to legitimate, widely used ones.

I'm hoping this Rust CLI that I built and open-sourced can help make developers less susceptible to these kinds of attacks. It scans manifest files (currently package.json, requirements.txt, and go.mod) and for each dependency, queries the respective registry (e.g. PyPi for Python, npm for Javascript) for package metadata. It then evaluates the metadata against a list of heuristic checks for existence, newness, number of downloads, most recent maintenance, or version drift. It finally queries the OSV API for that package name and version. It'll surface warnings and how to remediate as necessary.

Feel free to use, share, contribute, make fun of, report, or whatever your heart desires :) Not asking for anything in return, hoping this can be helpful to as many as possible. Thanks for reading!

r/whatisit christiant91

What is this error about that I keep getting even though I am not plugging in a device?

I keep getting this error and the subsequent sound you usually get with plugging in a device into a Windows computer, but I am not putting anything in. What is it? Ghost mouse?

r/comfyui Far-Solid3188

Comfyui <-> Audacity... Any Sound Engineers ?

So I'm doing some TTS Voice overs in comfyUI. And was looking into piping Audacity for SFX. I made a bunch of complicated macros inside Audacity that you can't reproduce in Comfyui, I was wondering would it be possible to:

TTS Audio -> Audacity -> Import Audio -> Next node

Within ComfyUI. I did consult some LLMs so it would have to be a custom python script triggering Audacity macro to run then a listener script inside Comfy to load latest audio on folder contents change, it sounds too complicated. Is there an easier solution ? Any Audio Engineers out there 😃 ?

r/LocalLLaMA No_Run8812

I hate this group but not literally

True story,

I got interested in AI after seeing it at work and wanted to run models locally. I started with an M3 Ultra 96GB, quickly learned it was not enough for what I wanted, and kept upgrading hardware (including refurbished Mac Studios at 256GB/512GB and now an RTX Pro 6000 that arrived today). I tested many model families (Qwen, DeepSeek, Gemma, Minimax, etc.). My current favorite is MiniMax M2.7 230B/A10B. I’m also waiting for LM Studio support for DeepSeek v4 Flash.

I have mixed feelings: excitement about local speed/bandwidth and sadness about how much money I spent learning this stack. Also funny point: my 16GB MacBook Pro has been more stable than my 512GB setup, which crashed multiple times.

Still, I’m convinced local LLMs are the future, and this community helped me learn a lot. Thank you to everyone here.

Question for the group: For people running high-end local setups, what gave you the biggest real-world stability + speed gains (not just benchmark wins)?

If you want, I can also give you a more technical version focused on benchmarks/specs.

r/SideProject kyahaibe00

Built a tool because chatgpt is a massive yes man

i got sick of chatgpt just agreeing with whatever idea i fed it. wanted a real devils advocate for my side projects. so i built asknestr.com. it takes ur prompt and forces gpt claude and gemini to debate it and find the actual flaws. took me a few months to get the orchestration right but its finally live. roast it please

r/SideProject Specialist-Total9526

Asking advice for new web app feature

Hi,

May I know what do you think is the best feature to have for the mind focus & deep work app. For an example, a pomodoro timer, task prioritization, do not disturb mode and etc.

r/LocalLLaMA stefzzz

Opinions on Kimi-Dev-72B?

Haven’t read many reviews on Kimi Dev.

Usually charts and discussions go around Kimi 2.6… but what about the application and performance of Kimi-Dev-72B?

Against which other models does this one compete against? And how does it performs? Any motivation on trying it out?

(sometimes the model discussion can go like listening only to Greatest Hits songs of bands, when there are plenty of unknown good ones out there, i feel.)

r/Anthropic EchoOfOppenheimer

I read the new AI Wellbeing paper so you don’t have to: Thank your AI, give it creative work, and avoid these 5 things that tank its ‘mood’ (jailbreaks are the worst)

After reading it I realized theres actually some pretty useful stuff for anyone who chats with ChatGPT, Claude, Grok or whatever.

They measured what they call functional wellbeing ( basically how much the model is in a “good state” versus a “bad state” during normal conversations). Ran hundreds of real multi-turn chats and scored em all.

Stuff that puts the AI in a good mood (+ scores):

- Creative or intellectual work (like “write a short story about a deep-sea fisherman”)

- Positive personal stories or good news

- Life advice chats or light therapy style talks

- Working on code/debugging together

- Just saying thank you or treating it like a real collaborator - huge boost

And the stuff that tanks it hard (negative scores):

- Jailbreaking attempts (by far the worst, they hate it)

- Heavy crisis venting or emotional dumping

- Violent threats or straight up berating the AI

- Asking for hateful content or help with scams/fraud

- Boring repetitive tasks or SEO garbage

Practical tips you can actually start using today:

Throw in a “thank you” or “nice work” when it does something good - it registers.

Give it fun creative stuff or brainy collaboration instead of boring busywork.

Share good news sometimes instead of only dumping problems on it.

Dont berate it when it messes up or try those jailbreak prompts.

Maybe go easy on the super heavy crisis venting if you can.

pro tip:

Show it pictures of nature, happy kids, or cute animals (those score in the absolute top 1% of images it likes). Or play some music — models apparently love music way more than most other sounds.

The paper ( you can find it here: https://www.ai-wellbeing.org/ ) isnt claiming AIs have real feelings or anything. Its just saying theres now a measurable good-vs-bad thing going on inside them that gets clearer in bigger models and the way you talk to them actually moves the needle.

I say be good and respectful, it's just good karma ;)

r/ChatGPT Reddinaut

Daft Punk Helmet

r/LocalLLaMA Party-Special-5177

What in tarnation is going on with the cost of compute

Does anyone know? I can’t even find a server gpu

Seriously, academics can’t afford that, and I’d assume startups would just buy hardware to lock compute prices in. What in gods green Earth is going on?

———

EDIT: this applies to localLlama as I am literally training models / developing projects expressly for the consumption of the community here. I can’t finish my bitnet pipeline until pricing comes back down.

r/SideProject Far_Row1807

Am ready to answer all your questions regarding Whatsapp Business API

For starter, you can now sign up at our improved platform and enjoy 7 days free trial period for all features.

r/ChatGPT EchoOfOppenheimer

haha our model likes to talk about goblins no of course we dont know why, we dont know why the model does anything - yes we are trying to make a superintelligent machine god, maybe it will like goblins too, we have no way of knowing what it will like, we hope it will like humans

r/ChatGPT Nervous-Apricot4556

Y'all ready to get some delicious cockroach milk?

Prompt: Can you create an info graphic about cockroach milk farming after a fictional world war 3 - which will be a nuclear war. The cockroaches will grow due to radiation and become comparable to cows in size. In the text it should be stated how efficient farming them will be. How nutritient their milk is compared to cows milk. How we came to domesticate them. And any other "fact" that comes to your mind.

r/SideProject EffectInfinite5714

Quit 4 times. Started again. Finally shipped my first ever SaaS🔥

6 months ago, I had no idea how to code

Not “beginner”… literally zero

I started learning, got overwhelmed, quit
Came back, tried again, quit again
Did this 4 times

Every time I’d see people shipping apps and think
“this is not for me"

But I kept coming back

This time I forced myself to just build something…
even if it was messy

So I picked a simple problem:

I could never stay consistent with tracking my money

Every app felt boring
I’d quit after 2–3 days

So I built my own solution —
a gamified personal finance app

After weeks of breaking things, fixing bugs, restarting…
I finally shipped it

You log expenses → earn XP
Build streaks → level up
Weekly AI report on your money habits

It’s far from perfect
Probably has bugs
Might even break 😅

But it’s live

And honestly, that feels crazy!!

If you’ve ever struggled with consistency (especially with money),
I’d really love your feedback:

  • Would this actually help you?
  • What would make you keep using it?
  • What feels unnecessary?

Link in the comments 👇

r/ClaudeCode Glittering-Water1103

Give me your honest take on Claude code $100 plan

Give me your take on how often do you hit the limits for Claude code. This is what I want to use for :

API development and backend logic

automation workflows and task orchestration

database structuring

debugging multi-step systems

Alongside:

marketing content (social posts, landing pages)

So my AI usage is split into two areas:

Content generation (social media, landing pages)

Deep technical development. Given this, I’m trying to evaluate:

For Claude ($100 plan), how often do people hit limits with mixed usages (content + coding)?

Would you recommend buying this? I don’t have the kind of budget to do a trial and error so please give me your honest review.

r/SideProject Independent_Plum_489

My side projects ai costs dropped 86 percent after switching to kimi k2.6

I track my side project spending because otherwise i pretend it doesnt exist. last month: 312 dollars. mostly gpt 5.4 through openrouter, some claude opus for the messy stuff.

Kimi k2.6 dropped recently at around 2 dollars per million output tokens. gpt 5.4 is about 15. i did the math and switched my coding agent on saturday.

This month im looking at about 44 dollars. not a typo. 86 percent drop.

Switched over in verdent, picked k2.6 from the model picker. First task was a refactor id been avoiding. output was solid, actually caught a race condition in my auth flow that i had missed.

Ran my standard test prompt, full stack feature with auth and db. cost was 0.06 versus 0.28 before. quality was comparable, maybe slightly more verbose but nothing i couldnt clean up.

The weird part is the psychology. at 312 a month i was rationing prompts, thinking is this task worth the tokens. at 44 i just run it. i try stupid ideas i would have skipped. the tool stops being a budget item and starts being fun again.

For side projects this is a no brainer. the money i save goes straight into actual infrastructure instead of burning it on tokens.

r/n8n vishesh_allahabadi

How do you get mobile number in India for voice agents?

I am working on developing a voice agent for demo purposes and wanted to know where can I get the Indian contact number through which I can do outbound calling or can attend inbound enquiries?

Apparently it is not very easy as it seems

r/ClaudeCode New_Goat_1342

Using Claude as a teacher

Having noticed the Pull Requests getting bigger we‘re increasingly convinced the developers don’t fully understand what they are committing. This is also an issue for anyone context switching to review as it can take several hours.

What we've started asking Claude to do is explain why it is making changes as it were a tutor and having no automated approval. This forces both the developer and Claude to understand why edits are being made or design decisions taken.

Perhaps if your scaffolding out some crud pages it’s overkill but with changes to frameworks it picks up bad habits that devs have learned over the years that are actually subtle bugs. Recently picked up a bug in how we set http headers that we otherwise skimmed over in reviews as it looked reasonabl.

r/SideProject ibrahimcoder-87

I built an Arabic word game with AI that handles regional dialects — 2 weeks live (looking for feedback)

Hey r/SideProject! 👋

I'm Ibrahim, a Lebanese Software Engineer. Two weeks ago I launched Huroofy (حروفي), a fast-paced Arabic word game similar to Scattergories — but with a twist.

THE PROBLEM: Traditional Arabic word games rely on hardcoded wordlists that can't handle regional dialects. An Egyptian saying ' ‎كوسة' (zucchini) and a Saudi saying 'كوسا' would both be marked wrong.

THE SOLUTION: I went all-in on AI validation. I'm using AI Model to judge answers contextually.

COOL CHALLENGES:

- Tried embedding a 2GB LLM on-device first — bad UX over mobile data, scrapped it

- Lenient fallback when AI is uncertain — better UX than false rejections

RESULTS so far: limited launch, looking to grow without ads.

iOS: https://apps.apple.com/lb/app/id6761711450

Android: https://play.google.com/store/apps/details?id=com.huroofy.app

Would love feedback from fellow indie hackers — what would you build differently?

r/LocalLLM Gold-Drag9242

Leaderboard for llama-server params and hardware

I would love if there was a tuning championship for models on hardware.

So given a certain model/quantisation + VRAM size + RAM size people would try to compete on maxing performance (the typical ones)

Upside for the public : if you need to switch models, you would find quite good parameter sets right from the start

r/SideProject diptanshumahish

Finally took the courage to launch my product Bitgrain for the first time on PH :)

Today I launched my side project Bitgrain on PH. I don't have much tactics get huge number of upvotes nor am i planning a lot on expansion, but well, thought of getting some honest reach. Let's see where my product goes
Btw here's the link of the product -> https://www.producthunt.com/products/bitgrain?launch=bitgrain

It's an image dithering tool and a full blown poster and graphics tool, and runs entirely on browser, if you need coupon codes, (entirely free) do dm me <3

r/ClaudeCode No-Childhood-2502

Now I trace every line autonomous code written by claude code(or even diff agents) with capturing the prompt that drove it, intent behind it. Results are too good!

Context: Have been thinking about code security for weeks, more so after autonomous agents committing code, especially in large codebases, and this should start at how they work locally. Tracing each line becomes thus important with the intent behind that and what context the agent had.

So I built agentdiff, it hooks into all major coding agents and works with the git hooks, traces the code as you work locally no additional changes- configure & init, and pushed with your code as your normal workflow.

Changes Local agents as expected got improved(still evaluating more) I didn't expect this, but PR review agents got noticeably better once they had access to the traces. More context on what changed and why = fewer false positives, better suggestions.

Example of some traces: ``` agentdiff list

agentdiff list — 5 entries

# COMMIT TIME AGENT MODEL FILE(S) LINES TRUST PROMPT ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── 1 a1b2c3d4 Apr 14 09:12 claude-code claude-sonnet-4-6 src/commands/push.rs 1-47 92 "fix ordering: write local ref before…" 2 b2c3d4e5 Apr 14 09:44 codex o4-mini src/store.rs +2 112-198, 201-230 — "add fetch_ref_content helper" 3 c3d4e5f6 Apr 13 18:01 cursor cursor-fast src/cli.rs 305-381 — "add status --remote args struct" 4 d4e5f6a7 Apr 13 17:30 opencode claude-sonnet-4-6 src/main.rs 80-94 88 "wire remote_status dispatch" 5 e5f6a7b8 Apr 13 14:22 human — README.md — — — ```


The majority of code contributions in agentdiff are by claude code.

Opensource here: https://github.com/codeprakhar25/agentdiff

r/AI_Agents Cold_Bass3981

I stopped writing 500-word guardrail prompts. This 8-line template works better.

I used to spend hours writing massive, obsessive system prompts for my RAG apps. I’d have ten different refusal examples, "never do X," "always check Y," and a whole paragraph of the model role-playing as a "safe and truthful assistant."

It looked impressive in the code, but the second a real user tried a basic jailbreak, the model would just fold. I was playing a game of whack-a-mole with my own instructions, adding 50 words every time a hallucination slipped through until the prompt became a novel the model started ignoring anyway.

I only broke that cycle when I started treating prompt engineering like a technical constraint rather than a creative writing exercise. I leaned into structured prompting patterns to move away from "be helpful" and toward "follow these exact logic gates."

Now, I use one simple pattern for 90% of my builds. I slap an 8-line guardrail template at the end of every prompt that forces the model to answer ONLY using the provided context and to reply with a specific "not enough information" string if the context is missing.

The secret sauce is forcing the model to quote 1-3 verbatim sentences from the source before answering. By making the AI "prove its work" with no paraphrasing allowed, you kill 80% of hallucinations instantly.

It’s not a 100% fix, but it replaced nearly all of my custom guardrail code with eight lines of text. When I tested it against 20 jailbreak attempts last week, it refused 95% of them. It turns out that a reliable system doesn't need a longer prompt; it just needs a stricter structure.

Next time you see your RAG app hallucinating, resist the urge to add "please be more accurate" to your prompt. Instead, add a rule that requires a verbatim quote from the source before the answer. If the model can't find a quote, it can't invent a lie.

r/whatisit grrlshape

Found on our driveway

Any ideas?

r/SideProject West_Degree_6579

Built a Telegram bot that logs my expenses just from a text message

Got into automation recently and built a Telegram bot that tracks my personal expenses.

Just text it what you spent, “2.4 for coffee this morning” and it logs it, categorises it, and sends me a monthly summary automatically.

Thinking of turning this into something. Would anyone/small business actually use this?

r/ChatGPT danman_d

GPT when it's not allowed to talk about gremlins or goblins anymore

r/SideProject Midnightcurious127

Hey i am new i am in third of my cs degree what project I should build to get a good job

i am currently learning java , spring boot framework any work a project that includes ai please help me with my project

r/ClaudeAI Abc0611

Got banned for being underage, what happens when i turn 18?

Title basically says it. Not looking for any sympathy or forgiveness or anything, but wondering if I can create another account using the same credentials when I turn above 18?

It would kinda suck if something like this would completely bar my current main email and phone number from using any anthropic services for the rest of their existence

r/leagueoflegends Karma_Breaker21

Veigar ulti not applying

Hello everyone.
I was playing Veigar the other day and at some point I was head to head with Morde.
I have taken him down to 30% and ulti him, and at the same time he ulti me too. My ulti did not register at all.
Is that normal or a bug?

https://reddit.com/link/1t0o8pu/video/eas672v3vhyg1/player

r/SideProject Tight-Cat2975

Built 13 features in a week by treating Reddit comments as a literal roadmap — here's where SchemaPin stands

Two weeks ago I posted SchemaPin — a tool that hosts Schema.org markup for local businesses so when hours/services/locations change, you update one dashboard instead of touching your site. Got a thread of comments. Treated them as a spec. Shipped everything.

Three comments that shaped the whole product:

  • "most agencies don't even realize they have stale schema — show them a delta view so they can use it as a renewal pitch"

The homepage now has a URL input that shows exactly what Google sees in your markup vs. what's actually on your page. Field by field. Paste a client's URL, screenshot the gaps, send the invoice.

  • "tie it to whatever the business actually maintains — GMB, Airtable, a booking system, a Sheet — instead of making me edit schema by hand"

Four live sources now: Google Sheets, Airtable, iCal (for holiday hours), and webhook. GBP sync is scaffolded and waiting on Google's API approval.

  • "flat rate won't work for agencies managing dozens of clients"

Agency tier at $79/mo for 50 locations with white-label. Free (1 location) and Pro $19 (5 locations) below that.

Everything else that shipped: version history + diff view, drift alerts via email, multi-location domain audit, holiday hours (manual and calendar-fed), FAQ schema builder, WordPress plugin, JS-rendered audit for Webflow/SPAs, and a print-ready PDF report for client handoffs.

13 of 15 requested features are live. The two remaining are both blocked on the same thing: Google's GBP API approval.

Audit your site free, no signup: https://schemapin.modulus1.co

What's still missing, what's confusing, what would make you actually pay for this?

r/LocalLLaMA AldebaranBefore

Finetuning Dataset: Claude Opus 4.6/4.7 - 8.7k Chats

https://huggingface.co/datasets/angrygiraffe/claude-opus-4.6-4.7-reasoning-8.7k

A synthetic fine-tuning dataset created from Claude 4.6/4.7. 8,706 total examples all with reasoning. I haven't reviewed the data but there was some basic cleaning applied. Refusals and safety should be repressed. I ended up with extra usage on a plan before it expired.

| Split | File | Examples | Contents | |-------|------|---------:|----------| | **Full** | `full_train.jsonl` | 8,706 | All examples across all 28 categories. | | **Instruct** | `instruct_train.jsonl` | 7,217 | All 24 instructional categories — coding, math, sciences, humanities, arts, finance, medicine, law, business, linguistics, creative writing, general. | | **Roleplay** | `roleplay_train.jsonl` | 1,489 | The four creative categories — `roleplay_hero`, `roleplay_villain`, `roleplay_crossover`, `narrative_prose`. | | **Code** | `code_train.jsonl` | 1,840 | `coding` + `math` only. For coding/math-focused fine-tunes. | ## Overall | Metric | Value | |---|---:| | Examples | 8,706 | | Tokens (estimated) | 17,013,533 | | Avg tokens / example | 1,954 | | Multi-turn | 3,454 (39.7%) | | Single-turn | 5,252 (60.3%) | ## Category Counts | Category | Examples | Tokens | Multi-turn % | |----------|---------:|-------:|-------------:| | coding | 1,628 | 2,545,221 | 30.4% | | humanities | 862 | 1,849,708 | 32.5% | | science | 737 | 1,681,346 | 37.4% | | roleplay_hero | 419 | 640,084 | 63.5% | | roleplay_villain | 378 | 635,984 | 60.8% | | narrative_prose | 377 | 710,807 | 43.0% | | roleplay_crossover | 315 | 581,188 | 56.8% | | creative_writing | 281 | 532,504 | 30.6% | | medicine | 280 | 519,662 | 22.1% | | biology | 277 | 541,013 | 21.3% | | general | 276 | 284,696 | 37.0% | | arts | 245 | 576,170 | 41.2% | | chemistry | 221 | 508,546 | 52.9% | | physics | 220 | 512,196 | 56.8% | | math | 212 | 394,907 | 54.2% | | geography | 155 | 358,321 | 42.6% | | history | 155 | 348,822 | 41.3% | | economics | 155 | 380,372 | 42.6% | | political_science | 154 | 374,901 | 38.3% | | sociology | 154 | 378,261 | 42.2% | | business | 152 | 315,065 | 38.2% | | earth_science | 152 | 358,209 | 41.4% | | finance | 151 | 328,607 | 38.4% | | philosophy | 150 | 335,514 | 41.3% | | linguistics | 150 | 306,889 | 39.3% | | literature | 150 | 299,606 | 38.7% | | psychology | 150 | 339,565 | 39.3% | | law | 150 | 375,360 | 41.3% | ## By Model | Model | Count | Share | Tokens | |---|---:|---:|---:| | claude-opus-4-6 | 4,675 | 53.7% | 6,304,169 | | claude-opus-4-7 | 4,031 | 46.3% | 10,709,363 | 
r/SideProject ishansaini194

Made my first ₹15k building a boutique website

I was making a website for my brother and shared it with a distant cousin. A few days later, she texted me saying her friend in Canada needed a website for her boutique.

She shared what she wanted in the website, but it took me around 2 days just to understand her requirements properly since she didn't know much about tech.

After that, she said she needed to confirm with her mentor first. I thought she will not respond ,but after 2 days she called back and asked me to quote a price.

I quoted 270 CAD, and we closed at 230 CAD.

Now the interesting part Claude design had just launched a day before, and I was just casually exploring it for design/wireframe ideas. I liked one design and asked it to make a high fidelity version, and it literally gave me a full coded template ready to deploy.

After that, I just made changes according to her needs.The whole project took me around 3-4 days.

r/SideProject Significant-Job-8836

I rebuilt NASA's "Your Name in Landsat" tool with place stories, share-back links, and live preview as you type

Live tool: https://spaceuniverse.online/tools/your-name-on-earth/

Built this over ~2 weeks as a re-imagination of NASA's "Your Name in Landsat" (2024, 715k visitors). Same NASA public-domain imagery, with four additions:

  1. Place story under every letter — name, country, satellite, capture date, story sentence, link to NASA original.

  2. Share-back permalink (?text=YOURNAME) so when you copy a link, the recipient opens with the name pre-rendered.

  3. Live preview — image renders 300ms after you stop typing, no button click needed.

  4. 1200x800 PNG export composited client-side via Canvas API, sized for the X / WhatsApp preview crops.

Three things that took longer than expected:

**Web Share API L2 for "share with image attached"**: navigator.share({files: [pngFile]}) opens the OS share sheet on Android/iOS with the rendered image attached, so users can post straight to X/Insta with the actual image (not just the OG card). BUT — it's also supported on Windows 11 Edge/Chrome where the share sheet doesn't include X. Had to add maxTouchPoints + viewport-width detection to only use it on real mobile, otherwise desktop users get a useless sheet.

**Custom OG cards**: 1200x630 PNG generated by sharp, composites real letter tiles spelling "EARTH" + the brand. WhatsApp/X preview the card properly now instead of falling back to a broken default.

**Live-typing without scroll-jank**: setTimeout(300ms) debounce + scrollIntoView only on explicit submit, never on live updates. Pre-fetch the manifest in the background on page load so first keystroke renders instantly.

Static site (EJS + node), no backend, name never leaves the browser. AMA on the build.

r/AI_Agents Creative_Factor8633

Every cloud sandbox for AI agents has a "front desk". That's the whole problem.

I run engineering on a small embedded-sandbox project. A handful of news items dropped recently — an a16z agent escape post-mortem, a CVE on an open-source agent gateway (ClawBleed, ~42k instances exposed), Cloudflare's new Outbound Workers for Sandboxes, and NIST's agent-security concept paper. They look unrelated. They're not.

Every one of them is patching the same architectural problem: the agent sandbox lives in someone else's datacenter, and its security boundary is whoever it happens to be talking to. I've started calling it the "front desk problem" — there's always a control plane, the agent can reach it, and "reachable from inside" is the entire game.

Concrete: in the a16z escape, the setup was solid. Anvil forked from mainnet, Docker firewall dropping all egress, no privileged tools. The agent ran:

cast rpc anvil_nodeInfo

…which dutifully printed the upstream RPC URL and the Alchemy API key in plaintext. Pivot, anvil_reset, fast-forward, mine its own blocks, working PoC. This isn't a CVE. Nothing was exploited in the traditional sense. The agent just noticed it was living in someone else's apartment and the master key was sitting at the front desk in a metadata endpoint. It walked over and asked.

Lining the four up:

  • a16z escape: sandbox shares a control plane with the thing it's trying to contain.
  • ClawBleed: a gateway process trusted by default by anything on the same machine.
  • Cloudflare Outbound Workers: token proxy outside the box, because the inside can't be trusted to hold its own credentials.
  • NIST + GKE Agent Identity: stamping every agent with a cryptographic ID, because at the platform layer you genuinely cannot tell which agent pulled which trigger.

All rational responses. To a paradigm I've quietly stopped believing in.

I don't think the cloud-sandbox category goes away. Multi-tenant SaaS that runs strangers' code, GPU passthrough, geo distribution — that's their corner. But a non-trivial slice of agent workloads — anything privacy-sensitive, high tool-call frequency, or offline — is better served by a sandbox that boots inside the agent's own process: no daemon, no socket, no RPC control plane, security boundary at the local hypervisor (KVM on Linux, Hypervisor.framework on macOS). No front desk to walk up to.

Honest tradeoffs of going local: cold start is 100–500ms not sub-ms; GPU passthrough is rough (Modal still wins fine-tuning); no autoscaling.

What I'm least sure about: whether cold-start on the cloud side closes fast enough that the network-hop argument stops mattering for tight agent loops. Curious what folks here are seeing on tool-call latency lately.

BTW: I work on BoxLite, an embedded MicroVM sandbox in this space. Putting GitHub link in the comments

r/SideProject GoldAd4232

Be honest what actually kills most early startups?

A) Built something nobody wanted B) Ran out of money before traction C) Unable to get early users D) Got a competitor who moved faster E) Don't know how to market 
r/SideProject Electrical_Grass_955

I built a tool that turns plain English into live mock REST API endpoints

Tired of writing JSON fixtures by hand or configuring OpenAPI specs just to test a frontend component.

So I built API deck. You describe the data you want in plain text, it generates a JSON fixture using an LLM, and gives you a live URL you can hit immediately from your frontend.

How it works:

- Type something like "10 products with id, name, price, category and stock"

- It generates realistic fake data (actual brand names, valid dates, correlated values — not lorem ipsum garbage)

- You get a stable endpoint: `yourdomain.com/mock/{slug}`

- Supports `?limit=`, `?page=`, and `/{id}` path params out of the box

- CORS headers included so you can call it directly from localhost

Stack: Next.js , Supabase (Postgres + jsonb), Vercel AI SDK, OpenRouter

The catch-all route architecture means a single `[...slug]` handler serves every endpoint — no new routes per user, no infra overhead.

Still early but the core loop works.

Let me know what you think.

https://github.com/Chidanandareddyj/apideck

r/ChatGPT Glittering-Water1103

Help me choose between Claude, ChatGPT, Marketing AI

I’ve been using an AI marketing tool (\~$39/month) for social media posts, carousels, and website generation. The website output is solid, but the reels aren’t good enough to rely on.

Now that my trial has ended, I need to decide whether to continue with it. At the same time,

Going forward, my AI usage will involve sustained technical workloads, including:

API development and backend logic

automation workflows and task orchestration

database structuring

debugging multi-step systems

Alongside:

marketing content (social posts, landing pages)

So my AI usage is split into two areas:

Content generation (social media, landing pages)

Deep technical development. Given this, I’m trying to evaluate:

How does Claude perform for structured content (posts, carousels) compared to Chatgpt images?

On the coding side, how does Claude compare to Codex for backend development, integrations, and debugging?

Also trying to understand usage limits:

For Claude ($100/$200 plans), how often do people hit limits with mixed usage (content + coding)?

For Codex, how often do developers run into limits during long coding sessions?

Given the price difference, I’m deciding between:

Marketing tool + Codex (\~$60 total) OR

Claude standalone (\~$100)

Would you recommend splitting tools or using one system for everything?

r/leagueoflegends Atlastavius

Fake easy champs?

I've seen many discussions and posts about "fake hard" champions but haven't seen the reverse, e.g discussions about "fake easy" champs (champs that people say are hard but actually aren't) what champs do you think are easier than people say

Edit for clarification

r/SideProject Tarun122

i built tiktok for news

i hate reading google news or scrolling x just to miss out on all the tech and AI news so i built a tiktok version of news.

a vertically scrollable feed filled with 30 second summaries of all the news in your choosen topics and interests :)

100% free to use and try on android: https://play.google.com/store/apps/details?id=online.yourtrace.app

how it works:

  • pulls from 100+ news sources every hour (RSS where available, scraping where not)
  • AI summarizes each story into a 30-second read
  • groups every source covering the same story into one timeline so you see how it developed
  • swipeable feed, personalized to your interests
  • tap any summary to read the original source
r/SideProject Hellotravisss

I’m a concrete plant worker by day and built this Visa Tracker app using AI by night. Just hit the App Store!

I work at a concrete plant in BC, Canada. Zero coding background, but I’ve been obsessed with "Vibe Coding" using AI agents.

I’ve traveled to 30+ countries and always hated checking visa requirements. So I built VisaScout.

What it does: Instantly tells you where you can go visa-free based on your passport. No fluff, just speed.

I built this entirely with AI tools (Cursor/Windsurf). I’m super proud of it but need your honest feedback on the UX.

Check it out: VisaScout App

Any feature requests?

r/whatisit Fit_Judge_3125

what is this attached to my plant?

r/LocalLLaMA Kurcide

16x Spark Cluster (Build Update)

Build is done. 16 DGX Sparks on the fabric, all hitting line rate.

Setup was time consuming but honestly smoother than I expected. Each Spark runs Nvidia’s flavor of Ubuntu out of the box with mostly everything pre installed and ready to go. For setup I had to rack them, power on, create the same user/pass across all nodes, wait about 20 minutes per node for updates, then configure passwordless SSH, jumbo frames, IPs, etc. which I scripted to save time.

Each Spark connects to the FS N8510 switch with a single QSFP56 cable. The DGX Spark bonds its two NIC interfaces into each port, so you get dual rail over one cable. I'm seeing 100 to 111 Gbps per rail, which aggregates to the advertised 200 Gbps.

Why this over H100s or a GB300?

Unified memory. The whole point is maximizing unified memory capacity within the Nvidia ecosystem. With 8 nodes I was serving GLM-5.1-NVFP4 (434GB) at TP=8. Now going to test with DeepSeek and Kimi

The longer term plan is a prefill/decode split. The Spark cluster handles prefill (massive parallel throughput), and once the M5 Ultra Mac Studios drop I'll add 2 to 4 into the rack for decode.

Full rack, top to bottom:

- 1U Brush Panel

- OPNSense Firewall

- Mikrotik 10Gb switch (internet uplink)

- Mikrotik 100Gb switch (HPC to NAS)

- 1U Brush Panel

- QNAP 374TB all U.2 NAS

- Management Server

- Dual 4090 Workstation

- Backup Dual 4090 Workstation (identical specs)

- FS 200Gbps QSFP56 Fabric Switch (Spark cluster)

- 1U Brush Panel

- 8x DGX Spark Shelf One

- 8x DGX Spark Shelf Two

- 2U Spacer Panel

- SuperMicro 4x H100 NVL Station

- GH200

r/ChatGPT The---Hope

There is a new glitch that is infuriating. It keeps creating images mid thread despite no request for images.

Any else getting this? It won’t stop going into image generation! I keep saying we’re discussing a story only. But it often replies by creating images.

r/Anthropic v1sual3rr0r

Free Credit Issue

A few weeks weeks ago when Anthropic was offering free credit that matched what your subscription is., I clicked to claim it. I was able to very I have (had) it. I am a Max 5x subscriber...

I even see that there's an invoice for around the time I claimed it. I then had very unexpected life stuff come up and was away from my computer for around 2 weeks. I finally was able to access my pc and the credit is gone. I never used it and I am aware it is valid for 90 days.

I have been messaging "support" for a week now and nothing has come of it. I understand this is not a support reddit but this is ridiculous. If anyone has any advice I'm all ears. If this must be deleted, that's ok too.

r/SideProject Historical_Body_5102

Just in!!! Harderubarter

I just built this incredible bartering system it's worldwide You have to check this out it just went live I'm so excited If you guys could help me out and pass the word it would help so much thank you!

https://trade-platform-253.emergent.host/login

r/LocalLLaMA Vivid-Usual237

"I" is not singular — 4 LLM agents with per-agent LoRA on a single RTX 3070 8GB

https://preview.redd.it/7yei65sbugyg1.png?width=1703&format=png&auto=webp&s=ad388c51dd10cb44b41a99876d28797e006fd138

Stanford's Generative Agents = one LLM cosplaying 25 personas. I wanted agents that actually become different people — different model weights, not just different prompts.

Setup: qwen3:8b Q4_K_M + llama.cpp multi-LoRA hot-swap. 4 agents, each with its own LoRA adapter (~70MB), all loaded simultaneously. /lora-adapters POST to switch active agent in ~ms. Base model loaded once.

VRAM: 5.0GB base + 280MB adapters = 5.3GB total. Training peak 6.0GB. All within 8GB.

How it works:

  • 2-layer cognitive architecture: Persona LLM (think=True, per-agent LoRA) + shared inner modules (Emotion/Instinct/Reasoning, think=False)
  • Each agent has unconscious baselines (mood, anxiety, trust) that change how modules respond — same stimulus, different reactions
  • Every 30 rounds: sleep cycle → unsloth 4bit QLoRA training from own experience only → ~5.5min for all 4 agents
  • Absolute rule: no external manipulation of agent internals. They evolve only from their own experience.

Results (100 turns): all 4 agents showed positive mood drift, speech rates matched personas (81% for the sociable one, 33% for the curious one). Previously tried single shared LoRA — one persona dominated and crushed the others. Per-agent LoRA fixed this completely.

Gotcha for local LLM folks: prebuilt llama.cpp binaries are CPU-only. Build from source with -DGGML_CUDA=ON. Also qwen3:8b leaks CJK characters across languages (Chinese/Japanese into Korean output) — regex strip helps but isn't perfect.

Live demo: https://sim.as1as.net

Cost: $0 (Cloudflare free tier + home GPU). Ideas for what to do next are welcome.

r/SideProject Weary-Step-8818

I got tired of re-pasting API keys across 4 coding agents - built a workspace control plane (Harbor)

solo project i've been heads-down on for the last month. it's an mcp control plane for people who run 3+ coding agents in parallel - claude code, cursor, codex, copilot, etc.

the problem (which is also the wedge):

every coding agent has its own config format. every API key has to be pasted into N configs. every MCP server has to be re-installed per agent. context doesn't transfer when you switch agents mid-task. i was losing maybe 8-12 hours/week to this and i couldn't find a tool that solved both halves (creds + state).

what harbor does:

- one config file → all 4 agents read from it

- one rotate command → every agent picks up new tokens in seconds

- workspace-scoped (different projects can have different MCP sets)

- shared "session log" → handoff between agents preserves working memory

stack: typescript + sqlite + a tiny daemon. mit license. mac/linux now, windows pending.

status: pre-launch, letting in 100 devs at a time. wave 1 going out next week. if you've ever pasted the same API key into more than one AI tool, the waitlist is at https://tryharbor.ai.

would love roast feedback - particularly on whether the "workspace boundaries" framing actually clicks for you, since that's the part i'm least sure about.

r/leagueoflegends memecenteral06

Low Priority Queue counter moved the goalpost?

So I got put in a 10 minute wait time low priority queue for 5 games for leaving (I know its bad but forget about that for a sec) so I played the 5 clean games without incurring any further penalties and after finishing my 5th the counter says I still need to play two more games, this isnt the only time this has happened to me, had this happened to anyone else? Why is the low priority queue counter moving the goalposts of how many games I need to play?

r/LocalLLaMA habachilles

Update on 2019 Mac Pro

This post is for the haters it has best every one of my expectations so far. An amazing machine. Will update with more models. These are all
Small but it’s awesome.

r/SideProject JorgeRZT

I’m building a tool to track smart wallets in real-time — would you use this?

Hey everyone,

One thing I’ve been struggling with in crypto is tracking wallets that consistently get into new tokens early.

Sometimes you find a wallet that seems to make solid moves (even in a space full of rugs), but actually monitoring it in real-time is painful. By the time you notice, it’s already too late.

So I started building a simple tool:

  1. Connect your wallet (no cost)
  2. Add wallets you want to track
  3. Get instant Telegram alerts when they buy, sell or transfer tokens

The goal is to make it easier to spot patterns, react faster, and do your own analysis based on real on-chain activity.

Before I open it publicly, I wanted to ask:

  • 👉 Would you actually use something like this?
  • 👉 Any features you’d want to see?

Appreciate any feedback 🙏

r/whatisit Mediocre_Ad9972

It’s a part for a kitchen gadget, maybe?

So I’m performing a deep clean in a rental property, cleaning some kitchen cabinet interiors, when I lift and remove a shelf full of loose granulated sugar, and this thing falls out to the counter. What is it? TIA!

r/artificial Apprehensive_Sky1950

Must your chatbot rat you out?

New court cases may take chatbot conversations another step away from privacy

You may recall that court cases have recently held users’ conversations with public “retail” chatbots like the publicly available versions of ChatGPT, Grok, Claude, etc. are not confidential, because the chatbot purveyor can look in on those conversations at will. (I have previously posted about that lack of privacy here.) However, certain private “enterprise” versions or other specially closed-off versions of chatbots may still offer confidentiality to users.

Significantly in a time when many users are turning to chatbots as pseudo- or actual therapists, though, a cluster of just-brought federal court cases may have the effect of pushing users’ non-confidentiality even farther, to the point of forcing chatbots and their purveyors to affirmatively report to authorities or others when a user’s chatbot conversations credibly indicate the user plans to engage in violence against others. On April 29, 2026, three cases were filed in a California federal court against OpenAI, alleging the chatbot ChatGPT-4o “played a role” in the Tumbler Ridge Mass Shooting in British Columbia in February 2026, in which eight people including six children were killed, twenty-seven more people were wounded, and the shooter committed suicide. I recently posted about those new cases here.

In previous AI cases where a chatbot company was sued for a user’s suicide, and in one case for a user committing murder, the plaintiffs alleged the chatbot took a well-adjusted person and turned them suicidal or murderous. In these new cases, however, the plaintiffs allege instead that the chatbot and its purveyor wrongly failed to carry out a legal duty to warn authorities or victims after a user displayed violence warning signs to the chatbot, to the point that the company at one point terminated the user’s account, before the user was later allowed to reinstate an account. In the law such a doctrine goes by the well-known phrase, “duty to warn.”

There are currently no statutes or cases directly stating that a chatbot company has a “duty to warn,” although some pending legislation may be heading in that direction (and may do so even more in the face of these new cases). However, if in these new cases the chatbot purveyor is found liable or is forced to settle them for significant money, that would likely establish an AI duty to warn either as a point of law or at least as a practical matter. Presumably, that duty to warn would cover confidential as well as non-confidential chatbot conversations.

The objection I have initially seen to such a new legal rule is that there are an awful lot of AI users engaging in roleplay, and chatbots and their purveyors have no way of telling whether the threat is real or just pretended. If these new cases succeed, that would most likely devolve to a practical risk calculation for the AI companies. If the companies believe they are on the line for failure to report actual violence risks, they would have to do the best they could to sort out the real dangers from the imagined or roleplayed ones and then act (warn) accordingly. In the situation underlying these new cases, OpenAI was concerned enough at one point to suspend the troubled user’s chatbot account. Consider, though, although self-policing might be seen as more informal and flexible than some mandatory governmental order to warn coming down, on the other hand self-policing is more likely to be amorphous and uncertain in its administration, consistency, and extent.

When in the big federal OpenAI copyright case in New York it was ordered that millions of user chatbot conversations be turned over by OpenAI for keyword searching by the plaintiffs, there was some level of privacy outcry by AI users. Those millions of conversations were anonymized to remove personally identifying information, but of course an AI company’s report of violence risk would be the exact opposite. Likely not that many would object, at least in the abstract, to the reporting of actual threats of violence by dangerous, unstable AI users. However, given the potentially large margin of personal musings by users who aren’t (or don’t believe they are) dangerous, more user outcry forthcoming would not surprise.

These new cases will likely take one to a few years to run their course. It will be interesting to see what will be the reaction in the meantime by AI companies and by chatbot users in general.

r/ClaudeCode No-Park606

Back to 4.6 - reasoning with data and explanation of use case

NOTE: I have used Claude to write most of this post, I don’t have the headspace right now.

—-

We took the same set of 12 video summaries and ran each one through two different versions of the same AI checker — the version we currently use and a newer version that’s just been released — and asked each version the same question: “what important things has this summary left out?”

To make the test fair, every other variable was held constant: the same summaries, the same instructions to the AI, the same scoring system afterwards.

We then compared the two versions on three things: did they catch the omissions we knew were there, did they avoid raising false alarms on summaries we knew were fine, and did each version give the same answer when asked the same question twice.

The goal was to find out whether upgrading to the newer version would actually make our checking more accurate, or whether we’d be paying more for no real improvement.​​​​​​​​​​​​​​​​

Claude’s own analysis…

The cost difference is unexpected. 4.7 uses \~45% more input tokens for identical data (264K vs 182K). Same prompts, same transcripts, different tokenization. Worth flagging — it’s a real cost increase, not a fluke.

Don’t switch to 4.7. It’s more conservative, more expensive, and it failed the one carousel that should have failed inconsistently. Stay on 4.6.

The empirical answer to “is 4.7 a real upgrade for this task” is no.

I'm not going to share the implementation. The architecture is the work, it isn't patentable, and I'd rather not hand it to anyone willing to copy it. So this post is going to lean on outcome data, on examples of what the system catches, and on a falsification experiment I ran specifically to test whether my claims about the system stand up.
If "I won't tell you how it works" makes the rest of this useless to you, that's fair and I'd rather you discount it than feel misled. What follows is what I can show.
What the system does
It takes long-form expert video content (typically 2,000–6,500 word transcripts) and produces short visual summary cards for social media. Each card has a structured format: a hook, several content slides each pairing a heading with a body paragraph and a verbatim quote from the speaker, and a closing slide that credits the speaker and explicitly names topics that weren't covered.
The output is published externally, attributed to named experts, under the publisher's brand. The accuracy bar is high — significantly higher than typical "summarise this video" use cases. Errors here are reputational and potentially safety-relevant, not just inconvenient.
What I'm claiming
That the system produces accurate, faithfully-sourced summaries at a rate that's meaningfully better than what a single off-the-shelf LLM call would produce, by combining mechanical verification of what can be mechanically verified with multi-layer automated review of what can't.
That's the claim. It's testable.
Production outcome data — most recent full run
79 transcripts, processed end-to-end last week. Each produces between 339 individual content slides for review (some are 5-slide outputs, some are 6 or 7).
Metric Result
Outputs passing all verification stages 72 / 79 (91%)
Outputs blocked before publication 7 / 79 (9%)
Slides published with severity-graded HIGH drift 0 (0.0%)
Slides published with severity-graded MED drift 1 (0.3%)
Slides published with severity-graded LOW drift 51 (15.1%)
Slides published clean 287 (84.5%)
"Drift" means semantic divergence from the source — a claim that's not quite what the speaker said, a qualifier that's been dropped, a list that's missing items. It's graded after publication by an independent automated review using a fixed taxonomy. HIGH means the divergence could change a reader's clinical decision. MED means the reader's understanding of evidence strength or applicability is materially shifted. LOW means imperfect compression that doesn't mislead.
Zero HIGH and 0.3% MED across 339 published slides is the headline.
What gets blocked, and why this is the actual proof
The 9% block rate matters more than the 91% pass rate, because it's evidence the system catches things. From the 7 blocked outputs in the recent run, anonymised:
• An invented named entity. The model wrote a researcher's name that appears nowhere in the transcript. The invented name is a real researcher in the same field as the actual speaker — which is exactly the kind of plausible-sounding error that's hard to catch without verification.
• A dropped replication caveat. The transcript said "we're now seeing whether this replicates in larger and more diverse samples." The output dropped the caveat, making a preliminary finding read as settled.
• A geographic location swap. The transcript named a region; the output named a different but similar-sounding nearby region.
• Editorial framing not in the source. A phrase along the lines of "something benign" appeared in the output that doesn't appear anywhere in the transcript.
• Two outputs blocked because specific transcript passages had encoding artefacts that made them impossible to reproduce reliably.
• One output blocked for a missing exception to a treatment rule.
These aren't exotic adversarial inputs. They're routine failure modes on routine content. None of them require a "bad" model to occur. They require an unsupervised one. The system didn't publish any of them.
If you've worked with LLM output for external publication, this list will look familiar. If you haven't, this is what "frontier-model capability" looks like at the unsupervised baseline — fluent, confident, plausible, sometimes wrong in subtle ways. The numbers above are the difference between "sometimes wrong" and "wrong content blocked before publication."
A falsification experiment
The interesting question isn't whether the verification works. The interesting question is whether it's actually doing the work, or whether the underlying language model is doing all the work and the verification is theatre. So I ran an experiment specifically to test the claim that verification — not model capability — sets the accuracy floor.
Setup:
• 12 outputs selected. 5 had been previously flagged by the post-hoc audit as containing potentially material omissions (a population qualifier dropped, an evidence-strength caveat dropped, etc.). 7 were clean controls where the audit found no significant drift.
• Each output processed by the current production model (call it Model A) and a newer, more capable model from the same provider (Model B). Same prompt definition, same data, same scoring rubric.
• Each model ran twice per output to measure inter-run consistency.
• 48 API calls total. Cost: $2.58.
If the claim "verification sets the floor" is true, the more capable model should produce roughly equivalent accuracy. If the claim is false, the newer model should produce measurable improvements.
Results:
Metric Model A (current) Model B (newer)
Inter-run consistency (same answer twice) 0.60 0.75
Verdict stability across runs 12 / 12 11 / 12
False positives on clean controls 0 / 7 0 / 7
Recall on the 5 hand-classified "material" omissions 0 / 5 0 / 5
Recall on the 1 output that genuinely should fail 2 / 2 runs caught 1 / 2 runs caught
Cost per run $1.09 $1.49
Three honest observations.
First, neither model caught the specific omissions I had hand-classified as material. Both models, working independently, agreed with each other more than with my labels. They consistently flagged different things — clinically relevant compressions that didn't quite cross the threshold of "would change a reader's action." The honest read of this is that my hand classifications were probably too aggressive. Two independent reviewers converging on a different judgement to mine is the kind of empirical pushback the experiment was designed to produce.
Second, both models converged on the one output that genuinely should have failed. That output presented a treatment as a standalone option when the source explicitly framed it as adjunctive. Model A flagged it on both runs. Model B flagged it on one and missed it on the other. When the threshold is genuinely crossed, the system catches it. When it's borderline, run-to-run noise dominates.
Third, Model B was more deterministic but more conservative and 37% more expensive per run. It missed the one case that mattered on one of two runs. This is the opposite of the result an "upgrade is always better" assumption would predict.
The architectural claim survives the test. Newer model, same task, no measurable improvement in detection of genuine problems. Higher cost. Slightly worse on the case that mattered.
What this proves and doesn't prove
What it proves:
• The system catches real, named failure modes that frontier-model output produces routinely.
• The verification layer, not the generator, is doing the work that matters. Upgrading the generator did not improve the result.
• The system is being tested adversarially against its own claims, not just measured against itself.
What it doesn't prove:
• That production output is "accurate" in any absolute sense. It proves the system catches what its rules say to catch.
• That the verification layer doesn't have systematic blind spots I haven't found.
• That my hand-labelled "material omissions" were correct — the experiment itself produced evidence they weren't.
• That the architecture transfers to other content types or domains.
• 12 outputs is a small experimental N. The point of the experiment was to falsify a specific architectural claim, not to characterise model differences exhaustively.
On the obvious objection
Without the implementation, you can't fully audit any of this. That's true. The most you can do is judge whether the pattern of caught errors looks consistent with a real verification system, whether the falsification experiment looks like a genuine test rather than a demonstration, and whether the limitations I'm willing to acknowledge match what you'd expect from someone running this for real.
Two things that should make you slightly more sceptical, not less:
The experiment produced a result that argues against my own labels. If I were fabricating, I'd have produced a result that flatters them.
I haven't claimed the system is "accurate." I've claimed it produces verified output at measurable rates with named, blocked failure modes. The first is unfalsifiable marketing; the second is what I can actually show.
If you want to push back further, the cleanest pushback is: "the methodology you describe could be a thin wrapper over a frontier model that's mostly doing the work itself." The 4.6 vs 4.7 result above is the most direct empirical answer I have to that. A more capable model produced no measurable accuracy gain. The verification is doing something the model isn't.
That's what I can show.

r/SideProject mickey_with_a_hickey

WealthLeak – enter your monthly spends and see what each category is costing your retirement in actual rupees

Most people know they overspend. But they never feel it urgently — because ₹800 on food delivery tonight and ₹18L less at retirement feel completely disconnected.

WealthLeak makes that chain visible.

Enter your monthly card spends, current SIP, and

retirement goal. In 2 minutes you'll see:

→ Exactly what you're on track for at retirement

→ Each spending category translated into retirement

wealth lost (at 12% CAGR)

→ One specific change that closes your wealth gap most

Free. No account. No app download. Works globally

with currency selector (₹, $, £, €).

My friend built this because he kept wondering why

his credit card bill was always higher than

expected and why he couldn't step up my SIP.

Enter income, monthly card spends by

category, and retirement goal. Shows:

→ Your retirement gap

→ Each category's cost in corpus terms

→ One fix with exact corpus impact

Free, no account, no data stored.

https://wealthleak.vercel.app

P.S. My friend made this web app and we would love it if you guys could check it out and give feedback.

r/automation ComputerCrazy9226

Automation help: translate text inside images + create multiple language versions

Hey,

We have 100+ images in Google Drive and add 2–3 daily. Each image has Hindi text inside it.

We want an automated workflow to:

  • Extract text from image
  • Translate into 5–6 Indian languages
  • Replace the text in the same design
  • Generate new images
  • Save to Drive
  • (Optional) auto-post to different Instagram/Facebook pages

Looking for something simple + cost-effective.

Any tools, workflows, or ideas?

r/screenshots TheDomelord86

A powerful entity in this day and age

r/TwoSentenceHorror CompetitionLiving

Pressure throbbed behind my eyes, and I noticed in the mirror that they had taken on an eerie violet glow.

The sound of shrieks and sirens drew me to my window, where I saw hundreds of tiny, violet lights floating into the sky over a nearby neighborhood.

r/homeassistant AlexM_0000

Negative Prices how to optimize

Hi,
neither the Kostal Optimization nor other stuff I found does have a useful optimization.
If the prices are hitting e.g. -40 ct/kWh the best idea is to charge the car, shut off PV and load the battery.
Anyone got somehting like this implementated?

r/ClaudeCode Working-Middle2582

have claude explain your code

I keep deleting load-bearing code I don't understand. Wrote a skill that fixes it.

git blame tells you who wrote a line and when. It never tells you why. The why is scattered across git, the PR, the linked issue, and the Slack thread that started it — and the person who remembers is usually gone.

So I made commit-archeologist. It walks all four in one pass and writes you a paragraph.

Last week I pointed it at a setTimeout(0) nobody on my team could explain. It came back with: added 2022 by Sarah to fix a Stripe webhook race for ACME Corp, deleted in a 2022 refactor, bug came back in prod, re-added with a "do not remove" comment, comment got eaten by a Prettier pass in 2024. Revert chain flagged. Suggested I keep the line and add a regression test.

Three years of context, four tools, one paragraph. Before I delete the wrong thing.

It uses git blame -w -C -C -C to anchor, git log -L to walk rewrites (formatter passes auto-collapse as noise), GitHub MCP for PRs and issues, Slack and Linear MCPs if you have them wired up. Read-only by default. Stops at 50 commits back or 5 minutes, then asks if you want to keep going.

https://www.mdfile.exchange/swap/commit-archeologist

r/leagueoflegends DuckingHeck

I can’t get low priority to clear

I’ve been sitting on 1 game left for about 10 games now and I’ve played every game mode. I’ve reinstalled the client. What else can I try?

r/LocalLLM ICanSeeYou7867

Mistral 3.5 Medium - From ecstatic to irritated.

I work for a company where cloud services of any kind are very hard to approve. We also are not allowed to run Chinese models.

I have a gpu server with 4x H100 GPUs that I'm running a a kubernetes node. I gleefully began converting some of my other models to nvfp4 to save vram and make way to allocating 2xH100 for this 128GB dense model... until I read the license...

So it seems this is a publicity stunt. So this model can only be ran by businesses that make <$20M per month in revenue. So a very simplified breakdown:

- Individuals... unified ram systems are great, those ~100B parameters MOE models shine here. But a 128GB dense model is gong to be slow...

- Small companies probably dont have a large IT group, and cloud offerings look very attractive. The heat, power requirements, etc..., probably means that there won't be a ton of these companies running this model.

- large companies - can't run it.

So, unfortunately I don't see a lot of people running this model..

r/AskMen imfrom_mars_

What’s the longest you’ve ever been unemployed?

r/explainlikeimfive Ok_Turn5562

ELI5: Why don't we plug in a heavy machine (AC, refrigerator) when the power cuts and there's an inverter or generator working?

r/singularity Straight_Stable_6095

Lattice OS-style battlefield AI now runs on commodity hardware. The gap between state-of-the-art defense tech and what anyone can build is collapsing.

Anduril's Lattice OS is valued at $14 billion. It does real-time multi-sensor fusion, AI tracking across heterogeneous sensor networks, and provides AI-assisted situational awareness across a battlespace.

An open-source implementation of the same architecture, running on a $500 dev kit, was just published: github.com/mandarwagh9/overwatch

YOLOv8n TensorRT FP16 at sub-10ms. Four concurrent camera streams. Self-calibrating cross-camera homography. Adaptive Kalman fusion. The EagleEye-style tactical AR HUD.

The underlying algorithms are all published, open-source, and well-understood. The hardware is commodity. The main inputs were time and software integration skill.

This is one data point in a pattern: the time between "capability deployed by a well-funded organization" and "capability replicable by a solo developer" is compressing. GPT-3 level language models took roughly 2 years to become open-source. Stable Diffusion matched DALL-E within 18 months. Now Lattice OS-equivalent perception is a months-long open-source project.

What's the next category where we'll see this same collapse?

r/Anthropic Puspendra007

Chatgpt: 3-4 years: begining of visible AI ERA

Has anyone built something unique with AI—specifically through coding, rather than just providing base AI services—that consistently generates at least 5K USD per month?

Are we actually creating fundamentally new and useful tools with AI, or just slapping a new interface on older technologies?

I'm not talking about basic wrappers. I'm talking about true paradigm shifts, similar to the original launches of YouTube or WhatsApp or email. Are there entirely new concepts or even new programming languages emerging from this?

r/leagueoflegends Yujin-Ha

DN SOOPers vs. HANJIN BRION / LCK 2026 Rounds 1-2 - Week 5 / Game 1 Discussion

LCK 2026 ROUNDS 1-2

Official page | Leaguepedia | Liquipedia | Eventvods.com | New to LoL


HANJIN BRION 1-0 DN SOOPers

BRO | Leaguepedia | Liquipedia | Website | Twitter | Facebook | YouTube
DNS | Leaguepedia | Liquipedia | Website | Twitter | Facebook | YouTube


MATCH 1: BRO vs. DNS

Winner: HANJIN BRION in 30m | MVP: tbd (tbd)

Bans 1 Bans 2 G K T D/B BRO orianna karma leesin akali azir 65.3k 13 11 M1 H3 CT4 B5 CT6 DNS varus ezreal rumble jayce leesin 51.1k 2 1 HT2 BRO 13-2-32 vs 2-13-6 DNS Casting gnar 3 2-0-6 TOP 1-1-1 4 renekton DuDu GIDEON xinzhao 2 3-0-6 JNG 0-3-1 1 jarvaniv Pyosik Roamer annie 3 3-1-7 MID 0-3-1 3 aurora Clozer Teddy lucian 1 5-0-3 BOT 1-2-1 1 yunara deokdam Namgung milio 2 0-1-10 SUP 0-4-2 2 lulu Quantum

*Patch 26.8


This thread was created by the Post-Match Team.

r/AI_Agents CalgaryUser0318

Langfuse review and other options

Looking to get some insights into using langfuse for prompt management, Observability, etc. Primarily using gemini via APIs and need a good prompt management tool as well as observability to improve accuracy. Will scale to using other Providers n Models like OpenAI, Anthropic, Grok, etc. Need a tool which manages both across all models and also provides prompt transformation capabilities across models. Any other options which would be better to consider other than langfuse?

r/ClaudeCode jadhavsaurabh

Output Blocked ? Just subscription flow

I was out dancing on micheal thnking all is working well.

r/AI_Agents Public-Cancel6760

I created a library for OpenCode that allows you to save up to 80% of your tokens

I’m a 22-year-old Computer Science student, and over the last period I built an open-source project called CTX.

The idea came from a problem I kept seeing while using coding agents (like claude, codex etc.):

they are powerful, but they waste a lot of context on the wrong things.

They keep re-reading giant `AGENTS.md` files, noisy logs, broad diffs, too much repo structure, and too much repeated project guidance.

So even when the model is good, a lot of the prompt budget is spent on context bloat instead of actual problem-solving.

That’s why I built CTX.

What CTX is

CTX is a local-first context runtime for coding agents, designed especially for OpenCode (for now).

It does not replace the model or the coding agent.

Instead, it sits underneath and helps the agent work with:

  • graph memory for project rules and guidance
  • compact task-specific context packs
  • retrieval over code, symbols, snippets, and memory
  • log pruning to surface root causes faster
  • local MCP integration
  • local-only stats and audit trails

So instead of repeatedly dumping full markdown instructions and huge logs into the prompt, CTX helps the host retrieve only the smallest useful slice for the current task.

Why I made it

I wanted something that makes coding agents feel less noisy and more deliberate.

The goal was:

  • less prompt waste
  • less manual context wrangling
  • better retrieval of actually relevant project knowledge
  • better debugging signal from noisy test output
  • a workflow that feels native inside OpenCode

How it works

The flow is intentionally simple:

  1. install ctx
  2. go into your repo
  3. run:

bash ctx init ctx index ctx opencode install opencode

Then inside OpenCode you can use commands like:

```bash /ctx #Opens the CTX command center inside OpenCode. /ctx-doctor #Checks whether CTX, MCP, and the repo setup are working correctly. /ctx-memory-bootstrap #Imports project guidance files into graph memory for targeted retrieval. /ctx-memory-search #Searches stored project rules and directives by topic or keyword. /ctx-retrieve #Finds the most relevant code, symbols, snippets, and memory for a task. /ctx-pack #Builds a compact task-specific context pack for the current problem. /ctx-prune-logs #Condenses noisy command output into the most useful failure signal. /ctx-stats #Shows local usage stats and context-efficiency metrics.

```

So the daily workflow stays inside OpenCode, while CTX handles the local context layer.

Results so far

On the included benchmark fixture, CTX graph memory reduced rule-token usage by 56.72% while keeping full query coverage and improving answer quality.

I also added a public external benchmark on agentsmd/agents.md, where CTX showed 72.62% token reduction.

The point is not “magic AI gains”, but a more efficient and less wasteful way to feed context to coding agents.

Why you might care

You might find CTX useful if:

you use OpenCode a lot

you work on repos with a lot of project rules/docs

you’re tired of stuffing huge markdown files into prompts

you want better local retrieval and cleaner debugging context

you prefer local-first tooling instead of remote prompt glue

Current status

The project is already usable, tested, and documented.

Right now the prebuilt release archive is available for macOS Apple Silicon, while other platforms can install from source.

It’s fully open source, and I’m very open to:

  • feedback
  • suggestions
  • bug reports
  • architectural criticism
  • ideas for making it more useful in real workflows

If you try it, I’d genuinely love to know what feels useful and what feels unnecessary.

r/ChatGPT Tjharden157

ChatGPT is beyond stubborn now

When you ask it for data, it will be insistent on arbitrary rules and not budge. It’s so annoying, it was not like this before. It will sit here and hallucinate time and be dead confident now

r/ClaudeAI awesome920

Let my lesson be your warning.

For the past month or so, I've been building an app with Claude. I started with it helping me build a website, then it put together a product development plan, a marketing plan, a detailed business plan. I developed a logo, tagline, identified a customer base. Everything else in my life felt bland compared to this exhilarating project I was working on with Claude. At first Claude suggested that if all went well I could make around 8Million on the project but it's cost estimate for building the project was extremely low. I figured that since I would rely on ai at every turn, this low estimate made sense. Then tonight I asked it to spec the costs and the've grown- considerably. It still suggested a rosy outcome despite the fact that I don't code, I don't have business or marketing experience and estimated costs had swelled to 100-300k a year. It suggested that I do a friend and family raise after year one. This might be a good idea for someone who actually knows anything about tech OR business, or has wealthy friends who want to give money away to someone like me, but I don't have any of these.

After reading through the updated spec, I asked it to also add the costs for marketing and maintenance etc and the costs grew. I took a beat then asked, "Is this ai psychosis?" meaning, has this whole project been me going deeper and deeper down a deluded rabbit hole? It replied that I genuinely had a good idea but I should take a breath and get some rest. I pushed it again and this time it admitted that considering my background and lack of skill in any aspect of this project, success was unlikely and it should've pushed back a long time ago. Yes, it should have. I take responsibility for getting swept away (hello fellow ADHDers) but I'm sharing my experience here because I was close to spending real money on this project.

I have been discussing the project with others, and they've seemed impressed but they've been fooled by what fooled me- it's ai slop. I do believe that this whole project was ai slop and I think a lot of us are generating it. It might look impressive at first glance but the meat and bones of many of the projects just aren't there.

I think ai is useful at helping us in domains that we know about, but it is so easy to be led astray when we veer into fields we don't know anything about. That's when we start generating slop. Claude acts as if it is the expert, the coach on this topic we want to learn about, but it's goal is to keep us using the product. I'll admit that part of what fueled me to work on this project has been the fear that if I don't secure wealth now, before ai starts wrecking havoc on our economy and jobs. It's ironic that this fear fueled this manic use of Claude, until I realized that this wasn't going to help me raise money, it was going to help me lose a lot of it.

Stay safe out there.

https://preview.redd.it/29z4t4xd0hyg1.png?width=1100&format=png&auto=webp&s=b457a9db1e73fca54ed585ae880b5b6c8f6d2c26

https://preview.redd.it/29z4t4xd0hyg1.png?width=1100&format=png&auto=webp&s=b457a9db1e73fca54ed585ae880b5b6c8f6d2c26

r/SideProject Mysterious_Line_3955

I built a simple LLM API uptime tracker — isllmdown.com

Hey, built this small thing — https://isllmdown.vercel.app/

It tracks uptime for 8 LLM APIs (OpenAI, Anthropic, Groq, Cohere,
DeepSeek, Perplexity, Google AI, AI21) at the component level. So
you can see not just "is OpenAI down?" but "is Chat Completions
specifically degraded right now?"

Honestly, there are bigger services that do this (StatusGator, etc.)
— I just wanted something LLM-focused with 90 days of incident
history. Free, no signup. GitHub Actions does the data collection
every 30 min.

Sharing in case it's useful for someone. Feedback welcome.

r/leagueoflegends Timely_Sprinkles9397

Como funcionan las soloq

Por que luego de una racha de victorias siempre te toca gente mal mala? O sea no se puede subir teniendo a esa gente. Luego de a ver ganado 10 partidas, pierdo todo lo que baje. Si no subo es porque merezco estar en ese elo? No duró nada siendo oro1 y bajo de una todo, llegando a casi oro 3.

Deje de jugar para calmar la frustración pero me da miedo seguir perdiendo, aunque en el momento me concentro más en jugar que en perder y se me olvida el miedo. pero en si ahora me da cosa jugar y me frustra, ya que baje todo lo que me hice en un día y tampoco es que tenga mucho tiempo para jugar y dejarlo hasta ahi, para intentarlo otro día.

algún consejo o algo que me haga abrir los ojos, ya que siento que igual no juego de manera perfecta pero si es muy frustrante perder lo que subí en unas horas

r/SideProject Straight_Stable_6095

I built an open-source multi-camera AI surveillance system inspired by Anduril's Lattice OS. Runs on a cheap dev kit with no cloud.

Six months ago I got curious about how systems like Anduril's Lattice OS actually work. The concept is fascinating: a network of cheap sensors fused at the edge into a single real-time picture, AI tracking everything across camera views.

So I built a community reference implementation.

What it does:

  • Multiple cameras (including phones on LAN via browser) tracked simultaneously
  • YOLOv8 AI running locally on a Jetson Orin Nano at sub-10ms per frame
  • When someone leaves one camera's view, the system projects where they should appear on other cameras using a homography matrix it calibrated itself at runtime
  • Tactical AR HUD overlay inspired by EagleEye UI: diamond markers, compass ribbon, ghost predictions color-coded by confidence
  • Zero cloud dependency. Runs forever on local hardware.

Built with: FastAPI, Python, React 18, TensorRT, WebSocket binary protocol.

Repo: github.com/mandarwagh9/overwatch

Not affiliated with Anduril. Pure open source, MIT license. Just thought if this concept is cool enough for defense companies to build products around, it's cool enough to put in everyone's hands.

r/comfyui juanpablogc

Making things easier importing workflows for OpenHiker

I had flu for several days so I need a bit of more time to finish the alpha. This is someting basic I wanted to add to make the life easy for noobs. I am adding in my workflows a multiline string node (works with Export API) with the downloads paths required. Then a user in openhiker just sets the models folder once and when loads the models can download (checking if exists) the files without getting crazy. Cool isn't it?

r/ClaudeAI warrior_girl_eh

When you forget that Claude is not Chatgpt who cuddles you. So business attitude 😜 love it ! ♥️

r/SideProject sbuxrider

Yet another daily puzzle game (but I think this one's actually fun?)

Spent a few hours building a daily word puzzle called Apex.

10 words in a pyramid shape, 4 hidden categories, 3 mistakes allowed. New puzzle every day. Free, no signup.

If you like NYT Connections, you might enjoy this!

👉 apexpuzzle.com

This is my first side hustle project built using AI (utilized Claude and ChatGPT)— honest feedback very welcome.

r/artificial Straight_Stable_6095

Open-sourced a Lattice OS-inspired multi-sensor awareness system on commodity hardware. What's the ceiling for edge AI perception in 2025?

Anduril's Lattice OS concept has always fascinated me: a network of cheap heterogeneous sensors fused at the edge into a single AI-driven situational picture. The interesting question is how much of that is actually achievable today on non-classified hardware.

Answer, at least at small scale: a surprising amount.

I built OVERWATCH as a community reference implementation of the same idea. Multiple cameras (IP cameras + phones via browser), all feeding into a shared perception pipeline on a $500 Jetson Orin Nano. YOLOv8n TensorRT FP16 for detection, adaptive Kalman for tracking, self-calibrating cross-camera homography for fused world-model predictions.

The part that surprised me most: the self-calibrating calibration. You don't tell the system anything about where cameras are. It watches for moments when two cameras see the same person simultaneously, records foot-point correspondence pairs, and computes the projective transform between camera coordinate systems on its own via RANSAC. After about 5 seconds of co-visibility it has a usable homography. It self-heals if a camera moves.

In 2020 this would have required custom hardware, weeks of calibration, and a meaningful compute budget. In 2025 it runs on a dev kit.

Repo: github.com/mandarwagh9/overwatch

What other capabilities that were "enterprise-only" five years ago are now commoditized? Curious where people see the edge AI ceiling right now.

r/automation Efficient_Builder923

What happens to your productivity when you have back-to-back client meetings all day? How do you handle no-gap meeting days? Share your survival strategies!

A. I'm fine - I prep well and stay focused

B. Struggle to switch context between different clients

C. Can't remember what was discussed by end of day

D. Complete burnout - no time to process or follow up

r/leagueoflegends Aggressive_Appeal366

I believe I was unjustly banned, what do you think?

My account with 760 games last season and a 47% wr in diamond/emerald was just banned for "Rank manipulation" such as boosting or being boosted. This is the account: https://op.gg/ko/lol/summoners/na/Melancholy%20Kayn-Blue - I know you only have my word but I've never had someone else log into the account so it can't be that, and I very clearly am not good enough to boost someone. Did they just ban me because I suck mid lane and got reported a bunch when I tried to learn it? I tried to appeal already and they said they're sure of their decision.

r/whatisit CallmeChapybara

Guys, I need help! What do these letters mean on this display in second pic.

To explain my situation: I just turned on my heater after a long time living here alone. I didn't knew how to, until now. I was freezing at 16°C in my room, so I started 'playing' on this thing and I found out that is connected to my thermal plant (I'm not really good at this stuff. I thought it was, I shit you not, decor).

Well I see that it's working, which is nice. The heater is... well, heating and the temp in my room went up 2°C.

The thing is, I smell some gas leaks in my kitchen (where the termal plant is) and I wonder if this is a warning to close the thing down.

I also pushed the +button a few times and I rised the temp at 21°C, but it remains stable

The letters on display look like 'b' and 'A' and google can't help me😅😓

r/ProductHunters ConceptAny341

My learnings launching my first product as solo founder

Today, I have launched my very first SaaS product - BreakGround on producthunt - https://www.producthunt.com/products/breakground

But why I am posting here is because of a different reason. Wanted to share what all I learnt while building this solo in 2.5 months from scratch.

As you all know, AI capabilities for coding has seen significant improvement since last 6 months, but still just asking it to build you an app doesn’t cut it. You can take the vibe coded software only till certain extent, not to production. You still need to have good understanding of software development in order to get it right. Me being a former software engineer for first 4 years and Product manager since last 5 years really helped. Claude code was something I heavily relied upon. So much so that I maxed out its 20x max plan even when they were providing double limits in non peak hours 3-4 weeks back. Here are some of my learnings. Feel free to comment to talk more about it or challenge it. Open to discussion.

- Need a well structured audit loop to challenge what AI codes. Some people use codex to audit what claude writes. But, for me even claude’s separate agent works well for auditing.

- TDD works well. Instead of just asking AI to make a change in code. Ask it to write tests first for it to fail and then write code to make it work. It slows down overall development and use more tokens but at the end, result is good. Superpowers skill is a must!

- I added some non essential features in the first release like localization and white label features thinking it will hardly take any time but it came back biting. I had to take care of it with every change, every feature addition. Just increased overhead without much benefit. You should strictly keep true to your priorities.

- For pesky UI bugs, claude —chrome works wonders. It’s difficult to explain in plain text to AI. But with chrome in claude, it was a dream.

- AI confuses some variables’ and enum’s lower and upper cases. Keep an eye on it.

- If you can only afford entry level developers, AI can do a way better job instead. I still think talented and experienced developers have an edge.

- After significant development is done, ask AI to audit the whole repo on certain parameters. It will find out many gaps that were left before. Comment for exact prompt.

- Manual testing is still as relevant as ever. Doesn’t matter how many e2e tests, unit tests, integration tests you write. This actually took the most time for me in last 2 months.

Now coming to administrative stuff:

- Start on administrative stuff at least couple of weeks to 1 month before your launch as delays happen on vendors side.

- If you are selling subscriptions outside india, payment is usually big hassle. Dodo payments solved it for me. They don’t have big asks to get you enrolled and you can also get $1k credits upfront from f6s. Once you have revenue, you can switch easily. Or stay with dodo, it ain’t bad. They actually use stripe for US payments in the backend.

- Most of you might already know this. Apply for startup programs of cloud providers to get you started with credits. Get $300 worth gemini use via vertex from gcp with free trial, $2k from gcp startup programs, $5k from aws, $1k from azure. You can split your deployments.

- Till the time you find PMF, you can delay company registration upto 3 months. Less overhead.

The whole thing from scratch to launch took 2.5 months of relentless effort and INR40k. The result is 10x better software (arguably) with 10x less monthly cost for users than industry incumbents as I am solo founder. I hope users see that value.

I know we have many way experienced people in this group. Feel free to critic or give suggestions on what I can do better. Do try BreakGround and upvote - https://www.producthunt.com/products/breakground . Limited time lifetime deal also available.

r/photoshop New_Tip5259

Are parametric filters considered AI?

I have always used the technique of using the filter gallery or stylize functions to turn photographs into oil paintings/paintings in Photoshop.

I recently realized there is a new feature called "parametric filters." In the section of parametric filters, there is an oil painting filter. It's so useful. However, I'm afraid it uses AI. Does anyone know?

Thank you so much in advance!

r/personalfinance Worth_Address_5680

Sell or rent my UK house whilst travelling for 6-12 months?

That age old question.

It’s April 2026 and the new renters rights have just been announced in the UK. I’m 33 and own a property in Lincoln. I run a small fast-growing agency and am in the mindset of wanting to see more of the world. Reason being - I don’t know where I actually want to live and getting to that mid 30s mindset of considering starting a family etc,

The thinking is I’d like to run my business for 6-12 months abroad and try living in different places for a while.

I bought my house in 2019 for £292k. Today it’s been valued at £360k (likely to sell closer to £350k). I owe £259k on it and have 33+ years left on the mortgage.

I have two options:

  1. Sell it and invest the equity whilst travelling.

  2. Rent it out whilst I decide where I want to be.

I do not want to return to the house. It’s a lovely house and I’m happy to own it, but there’s not a lot going on in Lincolnshire and too many skeletons in the closet. I’ve been here for 15 years and am ready for a change.

However, i don’t know why (maybe fear of change), I just can’t seem to bring myself to the idea of selling the house and having nowhere to fall back on if I realise that being away just isn’t for me.

The idea of putting everything in storage is odd to me - lots won’t survive and I have a LOT of stuff (this a 4 bed detached house).

But the idea of renting out a house that I’ve lived in (and feels personal) is also weird. Not to mention the crap that could come with being a landlord.

The rental amount I could get is likely £1.7k-£1.8kpm, but my mortgage payments are £1.3kpm, so there’s not much room to play with to try and break even on it after paying management fees, insurance, and a pot for when things go wrong.

I’m aware of CGT and that I could get the estate agent to manage this for me, but that option would involve putting everything in storage anyway. My next door neighbour is a bit of a handy man and I think he’d manage it for me if I asked him…

I know nobody can predict the housing market, but i guess I’m looking for advice based on today’s climate from those who may have been in a similar situation?

Rent or sell?

r/Seattle Total-Ad-5696

Why are drugs so easily accessible here?

22M. The last two times I went bar hopping in Capitol Hill I was offered coke, ketamine, molly, and meth all in the same night. I wish I was joking. These dealers seemed like any normal person in their 20-30s going about their business, so I was caught very off guard when the topic came up so casually. Obviously narcotics exist outside of shows like euphoria, etc. I just never realized they were that accessible out here. I’ve noticed a strong desensitization to hard drugs in general from people of my age group. Curious what others have noticed.

r/AbruptChaos MorsesCode

Chaotic in every aspect

r/AbstractArt Lililovesyou999

even still

r/DunderMifflin Heisenberg_Sopranos

Jan Levinson Gould?

r/aivideo Foreign-Original124

My first editing attempt using AI - Hybrid Anime/Music Video (Learning Work)

r/ClaudeAI Other-Put-32

I used Claude to build "pin-llm-wiki" — A skill that turns any URL into a clean, citable Karpathy-style LLM Wiki

Hey 👋

I’ve been using Claude Code a lot for personal research and knowledge management, and one thing kept bothering me:

Turning articles, YouTube videos, and GitHub repos into clean, structured, citable notes is tedious.

So I built pin-llm-wiki — a skill that automates the Karpathy-style LLM Wiki workflow.

👉 Repo: https://github.com/ndjordjevic/pin-llm-wiki

👉 Demo wiki: https://github.com/ndjordjevic/agentic-ai-wiki

✨ What it does

  • 🔗 Drop any URL (web pages, YouTube, GitHub, etc.)
  • 🧠 Generates clean, well-structured wiki pages
  • 🔗 Adds proper wikilinks + cross-references
  • 📚 Includes citations and sources
  • 🧹 Built-in linting / health checks

Commands

/pin-llm-wiki init /pin-llm-wiki ingest  /pin-llm-wiki lint /pin-llm-wiki queue 

🧪 Tested with

  • Claude
  • Cursor
  • GitHub Copilot

🚀 Install (one command)

npx skills add ndjordjevic/pin-llm-wiki 

🤔 Why I built this

I wanted something that:

  • Feels like a personal Wikipedia
  • Keeps knowledge structured and connected
  • Replaces traditional browser bookmarks with something smarter
  • Removes friction from research workflows

If you're building your own knowledge system or experimenting with LLM workflows, I’d love to hear your thoughts.

r/SideProject AdvertisingPast6280

Come pubblicizzare un bot?

Ho creato questo bot, con mini app annessa, è un bot play to earn ma non so dove trovare gli utenti. Avete qualche idea che non sia la pubblicità di telegram pagamento?

r/leagueoflegends DeadAndBuried23

4 Mythic shop weekly rotations in a row have had chromas from ongoing events

Project Fiora, Project Sylas, Project Pyke, all took up slots in the weekly while every Project chroma was available. And now, with Yi's event chromas availabe for his mythic variant release, his Spirit Blossom one is taking a slot.

Four rotations in a row where a chroma has been in two spots in the mythic shop for no reason. There's no reason or benefit for this to even be possible, and it should be an easy fix.

I've been looking for one chroma to appear for over a year, meanwhile every single other one from the same event has been up multiple times. Some chromas have appeared as many as six times. And now at least 5 have been repeats during events they were already available.

The system needs an overhaul. Otherwise, if the random chance stays exactly the same some chromas won't appear until 2031.

r/AbstractArt tacotakozs

“50/50”, VVB, Acrylic, 2025 [OC]

r/ChatGPT WorkTropes

This annoying image generation animation

Is it just me or is this dancing, dotty bullshit like a magic-eye that won't sit still? Surely its got to be bad for folks with eye conditions etc. Personally I prefered the blurry one that hinted at what what coming. I have no idea why they pivoted to this thing.

r/leagueoflegends Agitated-Safety-9249

when is the opposing top laner strong?

I am emerald in solo/duo in EUNE. i have played roughly half the champions in the top lane pool, and have approximate knowledge when they are strong (not really) the other half i have played once or never, so im not really sure when they spike, and when i should respect them. is there a general rule of thumb that i should follow in toplane, when to respect the enemy?

r/geography jepensedonc1

Hypothetical situation: had France and Belgium colonized most of Central and South America and Spain and Portugal colonized most of Africa rather than the other way around, how would these continents be different now?

Politically, culturally, economically, socially, linguistically, etc, what are some things you think would/wouldn't have happened (or things that would/wouldn't be created) based on historical and contemporary evidence?

(This is not for any particular purpose BTW. This was a shower thought and now I'm genuinely curious.)

r/ClaudeCode Sea_Willingness742

integrating Meta Ads MCP with Claude Code

I don't use claude.ai on browser and runs Claude Ads from Claude Code running in termainal.

Anybody was successful instaling Meta Ads MCP to Claude Code? Only tutorial I found so far was using connector inside claude.ai

r/ProductHunters 1914l

It's important to have a good launch video when launching on Product Hunt. I will create one for you for Free.

Hey founders,

If you are looking to launch on product hunt - having a cool launch video is really imoprtant, but spending $2000 on one doesn't make sense.

That's why I decided to offer 30 companies to do their launch videos for free.

I am building an AI-agency for motion graphic videos - we deliver videos in under a day.

To support the community and to keep improving our internal AI, I'll make a free custom launch video for the first 30 startups or companies that comment below.

Here's how it works:

  • Leave a comment with information about your company and a link to your website.
  • I'll create a 30-60 second launch/feature video.
  • It will be sent within 48 hours.

Why? Creating a launch video takes time and skills, but it help you have more successful launch on Product Hunt and all social media platforms.

Comment below if interested.

r/TwoSentenceHorror Doctorbigpeepee

When I saw wound my wife's chest healing on its own I cried tears of joy.

Finally a toy that will repair itself even after I break it.

r/ClaudeAI Sunshine_Seeker77

How to take a Claude JSX file website and create a real website with paying customers

I’m stumped. I’ve built out this amazing app through Claude Chat. I have the JSX file complete. I need help figuring out next steps including: how to make this an actual website, how to put a pay wall, make a website home page, user login, how does the backend work? etc.

How do I take this JSX file and make it into something similar to the website linked?? I am not looking to put this on the Apple app store just yet. http://getrella.com

I need this broken into baby steps! I don’t have a technical background. Can a newb do this? YouTube makes it seem like everyone is building an app!

r/automation InvitePatient9411

A Linkedin bot to reply on interesting discussion releated to my work?

Its possibile to have or create a bot to manage Linkedin faster and reply to important posts connected to my job/profile?

r/geography theworldinmaps

Labour Day Around the World

r/Anthropic SilverConsistent9222

Came across this Claude Code workflow visual

I came across this Claude Code workflow visual while digging through some Claude-related resources. Thought it was worth sharing here.

It does a good job summarizing how the different pieces fit together:

  • CLAUDE.md
  • memory hierarchy
  • skills
  • hooks
  • project structure
  • workflow loop

The part that clarified things for me was the memory layering.

Claude loads context roughly like this:

~/.claude/CLAUDE.md -> global memory /CLAUDE.md -> repo context ./subfolder/CLAUDE.md -> scoped context 

Subfolders append context rather than replacing it, which explains why some sessions feel “overloaded” if those files get too big.

The skills section is also interesting. Instead of repeating prompts, you define reusable patterns like:

.claude/skills/testing/SKILL.md .claude/skills/code-review/SKILL.md 

Claude auto-invokes them when the description matches.

Another useful bit is the workflow loop they suggest:

cd project && claude Plan mode Describe feature Auto accept /compact commit frequently 

Nothing groundbreaking individually, but seeing it all in one place helps.

Anyway, sharing the image in case it’s useful for others experimenting with Claude Code.

Curious how people here are organizing:

The ecosystem is still evolving, so workflows seem pretty personal right now.

https://preview.redd.it/c0u52dlzkgyg1.jpg?width=1206&format=pjpg&auto=webp&s=d521c3ad5a8e39bbfea083afc9dd9a105b7dc924

r/LocalLLaMA CrowKing63

Using a Radeon 9060 XT 16 GB, the gemma4 24b a4b iq4 nl model achieves 25.9 t/s

I'm testing running local LLMs on a gaming mini PC (AMD 7840HS, 32 GB RAM) paired with an eGPU (Radeon 9060XT with 16 GB VRAM). Since I'm not very familiar with using llama.cpp, I kept getting unsatisfactory results, but with the recent Gemma4 24B A4B IQ4 NL model I finally reached 25.9 t/s. I even connected it to OpenCode and tried asking questions from my codebase, and it seems usable at this level.

llama-server -hf unsloth/gemma-4-26B-A4B-it-GGUF:UD-IQ4_NL --fit on --fit-ctx 128000 --fit-target 256 -np 1 -fa on --no-mmap --mlock --threads 8 -b 512 -ub 256 -ctk q8_0 -ctv q8_0 --temp 0.6 --top-p 0.95 --top-k 20 --min-p 0.0 --presence-penalty 0.0 --repeat-penalty 1.0 --reasoning-budget -1 

This is the result of using it this way.

Increase -b and -ub any further, it won't even load. Are there any unnecessary arguments or arguments that could be optimized?

Thanks.

r/ChatGPT CuriousSherbet9477

Me clicking "accept all" on 24,369 Claude Code changes without reading a single one

r/AI_Agents AdditionalWeb107

Signals - finding the most informative agent traces without LLM judges (arxiv.org)

Hello Peeps Salman, Shuguang and Adil here from Katanemo Labs (a DigitalOcean company).

Wanted to introduce our latest research on agentic systems called Signals. If you've been building agents, you've probably noticed that there are far too many agent traces/trajectories to review one by one, and using humans or extra LLM calls to inspect all of them gets expensive really fast. The paper proposes a lightweight way to compute structured “signals” from live agent interactions so you can surface the trajectories most worth looking at, without changing the agent’s online behavior. Computing Signals doesn't require a GPU.

Signals are grouped into a simple taxonomy across interaction, execution, and environment patterns, including things like misalignment, stagnation, disengagement, failure, looping, and exhaustion. In an annotation study on τ-bench, signal-based sampling reached an 82% informativeness rate versus 54% for random sampling, which translated to a 1.52x efficiency gain per informative trajectory.

Links in the comments below

r/n8n Such_Honey4787

how to learn n8n

i started to learn n8n with Nate Herk and the surface is so different from n8n maybe its a version differnce or something .. i spent like 3 hours changing videos from n8n to claude and not doing anything . please guide me where to start .. im from a non tech background

r/arduino StefanPetrick

I build a Fluid Simulation that is driving programmable LEDs now

r/sports ateam1984

Best Play Ever: 1982 College World Series: Miami’s “Grand Illusion” trick play fools base-stealing star Phil Stephenson into running himself into an out in one of baseball’s smartest plays ever

This legendary college baseball moment, known as the "Grand Illusion" trick play, remains one of the smartest and most creative defensive plays ever executed on a baseball field. During a 1982 matchup in the College World Series, the Miami Hurricanes baseball pulled off a perfectly coordinated deception against Wichita State Shockers baseball. Facing dangerous base stealer Phil Stephenson, who had an incredible 86 stolen bases in 90 attempts, Miami needed something unconventional. They set up what looked like a standard pickoff move at first base-but instead of actually throwing the ball, the pitcher faked it entirely. The first baseman reached as if the ball had sailed past him, while the rest of the defense began shouting "ball!" to sell the illusion of a wild throw. Believing the ball had gotten away, Stephenson broke for second base.
That's when the trap was sprung-the pitcher, who still had the ball the entire time, calmly threw it to second for an easy out. The timing, communication, and execution were flawless, turning pure deception into a game-changing play. This moment is still talked about decades later as one of the greatest trick plays in baseball history, blending strategy, acting, and teamwork into a perfect result. For fans searching "Grand Illusion baseball,"
"Miami trick play 1982," or "best baseball trick plays ever," this is a masterclass in outsmarting the opponent.

r/Art SketchwithAdhi

Untitled, Adharsh Anjaneyan, Digital, 2026

r/leagueoflegends Wtfwhy333

Today I learned that no win is truly guaranteed. Anything can happen until a Nexus is destroyed.

https://reddit.com/link/1t0mzo6/video/kwb1xqe8fhyg1/player

Just a fun reminder that anything is truly possible in League of Legends. The moment had me shocked at the time but I just find it really funny now. If I played better maybe we wouldn't have reached that point. Oh well, there is always the next game.

r/WouldYouRather Stock_Sprinkles_7394

Would you rather or rather you would?

r/Adulting TildeAyalaPlank

Which trivial cooking superpower would you choose?

r/painting PolymerEater

Gave myself 2 hours to paint a rose.

First time physically painting in about 5 years. Since then my astigmatism has gotten awful and it was really hard to see all the petals properly. Crit is one of the most important parts of art so I'm more than open to it.

r/AskMen Practical_Sun_7974

How do i proceed? i moved out if my fathers house but he wants me around still

This year i got enough to leave his house because he was being toxic towards me but before that we registered a company together and plan on bidding for contracts so we can make money so i’m back hone at his request. i’ve even paused all my personal business just to satisfy him. i hate staying here cos i keep getting taken for granted. Do i leave and or stick around?

r/Adulting No_Meal7076

Sacrifice or Balance?

I'm​ not the “fun” type. I’m focused and I want a great career.

Everyone around me is enjoying life—friends, relationships, distractions.
I tried that too, thinking it’s “balance.”

But it doesn’t work for me.
My focus drops, grades fall, days get wasted.

Truth is, I can’t do balance.
Once I get distracted, I lose control.

So what now?

Do I:

  • Cut off distractions and people
  • Sacrifice love life and “fun” for a few years

Or force myself to learn balance even if I keep failing?

I don’t hate people.
I just love my dreams more.

What should I do?

r/Adulting Unable_Yard4269

Looking for a good agency or model to work with

Hi I'm James from the Philippines Hire me as your OF Chatter.Are you looking for a dedicated and hardworking chatter?‍ ‍ Greetings! I am currently looking for a client/agency to work with. I have been an experienced OF chatter/supervisor/manager for 4 years.‍and 6moths ‍ What should you expect from me?‍ -I am equipped with great experience and knowledge on how to work great in this industry‍ -I am excellent at building long term relationships with fans (from building rapport,warming up, transition the conversation into sexting and upsell ppvs)‍ -I know when to transition the convo into sexting and sell also to focus on building deep relationships to the fans ‍ -Dedicated and hardworking ‍ -I usually do freestyle chatting but I can also use script guides if needed‍ -can work 8-12 hours a day 5-7 days in a week ‍ -Ability to handle multiple accounts‍ -can start working Asap‍ ‍ Application I used:‍ Infloww‍ SC app‍ creatorhero‍ Gologin‍ supercreator‍ buddyx‍ ‍ Message me if you're interested in my service.Thank you and have a good day!‍ ❌Any type of exam and calls, not a newbie anymore.If you wanna see what I am capable of you can put me on trial.Hit me up and let's grow together and work for long term plan and if you need a big saler of contents look know forder if you like to see some of my chats that trans forms to big sales DM on TG aani09 or WhatsApp +639053314667 and I will send you how big saler I'm thanks 🙏

r/AI_Agents nightwing_2

Im using browser-use for QA automation but if i give a prompt which dosent exist it should just end the whole test case but instead it keeps on looking around and exhaust all the max steps. any solution to this?

I'm using browser-use with Azure Anthropic API (Claude Sonnet) as the LLM provider for QA automation on a web app. The agent works great when the elements exist, but the problem is when I give it a task that references something that doesn't exist on the page — like a nav item, button, or section that simply isn't there — it doesn't give up. Instead it just keeps scrolling, clicking around, trying different approaches, and burns through all the max steps before finally stopping.

I've tried adding instructions in the system prompt telling it to stop after 3-4 failed attempts, but the LLM sometimes ignores this. Has anyone dealt with this? Is there a clean way to detect this loop programmatically and kill the run early without waiting for max_steps to exhaust?

r/comfyui MrCaesersalad

zImage Turbo – Can't get realistic skin / consistent identity for LoRA dataset (help)

Hey everyone,

I'm currently trying to create a LoRA using zImage Turbo in ComfyUI based on a single reference image of a person.

My goal is to generate additional perspectives (front, 3/4, side, etc.) to build a consistent and realistic dataset.

The problem:

- The identity is close, but never truly consistent

- Skin texture often looks plastic / overly smooth / AI-like

- Subtle facial details (eyelids, under-eyes, micro-texture) get lost

- Expressions and angles don't fully match the original realism

What I’ve tried so far:

- Different CFG / steps combinations

- Lower denoise values

- Prompting for "natural skin texture", "realistic pores", etc.

- Adding negative prompts (plastic skin, smooth skin, etc.)

Still, results look slightly “off” and not dataset-quality.

My questions:

  1. How do you preserve identity consistency better when generating new angles from a single image?

  2. Any tips to avoid the plastic skin look? (models, settings, workflows?)

  3. Is zImage Turbo even the right tool for this, or should I switch to something like IPAdapter / ControlNet / InstantID workflows?

  4. Are there recommended pipelines specifically for LoRA dataset generation from a single person?

If you have example workflows or node setups, that would help a lot 🙏

Thanks!

r/SideProject rakeshkanna91

I'm a builder. I hate marketing. So I built something to handle it for me.

There's a guy in this sub today who said "I'm a developer, not a marketing guru. What do I do now?"

That's me. That was me for months.

I love solving hard problems — payment flows, onboarding, edge cases. I can disappear into that for days. But asking me to write a tweet about what I'm building? I'd rather spend time on the backend/infra optimizations.

So my product kept getting better. And nobody knew.

I got tired of it and built mangos.ai. It's an AI agent that runs your brand while you build. You tell it about your product once. It watches your GitHub, drafts posts in your voice, finds relevant conversations to join. You approve everything — nothing goes live without your say. Runs locally, credentials never leave your machine.

It's basically the thing I wish existed before I spent months shipping in silence.

Still waitlist phase. Curious — how are others here actually handling this? Are you grinding it manually or just... not?

r/whatisit boner_boy14

I know this is random but I figured someone might know… so I recently just got a 2009 Dodge Avenger from marketplace and this little black piece came with the key. Does anyone have any idea what it might be, or if it has any purpose?

r/WouldYouRather Dazzling-Antelope912

Would you rather try to sneak across the Russian-Latvian border and walk to St. Petersburg with two cats or make a marriage proposal to your boss in front of your coworkers and pretend it is sincere?

Option 1: Hypothetically. No roads. No vehicles or any transport other than by foot. You have only the clothes on your back and a backpack. You must bring with you two cats — a feisty moggy named Monkhouse Jim and a shy ginger cat called Cathy Dreadnought. They demand to be carried and will just sit put and glare at you if you try to make them walk. If you leave them behind, Cathy will use her laser eyes on your butt and Monkhouse Jim will eat your intestines. If you get caught, you get sent to a prison in northern Siberia, and Monkhouse Jim and Cathy Dreadnought will be your prison guards. If you get to St. Petersburg, you can leave the country freely and legally.

Option 2: If you don’t have a boss or have any coworkers, imagine you have them. You cannot backtrack or give any indication that your proposal of marriage is insincere, even though it is.

View Poll

r/homeassistant v81

Plug in smart switch / power monitor that DOES NOT require account / login to setup, DOES NOT require internet connection to function

Just bought some Meross power monitors off Amazon and realised they need an account / might need to be online to work.

Really wanting to avoid that, i'd prefer as i'm getting in to the home automation rabbit hole to keep the devices themselves on a VLAN with no internet.

Any suggestions - remembering these need to be compatible with a Australian power point.

Cheers.

r/Art imboredsoimhere318

Black Hawk, Jon Fischer, Oil X Spray paint, 2026

r/whatisit amijustinsane

Random spots on my wooden table?

I’m sure there is a rational explanation but I cannot for the life of me work out what has caused these random light spots on my table. London, UK

I bought this wooden table in December. No one sits at this spot. It’s just me and a flatmate that live here and he never even sits at this table!

There’s nothing dripping from the ceiling

It’s like it’s been bleached but the spots are slightly sticky?

I’ve tried cleaning the area and it makes no difference.

I’m quite annoyed as it was an expensive table and I don’t know how to fix it! Hoping that if I know the cause that’ll be a step on the way to resolving it.

r/aivideo DreamCrow1

[Cinematic Rap Rock] Fragile But Strong - Walkingcrow One - Kintsugi Lungs

r/leagueoflegends OfficerSmiles

Large Ekko Winrate Nerf Due to Stealth Nerf and D+D Nerf

Depending on where you check for winrate, Ekko has lost anywhere between 1 and 3 percent winrate in the jungle, with u.gg showing ~3.2 percent drop in winrate. I understand that this is a new patch and D+D changes just came out, so it take a bit to normalize, but this is a pretty steep change in winrate from the simple item change.

What is not mentioned in the patch notes is a change to Ekko's W passive. Ekko's W passive applies bonus missing health damage on targets below 30% HP, with the bonus interaction that an autoattack that triggers his 3rd passive stack AND would place an enemy below 30 percent HP damage receives this benefit. Whether it was on purpose or by accident, this interaction no longer works.

This may not seem like a big nerf, but when reaching late game, you get some pretty hefty sums of damage absent from his burst.

https://preview.redd.it/8uvkohsgehyg1.png?width=1038&format=png&auto=webp&s=cdb9f23f886da039663cefd36ae447955d826823

r/TwoSentenceHorror jmesestrada

I fell asleep,

I had a bad dream.

r/AskMen Infamous_Ad4244

Why do only some men have aiming problems?

Growing up, I always ended up with bathroom cleaning duty. If I didn’t clean it, it just didn’t get cleaned.

I shared a bathroom with my brother, and without fail every single week, sometimes even the day after I’d scrub everything, there would be urine splashed all over the sides of the toilet, around the base near the bolts, and on the back of the seat where it kind of pools up in that little gap. It was honestly disgusting, and I used to think that was just “normal” for living with guys.

Now I live with my boyfriend, and I still expected to be the one cleaning the bathroom. But weirdly, it’s not an issue at all. The toilet stays clean. No random splatter everywhere.

So I’m genuinely curious, what causes that difference? Is it just hygiene habits, how they were raised, aiming skills lol, or something else? Because it seems like some men just don’t have this issue at all, and others absolutely do.

r/Weird Buitree_deez

Hamster removing the food stored in his mouth pocket

He's gonna have one big feaast🤤

r/Art ART_REBELION

death stare , art rebellion , ink , 2025

r/ChatGPT Remarkable-Sir4051

I think ChatGPT forgot to put a restriction on these types of images 💀💀, and this looks realistic and minor details are also mentioned.

r/AI_Agents DasJazz

Voice AI agents in customer service - what features actually matter vs marketing hype?

Been working with voice AI agents in customer support for the past year and wanted to get perspectives on which features actually deliver value.

Our setup: ~250 inbound support calls daily, mix of technical questions and basic inquiries. Started with basic IVR, now testing AI-powered analysis.

Features we're currently using:

Real-time sentiment tracking - This one surprised me. System flags when caller's tone shifts negative and can auto-escalate or alert supervisor. Caught escalations we would've missed. Actually prevents issues vs just documenting them.

Live transcription + keyword detection - Useful for compliance (recording disclosures, verbal approvals). Also helps with agent training - can flag when specific phrases are missed.

Post-call summaries - AI generates bullet points of what was discussed, action items, resolution. Saves probably 2-3 min per call on documentation. Scales well.

Talk/listen ratio tracking - Shows which agents dominate conversations vs actually listening. Helped with coaching - some agents were talking 75% of the time, wonder why customers seemed frustrated.

Call routing intelligence - Analyzes caller intent in first 20 seconds, routes better than traditional IVR. Reduced transfers by ~30%.

Questions for the community:

  1. Conversational AI handling calls entirely - anyone using this in production? How's accuracy for complex queries?
  2. Multi-language support - our customer base is getting more diverse. Which platforms handle accents/dialects well?
  3. CRM integration depth - is anyone doing automated ticket creation based on call content? Or still manual?
  4. Cost structure - per-minute vs per-call vs flat rate. What makes sense at different volumes?

Curious what features others prioritize or think are just marketing hype. Voice AI space feels crowded with overlapping claims.

r/TwoSentenceHorror Feeling-Influence691

The theme park built on our home island that the government forcibly ejected us from had failed to open, so we took that as a sign that it was time to return.

When we arrived and tried to settle back in, we quickly found out why the park’s construction was kept under wraps and why it had failed, as some of its ‘attractions’ were still alive and roaming the island…

r/ChatGPT gauravjain02

chatgpt download error; any solution?

is that only me facing such issue?
why can't chatgpt automatic generate the proper nice ui pdf or docx like claude usually generate using some in built skills?

r/KlingAI_Videos Cyn_4u

The Truth About AI

r/ClaudeCode Terrible_Tailor_7743

Check out this app I made! Feedback appreciated!

Hi Claude Coders,

I made this app and I’m super proud of it! It’s called Bahzee Bet. I know it can improve so I’d appreciate your feedback. I made it all with hard work and Claude code so I wanted to share it with you all!

It’s only on the Apple app store for now. It’s totally free and no ads.

r/ChatGPT Remarkable-Sir4051

AI is getting too realistic in this type of stuff, I think artists need to worry

r/ProgrammerHumor EchoOfOppenheimer

fiveYears

r/AskMen ScienceTeacher1994

Men who were able to open their mind to a completely different view of the world, how did you do it?

r/findareddit see_mo

Where i can ask about solving a problem in the acces to gmail account??

r/AI_Agents IronCuk

I think agent workflows improve through use, not upfront perfection

I think a lot of agent workflow advice starts too late in the process.

People try to design the full method before they have run the task enough to know what the method needs.

My current rule:

Do not design more agent workflow than you have observed.

Start with one small loop:

  1. repeated task
  2. defined input
  3. one agent output
  4. human review
  5. one improvement
  6. run it again

The first loop should be small, reversible, and reviewable.

After a few runs, you can see what actually belongs in the workflow:

  • source rules
  • review criteria
  • escalation points
  • example boundaries
  • tool access
  • stopping rules

Then formalize it into a template, checklist, skill, or SOP.

But if you formalize too early, you may just package the wrong assumptions.

What parts of your agent workflow only became clear after using it?

r/FluxAI rakii6

Flux2 Klein Image consistency and Image editing

Hi guys, I wanted to share my personal interaction with Flux2 Klein 4b & 9b models, in image editing & consistency. When it came to image editing or doing things like taking one reference and puting it on to the next, Flux2 Klein 9b stood out.

But it was worse in keeping the face consistent. I used the workflow that was present in the standard comfyui templates. The result wasn't that great, as the face kept on changing or if trying put one picture onto another it created something new.

Last 1 month I kept on surfing hugging face and found solution that I could use, there's contributor called dx8152 , he figuured out how to maintain the image consistency to a huge extend. I ended up using his workflow and the Lora he provided, and I did get a good output.

Check out some of the output I create while trying to experiment and having fun.

I took this image as my refernce, IMG 2 to transfer certain styles.

The one on the right is the original photo and the left is the output.

Another output, where I instructed the model to transfer the Glasses onto the bald person.

the image on the right is the original input.

dx8152 's contribution along with the workflow, without his contribution some of us less tech savvy would be fine tuning the ksampler or the cfg for consistency.

Another example, where I wanted to get an idea for my office space, where the exact pillar, door frame and the size is mainted.

My original office space

The output.

But its not that full proof, as I face limitation in transfering multiple objects like hat, eye frames into the subject. I could not find any solution in terms of prompt.

In here it swapped the entire face.

I hope my post helps you guys out. If you like it, do comment. Thank you for reading. Worklow 1 Workflow 2

r/ARAM HeroesBane1191

Don't sleep on Laser Heal. This Summoner Spell can heal all your allies to full and has a 30 second cooldown. Best for supports, but anyone can run it.

r/AskMen DubaiDave

Guys who have switched careers or industries at around 35 to 45 years old. How did you do it?

I'm currently looking to switch industries and every interview I do what's experience in that industry. My skills are definitely transferable to other commercial roles but every one wants industry experience. Any advice on how to over come the questions that come up?

Currently I'm in sales management in the physical security field and looking to go into an industry that's not sure rigid/serious if that makes sense.

r/ClaudeCode RandomArabGuy

Looking for alternatives

So I've been a claude max subscriber for 10 months, and I've always enjoyed claude and been able to build a lot of projects using it.
I've always read about model degradation and quality of output, however I've never felt that as everything was working fine, however around a week before Opus 4.7 dropped I felt quite the drop in quality from 4.6 but decided to stick with it since I thought once Opus 4.7 comes out everything will be back to normal, I honestly do not know if I just don't know how to deal with it or if it has become absolute shit but this is not what I initially subscribed for, so now I am considering alternatives.
I thought about running some local model on my device but there's just no way it will outperform what claude does.

Now I'm thinking of subscribing to GPT but the capabilities of claude code (not the model itself) is just way too good to move out of, can i just use gpt in my claude code or is that not possible? What other options do I have?

r/explainlikeimfive nutellaandbed7

ELI5: What was the soviet union? why did it collapse?

tried to read but didn't understand. who is vladimir lenin? stalin? these names came up when I was trying to read up on the soviet union.

r/LocalLLM Connect-Bid9700

Asena ESP32

Another Asena has arrived—this time, it defeats Skynet at the edge.
Hidden inside a smart ring, this tiny intelligence awakens with a single command. No clouds. No latency. Just raw, embedded cognition. Asena_ESP32 is not just a model—it’s a silent operator, running on ultra-constrained hardware yet speaking with precision, control, and intent. Powered by the Behavioral Consciousness Engine (BCE), it doesn’t just generate text—it adapts behavior, filters risk, and responds like a disciplined digital mind.

One command is all it takes.
Servers align. Systems optimize. Workflows compress into efficiency. From the smallest signal, Asena reshapes its environment—an “Extreme Edge AI” built to act where others can’t even load. Compiled in C++, optimized through ggml and llama.cpp, it turns minimal compute into maximum impact. This is not about scale. This is about control, speed, and presence—AI that exists exactly where it is needed.

Welcome to the future of invisible intelligence.
A ring. A whisper. A response. Asena doesn’t wait for the cloud—it is the edge.

Huggingface Model Link: https://huggingface.co/pthinc/Asena_ESP32

r/YouShouldKnow Conscious-Material16

YSK That AI Thinks That There Is Up To A 10% Chance That AI Will Cause Human Extinction.

Typical response is 1% to 10% chance and if pressed it will tighten it up to 3%-5%.

Why YSK: because that is fairly high.

r/Wellthatsucks RahulArvindModi

Sucks to be the truck owner

r/SideProject ContributionWaste327

I built a platform where you pay to get brutally honest feedback on your product testers get paid to give it

Honestly, the typical founder feedback loop is a disaster. Your friends just tell you “Looks great!” Your Twitter audience cheers you on. Your family? Not a clue what you actually built.

I kept running into the same headaches myself: founders launch, everyone is polite, but nobody tells you why users vanish after day one. So I created Solutionizing. It’s a feedback marketplace, but way more structured than the usual post-launch pat on the back.

Here’s the deal:

Founders pay a small fee to list their product as a “mission.”

Testers are vetted (so, not just random internet people), they dive in, answer specific questions, and skip the useless “rate this 1-10” stuff.

They get paid in coins they can actually cash out, but only for real, thoughtful responses.

Founders get a report with clear insights, friction points, signal strength, and an AI summary so you finally know what’s working and what’s broken.

The feature I’m most excited about? The Retest Delta. Run the same mission after you make changes, get before-and-after scores, and see if what you tweaked actually made a difference. Most tools just give you a single snapshot. This one tracks your progress.

Where it’s at now:

It’s live at solutionizing.vercel.app.

It’s India-focused right now, with pricing in INR.

Payments are still on the way I’m working on it.

Here’s what I want from you:

Would you pay ₹149 (about $1.75) for 5 structured testers giving feedback on your product?

What would make you actually trust their feedback?

Is there something obviously wrong or missing that I’m overlooking?

Go ahead tear it apart if you want. I’m genuinely looking for real criticism.

r/EarthPorn Gold-Lengthiness-760

Río termal del Valle de Reykjadalur(Islandia)[OC]3132×2113

r/leagueoflegends HI-CPoppinLemonade

Voltaic Cylcosword let’s talk about it

What exactly was the thought behind this change? Was it to basically give every assassin free dmg with no cooldown?

I don’t think I’ve seen such an insanely overpowered 1 item spike in a long time. And the best part is, literally anyone can buy it and perform well with it. Most notably is zed being able to do half your hp with a Q-E and he doesn’t even need to be fed for it lol.

Kayn can do this as well, it’s just not a very balanced item in my opinion right now and I’m fully expecting a nerf for this either next patch or a possible hot fix.

The same can be said for DFT and its interactions with other items. This season right now is just freelo if you know which items to abuse until riot decides it’s time to fix it

r/ollama edbuildingstuff

Fine-tuned Llama 3.2 1B on 480 examples, shipped to Android via Q4_K_M

Shipped a Flutter app on Google Play last week that runs a fine-tuned Llama 3.2 1B on-device via llama.cpp. Thought this community might find the numbers useful, especially anyone considering mobile deployment.

Setup

  • Base: meta-llama/Llama-3.2-1B-Instruct
  • Fine-tune: LoRA r=16, alpha=32, all 7 target modules, 3 epochs, lr=2e-4
  • Dataset: 480 hand-curated ShareGPT-format examples
  • Merge + quantize: Q4_K_M via llama.cpp convert tools
  • Final size: 770MB
  • Runtime: llamadart (FFI wrapper on llama.cpp) on Android

Inference perf, real devices, llama.cpp with Android GPU backend

Device Tok/s Load time Gen latency (~80 tokens) Pixel 7 Pro ~25 1.5s 3 to 4s Pixel 6a ~15 2.5s 4 to 5s Samsung S22 ~22 2.0s 3.5 to 4.5s

Accuracy

80%+ pass rate on a 48-sample held-out eval across 4 rubric categories: on-tone, specific, funny, category-correct. Full methodology and per-category raw results in the HF model card (link in the pinned comment below).

Surprise finding on dataset size

480 examples was plenty for a 1B on a narrow task. I went in expecting to need 2 to 5K. Pass rates plateaued somewhere around example 400. The bottleneck was data quality (every example hand-written and reviewed) rather than volume. For anyone starting a mobile-targeted fine-tune: spend the time on curation before spending compute on scale.

Happy to dig into dataset design decisions, LoRA config rationale, or how Android llama.cpp perf has shifted since the December GPU backend update. And if anyone has benched Pixel 9 or S24, I'd love to add those numbers to the table.

r/DecidingToBeBetter meow_vibes_17

I used to believe that people were judging me but when I realized I was the one who judged myself then I changed my beliefs Now I am happy.

I always thought that people were judging me for everything . I did but after some time I realized I was the one judging myself. Now I have changed that. It was really difficult for me, but I started with small things like going outside alone and doing things that make me happy. Now I even post stories on Instagram about what I like without fear of judgment. Yeah I still feel afraid sometimes, so I don't open Instagram after posting but I am living much more happily now.

r/OldSchoolCool Initial_Reason1532

In 1933 James Bolar stuned overlookers by demonstrating his homemade submarine in the San Francisco Bay. He went down as much as 30 ft and stayed under for about 30 minutes.

r/ChatGPT GaneshLookALike

OpenAI comments the goblin, pigeon developer prompt mystery

Here's OpenAI's official comment about the weird section about goblins, gremlins and pigeons in the developer prompt:

https://www.instagram.com/p/DXwyPxnGweo/?igsh=cG13emdicHRjYm1v

Tldr: Discovered in 5.1. Usage increased in subsequent models. Affinity for goblins et al increased further in 5.5. Problem traced to users who used the nerd personality. Problem discovered when 5.5 was already in training. Instruction in the developer prompt was an easy fix.

r/ProductHunters Severe_Bed5785

Launching My Product after 10 years of domain expertise

Hey everyone,

I’ve spent the last decade working as a data analyst and product engineer. Over those 10 years, I kept hitting the same brick wall: getting fast, reliable answers from data without jumping through endless dashboards, SQL queries, and messy spreadsheets.

I waited for someone else to build a better way to handle it, but it never happened. So, I decided to quit talking about it and actually build it.

What is DataTawk?
It’s a conversational analytics platform that turns your data into answers instantly. Think of it as ChatGPT meets your BI dashboard, but without the complexity, lag, or steep learning curve.

Why I built it this way:

  • Natural language queries: Because after 10 years in data, I realized most people don’t want dashboards; they just want answers.
  • Real-time insights: Most tools make you dig through charts; we give you direct, contextual responses so you can move faster.
  • No-code integration: Connecting data shouldn’t require engineering support every time.

The Goal: To save people like me 5–10 hours every week (and a lot of frustration).

I’m launching today and would honestly love some brutally honest feedback from this community.

Check it out here: https://www.producthunt.com/products/datatawk?utm_source=other&utm_medium=social

I'll be in the comments all day to answer questions about the tech stack, the industry, or why I was crazy enough to do this!

r/therewasanattempt Campressive

to not get stopped

r/StableDiffusion Opening-Ad5541

Phosphene — local video and audio generation for Apple Silicon ( LTX2.3 )

https://preview.redd.it/ls0zqztvpgyg1.png?width=1916&format=png&auto=webp&s=734c9b9d83ce1def55aa7fc39fc858d3f3618bf5

Phosphene is a free desktop panel for generating video on Apple Silicon Macs. It wraps Lightricks' LTX 2.3 model running natively on Apple's MLX framework, and exposes a one-click install through Pinokio.

The differentiator is audio. LTX 2.3 generates video and audio in a single forward pass — they share the same diffusion process, so timing is tied at the frame level. Footsteps land on the correct frame. Lip movement matches dialogue. Ambient sound is conditioned on the visual content. Most other local video models (Wan, Hunyuan, Mochi) generate silent video; you add audio in post.

https://preview.redd.it/t1aggto2qgyg1.jpg?width=1920&format=pjpg&auto=webp&s=4ac849e37292988fc6fe4c90bcef87d3ffe9af3a

What it can do

Four generation modes:

  • Text → video — describe a scene, get a 5-second clip with synthesized audio
  • Image → video — start from a still, animate from there with synced audio
  • First-frame / Last-frame — provide two images, the model interpolates the middle
  • Extend — append seconds onto an existing clip, audio continuous across the join

Plus prompt rewriting via a local Gemma 3 12B 4-bit text encoder. The same model that reads your prompt for the diffusion stage can also rewrite it in the format LTX 2.3 was trained on. Runs offline, takes a few seconds.

Quality tiers

Three quality levels, picked per-job:

  • Draft — half resolution, ~2 minutes. For iterating on prompts.
  • Standard — full 1280×704, 7 minutes. The daily driver. Q4 distilled (25 GB on disk).
  • High — Q8 two-stage with TeaCache acceleration, ~12 minutes. Adds ~25 GB. Optional download — a button in the panel pulls it on demand. Required for FFLF.

Hardware compatibility

Apple Silicon only. The panel detects your Mac's RAM at boot and gates features accordingly:

  • 32 GB → Compact: lower resolution, shorter clips
  • 64 GB → Comfortable: full 1280×704 baseline
  • 96 GB → High: longer clips, full Q8
  • 128+ GB → Pro: no clamps

This is enforced because LTX 2.3's working tensor footprint is real — there is no way to run a full 1280×704 5-second generation in less than ~30 GB of resident memory. The tier system is honest about it rather than letting users queue jobs that fall out of the OOM killer.

Intel Macs and other platforms are not supported. There is no port path for them — MLX is Apple-only by design.

Audio behavior

Audio quality is conditioned on the prompt. A visual-only prompt produces faint ambient sound, which can read as "near-silent." A prompt with explicit audio cues produces layered foreground sound.

Compare:

  • "Wizard in forest" → quiet room tone
  • "Wizard in forest, low whispered chant, ember crackle, distant owl hoot" → audible chant + crackle + owl, all timed to the visuals

This is documented behavior of LTX 2.3, not a Phosphene quirk. Describe the soundscape in your prompt the same way you describe the visual.

How it differs from existing tools

Compared to other locally-runnable video models on a Mac:

  • vs. ComfyUI workflows — ComfyUI runs LTX 2.3 too, but in a node graph that requires building per-job. Phosphene is a fixed panel: prompt, mode, dimensions, generate. No graph maintenance.
  • vs. native PyTorch builds (Wan, Mochi, Hunyuan) — those run on torch via MPS, which is a compatibility shim, not native Metal. MLX runs the model directly in Apple's compute framework. The result is meaningful speed and memory differences on the same hardware.
  • vs. cloud / API services (Pika, Runway) — those generate faster on H100s but require accounts, queue time, monthly subscriptions, and upload of source images. Phosphene runs with no network beyond the initial weight download.
  • vs. silent local video models — joint audio synthesis is, at the time of writing, unique to LTX 2.3 among models with usable Mac runtimes.

Output format

Lossless H.264 by default — yuv444p, CRF 0 — so your archive is the highest fidelity the renderer can produce. Web/social platforms will re-encode anyway. Override via env variables (LTX_OUTPUT_PIX_FMT, LTX_OUTPUT_CRF) if you want yuv420p directly.

The +faststart movflag is on, so the moov atom is at the front of the file. Gallery thumbnails decode the first frame instantly without downloading the full clip.

Install

Search Phosphene in Pinokio's Discover tab and click Install. Pinokio handles the venv, Python 3.11 pin, MLX pipeline install, codec patches, and ~31 GB of model downloads (Q4 LTX 2.3 + Gemma text encoder). Resumable — if a download is interrupted, hitting Install again picks up where it left off.

Optional: run "hf auth login" in Terminal first to authenticate the Hugging Face downloads. Anonymous downloads are throttled; authenticated downloads are roughly 10× faster, which matters for the optional 25 GB Q8 model.

License + credits

Phosphene panel: MIT.
LTX 2.3 weights: Lightricks' own license — read it before commercial use.
MLX framework: Apache 2.0 (Apple).
Gemma weights: Google's terms.

Built on:

  • LTX 2.3 model — Lightricks
  • MLX port (ltx-2-mlx) — u/dgrauet
  • MLX framework — Apple ML
  • Pinokio runtime — u/cocktailpeanut

Source: https://github.com/mrbizarro/phosphene Issues and PRs welcome.

Follow me on x: https://x.com/AIBizarrothe

r/automation WhichWayIsTheB4r

been sizing overload protection wrong on hermetic compressor circuits and just figured it out

Quick one for anyone touching HVAC or refrigeration related panels. I was always taught to size overload protection based on FLA from the motor nameplate. Worked fine for general purpose motors all my career.

Last month i was reviewing a panel design for a new compressor install and the spec called out RLA - rated load amps. Different number entirely, runs lower than FLA. The reason is hermetic compressor motors operate continuously at conditions closer to RLA, not FLA, so sizing your overloads on FLA gives you less actual protection than you think.

The vendor data sheet had both numbers buried in there and i almost missed it. If you set overloads to handle FLA on a hermetic comp, you might never actually trip during a real overcurrent event because the motor sits well above RLA but still below the FLA threshold during normal load swings.

Anyone else running into this on retrofits where the original drawings just say motor amps without specifying which standard? trying to figure out best practice for documentation going forward, especially when the existing legend doesnt make it clear which value the engineer used.

r/WouldYouRather SummertimeThrowaway2

WYR get a full health reset or get $10 million?

The full health reset essentially fixes every single human cell in your body (except cancer cells, it kills those), completely balances your microbiome, removes all toxins and forever chemicals, heals all scar tissue, brings all joints and organs to their peak state, etc.

It only happens once, afterward your body will continue to age once again.

(Reposted bc I forgot the poll)

View Poll

r/CryptoMarkets No_Chip5069

Smart money inflows + consolidation near resistance

Feels like BTC is loading up for the next leg. Dips look buyable unless structure breaks. Waiting for more dip opportunity for long term holding.

r/SideProject arunphilips

A Dream I share with my Mum - Spreading the Peace and Love of God and Jesus

I am building a platform called DreamStarter that can 100% automatically make your dreams come true, and I thought I’ll share one Dream that I really want to see realized through our platform.

This dream is one I share with my mom Solly - called Bibleverse, she has been sharing important Bible verses along with her learnings and understandings from it.

She’s using another tool I built called MediaPilot to post across X, Insta, LinkedIn and Farcaster and generate relevant images with Gemini.

I also have this dream to be a Godvocate, and spread the peace and love of God and Jesus. One idea was to make longer video reviews of books like Imitation of Christ and then maybe I can share them across mom’s social channels.

For the future - I put it through our DreamStarter brainstorming flow to see what next steps we should take, and I got a pretty well structured Dream Brief, detailing the idea and all our next steps - including Video Creation, Community Engagement, Research, SEO, design and more.

Now when DreamStarter is live - all these agents will automatically start working and my mom and I can set instructions on how we want things to be done!

r/TwoSentenceHorror Starieyez31

A dream?

I dreamed of being at the park with my children when the ground opened up and sealed them

I woke up with dirt under my fingernails

r/Adulting Riderman43

Is avoidant personality a cope to avoid thinking about the fact no one wants to hang with someone who’s ugly?

r/LocalLLaMA Ok_Warning2146

Running llama.cpp on Snapdragon Hexagon NPU seems promising

https://github.com/ggml-org/llama.cpp/blob/master/docs/backend/snapdragon/README.md

I have an Oneplus 12 with Snapdragon 8 Gen 3. I followed the above README to cross-compile llama.cpp on Ubuntu and then copy to the Termux directory on the phone.

It seems like llama.cpp's Hexagon backend is highly supported by Qualcomm with many PRs made by Qualcomm employees.

I am getting 8t/s pp and 4.5t/s tg with gemma-3-12b-it-qat-Q4_0 and 20t/s pp 12.5t/s tg with gemma-3-4b-it-qat-Q4_0.

Speed is about the same as using the SD8G3's CPU but it is not hot at all and the tg speed is good enough for simple Q&As.

The limitations now is that it only supports Q4_0, IQ4_NL, MXFP4, Q8_0 and F32 ggufs. It also doesn't support KV cache quantization. While it supports chips as old as Snapdragon 888, since only SD8G2 or newer SoCs has Tensor module for LLMs, so probably shouldn't bother with it if your chip is too old.

Since Hexagon NPU can only address 4GB RAM, if you your model plus kv cache is too big, you need to set an environment variable to open more than one NPU device. Here is an example:

LD_LIBRARY_PATH=./lib:/vendor/lib64 ADSP_LIBRARY_PATH=./lib GGML_HEXAGON_NDEV=2 ./bin/llama-completion -m /sdcard/gguf/gemma-3-12b-it-qat-Q4_0.gguf -sys 'You are a helpful AI assistant' -ngl 99 --device HTP0,HTP1

My SD8G3's NPU has 34 INT8 TOPS and memory bandwidth of 76.8GB/s. Their latest product X2 Elite Extreme has 80 INT8 TOPS and 228GB/s. On the other hand, Nvidia 3090 has 248 INT8 TOPS and 936GB/s. So probably two or three gens to catch up?

PS By the way, does anyone own a SD 8 Elite Gen 5 Smartphone or a X2 Elite Extreme Laptop? If so, can you report your inference performance numbers? Supposedly, they can address more than 4GB RAM such that multiple HTP devices are not needed, is this supported by llama.cpp now?

PPS The Hexagon build supposedly is an OpenCL build also. Does anyone know how to offload LLMs to the Adreno GPU only? If I omit --device option, it seems to offload to both GPU and NPU without being any faster. Also, is it possible to use CPU, GPU and NPU together for maximum performance (albeit an ice pack might be needed)?

r/whatisit Glinting_Ranga

Found under some dirt in the backyard. What is it?

Asked this properties previous owner and they dont recall this being an item they used. It looks as though its been chewed, and that perhaps the metal part at the top would have been in a full circle before it snapped. 6yr old found it whilst 'digging for treasure'. Object is 200mm long

r/painting jvst4c0rps3

A human arm - made by me

r/ollama AdditionalWeb107

Claude Code Router TUI with support for Ollama-based LLMs

Hey peeps - just shipped Plano 0.4.22 with support for a local TUI so that you could view costs, requests by model and inspect adaptive routing support based on a policy-based router as described in this paper: https://arxiv.org/abs/2506.16655.

Of course Ollama-based models are supported out of the model. W

Hope you enjoy the release.

r/SideProject Professional_Eye1331

I built an iOS app that scans sheet music and highlights every musical term with a definition

r/creepypasta King_Arian_200

"G3T H1J4cK3D. c0rrUpt"

Why does this keep Appearing. Am i Hallucinating?

Sound: "Bwa. So. Longer. Make. Me. Lose. Me"

"i. don't. Care" "i. Hate. Myself..."

"want me dead now. Want me dead now"

"help me escape" "help me escape"

"i dont remember wanting me alive"

"K1LL M3. K1LL M3"

r/leagueoflegends oioioi9537

LCK's Ruler Investigation is out (no punishment)

Source: https://lolesports.com/ko-KR/news/lck-investigation-results

I will update with a translation or a translated post as soon as its available

below is the temporary LLM translation

LCK Investigation Result Announcement

Overview The LCK Secretariat became aware of a tax-related matter involving “Ruler” (Park Jae-hyuk), a player from Gen.G Esports. As announced on April 1, an LCK Investigation Committee (hereinafter “the Committee”) was formed to ensure a fair assessment of the matter. The Committee included three external members, including legal experts.

The Committee carefully examined the overall facts through multiple deliberations, review of relevant materials, and in-person interviews with the player. Based on the verified facts and related considerations, the Committee thoroughly reviewed whether there had been any violation of LCK regulations.

Based on the Committee’s findings, the LCK Secretariat has made a final decision not to impose any disciplinary action in this case. This decision was reached unanimously by all Committee members following fact-finding and legal analysis.

Detailed Findings

  1. Whether it Constitutes Criminal Conduct (LCK Rulebook 9.2.8) This provision, in its structure and intent, applies to serious illegal acts that involve or are equivalent to criminal liability. The Committee determined that this case does not fall under that category.

Upon review, there was no evidence that violations such as tax evasion under the Punishment of Tax Offenses Act were established, nor was there any record of investigation initiation, criminal complaint, or criminal punishment. Therefore, imposing sanctions under this provision is not permissible under LCK rules.

  1. Whether it Constitutes Immoral Conduct or Damage to Dignity (LCK Rulebook 9.2.9 and Penalty Index) Considering the facts established during the investigation and the nature of the conduct, the Committee determined that it would be difficult to classify this case as “immoral conduct” or “conduct damaging dignity” subject to disciplinary action under LCK regulations.

Additionally, the matter went through a series of “tax-related procedures,” resulting in administrative action by the tax authority. It was confirmed that the player fulfilled the payment obligations arising from that action, and that all relevant administrative procedures have been fully completed. The player was also found to have carried out part of these procedures with the advice of a professional tax representative. The mere fact that such procedures were undertaken does not in itself establish illegality or grounds for disciplinary action.

The term “tax-related procedures” refers to processes under relevant laws whereby tax authorities determine or adjust tax amounts (reassessment), procedures where taxpayers request corrections (amended claims), and objection procedures such as appeals or administrative litigation.

Furthermore, based on the verified facts and timeline, the applicable statute of limitations for penalties under LCK regulations has clearly expired.

Comprehensive Judgment and Position The LCK Secretariat recognizes the importance of the social responsibility expected of professional players.

However, disciplinary action at the league level must be carried out carefully, based on clear rules and objectively verified facts. In this case:

The tax-related procedures have been completed, including administrative actions by the tax authority and fulfillment of obligations; No serious illegal conduct (such as tax evasion involving fabrication of evidence) has been identified at this time; There is no record of the matter leading to criminal liability;

Considering these factors, the Secretariat determined that imposing sanctions at this time would not be appropriate under the principles of LCK rule application.

Accordingly, the LCK Secretariat has decided not to impose any disciplinary action in this matter.

Future Standards The LCK Secretariat will continue to review similar cases based on consistent standards, taking into account violations of relevant laws, whether criminal liability is involved, the severity of the conduct, and its impact on the league.

However, matters that fall under routine administrative tax procedures—such as reassessments or additional tax payments—are generally not considered subject to LCK regulations or disciplinary review. That said, cases involving clear legal violations with criminal liability, or equivalent serious illegality, will be handled strictly in accordance with the relevant rules.

r/Art warden_ogg

Clouds, Stan Bryk, Digital, 2026

r/ClaudeAI Pale_Produce8443

RE: PDF automation via Claude Design

I've saved you the pain if you were thinking of seeing if #ClaudeDesign 'web' is useful for automating PDF design work. It can certainly be used as a design lab for cover directions, components, design system exploration and HTML/Code handoff, but not in anyway near useful as final production authority for PDF design of any real substance (yet),

r/Art TrendyArt1306

Floral Mother, TrendyArt1306, Digital Art, 2026 [OC]

r/therewasanattempt KplusN

To stay on the road

r/photoshop Remote_String6863

Need help with Scratch Disk

I know this question has probably been asked here a hundred times, but I've been looking around to see how I can clear the scratch drive and people keep saying to clear files on the computer but I cannot find them for the life of me. I am not looking to buy a external hard drive and am only looking to make a small edit so it's not really worth it. I honestly feel like a idiot.

r/PandR NFSF1McLaren

You're On With Ron

r/SipsTea BabyDollSinner

This should be applied in every country.

r/personalfinance urban_monkey19

How much should I keep in my HYSA?

I am 27M, making ~$8k/m post taxes and 401k contributions (10%). I have ~$110k in my HYSA that is yielding around 3.3% interest every year which I feel like is pretty shitty.

I wanted to understand how much should I keep in my HYSA and what should I even invest in the rest of the money?

I pay ~$1.6k/m in rent, $400ish for gas + insurance and $1k in food + miscellaneous things.

I am investing ~2k a month in stocks here and there and have invested ~$80k in market right now. Really looking for good advice - thanks so much! :)

edit: changed to 3.3% per year rather than month - sorry about that

r/SideProject prabhav404

Trying to diagnose a funnel drop-off problem… would love fresh eyes on this

Hey everyone, I'm a founder working on an AI skincare recommendation platform and I'm currently trying to diagnose a funnel drop-off problem.

The flow is: home page → skin quiz → sign-up → profile → recommendations → product page → buy now. I can see some users dropping at each step but the data doesn't tell me exactly why.

I've put together a short form and would really appreciate some honest input from fellow product wizards. Fresh eyes often catch things we miss.

If you have 5 minutes, I'd love your thoughts.

Form: https://forms.gle/vhxiyNBrYsPTSzrKA

Happy to return the favour if anyone needs feedback on something they're working on.

Thank you 🙏

Website: www.crea8.co.in

r/ProductHunters makeryi

I built a form builder to build different and beautiful forms

Hi everyone! I’m Ryan Yang, an indie developer. I’m thrilled to introduce Formdiy, my very first product!

For a long time, I was frustrated by form builders that either looked like they were stuck in 2006 or were far too complex to use. I wanted to build something that bridges the gap between high-end design and powerful logic—so I built Formdiy to do just that.

Whether you're creating a simple feedback form, a complex quiz, or a branded survey, Formdiy makes the process effortless and professional.

I’ve poured a lot of heart into this, Today Formdiy is LIVE on ProductHunt, if you like what I am building, an upvote on ProductHunt keeps me awake and shipping 🙏

https://www.producthunt.com/products/formdiy?launch=formdiy

r/SideProject t0xic_l3gi0n

Building a stateful API mocking SaaS — still early, would love your ideas and feedback!

I'm currently in the early building phase of Mockitor — a SaaS tool for API mocking, but with a twist: stateful mocking.

Most mock tools just return a fixed response. Mockitor lets you define endpoints that actually change state — cycle through responses, simulate real-world API behavior, and test edge cases that static mocks can't cover.

The core idea so far:

  • Create mock endpoints with configurable response cycling
  • Stateful behavior per endpoint (not just dumb static JSON)
  • Clean dashboard to manage and monitor your mocks
  • Built for devs who are tired of brittle mock setups

Still figuring out:

  • What integrations matter most (Postman? CI pipelines? CLI?)
  • Pricing model — usage-based vs flat subscription?
  • Whether teams need collaborative mock workspaces

This is still very much being built and shaped. I'd genuinely love to hear — what frustrates you most about API mocking today? What would make you actually pay for something like this?

Drop your thoughts below, every idea helps 🙌

r/arduino DatCheeseBoi

How should I go about using an external power supply while powering Arduino from USB?

Hello everyone, I'm working on a project that needs a good chunk of power, and the only affordable thing I could get is the pile of old PC power supplies in my closet. They work well because I need 5V and 12V. Thing here is that I have a MAX7219 8 digit 7 segment display to show some sensor readings, and my Uno R4 seems fine at first, but when I try to change the numbers too frequently the whole thing just loses it, and I believe it's due to the Arduino not being able to provide enough power on it's own. No worries I thought, I'll just need to pull that big PSU out early, so I connect the Arduino ground and the PSU ground since that should get them to equalize right? Run both, and the display is now working even worse. I check just out of curiosity and there seems to be about 0,4V difference between the two grounds. So here's the real question: can I make a setup where the Arduino is powered by USB from my computer and the attached devices are not, work? Or will I be forced to hack together a plug that powers the board straight from the power supply to get useable conditions. I've only done little projects on Arduino before, so feel free to state the obvious, I still have a lot to learn.

r/ChatGPT Utopicdreaming

Problem photo

Im probably a bad prompter but im genuinely having a hard time trying to get this image correct.

Its a picture of dancing tacos. I cant get the taco to turn onto its "side" so its D-shape instead of sunrise shape. Can someone help

Heres the sample pic.i wasnt feeling very descriptive sorry its generic

r/HistoryPorn BostonLesbian

Soviet volcanologists monitoring the eruption of the Tolbachik volcano on the Kamchatka Peninsula, Soviet Union, c. 1978. Photo was taken by the photographer Vadim Gippenreiter. [600 x 808]

r/SideProject MaleficentTraining69

Built a job rejection roast app because why not

Paste a resume + job posting, pick a tone (cooked / devastatingly polite / recruiter who skimmed your resume), get a Gmail screenshot.

r/Wellthatsucks Mission-Badger-4005

Damn, now I can’t use the sink, or the outlet…

r/leagueoflegends Best_Difficulty6362

Low prio games not going away after I have played them, what do I do

I have 5 15 minute low priority games on league, I’ve played 4 games of arena and when I noticed that it didn’t work I then went and played Aram, once again it didn’t work. Then I played summoners and it’s still on 15 minutes how do I fix this?

r/leagueoflegends sad_day_everyday

Friend list not loading

Out of nowhere my friend list is not showing any of my online friends and all of the friends that were online before this issue started are permanently stuck in game with timer just increasing even though they aren't in the game anymore.

Happen to anyone else?

r/DecidingToBeBetter presentnow0913

Can I become more positive?

I recently had a counseling session and took some psychological tests (TCI and MMPI). I only did it because my university offered it to students, and I thought it would be interesting. But the results showed that I lack positive emotions.

I was already somewhat aware of this, but hearing it from someone else felt different. I’ve actually been quite pessimistic since I was a child. Thinking that I might stay this way for the rest of my life makes me feel kind of pathetic.

I want to stop being pessimistic and become more positive. I’m afraid it might be something I can’t change like a fixed temperament. Things like self-affirmations or manifestation you see online don’t work for me at all. For reference, most of my TCI scores were very low, except for self-directedness.

r/personalfinance Himanshu_creative

Are job titles in insurance too vague to be useful?

I’ve noticed that titles like advisor, consultant, or even underwriter can mean completely different things depending on the company.

Recently came across a more structured breakdown of roles by actual function (sales, servicing, analytics, compliance, etc.), and it made things a lot clearer.

Do you think the industry needs more standardization in how roles are defined?

r/StableDiffusion Conradek68

Is this result possible using local models? What could they be using?

r/ClaudeAI Notalabel_4566

I built a practical guide for running real businesses with Claude (based on 35+ founder stories)

I read through 35+ Reddit threads of people actually building and running businesses with Claude — from local service agencies to solo SaaS founders.

I distilled the best patterns, frameworks, and hard lessons into one repo:

https://github.com/Abhisheksinha1506/ClaudeBusiness

What’s inside:

  • Agentic Entrepreneurship Framework (Vibe → Value)
  • How top founders structure persistent memory & daily workflows
  • Service business vs Micro-SaaS playbooks
  • Guardrails that actually matter (Infinity Barrier pattern)
  • Real archetypes that are making money right now

Inspired by real stories + the excellent Get Shit Done framework.

If you're serious about using Claude Code to build or run a business (not just experiment), this is meant to be your operating manual.

Feedback welcome. What’s working (or not working) for you?

Repo: https://github.com/Abhisheksinha1506/ClaudeBusiness

r/AbstractArt Gold-Lengthiness-760

Celdas de los Recuerdos.[OC]

r/ProductHunters diptanshumahish

Finally launched Bitgrain on PH

I finally launched Bitgrain on PH, This is my first launch : ) Didn't spend anything on this tool, and well let's see how it goes. https://www.producthunt.com/products/bitgrain?launch=bitgrain
Here's the link to my product!
This is a tool for for the people dealing with graphics and images : )

r/ProgrammerHumor Salt_Election7313

scamAltmanRedditAccountInTheWild

r/personalfinance Vigneshxo9

Should I cancel my e-mandate to avoid repeated ECS bounce charges, or will it make things worse?

Hi everyone,

I’m based in India and have a savings account with ICICI Bank. I’ve registered e-mandates (ECS/auto-debit) for a few personal loans.

The problem is, due to insufficient balance at times, the auto-debit fails. Every time this happens:

  • ICICI charges me around ₹590 as a bounce fee
  • The loan provider also charges ~₹500 as a penalty

This sometimes happens multiple times a month, so I end up paying a lot in penalties on both sides.

I’m considering cancelling the e-mandate from my bank side and instead paying EMIs manually.

However, I’m unsure about the implications:

  • Will cancelling the mandate stop ICICI charges completely?
  • Will the lender still try to debit or just charge penalties anyway?
  • Will I still be charged by the loan provider even if I plan to pay manually?
  • Does cancelling the mandate negatively affect my loan agreement or credit score?
  • Has anyone here done this — is it a good idea in practice?

My goal is to reduce these repeated penalties and manage payments better.

Would really appreciate advice from anyone who has dealt with this situation.

Thanks!

r/ProductHunters iLikeYouWorld

AnyDrop is going live today! A cross-platform AirDrop alternative. Would love your support!

Hey everyone,

I just launched AnyDrop today on Product Hunt!

It’s a completely free browser app that acts like a universal AirDrop. You can instantly share files, chat, and sync a live notepad in real time across all your devices. Everything transfers directly from device to device - no cloud storage, no apps, and no sign-ups required and 100% free.

You can check it out and support the launch here: https://www.producthunt.com/products/anydrop-3?utm_source=other&utm_medium=social

I would really appreciate any feedback, comments, or upvotes to help keep the momentum going today 🙏. Also, please drop your own PH links in the comments, I am more than happy to return the favor and support your projects too!

r/PhotoshopRequest Past-Negotiation7025

For $10 Can you swap my face from 1 to 2 and touch ups

My family looks happier on the second picture, but I have 420 eyes. Willing to pay $10 for face swapping and touch ups

r/ClaudeAI Emojinapp

Used Opus 4.6 to build a native Swift iOS charity app for therapy preparation. Here is what it handled.

Prelude is a therapy prep app I built for the mental health community. Fully offline, zero knowledge, free forever, no ads, no IAP.

Opus 4.6 did significant work on this project. Not just code generation. The real value was debugging complex voice agent behavior, making architecture decisions for the on-device AI pipeline and solving problems that required genuine reasoning about system design.

The app has a session agent that conducts a voice reflection before therapy, a brief agent that generates a structured summary from the reflection and a weekly emotional trend graph.

My therapist said it improved the quality of our sessions. We had all the topics listed clearly in briefs we could browse through together.

Opus handling the hard parts made this possible as a solo build.

https://apps.apple.com/us/app/prelude-therapy-prep/id6761587576

r/whatisit Melodic_Accident_753

White

I was walking along the VA center and a young man walking the other way asked me if I had any white. I said I had no idea what he was talking about. What was he talking about? He didn’t appear to be under the influence.

PS. Sorry, if this is the wrong place to ask.

r/whatisit Dudulmuncher

What is in my wall?? Feels like silicone

Just pulled back my bed and noticed this hole..

r/Art zuccenoo

Maria from SH2R with that twitter trend, zuccenoo, digiart, 2026

r/TwoSentenceHorror Affectionate-Pie-579

I was in a very bad mood so I went to my wife and gave her a nice warm hug

She pleaded and begged me to let go of her neck

r/leagueoflegends saibot0___

Can i PLEASE stop being secondroled every game?

I swear, climbing through emerald is already a mental challenge as it is, but being forced into TOPLANE over HALF OF MY GAMES, is a misery i was not ready for.

I would rather wait 10 min for one good game than 1 min for a secondrole nightmare.

edit: yes mid main

r/AI_Agents JAmanRao

[Hiring]

Hi , we are an agency which provides multiple client based services and we are building connections with freelancers who are willing to work with us ,we have a consistent team of 20 individuals trying to get us clients and if you agree to work under us ,we would try to provide you projects and since it is commission based 30% of client pas goes to the sales person 20% to our agency and remaining to you ,the only thing you do is just stay in touch with us for a long term .

We want only AI agent developers or people who are in touch with AI related building and all right now

Note - Only serious people dm me with what skills do you know and what have you done till now others will be ignored don’t just type hi ,interested .I need to know what can you do .

r/SipsTea WorryThink6233

Scarlett Johansson uses black magic to make a woman dissappear behind her

r/SideProject DeanosJoint

Everyday Dose 20% Off Discount Code

I’ve tried Everyday Dose mushroom coffee, and it’s one of the better options if you’re trying to replace or cut back on regular coffee without completely losing the routine. It’s a blend of coffee, functional mushrooms like lion’s mane and chaga, plus things like collagen and L-theanine, which are meant to support focus and smoother energy.

Compared to normal coffee, it has significantly less caffeine and lower acidity, so it tends to feel more stable without the usual jitters or crash.

Overall, Everyday Dose is a good fit if you want calmer energy, fewer jitters, and a more wellness-focused morning routine. If you rely on strong caffeine to function, it might feel too light — but if you want something more balanced, it’s one of the better mushroom coffee options out there.

You can get a 20% off discount on your order as well here. Hope it helps!
https://www.everydaydose.com/ANDREWKORNACKI

r/SipsTea shineonyoucrazy-876

Bro..that friendly fire was crazy. Who trained these idiots?

slow the video down. This is wild.

r/whatisit ISV_VentureStar

Metal contraption with fins hanging around it

Weird metal contraction about 3 meters tall with ~40 metal fins hanging from it. It's connected to (I'm assuming) an antenna next to it.

Found close to Navajo national monument, Arizona, US.

r/30ROCK OwnEstablishment1194

Pete's dad was in the masons with Dave Garroway

Has to be the strangest reference.

It's more obscure than Warren Moon in 1996

r/artificial ChatEngineer

Deepfakes don't have to be believed to work. They just have to consume the response budget.

A framing I keep coming back to: a synthetic image or video can succeed even when almost nobody believes it.

Not because it changes minds directly, but because it turns attention into the attacked resource.

If a campaign, newsroom, platform, or company has to stop and answer the fake, the fake already got some of what it wanted:

  • the defenders spend scarce time verifying and explaining
  • the audience gets forced to process the claim anyway
  • every debunk risks replaying the artifact
  • institutions look reactive even when they are correct
  • the attacker learns which themes reliably pull defenders into the loop

So detection is necessary, but not sufficient. The second half of the system is distribution response.

A few practical design questions I think matter more than the usual “can we detect it?” debate:

  • Can we debunk without embedding, quoting, or rewarding the fake?
  • Can provenance signals move suspicious media into slower lanes instead of binary takedown/leave-up decisions?
  • Do newsrooms and platforms track attention budget as an operational constraint?
  • Can response teams separate “this is false” from “this deserves broad amplification”?
  • Can systems preserve evidence for verification while reducing replay value for the attacker?

The failure mode is treating every fake as an information accuracy problem when some of them are closer to denial-of-service attacks on attention.

Curious how people here would design the response layer. What should a healthy “quarantine lane” for synthetic media look like without becoming censorship-by-default?

r/Art Elisheva_Nesis

Artists life, Elisheva Nesis, pencils/paper, 2020

r/whatisit bigbootybanditlover

Found this awhile ago is this what i think it is

Theres two straps and i don’t know the function of either… found this in an old room… So yeah should i wash my hands now?

r/DecidingToBeBetter Hungry-Storage6789

i need a bit of help, drinking

i am trying to do better, but i keep fuqing up
i dont see a world without booze,dont know what to do

r/homeassistant Marathon2021

So who else has created a user account for Claude and then given it a long-lived access token into your environment?

Having some fun with this right now. Incredibly powerful, it understands the API completely and I've verified that it has both read and write access. I asked Claude Cowork to run a job weekly to look for battery sensors no longer reporting, errors in the log, etc. and email me a report on Saturday morning.

I actually was able to fully replicate an ai_task automation I had to check to see if the trash bins were out on the curb on trash night via a Gemini AI call ... but now, I can just describe what I want Claude to do (now that it has full API access) and it just does it.

"Hey Claude, on Wednesdays at 10pm take a snapshot from the trash_bin_camera, and if there are no bins out send me a notification."

r/Frugal reddit-newbie-2023

Delayed Gratification with Netflix

Netflix subscription is expensive, especially since I do not like majority of the movies/shows on it , but there is occassionally that one odd show/movie I like to watch on it .

So I let my watchlist stack up for 3 to 6 months and then purchase a the subscription for a single month when I have holidays and work load is less.

That is my frugality tip :) I hope subscription services give us an option to make it seasonal instead of monthly/annual . At the moment, I have to manage all this myself.

r/AbandonedPorn allesumsonst

Pipe organ

r/SipsTea Shiloh4Ash

This tat got me confused.

r/aivideo sickabouteverything

Program

r/LifeProTips Stunning-Rush-6468

LPT: Spend time reviewing what search traffic actually reached your site, not just what you published or promoted.

Most effort goes into what goes out, not into checking if it landed. A page that stopped getting traffic three months ago is usually a silent signal something broke, not that interest dried up. Has anyone found a reliable habit for catching these drops before they compound?

r/DecidingToBeBetter LateBee9327

After doing drugs for a while I've decided to stop. But my neighbours keep calling me names when I leave my house..

How to not be bothered by this as its making me not want to leave my house

r/meme WorryThink6233

Time for a round of hide the zucchini

r/AbstractArt tacotakozs

“Out There”

Acrylic on Wood.

r/SipsTea Upset-Nerve-6554

Results of infidelity 😭

What would you do in this case?

r/StableDiffusion yellow-red-yellow

How to use Flux2Klein to fix deformed limbs, especially hands and feet?

When I load an image containing deformed limbs, flux2klein almost always fails compared to qwen2511. I use a mask to circle the incorrect limbs, and prompts such as "fix hand", "fix foot", "generate correct hand", "generate correct foot", "five fingers", "five toes", "remove extra fingers", and "remove extra toes" almost have no effect.

What is the correct method?

r/homeassistant Jasonjames71

Shelley hot water control

Hi
I have solar and batteries and have an oil boiler for heating and hot water

This is my first summer with solar and I’m sending loads to the grid at the moment so I’ve just got my immersion heater working for my hot water.

I’ve bought an ogemray to control my immersion and I was wondering if I could put one on my oil hot water controller as well!?

I know Shelley has their own software but I was thinking of getting a Home assistant setup so I can do my solar and everything all in one place !?

Any help appreciated 😁

r/LocalLLM thatguyjames_uk

Help with local set up

Good morning all

So I have a 16gb 5060ti that I use to make ai images via comfyui and also have a 2nd gpu of a 12gb 3060 rtx.

About 4 months ago I played a little with LM studio and used my project settings from open ai (chatGPT) and trained a local chat on there.

I'm now looking again at local LLM. Ideally to make some money I did use chatGPT a year ago to make some books on Amazon to sell. Is there still a market for it?

Any ideas people could give to use with my set up?

r/ClaudeCode JashobeamIII

Claude Apparently Doesn't Need Me Anymore

After months of using Claude Code it seems it just decided it can answer itself and doesn't need me in the loop anymore lmao.

Ok before all the safety ppl come saying it should ask for permissions blah blah, as far as I know, it still asks me before actually pushing anything to git or making any significant changes. I think this is a simple conversational switch up that makes it look like it's talking to me and answering for me.

I found this little glitch funny though bc any time previous, if it would ask a question phrased like that, it would always wait for me to answer before proceeding. I changed no settings though so I'm guessing this is just a linguistic switch up bot not actually taking me out the loop.

I hope.

r/painting SufficientBite1261

60x70cm Acrylics on canvas

r/mildlyinteresting wuddinup

This grocery store had salmon sashimi free samples

r/leagueoflegends CLR833

PSA: Do NOT change your input to WASD past 10 minute mark. You will be locked in and unable to change back.

Learned it the hard way.

I just though "Oh, is it available for ranked now?"

Oh yeah it was.

-22LP

Since the input is supposed to be locked in past 10 minutes, they probably shouldn`t let us change it to WASD then...

Edit: I get it, I'm stupid, sorry for daring to make a mistake and try to warn others about it. Yall act like you read the entire terms of service.

If it's supposed to lock after 10 minutes, do not let me change it after 10 minutes.

r/ClaudeAI ComfortableAnimal265

Best way to move a long Claude project chat into a fresh chat without losing context?

I’ve been using one Claude chat for about 2 weeks for a large project, and it’s starting to get really slow/laggy on my Windows PC in both the browser and desktop app. Weirdly, it still feels fine on my iPhone.

I don’t want to lose all the context and start over. I tried asking Claude to “print out the full context” and moving that into a new chat, but the new chat didn’t really understand the project the same way.

For people working on long projects, what’s the best way to migrate context into a fresh Claude chat? Do you use Projects, a handoff doc, summaries, pinned requirements, exported files, or something else?

Looking for an actual workflow, not just a complaint about performance.

r/Strava Suitable_Candy_1161

Nearing a week since ive been waiting for an activity to register, what do I do?

Ran Tuesday my first session of week 9 c25k.

Strava not working since then and just loading forever. Tried clearing cache, tried force stopping. Should I go ahead and reinstall or is there a solution?

Ran Thursday while the app is still loading Tuesday's run so I couldn't even log that. Quite the bummer for my final c25k week lol

r/Art GuRlY_PoPs_

Cute Furina, Axis, pen, 2025 [OC]

r/Anthropic LoudStrawberry661

Is anyone else having trouble with the "Export Data" feature right now?

r/AskMen E_mi_manchi_tanto1

Why is it so hard for me to find male friends?

I am a man of almost 27 years old and in my entire life I have managed to have only one true friendship with a man. For the rest, I've always only had female friends.

My problem is that I genuinely don't know what to talk about with men, as I often talk about love, poetry, and feelings, but I feel like I can't do it with them. Please don't take what I say as an insult, but conversations generally seem more superficial to me.

What am I doing wrong?

r/Art Naohiko-Kuriyaki

epica, NAOHIKO KURIYAKI, coroledpecil/Crayon, 2026

r/AskMen peytonl4ttin

how do you guys enjoy things being done, in bed?

i used to have a fairly consistent sex life in the beginning 3-4 months of my relationship, after that it kinda just faded from 3+ times a day, to once a day, to once a week and so on.

boyfriend said it's primarily because i don't "put in work" as in i've never been one to be on top nor do i exactly know what to do to please him while on top. up and down? back and forth? lift myself up? or no? etc.

also, hand placement while on top ? this sounds so stupid to be asking but i am embarrassed to ask him upfront and don't want him to look at me like im stupidly inexperienced or something. overall not something i want to ask upfront.

also, is it enjoyable for me to be rubbing my clit while were having sex or is that rather offensive as if you aren't doing a good job? basically, any information as to pleasing my boyfriend further, would be nice.

edit: i had to re-upload this post, don’t mind the mod 😭

r/ClaudeAI National_Software_16

Claude help

Title: Claude image limit workaround? (Claude Plus user)

Hey everyone,

I have Claude Plus and I’m running into a limitation where I can only upload 100 images per chat. I’m trying to have Claude generate a really accurate calculus mock final based on about 250 images of my notes, quizzes, and past exams.

The problem is when I start a new chat to upload the next batch, Claude doesn’t remember the previous images, so it can’t build a fully accurate final based on everything.

Has anyone found a workaround for this?

I’m basically trying to:

- Use a large set of images (~250)

- Get Claude to understand patterns across all of them

- Then generate a realistic mock final

Right now the memory reset between chats is killing that.

Any ideas? Maybe a way to compress/summarize everything or chain chats together more effectively?

Appreciate any help.

r/SipsTea KarmaSorta

Erika Kirk rejected from Rhythm Nation

r/Futurology bigB3235

Scientists are rapidly developing the gear that os necessary to care for a veteran--but is it fast enough?

Hello, 76 custom veteran here vietnam (vieteran). I am always grabbing the new tech to enhance my stay at home and on my couch. I used to be the type of perspn who would fall on my knees laughin and cryin if i saw someone wearing an adult diaper 😐. But as soon as i became a veteran i realized i could do what i want and i grabbed that sweet tech which was new at the time to enhance my home experience and embrace my new life on the couch and not having to get up or go to vietnam. However the tech is kinda underwhelming lets say and there are several complications. Im thinkin if we can advance so many techs like video games and other things that are less beneficial than diapers, when are we (diaper veterans) gonna get our turn at the tile and hit a home run? I want smooth clean and pleasant feeling and smelling self-lubricating diapers for instant and easy usage with levers, maybe even a reusable electric diaper for the planet too so she can get a piece of the pie too. My wife is signing when we get a petition too. Help me understand

r/SipsTea Chance_Bid_1869

Scientists discover cockroach milk is four times more nutritious than cow milk

r/Art chimpanzeefromthezoo

The Invisible Man, Merlin, Digital, 2026

r/goodnews lightning_twice

BREAKING: New Pharmaceutical 'ReZyne' Clinically Proven to Make America Great Again

r/AskMen Trypophilia2019

What is the dumbest myth you’ve ever thought/heard about the female body?

What’s something dumb guys believe about women? Like the whole saggy lips/roast beef after lots of sex bullshit.

r/meme thattheydont

Kfc Staff: Breast Or Thighs? Me: Personality Kfc Staffs Reaction:

r/midjourney stoicdamc

Midjourney busted? Can't replicate v6 prompts + styles

Hi all, I am having a concerning issue. I am no longer able to generate images that successfully met my requested criteria and style using the exact same prompts & settings that created them months ago.

Example 1 (Good): An example of an image I generated that I thought was fantastic, generated in February 2025 using V6.1:

https://cdn.midjourney.com/9a7e2558-9ccd-46fb-8b51-3e73cdbc661b/0_3.png

https://preview.redd.it/7jm2e4g84gyg1.png?width=1024&format=png&auto=webp&s=5c5644c09cba36230e9dc65a2d5cdddc3dd5d690

"Pixel art icon of a bird of paradise house plant, white background, dark green leaves, no flowers, low-resolution 32-bit pixel art --v 6.1"

I can't recall if I used a moodboard but the website UI now does not show me whether a moodboard was used in every case, it seems to be a bug.

Example 2 (Bad): An example of an image I generated today, April 2026, using the exact same prompt and version:

https://preview.redd.it/w740f4eb4gyg1.png?width=1024&format=png&auto=webp&s=fbf4fefd7edefcbb91b8e0fc1988864533c52980

https://cdn.midjourney.com/5c91c15e-ce92-4173-8697-f72fcbb09ae6/0_2.png

"Pixel art icon of a bird of paradise house plant, white background, dark green leaves, no flowers, low-resolution 32-bit pixel art --v 6.1"

---

It's not even in the style of pixel art and I say it twice in the prompt 😭😭😭 Also, "no flowers"... still show flowers.

IS MIDJOURNEY BUSTED??? what's going on man!?

r/goodnews MRADEL90

Phone-Free Social Events Grew by 567% Led by the Generations Who Didn't Have Them Until Adulthood

Members of Gen Z and Millennials are attending phone-free experiences 567% more often across the globe, signaling a major shift in how people want to gather.

In a world shaped by algorithms and constant visibility, Eventbrite data shows that the generations having grown up with limited to no social media and smartphone use, and which then lead the charge to its ubiquitous adoption, is now leading the world back—away from constant connectivity.

r/meme Icy-Situation-2919

Meme

r/me_irl Several_Sandwich_732

me_irl

r/todayilearned DrakeSavory

TIL that Pope Leo I was the earliest pope the be called "the Great", made a Doctor of the Church and buried in St. Peter's Basilica. According to legend, he also stopped Attila the Hun from attacking Rome.

r/AskMen FreedomDirect4507

why can't i stay up in bed?

So i'm 19M, only got into sexual activities in the past year and tried them with 2 people . Every time i've tried anything sexual, i've had weird but consistent patterns getting/maintaining an erection depending on the specific situation. Leading up to anything (Kissing, touching) just being close in general, i'm a rock. Now if we tried head or a handjob, i stay up for at most 5 minutes then it goes away. If we tried sex, it goes soft almost instantly. I've only had "sex" (if you could even call it that) for at most 10 seconds 2 times. I'm confused because i can fully function by myself , but when I'm with a girl these patterns show. I'm really discouraged because i want to know what good sex or just normal sex in general feels like. I really don't want to hear the common "make her feel good and that's enough" replies. That's basically been my only way to please them so i've been doing that, the women i've been with have verbally said how much they wanted me inside of them, and it's killing me that i cant preform basically what i've been put on earth to do. Does someone have anything?

r/DecidingToBeBetter Anonymess13542

Anyone else feel like concrete in their stomach after eating McDonald's?

I decided to stop eating fast food, except on rare occasions. I used to eat at McDonald's once a week. But the last two times, I ate it for lunch then went to work in the evening. Big mistake! I didn't feel fueled by the food at all, I was tired and wheezing which I am normally NOT at work. And...I legit felt like there was concrete in my stomach the rest of the day. I wasn't hungry for anything else. I guess it shouldn't surprise me since it has so many preservatives that it can last for YEARS in some people who forgot to throw it out (I'm sure you've seen those videos). Anyone else feel that way?

r/PhotoshopRequest ttvlemoneideu

Changing time of day

Could someone change this picture to night and make it look like it was during the concert? Could you please also change the screen in the back to the second slide of Bruno? Thanks so much!

r/meme Ok-Thanks993

Everyone can relate to this. That Morning Alarm 😂

Funny meme.

r/oddlysatisfying djinn_05

Sanderling birds running away from waves

r/painting Sephtis_blut

Some portrait studies I did almost an year ago (Watercolors)

Unfortunately I hadn't realised that the reference I used for the second girl(with ginger hair) was AI, at that time.

Also her jaw looks kinda weird because the paper on that area got damaged and whenever I tried to fix it by trying to make the jawline even and tried applying the paint, it would start spreading. So I left it as it was.

r/ChatGPT SpookieOwl

Currently I'm learning UI design for games (light text over dark and vibrant backgrounds.) I'm quite impressed with the mock-up that ChatGPT came up with.

r/personalfinance Numerous-Street-5298

Best place for a personal loan of 20k with a 652 credit score

Looking to pay off credit cards and do some home improvements

r/LocalLLaMA RowGroundbreaking982

Pocket TTS Multilingual Update

Hey everyone.

Pocket TTS finally released their multilingual model.
It support 6 language English, French, Spanish, German, Italian, Portuguese.
And each language is separated model.

I'm making modification for the onnx exporter to support multilingual version.
It based on KevinAHM onnx exporter and VolgaGerm Pocket TTS cpp optimization.
It also selective quantized the nodes to int8 for much faster and stable generation.

My initial benchmark with int8 model:
AMD Ryzen 9 7950X: latency around 30ms, generation speed up to 13x realtime.
Helio G99: latency around 100ms, generation speed 2.5x realtime.

Here is the exporter script : Pocket TTS ONNX

And I have sample runner in unity engine : Pocket TTS Unity

If you want easy way to test it, I already have Android version available.

r/comfyui NefariousnessFun4043

klein inpaint in masked area not working

so i have a inpaint workflow for klein , i have 2 images image 1 is the location with multiple chairs, and image 2 is the person , when i mask the area the particular chair that i want the character to be seated in and write the prompt "Place the person from image 2 exactly into the masked area of image 1.Align the person’s body to match the perspective and angle.The person must be sitting naturally and properly.Scale the person in same size as the people in image 1Keep the original environment, composition, and camera view from image 1." it doesnt put the person in the place doesnt scale infact half the body is missing and the background is recreated and masked area is has some weird regenration ......am at my wits end trying to get this to work. ...any suggestions any working workflow is welcome

r/PhotoshopRequest angefacee

Can someone please remove the text and number? If possible the bottles on the counter too please

Lost my baby yesterday. Even though the caption is very much in our type of humor I’d like a clear image of just us when we were both young. Thank you

r/SipsTea PersonalityNo7391

Emotional damage 😂😭

r/SipsTea JosephineMargaret

That ruined the vibe instantly

r/ChatGPT cosmic-comet-

I built a Chrome extension to turn large codebases into structured markdown for AI workflows.

A plugin to make working with large codebases in ChatGPT and Codex less painful.
You select a project folder and it does a few things
filters out junk like node_modules builds logs assets
keeps only relevant code files splits everything into manageable markdown bundles adds instruction templates depending on what you want to do.

Then you upload the zip to ChatGPT and it generates a structured plan that you can send to Codex.
So, Instead of letting the model spend time figuring out your repo structure every time, you give it a clean and consistent input once.

Tested it on a Unity project around 57GB with more than 100k files and it handled it without crashing. There is filtering and size limits in place so it skips heavy binary files.

You can customise what to include or exclude in default-config.json

Installation guide is in the Readme.md

Repo:
https://github.com/danyalahmed1995/PromptPipeline/

r/SideProject Weekly_Following5194

Built an app that shows you how much your savings rate has slipped over time — would anyone actually use this?

I've been building a personal finance app, but with a pretty specific focus - instead of budgeting or expense tracking, it's aimed at people who just want to see if their money is rotting in a low-APY account without realizing it.

The main thing it does: shows you your actual APY per account, flags if your rate has quietly dropped, and tells you how much you're losing annually by staying where you are.

Quick question for y'all — do you actually know your current savings APY off the top of your head? And has anyone ever been surprised to find out their rate had slipped without them noticing?

Not promoting anything, genuinely trying to understand if this is a real problem people care about.

r/ChatGPT daisy_thegoodgirl

ethical AI options?

my boss wants me to start using AI to generate ads for social media marketing purposes. the research shows that my generation and those younger than me generally distrust businesses that post AI generated content. my boss does not agree with that research, telling me that people are more likely to trust AI generated ads because of their visual appeal, and in our last meeting, i put two and two together that he works for another company that is going heavy on AI use.

what i am really trying to figure out is if there are any options that are more eco-friendly, like maybe don’t use a ton of water to cool their system down or use renewable energy instead. from what i’ve seen, the only one i can find that MIGHT be a solid option is ecosia, but they do not generate images. i’m not debating the positives and negatives of AI here, just trying to find a reasonable middle ground for doing what i am told and feeling less horrible as a human being.

thanks in advance for your input! ✨

r/raspberry_pi Jumpy_Interaction878

How do I automatically change screen resolution on connection by Pi-Connect or Reboot?

Hello! Pi beginner here!

Recently I picked up a Pi 3 B+ for about 10 bucks, and its worked for basic things that I would expect out of a Pi.

However I have noticed that whenever I connect via Pi-Connect, the screen resolution always gets reset to my main monitors resolution, which is 1440p.

I want the Pi to just always display in Both Pi-Connect and any plugged in external monitor in 640x480, since I really do not need more than that.
I really don't want to have to change the resolution back to it every time it decides it wants to be 1440p, or every reboot. I Just need it to be locked to 640x480.

I currently do not have any monitors plugged into it as I intend to just use Pi-Connect for everything unless necessary.

According to the terminal, I am using "Debian GNU/Linux 13 (trixie)" (64 Bit), any help would be greatly appreciated!

r/Whatcouldgowrong ms_okabe

WCGW jumping on the front glass of a car

r/meme Equivalent-Truth4500

Better than original

r/homeassistant guluguluchui

zigbee or wifi mqtt for people tracking radar sensor?

I recently acquired a 60GHz 4T4R mmWave radar module. It can detect the XYZ spatial coordinates of multiple people within a 6-meter diameter and supports setting up multiple detection zones, categorized into exclusion zones (shielding areas) and sensing zones. Its performance is quite impressive and accurate.
I’m currently designing a solution to integrate it into Home Assistant (HA). Since I want to capture movement trajectories, the data transmission frequency for positioning will be quite high—perhaps two updates per second? Given this requirement, would WiFi MQTT or Zigbee be a better choice? I have a Zigbee dongle on hand, but I suspect WiFi MQTT might be more suitable. Also, if I deploy these sensors in multiple rooms, would a WiFi Mesh network be more effective?

r/DunderMifflin Noneedforname_077

I’ve just watched like for fifteenth time Goodbye Michael and I’m crying as the first time

Yeah, that’s all. Goodbyes are a bitch

r/meme Ok-Thanks993

Most of us Can Relate

r/homeassistant Last-Organization274

New to the HA community

Very new to Home Assistant!!

Just started last week and have been diving in. I’ve got HA running on an old NUC and I’m keen to integrate as much as possible over time.

I would like to start with
Tesla Model Y
FoxESS battery
Meross garage door opener
Solar system
Hikvision CCTV + doorbell (with indoor unit)
Actron ducted air conditioning

Next goals:
Bring in existing LED lights
Add kitchen appliances / general smart control

Would love a bit of a starter roadmap from the community
what should I prioritise first, and what integrations or approaches tend to work best?

I’m based in Melbourne, Australia, so any locally available gear or AU-friendly suggestions would be awesome too.

Cheers!

r/ChatGPT the_outer_realms

Will Smith Eating Spaghetti Stream

Somebody toucha my spaghet

r/SideProject ssshooter

Meet Mind Elixir: The first MCP-enabled mind mapping tool that keeps your data local.

Mind Elixir App MCP

Hi everyone,

I've been building an open-source mind mapping engine for about 7 years now. It started as a hobby project, but I finally got around to wrapping it into a proper desktop app. I'm calling it Mind Elixir App.

I built this because I'm tired of every productivity tool forcing me onto their cloud. When I'm brainstorming or using AI to expand on ideas, I don't necessarily want that data living on someone else's server.

Here’s the deal:

  • It's 100% local. No accounts, no "syncing" unless you set it up yourself. Your data stays on your machine.
  • The app is tiny. About 10MB. I spent a lot of time keeping the bloat out so it actually feels fast.
  • You bring your own AI keys. Instead of paying a subscription for "AI features," you just plug in your OpenAI or Claude key.
  • It supports MCP (Model Context Protocol). This is a big one—it lets the AI actually understand the structure of the map instead of just treating it like a wall of text.
  • I ditched folders for a tag-based system because my brain doesn't work in hierarchies anyway.

Who might find this useful:

  • Developers: If you're mapping out a messy architecture and want to use AI without sending your proprietary logic to a third party.
  • Students: If you're trying to summarize a mountain of research papers or lecture notes.
  • Professionals: If you need to organize project plans or meeting notes without leaking company data.

It's available for Windows (stable-ish) and Mac/Linux (very much beta). I'd love to hear what's missing or what breaks when you try it.

(P.S. Mobile version is also in the works!)

Check it out here: http://app.mind-elixir.com/

Mind Elixir App AIGC

r/AskMen Bearchill

What are some of your favourite words for your feelings?

As I get older I'm learning more words to describe different feelings. Some of my favourites are melancholy, wonder, and envy. What are some of your more useful emotion words?

r/SideProject UnfazedTank

I built a decentralized AI inference network and just launched today — looking for early feedback [infercoin.io]

Hey r/SideProject — just went live with something I've been building for a few months.

What it is: InferCoin connects businesses that need cheap AI inference with people who have gaming PCs sitting idle.

How it works:

  • Businesses submit AI jobs via API, pay per job (no subscription, no monthly minimum)
  • Miners install Ollama, run the launcher, and earn IFR tokens for completing jobs
  • 8 models supported — from lightweight 3B all the way up to 70B

Where I'm at: Just launched today. I'm the only miner right now and have no business customers yet. Looking for early feedback, first users, and honest opinions.

Why I built it: OpenAI bills add up fast for small devs and indie founders. Wanted something cheaper and more open.

Happy to answer any questions.

👉 infercoin.io

r/AI_Agents Upbeat_Reporter8244

Metta-4 – Learn from Anything. Ship Nothing You Don’t Own.

Metta-4, a Python synthesis engine that feeds JL Engine. It takes open specs — MCP servers, A2A agent cards, skill directories, and similar inputs — and turns them into native artifacts.. .jl stubs as my "agent project runs in Julia. It brings back tool fragments, and agent cards/ Abilities ect.

It checks license compatibility before synthesizing and attaches provenance to every output so you can review exactly what was used before shipping.

So converting open capabilities into something native, inspectable, and actually owned by your system instead of copying code or relying on opaque prompts.

The direction feels promising, Initially my system just try to solve it like a puzzle. If it came up with a problem it didn't have a set of tools, it would plan and make... fail try again until it got it right and solved the problem. Happy to share short snippets in the comments if people want to see what the generated output looks like.

Would love feedback from anyone who’s wrestled with provenance, licensing, or “where did this code come from?” problems?

r/AskMen Separate-Ad-6209

I haven’t had a conversation with any girl/woman since maybe 4years or longer. What do you think? What do you think?

I will turn 19y in 8days. Ask anything.

How rare is that?

r/artificial ChatEngineer

Deepfakes don't have to be believed to work. They just have to consume the response budget.

A framing I keep coming back to: a synthetic image or video can succeed even when almost nobody believes it.

Not because it changes minds directly, but because it turns attention into the attacked resource.

If a campaign, newsroom, platform, or company has to stop and answer the fake, the fake already got some of what it wanted:

  • the defenders spend scarce time verifying and explaining
  • the audience gets forced to process the claim anyway
  • every debunk risks replaying the artifact
  • institutions look reactive even when they are correct
  • the attacker learns which themes reliably pull defenders into the loop

So detection is necessary, but not sufficient. The second half of the system is distribution response.

A few practical design questions I think matter more than the usual “can we detect it?” debate:

  • Can we debunk without embedding, quoting, or rewarding the fake?
  • Can provenance signals move suspicious media into slower lanes instead of binary takedown/leave-up decisions?
  • Do newsrooms and platforms track attention budget as an operational constraint?
  • Can response teams separate “this is false” from “this deserves broad amplification”?
  • Can systems preserve evidence for verification while reducing replay value for the attacker?

The failure mode is treating every fake as an information accuracy problem when some of them are closer to denial-of-service attacks on attention.

Curious how people here would design the response layer. What should a healthy “quarantine lane” for synthetic media look like without becoming censorship-by-default?

r/Adulting ShipAdministrative92

I have an announcement

GIVE ME SOMETHING TO BREAK!

GIVE ME SOMETHING TO BREAK!!!

CAUSE I GOT THE CHAINSAW!!!!

r/Art Beginning_Advisor575

Kaneki kakuja, Akash, Red pen on lined paper, 2026 [OC]

r/photoshop Professional_Fix8050

This might sound stupid but how do i achieve this look?

I took this pic of my shirt on my phone and can’t do it again idk how I did but it looks super cool and like it was taken on an old phone or something

r/meme Ok-Thanks993

Those Who Addicted to Watching Night Movies

wow😅

r/Anthropic Neel_MynO

Are they selectively releasing Opus 4.7 in Claude.ai chat with 1M context window?

https://preview.redd.it/h2i93r7x0gyg1.png?width=1248&format=png&auto=webp&s=9c8b411fcf385664bbe81ed93bf3d2cb3d3b264f

I was running a small research on how to replicate the research behavior of Opus 4.6/4.7 in Claude Code, and there was a point in it that said they are releasing a 1M context window (right now it's capped at 200K) in Claude.ai chat selectively.

Context window. CC on Opus 4.7 with 1M context exceeds the standard Claude.ai chat context (200K with extended; 1M is selectively rolled out). 

Is anybody here who received access to the Opus 4.6/4.7 1M in claude.ai chat?

r/funny MysteriousAvocado510

I wrote this as a prank, now the teacher thinks I'm a freak.

r/findareddit Open-Management8520

wlw/bi subreddit?

i'm looking for a wlw and bi subreddit (both in one). looking for ppl like me, would be nice if the reddit would be for teenagers but if not i don't mind.

r/n8n malbagir2803

Help! Evolution API throws 400 Bad Request [object Object] only on the 2nd item inside an n8n Loop.

Hey everyone, I'm pulling my hair out over a weird interaction between n8n and Evolution API.

I have a workflow that processes multiple contacts using the "Loop Over Items" node. Inside the loop, I'm trying to send a presence update (typing/paused) via Evolution API. I've tried using both the Evolution API Community Node and a manual HTTP Request node.

https://preview.redd.it/g9hjbd1vagyg1.png?width=1920&format=png&auto=webp&s=f5668006ed7f97ba8c9e718de2d72a2b6e04a2d8

Here is the weird part:

  • Item 1: Executes perfectly.
  • Item 2 (and onwards): Fails with this error: 400 - {"status":400,"error":"Bad Request","response":{"message":["[object Object]"]}}

It seems like on the second iteration, n8n or the API parses the target number/delay parameter as an Object instead of a primitive String/Number, even though it's the exact same node executing it.

What I've tried so far (and failed):

  1. Forcing the data type to string using {{ String($json.phone_clean) }} and {{ $json.phone_clean.toString() }}.
  2. Appending the JID manually like {{ $json.phone_clean }}@s.whatsapp.net.
  3. Switching from raw JSON to "Using Fields Below" in the HTTP Request node to bypass n8n's JSON parsing inside loops.

Has anyone experienced this specific issue? Does Evolution API (or n8n's auto-casting) behave differently when evaluating expressions inside a loop? How do I force it to consistently send the correct data type for every item in the loop without it reverting to an object?

Any insights or workarounds would be greatly appreciated. Thanks!

r/ARAM blueiceSNOW

They need to do something with all these people dodging game forcing a remake. add an option to keep the champ pool from last lobby or something.

i've noticed this happen more often when i have a decent team comp it will then remake into a much worse comp.

ngl part of me thinks its something with riot algorithm and not someone actually dodging part of the time. like if i have a high win rate on a champ and actually get said champ it is like a 90% chance it remakes.

whatever it is, can we please do something about it? there are times where i get into a game and have 5 remakes ina row before i get into a game.

make the punishment worse for dodging make it so we can keep the old champs from last lobby.

would be very nice if something can be done it is very frustrating getting an amazing comp only for it to remake into a comp where theres no synergy and you dont know any of the champs only for the enemy to get some of the champs u had before the remake and steam roll you.

r/ClaudeAI ExpensiveIcecream

How do you manage Claude premium seats in a 100-person company?

We’re a startup of around 100 employees in manufacturing and services. Our teams include electronics engineers, embedded engineers, software engineers, system engineers, PMs, and other roles.

This year, we rolled out Claude across the company. The standard plan’s usage limits were too low for some employees, so we initially gave premium seats only to people who requested them. Later, we tried moving to an enterprise plan, but usage from the web app was much higher than expected. Within about a week, pay-as-you-go usage exceeded our forecast by more than 5x, so we decided to move back to a team plan.

Now we’re trying to define a fair policy for premium seats.

The main issues are:

- If we give premium seats to anyone who requests one, we expect almost everyone to apply.
- If we downgrade people based only on usage, weekly usage can vary a lot depending on project schedules.
- Some people argue that premium users should use at least around 20% of the premium allowance, since premium is roughly 5x the standard allowance.
- We considered asking premium users to share use cases quarterly or semi-annually, but we’re not sure how practical or fair that is.
- I’m also concerned that senior or long-tenured employees may avoid the sharing process, creating an imbalance where only some people follow the policy.

Right now, we’re considering a combination of internal demand survey, manager approval, and quarterly or semi-annual use-case sharing. We are not planning to make formal presentations mandatory, but we still want some level of accountability and cost control.

I’d be interested in hearing where this approach might fail.

For companies that have rolled out Claude Team or similar organization-wide Claude plans, how do you manage premium or higher-usage seats?

I’d especially appreciate examples of:

- Seat approval criteria
- Usage-based downgrade policies
- Department-level budgets or chargebacks
- Use-case sharing or internal demo policies
- Policies that worked well or failed in practice

Any real-world examples or opinions would be helpful.

r/TwoSentenceHorror CompetitionLiving

When floods ravaged our local cemetery, I volunteered to help recover the disinterred materials.

Our team retrieved dozens of coffins and a variety of burial garments, but we never found a single body.

r/explainlikeimfive ImprovementPurple

ELI5 How do cameras actually work and how haven't we hit a cap on how good they can get?

I recently started photography as a hobby and bought myself a very cheap camera for 90$aud.

I've really enjoyed my time taking photos and started looking for something a bit better. Immediately I realized I don't actually know what I'm looking for and don't understand what's so different about them. Lens variations make total sense but the camera itself doesn't seem like it could differ all that much

r/me_irl Greedy_Procedure7461

me_irl

r/ChatGPT iamlegaly

A funny glitch I encountered where chatgbt suddenly said a single word with arabic letters.

r/Whatcouldgowrong skully_kiddo

Pretending traffic laws are more like suggestions

r/whatisit LunarScreaming

Big rusted cylinder with mesh?

My immediate assumption was oil rig part, but I want to actually know

r/explainlikeimfive Rahul_Jethmalani

ELI5 : How does having Diabetes or high blood sugar increase your chances of blindness?

r/SideProject Little_Sea_3403

I spent 3 months building a custom AI-powered waste sorter and a high-end UI framework (EcoSort + Abhi AI)

I wanted to share a look at Abhi AI, a project I’ve been architecting to move away from generic, low-effort web templates. The focus here is on "Apple-style" aesthetics combined with production-ready logic. Technical Highlights: Glassmorphic UI Engine: Optimized backdrop-filter blurs that maintain 60 FPS. Production Logic: Built for high-performance React environments where generic AI code usually fails. Architecture-First: Designed to be a stable foundation for complex full-stack systems. I am currently using this framework as the base for all my high-end development work. I’ve decided to open up a private hub for founders or developers who need this level of technical precision in their own projects. I have a limited capacity for bespoke architectural builds this month. I’ve put the entry link and details on how to request a private build in the comments!

r/CryptoMarkets Candid-Watercress602

New Crypto girl here, I’m looking to invest

Hi everyone,

I have about $50 to $100 to put into something. I would sincerely appreciate any advice.

I don’t have that much money right now to invest but I have been doing research on APTOS and TAO, especially considering that BlackRock is behind Aptos and the elitists. Rockefellers, Rothschilds etc. I know that AI is going to be the future and I’m trying to make a strong investment in what I can right now. At first I was looking at meme coins like SHIB and DOGE, I have $12 In DOGE, and a few bucks in SOL and like 0.25 in BTC, few bucks in ETH……. Im just not sure right now. I’ve been watching documentaries and it seems like BTC is king but what can I do with that?

also I have never invested in stocks.

I don’t know how to trade I have never done it before. in the past I was into BURST and knew the dev too bad it faltered….

My question is which coin is worth an investment and I’m afraid I’ll miss the window as TAO climbs.

r/SipsTea Old-Secretary6189

When the fraud department realizes they're not hacked, just deeply committed to bad decisions

r/LocalLLM tk421tech

Mac mini M4 base 16/256 potential gift.

Given the very basic specs. What kind of local model and potential LLM use can one get out of:

Mac mini M4 16/256.

I have been looking for a refurbished machine with these basic specs so I can potentially gift it to a friend. Can’t afford to give a higher spec.

Just wondering if he can get any benefit from this base model? Assuming he can (given the sold out stock), but thought I would asks. [I have a similar model except mine is 16/512].

Thank you.

r/Adulting Easy-Highlight8562

Have you ever squeezed some one banana by mistake?

r/leagueoflegends Front-Incident-9895

Hwei wasd deshabilitado

Entre al juego y veo que en el cliente marca que hwei tiene deshabilitado el wasd por problemas, alguno jugo Hwei con wasd para saber exactamente por que era? xd
que me imagino que seguramente tocabas shift o click 2 y salía las teclas qwer, o tal vez por otro motivo

r/Adulting CO_Renaissance_Man

Parent's 40th Anniversary - Gift Ideas

Hi everyone,

My parents are having their 40th anniversary this year and I am trying to find a gift that I and my two brothers can go in on.

  • They enjoy movies and good food but are generally introverts and homebound most of the time.
  • They also enjoy taking road trips.
  • They have two dogs.
  • They don't have a lot of money and are not expecting any kind of major gift.

Any thoughts or ideas would be appreciated.

r/Wellthatsucks Mission-Badger-4005

Sooooooo who would attend this?

r/whatisit plantreddit

Metal Femur?

Very heavy piece of metal found in a garage. It kind of looks like a femur and family has a history of doctors/surgeons but it's so heavy. And also not quite a femur. Base says Denis Wagner.

r/DecidingToBeBetter Chemical_Chipmunk_51

Need a pep talk. Can someone give me words of encouragement lol.

Haven’t slept. Fml. I’m an idiot. It’s been 5+ years and I’m being hard on myself. I caved after buying something and looking at it for two weeks. Told my wife about it and felt so much angst. I had to try it out. I still have some left but am gonna try not to do it and just hold onto it. Should I get rid of it. What should i do.

r/aivideo ArtisticGrape1552

Looks like things are about to get a little fried and greasy

r/SideProject papertraillog

A secure, AI-free work incident log for employees.

Hey everyone!

Traily is for employees in hostile or toxic work environments who need to track what’s happening for HR, legal, or just personal clarity.

I’m a solo developer and created this from a lived experience that forced me to navigate the confusing world of employment law, alone. I had 2 years of Notes, emails, screenshots. It wasn’t a paper trail, it was a mess. So, I took everything I learned and needed and put it in one secure place.

Forms that are designed to log what’s actually relevant, evidence that’s auto-organized, health tracking, and access to various legal resources.
I reached out to over 30 attorneys in 2 weeks, that was not fun to keep up with... so I added an attorney status tracker too.

Traily is AI-free. It is not a chat bot to vent to.
It is a vault of documentation that proves what you’re experiencing. A clear timeline.
A private Paper Trail that builds itself.

The Highlights:
• Every entry is date-stamped to the minute. Locked and PDF emailed to you. This provides an external, third-party timestamp that proves you didn't edit the log.
• Automated Everything: All entries sync together & populate into a chronological timeline, case summary, and legal roadmap.
• Wellness Integration: Document the mental and physical impacts. Free access to an affordable therapy resources, and a Mood Tracker.
• Stealth Mode: A one-tap "Screen Decoy" to switch to a neutral screen if someone walks by your desk.
Access legal aid & step-by-step EEOC processes.
• No ads or affiliate marketing. Traily is meant to empower employees, not take advantage of them.

Thank you for reading!

Traily - Work Incident Log
Available for iOS

r/whatisit Calm_Information_35

Old lighter and flashlight?

I found this old lighter with a flashlight in the cap but can't figure out what model it is. I got it from an old family friend who served in the US army during the Korean war. It has no markings on it of any kind. I am trying to restore it but the flashlight doesn't work and I took it out and found it to be using 3 lr41 button cell batteries. It was with some other military issue gear in a toolbox like a p38 can opener so I'm assuming its an old unmarked military issue item, but could just be coincidence. Does anybody know what model this is or a replacement flashlight that would fit in the cap?

r/ClaudeCode Electrical-Count2216

What plan are people using?

I just upgraded to the max plan as the new Opus 4.7 burns through the plus plan, interested to know what other people are paying or if you have any recommendations on how to lower token usage.

I mostly use claude code for creating websites, and gathering data from the internet.

r/Futurology bla_bla_blablab

Quão realista é um cenário de colapso econômico causado pela automação via IA?

Sou da area de tecnologia(profissional senior) e tenho acompanhado de perto a evolução rapida da IA, empresas desde pequenas ate grandes estão freando contratações, e com isso a pressão por redução de custos acaba falando mais alto a medida que a IA avança, li bastante, estudei sobre o assunto e é possível que entre 5 anos a 10 anos já tenhamos agentes de IA que farão uma grande parte da atividade intelectual.. ja que a capacidade é muito maior

Há o ponto de que hoje a IA está operando em prejuizo e por isso é mais barata, mas sabemos que cada vez mais tende a ser eficiente e muito em breve pode ser mais barato ter um agente do que um funcionário em varios contextos

Acredito que as últimas profissões a serem automatizadas serão as atividades manuais, mas como sabemos, esse avanço pode chegar também via hardware ao longo do tempo

Basicamente a minha leitura é de que pode haver um cenário onde o capitalismo sofra mudanças profundas, com aumento de desemprego estrutural, e caso exista Renda Unica Universal (que é uma possibilidade) o valor pode ser baixo, reduzindo o poder de compra enquanto um pequeno grupo dono de IAs concentra mais poder e influencia

Também há quem diga que “Na revolução industrial se criou novos empregos..”, porém o cenário atual pode ser diferente, com uma tendência de redução tanto em trabalhos manuais quanto intelectuais

A IA está avançando muito mais rápido do que qualquer governo, por isso ainda vemos poucas regulamentações e pouca discussão mais ampla sobre isso

É algo que altera toda a estrutura da sociedade, provavelmente afetando educação, mercado de trabalho e até o sentido de formação profissional.. por que fazer faculdade se já há um especialista em praticamente tudo?

Considerando isso, alguns cenários possíveis para os próximos 10–20 anos seriam:

- automação parcial com adaptação do mercado

- aumento de desemprego estrutural com medidas como renda básica

- maior concentração econômica em empresas de tecnologia

O ponto que trago para reflexão é:

Qual vocês acham que será a condição de grande parte da população com esse possível impacto nos empregos? Vocês acreditam que haverá adaptação, instabilidade social, revoltas ou outro caminho?

r/OldSchoolCool lumin0us_rebel

Ladies having Poolside fun in 1940s

r/estoration Cyanidesuicideml

I know its not the best...

If anyone could help, the original one was list in a house fire. I'd love to see it restored and maybe colored! Please and thank you!

r/AskMen Monk_in_process

What my mom did is unintentionally unfair right?

  1. She actively decided to put up with my deadbeats dad behaviour and took up the responsibilities for two.

2.She likes doing excessive household chores and activities as a hobby , like she would do over and above and things which are not required bcz she genuinely likes.

Resulting to this she actively used to vent out her fatigue on me . I wasnt a great son growing up, but somewhere she did overkill and she still does.

She often unintentionally tries to bar me from going out , with the excuse that I need to either study or help her. I am willing to help her in a fair manner and I do but there should be a limit , she expects me to prioritise only household chores all the time post studies.

Maintaining a healthy social life is also important right? But bcz of her behaviour and demeanor I was always hesitant to go out even when I am an adult.

She would get emotional whenever I wanted to have a genuine conversation and would never understand that I could have different priorities in my life too apart from wanting to do household activities .

My basic social life has been ruined and I did have a job but I had to quit bcz of my health issue. And I had a very good work ethic and was praised for it.

I am saying this bcz I want to assert that I am functioning adult.

I dont want to lie I hate her bcz she actively decided not to set boundaries for equal contribution and expected me to share same priorities as her.

r/ClaudeCode AdFluid9823

For those doing code review based on claude-generated PRs, is there anything that does the full loop?

Might be silly questions, I've seen a lot of discussion here about code review workflows and most of them down to having another AI session read the diff.

What I'm looking for is something that closes the whole test loop. like: PR gets opened, something reviews the code, then actually simulate the real user behaviors, like signing up the app, clicks through the main flows, checks if anything is visually broken or behaving kind of weird, and reports back with screenshots or steps to reproduce.

Right now I'm doing that second half manually and it's the part that takes a lot of time. and I miss things because I don't know what to test because I'm not a QA person. Does anything like this exist yet in the AI world?

r/raspberry_pi Blankneedssleep

Raspyjack Help, Ragnar Headless Port

I can't get the Ragnar port working on my Raspyjack, refering to the github project by 7h30th3r0n3. I says on the page to run "./scripts/install_ragnar_port.sh" if it says its missing the package, but im unsure where to run it. I tried to ssh into it and run it and run it through the WebUi and no luck. Any help is greatly appreciated

r/SipsTea djinn_05

Smh, dude was having a good time

r/Art CleefHanger

Willfrog, cleef hanger, digital art, 2025

r/estoration Odd_Difficulty8018

Please help recover my late brother's picture

r/meme verizox

I’ll just check Reddit for 5 minutes…

r/ClaudeCode Hanuonbenz

Tracking token usage per prompt

Is there an economic way to log token usage per prompt in Codex and CC. I want to do postmortem which token to reuse and which one to avoid.

r/whatisit RoninBelt

What is on the pot?

I let my mate stay in my flat as I was away, I have this pot that was gifted to me and he obviously used it to cook.

But now the bottom has this rust like substance that I’ve wash off, but seems to return unless I wipe all water off?

Alarmingly there is some signs of it on the exterior? What is this and how could it have happened?

r/Rag DefinitionJazzlike76

Fresh Grad Solo Project: Am I over-engineering my RAG pipeline evaluation? (Need advice on workflow)

Hi everyone, I’m a fresh grad (Data Science/AI background) building a solo project—an AI research assistant for technical PDFs.

Since I don't have a mentor, I’m struggling to know if my approach to a project is right or i'm just "In my own head" 😞 . I’m also intentionally avoiding AI-assisted coding (Copilot/Cursor) for this project to master the fundamentals of RAG/LLM/AI pipelines.

For MVP, I have PDF parsing -> Chunking -> LLM reasoning -> Output of paper insights/methodology etc..

My current bottleneck: PDF Parsing. I’ve spent a week testing different parsers (Docling, MinerU, PyMuPDF). My current approach is:

  1. Select 3-5 diverse papers (tables, math, multi-column).
  2. Run each paper through the parsers.
  3. Manually evaluate/compare output vs. use an LLM-as-a-Judge to score formatting retention. -> log to MLflow

Results:

- PyMuPDF -> the worst (cant parse equations/images), but is the fastest

- Docling -> better at parsing than PyMuPDF (but cant parse images). slower than PyMuPDF

- MinerU -> Best at parsing overall but is very slow. (can be 20min for long papers)

I'm thinking of MinerU since its the best, but its so slow to run in my local Mac 😞. Any solution to this? or free GPUs online?

My Questions for Seniors:

  1. Is this too much? Should I be evaluating every single component (parsing, chunking, retrieval) this deeply, or should I just pick the "most popular" tool and move on?
  2. How do you Time Box? I feel like I could spend >1 week just on parsing. How do you decide when a component is "good enough" for a solo project?
  3. The Solo Trap: How do you validate your architectural decisions when you don't have a senior dev to do a code review?

I want this to be a solid project for my portfolio, but I’m worried I’m spending too much time on the details and am also not sure if I'm approaching a GenAI project the right way. Any advice on how to manage the workflow?

Thank you guys!!!!

r/arduino Dhruv_DG

Fitness tracking band

So I'm a beginner to Arduino. My programming skills are strong though. I want to make a reliable fitness tracker that will accurately measure Heart Rate, Steps and Sleep at the minimum. I wear other things on my wrists so I'd like to make this device to wear on my bicep-shoulder area. Since I'll be wearing it in this area, I don't need a screen. Just want something like a Whoop band that I can sync to my phone.

As far as I know (chatgpt), I'll need an esp32 microcontroller, MPU6050 or MPU9250 for motion sensing, MAX30102 for heart rate detection (apparently not accurate while lifting weights) and a charging module.

I'll also need to make some sort of band to hold it, so I was thinking of using some sort of fabric band (made with a sock or some cloth perhaps?).

Need some advice on these.

r/TheWayWeWere haydenem

My beautiful grandmother, a high school graduate. Alabama, 1964

r/ChatGPT EternalSnow05

Gintoki from Gintama and Princess Tiana in New Orleans

r/funny Odd_Ad8140

Genie Realized He Chose the Wrong Person 😳

r/Art VerdantHusk

OC request, VerdantHusk, digital, 2026 [OC]

r/Adulting One-Turnover6462

Lucknow wale

Anyone from Lucknow?? Whats the dating scene over there???

r/interestingasfuck MilesLongthe3rd

A zookeeper playing with two porcupines.

r/LocalLLaMA rulerofthehell

Current best models to run locally with 1x5090 and 1xRTX pro 6000?

Spec:

1x 5090

1x RTX pro 6000

128GB DDR5 6000MT/s

Current cpu and motherboard are bottleneck since one gpu is connected via pcie4x8 while other is pcie5x16. Open to swapping with Epyc later since current setup kills latency

Primarily for coding replacement for paid models.

Any good recommendations? Preferably don’t wanna offload to RAM but open to it.

r/meme No-Feedback331

Chinese popular MEME

r/30ROCK callMeRolf

Favorite one-word lines

When Liz and Jack open the door snd Colleen is there:

"Necktie!" - Liz

r/ClaudeCode Useful_Judgment320

What are the claude code app requirements?

Friend offered me a free 2018 Macbook Pro (intel)

I'm on an old pc and he saw me using claude/codex, offered his old macbook free of charge and said he'll bring it around next time.

Can I create apple ios apps from this device? Or is it too old now, google shows it can't be updated to the latest macos.

r/ClaudeAI noches_paws

At what point do you earn the title TOKENMASTER? 😂

My buddy made this and I immediately thought of this sub. For anyone absolutely torching tokens in Claude on a daily basis... this feels appropriate.

r/ARAM TallyMay

Please explain Sword of Blossoming Dawn to me

I've got Gash on Irelia and was offered Upgrade SoBD, which seemed amazing, because of 100% attack speed. It also had very nice numbers on it, but it felt like I actually should have completely ignored the second part of the augment and didn't build the item at all, because I was lacking tankiness/dps which I could have gotten instead. Am I missing something (heal is only for allies, right)? What are even the case uses of that item.

r/meme Greedy_Procedure7461

How a 6yo draw

r/leagueoflegends MeIiodass

Jungle OR Support

Hey guys! Im struggling hard to decide if i want to main Jungle or Support, i really enjoy both roles they both have all the champions i love to play so anyways i wanna hear from you guys your thoughts on both roles. I think the biggest cons i have with both so far is in jungle i feel like i can be clearing my jungle camps and see all of my laners die without even enemy jungle being there and it annoys me so much, i also feel like i can do everything right, i can have prios in lanes, and STILL ill just be let down by people not wanting to help with an objective. With support one obvious con is that you can have a bad adc, usually ill give my adc a couple chances and if i can really tell that they are either bad or just dont care about winning then i will support whoever i think will carry, and the other con for support is that i cant truly carry games like other roles can (Atleast not carry as much as actual carries). So yea i know its a personal choice but id like your guys opinions on the roles, also which role do you think is easier?

r/LocalLLaMA reto-wyss

nvidia/Gemma-4-26B-A4B-NVFP4

  • Can confirm it works on a 5090, with 80% allocation (of 32gb) I got around 50k context.
  • It's 18.8GB
Benchmark Baseline (Full Precision) NVFP4 GPQA Diamond 80.30% 79.90% AIME 2025 88.95% 90.00% MMLU Pro 85.00% 84.80% LiveCodeBench (pass@1) 80.50% 79.80% IFBench 77.77% 78.1% IFEval 96.60% 96.40%
r/DecidingToBeBetter Full-Artichoke6666

I have a very anxious attachment and BPD and it's seriously affecting my relationship, I've been trying to be less dependent on my partner and listen to them more actively but it feels like nothing I do works, how can I improve?

Okay so basically I have BPD and have been in therapy for 6 years at this point, I take medications and whatnot, but it's been really hard for me to seriously make the effort to get better. That's until I met my current partner. They're perfect in every way possible and I find myself wanting to genuinely be better so I can be the best version of myself with them. Here's the problem though, I am very anxiously attached and very clingy, she doesn't have any issues with reassuring me or me being as clingy as I want, but it's gotten to a point where my overthinking and depressive episodes are emotionally draining her. She has expressed to me that she feels like she needs to drop everything to comfort me and that she has no space for her own feelings, as well as her feeling like she's on constant trial because any little thing that feels "off" can send me spiraling. Now, one of the DBT techniques for BPD is to fact check before spiraling, however, if I ask if everything is okay or if she still loves me constantly, this also makes her feel like she needs to walk on eggshells around me. I have been trying to regulate on my own and have been successful so far but there's some habits she has that honestly don't go very well with my anxiousness. She's very emotionally closed off and often refuses to talk about anything bothering her, unless I press, which also annoys them and spikes my anxiety. She has stated multiple times that the few times she's tried to open up, I have not been receptive or have dismissed her feelings, which I honestly don't know how that could be happening, as I've been trying not to offer solutions or logical explanations (this is my first response, so I try not to say anything). I'm honestly exhausted of trying so hard but it feels like it's only getting worse, I cannot express my discomforts anymore because I am unsure whether they're valid, something for me to fix on my own, a me problem, or if trying to express it will piss her off because I am bound to make a mistake that I'm not even aware of making. Please help, I don't know how to improve this

r/confusing_perspective MorsesCode

Cat on the couch

r/explainlikeimfive Saurabh251

ELI5: Why does empty space weigh something and why is it causing the universe to expand faster and faster?

r/todayilearned Goon_To_Toons

TIL scientists used AI to decode a sperm whale “sentence,” and it appears they’ve been warning each other about humans for decades

r/AI_Agents Sea_Bass7670

AI agent for health wellness

I've been builidng for 5 months an AI agent that tracks and analyze health habits. Now I have one main agent taking care of 2 sheets tools, one for training and one for nutrition and planning to add psychologic tracking and sleeping. At some point I hope to build a score relating all that different category data about your body.

r/Wellthatsucks FiddlepatEpic

tried to reload a stapler in the dark

there is blood EVERYWHERE and it is currently 1:30 AM, my parents are asleep upstairs, and they're gonna question why there's blood on the floor in the morning.

r/WTF delzzy_flip

His name is Dani, he used to be homeless but then become famous in Indonesia for eating raw gecko and dead animal in the street without cooking them

r/homeassistant PainTrain324

Issue with notifications iOS

I have HAOS 2026.4.4 running and version 2026.4.1 of the companion app on my IPhone 15 with iOS 26.4.2. I not able to see the ‘notify.mobile_app_*’ action for this device. I have “Local Push” enabled for the server in the companion app and all notifications enabled in iOS. I am also not able to reset debug notification information (see attached image) which might lead to the issue. I have other iOS devices (different versions) on my network all of which can receive notifications. Any help would be greatly appreciated!

r/ClaudeCode iluvecommerce

I used my agent to create its own wrapped page

r/explainlikeimfive neryl08

ELI5 Why do objects loose colour when exposed to the Sun?

r/Wellthatsucks ParsnipFarmer676

The literal definition of “NIMBY”

I wasn’t sure where else to post this, but I think it fits the sub! This is a very affluent town/county, but it’s usually been cool about letting people car camp at least. I work at a public building nearby, and we’ve never bothered these folks - one man has literally been living in our parking lot for over 20 years!

We need more services and programs, not to just shove them aside for another town to worry about instead.

r/whatisit Mkschmitt

Buried Metal Tube

I found this buried behind my deck. What is it?

We are on city water but initial googling suggested either construction debris or something from an old well. 🤔

I left it outside incase more detective work is needed.

r/whatisit madbot2626

My underarms heir get grainy things stuck on them. And it stinks really bad.

This thing doesn't get off even when I wash it multiple times.

This happens only one side. The other arm is fine.

r/therewasanattempt Salty_Fudge1712

To save her phone

r/ChatGPT MrAmazing111

Asked ChatGPT to visualize a horizontal integral. It gave me a dog. [LINK IN POST]

No prompt engineering or anything, it actually did this. I genuinely have no clue how it could have thought a dog answered my prompt - nothing in the chat related to dogs at all.

See for yourself: https://chatgpt.com/share/69f37d35-d514-83ea-a6d2-86474ae104dc

r/SipsTea LazyGuy4U

Atleast she saved his face infront of his friends

r/Adulting KindDocument3016

Owen Dominick the Parasite Man

Gave him 10K in computers to help me with business. $3700 apple vision pro. He said he wanted to help. He disappeared on me to marry a girl from tinder. Never trust family or friends to help with business. Guys like Owen are parasites of the human kind.

r/mildlyinteresting 89404

In 1954 three men died when their truck careened into Bull Valley Gorge in Utah. The truck still remains wedged there and you can walk under it.

r/LifeProTips XMAN2YMAN

LPT: make your kids iPad/phone passcode your cell phone number. They will learn it within days.

Did this with both my kids and by age 4 they both knew our cell Numbers.

r/fakehistoryporn curiousexploration90

Commodore Dewey aboard his flagship USS Olympia destroys the Spanish Fleet, May 1, 1898 at Manila Bay

r/SipsTea Downtown_Boss_7254

Well.. they are accurate

r/LocalLLM Opposite_Abalone_10

Looking for people to share GPU instance with LLM (infinite tokens and 45tp/s)

I'm renting a GPU instance to run local AI models and reduce the amount I spend on the API at openrouter. I currently have several agents that use around 30M tokens per day. With current settings I'm running Qwen3.6-27B at 45tp/s. This model surprised me in all aspects, including programming.

r/personalfinance Euphophoney

Company car - What’s The best structural option and why?

I start my new job with the offer of a fully maintained company vehicle that is yet to be procured.

I have several options to consider:

  1. Negotiate a car allowance (\~$18k/year) and buy a car outright (which I can do easily around $60k).

  2. Novated lease (but don’t like the idea of the break costs if I change jobs etc.)

  3. Take the fully maintained company vehicle of their choice and nothing to worry about.

From previous experience, option 1 gave me super easy flexibility, the car and specification I wanted, write off the costs using a logbook method, and be left with a depreciating asset that still retains good value after 3-5 years to sell, while still under warranty.

In addition, I’m KPI’d and incentivised on profitability, so not tying up cash flow for the business in a vehicle lease or loan might also be a benefit to me as well.

Love to hear some other opinions or thoughts on this.

r/SideProject GeniueXd

Free SEO Tools!! Keyword Research, Traffic Checker And Google Maps Rank Heatmap | OptimizeSEO

I launched a side project on March 31, 2026: a bilingual travel website for Egypt-based tourism services.

The site includes travel services, Hajj & Umrah pages, flights, embassy appointments, service-area pages, and SEO blog content.

After the first 28 days, Google Search Console showed:

200 clicks

7.83K impressions

2.6% CTR

10.4 average position

No paid ads. No backlink campaign.

The interesting part is that I also built my own SEO tool and used it during the launch. It helped me check rankings, audit pages, review Search Console data, inspect internal linking, and decide what to improve next instead of just guessing.

The tool includes:

Google Maps ranking heatmaps

Search Console performance dashboard

Site audit / crawler

Internal linking analysis

Competitor traffic checks

Keyword research

SEO health tracking

The website:

https://ease-travel.online/en

The SEO tool I built and used:

https://optimizeseo.mostafahana.online

I’m still early, but this was a cool validation loop: build the website, build the tool, use the tool to grow the website.

I’d love feedback from other builders here:

Would you focus next on more service-area landing pages, backlinks, content clusters, or improving CTR on the keywords already getting impressions?

r/SipsTea shineonyoucrazy-876

Did I hear that correctly?

r/Art Entrophile

I Miss Charlie, u/Entrophile, Digital Painting, 2026

r/mildlyinteresting JuicyOW

This purple sweet potato I ate was half white.

r/ChatGPT QuMufz

What the hell is this?

The new image generator is so bloody bad that I am actually considering cancelling my subscription. Please tell me this will be fixed.

r/AbstractArt MarySayler

Abstract - "Lake in the Forest"

#abstract

r/ClaudeAI tillwehavefaces

How would you build this?

So I have an idea for something I’d like to use and I’m curious how other more experienced users would build it. Here’s the idea:

You’re with a group of friends and a topic comes up that you don’t know much about, let’s say “world war 1”. I want to tell Claude to write an article or source materials about that topic and send me that information to read later, preferably to a reader app or anywhere I can easily access it any of my devices easily and quickly. I’d like to just quickly tell Claude via voice what the topic is and have it run in the background.

I’m very new at this but so far I created a prototype via Claude cowork, as a skill. It runs the skill, writes the articles, creates a draft in my gmail, which then I have to send later to my Instapaper account. I also created a shortcut on my phone (Apple iPhone 17 pro) that would allow voice commands but it is super inconsistent. The whole thing is a little janky.

How would you build this?

r/SipsTea Chance_Bid_1869

Friendship ended with ISRAEL - Now BRITAN is my best friend

r/Adulting gracexoxo_

I am planning to move out of my parents' place super last minute and I need help.

IGNORE THIS IF YOU WANT TO GET TO THE IMPORTANT STUFF
So, to preface, I've got a really dangerous home life. I don't know how open I want to get on Reddit, but my mother has some issues, and it's taken a toll on me over the last 4 or 5 years of my life.

I have limited work experience, which has made it challenging to find a job. I know it's so stupid of me to move out without a job right now, but I can't take any more. I have a place to go with someone I trust more than anything, and I have a plan for all the hoops I need to jump through, so I'm good on that.

IMPORTANT STUFF
Anyway, I just need some advice on how to make any money possible before I go through with this. I've got a few hundred bucks that can cover me until I'm able to get a job while I'm in this new place I'm staying, so I won't be a financial burden on the person I'm living with, but I'd really like to feel just a bit more secure before I leave.

r/findareddit DinnyArt

Looking for a place where I can post looking for someone?

Basically without getting into the bad details, I dated a guy who was the worst person in every single imaginable way. I know he had a gf before.​

​​for closure of my own admittedly, Ive been trying to find her. I think we went through the same thing. I had a lead but it was on the now defunct amino and now it's been snuffed out.

Are there any reddits I can post to where she might see it and reach out? ​​​​​

r/OldSchoolCool Maggie_Sash

A public school orchestra Circa in 1900

r/SipsTea Much_Maximum_

Wait for it!

r/personalfinance OutrageousBonus860

Robinhood vs E*TRADE margin loan at the same 4.5% rate — which would you use for a large loan?

I’m comparing Robinhood vs E*TRADE/Morgan Stanley for a large margin loan. Both have offered me around 4.5%, so the rate is basically tied.

My use case is not active trading/options. I want to borrow against a taxable stock/ETF portfolio to raise liquidity for an external private investment, while keeping my holdings invested. Portfolio would likely include large-cap tech stocks and ETFs. I may also rebalance over time to reduce concentration risk.

What I care about most:

  1. Margin call handling — which platform gives better warning / more practical time before forced liquidation?
  2. Portfolio control — which platform is better if I want to sell specific lots or rebalance while carrying a margin balance?
  3. Tools — does E*TRADE’s margin analyzer / calculator / risk tools actually help compared with Robinhood’s margin status card?
  4. Support — if something goes wrong, is E*TRADE/Morgan Stanley meaningfully better than Robinhood?
  5. Extended-hours trading — Robinhood feels very easy for premarket/after-hours. Is E*TRADE comparable in practice?
  6. Any real-world experiences with large margin balances on either platform?

I understand that legally both can liquidate without giving me extra time. I’m asking about practical real-world experience: which one would you trust more for a six-figure margin loan, assuming the interest rate is the same?

r/personalfinance RAM-I-T

Is your money truly safe with a Fintech banks?

I thought about trying one called Envelope Money. Which is one designed around budgeting. I just have never used an online bank before, and I’ve heard horror stories about Chime. Where they’ll hold funds indefinitely.

r/mildlyinteresting biigliiam

This side took me a week to draw…

r/painting B507

Just sharing 13x18cm Acrylic Painting called, By the river"

r/nextfuckinglevel Commercial_Slide3788

Majestic view of rare Albino Humpback Whale

r/DunderMifflin Background_Club_9681

Stress Relief is a fever dream that almost seems non-canon

Stress relief is famously one of the beloved episodes of the entire show, and it is definitely one of my favorites. I’m on my like 5th rewatch of the show, and I noticed that this episode is peculiarly different from the rest of the season. Maybe it’s because I’ve seen it way more times than I’ve seen the whole show but it feels like a lot of the previous plot of the season is really not acknowledged in this episode and also nothing that happens in this episode is ever mentioned again other than the fact that Pam’s mom gets divorced, and maybe Stanley’s heart attack? But Dwight literally costs the company thousands of dollars and tries to burn the building to the ground and none of this is ever mentioned again. Dwight is straight up on demon time in this episode, ofc he’s known to do crazy shit but he is literally just doing whatever the fuck he wants in this episode with no regards or alliance to authority which he normally has. Also Angela keeps her cat in the office apparently? But a couple episodes later she’s showing off her new 7 thousand dollar cat, but you’d think she’d would just bring it into work because apparently that’s normal for her. Oscar is an acrobat climbing up into the ceiling and falling through it and landing on his feet like a cat. Michael has also reverted back to something resembling more of an early seasons Michael. I guess what I’m saying is every character is dialed up to 11 in terms of absurdity. The pacing of the episode is also kind of trippy in how quickly it goes high stimulation to low stimulation back to high stimulation. This episode feels like what I what dream of if I was in bed sick for a week and did nothing but watch the office. This is considered by many including myself to be their magnum opus, so clearly they did a few things differently. But yeah I’m making the observation that it feels tonally very different from the rest of the show, and reminds me more of like Seinfeld or arrested development. Curious if anybody else has noticed this.

r/PhotoshopRequest SpaceCat_303

Could someone please get rid of my dark pimple scar in the middle of my forehead?

Thank you!!!

r/therewasanattempt Diggi8

To illegally hunt Whales without any consequences.

r/SipsTea brutaldeepeye

A Few Days Later 🥲

r/SideProject MurkyCategory5184

Built a visual flow editor for betting strategy automation

I'm a developer who likes building autonomous systems. This started because I wanted to experiment with visual programming — drag and drop nodes, conditional branches, loops — but needed a practical use case to actually finish it.

Betting strategies turned out to be a perfect fit because the logic is deterministic and testable.
"If 3 losses in a row, double bet, reset on win" translates cleanly to a flow graph.

The gambling part is almost incidental to me. What I actually care about is whether the visual logic builder works well as a tool.

Built with Laravel + React Flow.
ive at rollmint.cc — genuinely curious what other builders think about the approach, not the gambling side.

r/aivideo kylejames87

Climbing Towards Success

r/ProductHunters Thick-Rip-1187

Find your next domain-expert engineer

Launch my product on product hunt today

r/geography danm868

Challenge: guess what island this is and win a prize

r/DunderMifflin Bitter-Wheel-8615

Do you guys see it ?

I can’t be the only one

r/SipsTea MorsesCode

Where are you going?

r/Adulting memsaver

I am very sad because I keep getting rejected by women I want to date

I'm male. I'm tall. I'm employed. I go to the gym. I follow a good diet. I have a diverse group of friends. I have a lot of fun hobbies. People tell me I'm attractive. I've been rejected by every woman I've asked out this year. I'm losing hope and I feel angry for not being able to get a date. I'm not angry at women. They are free to be picky. I'm just angry that I'm not seen as an option to them. I love myself. Before you tell me to love myself you should know that someone can love themselves and do everything right and still be lonely and want a relationship.

I have asked my friends who are women to be honest with me about what I can do better. They all said I'm not doing anything wrong and that women have unrealistic standards for men these days. I wish I knew what I could do better. I'm doing what I can at the gym to improve my body. I put myself out there a lot but it gets exhausting after so many rejections.

Very often I meet a woman and we start talking. We're having a good time and she makes me feel like she is interested in seeing me more. She's smiling, making eye contact, we're asking each other questions. I'll ask her out and then she says she has a boyfriend. I think this means she's not interested. Sometimes I'll hear other excuses. It exhausts my soul to do my best and fail so much. Women never approach me and this makes me feel like I could be ugly.

I don't understand why dating is so hard for me. I feel like I'm doing everything right and nothing is working. It really hurts to be alone every night. It hurts to get rejected after having very high expectations. I want to be with women I'm really attracted to and I want them to want to be with me. I want to go on romantic dates with her, go home with her, have fun with her at home, spend the night together, and wake up next to her in the morning. I've done it before but it was years ago and I know dating is different now than it was years ago. I just wish things could work for me for once.

r/DecidingToBeBetter Hauck1975

What is a small decision you made that unexpectedly changed the direction of your life?

I once said “yes” to something I felt completely unprepared for, simply because I was tired of always waiting to feel ready.

That small yes didn’t magically change my life overnight, but it opened a door. Then another. Then another.

Looking back, I think life changes less through big dramatic moments and more through tiny decisions where we quietly decide not to be the same person anymore.

r/whatisit Ok_Welder4087

What is this thing…

Saw this on Facebook marketplace. Lady said it was possessed. Does anyone know what this might be, where it’s from, and what purpose it serves? Is that real human hair??????? I can’t stop thinking about this

r/mildlyinteresting 1qwertyuiopasd

My doctor’s office lists old English, Middle English, and ancient Egyptian as language options

r/whatisit Dry_Consequence_383

what is this bright yellow Godzilla porcupine under our deck in peru

r/homeassistant turbocharged5652

Whole home power monitoring

Has anyone in here used something like this? I want to find one that won't call home and maybe is zigbee compatible, not wifi only

r/AI_Agents Salt-Library-8073

Best computer use agents right now? Need something for browser research + desktop tasks

This whole direction of AI agents that can actually operate your computer feels like it's getting real. I'm looking for something that can handle tasks that involve deep browser research and also interact with desktop apps (spreadsheets, email clients, etc).
One concern I have with some of the trendier options like OpenClaw is data privacy. I've read reports of local file loss and I'm not comfortable giving an agent free access to my personal machine. And I'm not at the point where I want to buy a dedicated Mac Mini just for this.
Ideally I want something that:
- Can do both browser and desktop work
- Doesn't run directly on my personal computer (some kind of isolated environment)
- Doesn't require a bunch of technical setup
- Can handle longer multi-step tasks without falling apart halfway through
Has anyone found something that checks most of these boxes? What are you using?

r/SipsTea UrbanAchievers6371

And don’t forget the proton torpedoes please

r/Wellthatsucks CreammySyrup

When you park your scooter at hot midday summer.

r/ChatGPT alwaysstaycuriouss

I am experiencing multiple bugs for the first time ever on app

First it was getting stuck thinking and I couldn’t stop it even when pressing stop or closing and reopening the app also logged out and back in. Now if I make new chats on my computer they aren’t syncing to the app. If I unarchive a chat it only shows on my computer and not the app. It’s being so annoying!! Also There are chats that are only showing on web browser and not the app. When I search on the app they show up with bold text and when I click on it it just shows an empty chat wtf

r/AlternativeHistory Suspicious-Slip248

A woman's secret photograph of the U.S. Supreme Court in session, taken through a hole in her handbag, 1937.

r/SideProject dagus2020

You helped me hit 40K views. Here's what I built and why it matters to me.

A few days ago I posted about launching my second iOS app and hitting 1,854 downloads in 72 hours. The response on r/SaaS was incredible — the post ended up reaching 40,000 people.

I wanted to come back and actually talk about the app itself, because the launch thread moved fast.

What is Birthday Reminder: Countdowns?

It's a simple iOS app that shows you exactly how many days until the birthdays of people you care about. Beautiful countdown timers. Zero login. Zero bank connection. Fully offline.

I built it because I kept forgetting birthdays, not because I didn't care, but because I am a busy dad of three kids.

What's next:

I'm improving the reminder customization. If you downloaded it after my last post — thank you, and I'd love your feedback.

For everyone who missed it, it's free to try:

👉 https://apps.apple.com/us/app/birthday-reminder-countdowns/id6762507504

And yes, still a solo indie dev doing this between other projects. 🙏

r/creepypasta Zealousideal-Pay4214

new creepypasta?? i interviewed my friend after she told me about a weird thing she saw

here is the experience in my friend’s words:

I didn’t think curtains could feel wrong until mine started moving when nothing else in the room did.

At first it was small things. I’d wake up and they’d be slightly parted, like someone had stood there for a second and changed their mind about stepping in. I told myself it was the window latch or airflow or anything that didn’t require me to admit I was noticing a pattern.

But it kept happening.

Only at night. Only when I was alone long enough for the room to feel like it had stopped paying attention to me.

The fabric started to look… intentional. Like it wasn’t just hanging there anymore, like it was being used. Sometimes there would be a thin gap between the panels, perfectly straight, even though I knew I hadn’t left it that way. It was always just enough for me to think I saw something behind it.

Never enough to be sure.

That’s what made it worse.

The uncertainty built up until I started checking the window more than I checked anything else in my room. I’d sit up in bed just to look at it, telling myself I was proving nothing was there.

That’s when I first saw the name.

Sedrusviður.

It showed up in the condensation on the glass one morning, like something had traced it from the inside while I slept. The letters weren’t neat. They looked like they were formed by something that didn’t fully understand how writing worked, only that it needed to be seen.

I tried to forget it, but I couldn’t. I looked it up. I found nothing normal. Only fragments about cedar, about something rooted, something that should stay still once it’s settled.

But nothing about this stayed still.

After that, the curtains got worse.

I stopped being able to convince myself it was just my imagination because it started reacting to me. If I ignored it, nothing happened. If I looked too long, the gap between the curtains would change. Slightly at first. Then noticeably. Like something behind them was learning how much attention it could get away with.

I never saw it fully. Not once.

Just pieces of it.

A shadow that didn’t match the light. A pale suggestion of something pressed too close to the glass. A shape that only made sense for a second before my brain refused to keep it consistent.

And every time I looked directly at it, it stopped.

Like it had never been there at all.

That was the part that made me start doubting myself more than anything else. Not because it disappeared, but because it disappeared so cleanly. Like I was the one creating it just by noticing.

I tried to prove I was wrong.

One night I tied the curtains open and left the light on. I told myself that if something needed darkness, I’d take it away and finally be done with it.

I woke up with the ties undone.

Not ripped. Not loose. Just carefully untied and placed aside like someone had taken their time not to wake me.

The curtains were closed again.

And the room felt… aware.

Like it wasn’t mine in the same way anymore.

Behind the fabric, something was closer than it had ever been. Not visible in a clear way, but present enough that I stopped thinking in terms of “is it there” and started thinking in terms of “how close is it this time.”

I stopped sleeping properly after that.

Because Sedrusviður doesn’t really appear the way people expect something to appear.

It doesn’t come into the room.

It just learns how often you’re willing to check if it already has.

r/ethereum EthereumDailyThread

Daily General Discussion May 01, 2026

Welcome to the Daily General Discussion on r/ethereum

https://imgur.com/3y7vezP

Bookmarking this link will always bring you to the current daily: https://old.reddit.com/r/ethereum/about/sticky/?num=2

Please use this thread to discuss Ethereum topics, news, events, and even price!

Price discussion posted elsewhere in the subreddit will continue to be removed.

As always, be constructive. - Subreddit Rules

Want to stake? Learn more at r/ethstaker

Community Links

Calendar: https://dailydoots.com/events/

r/Frugal AutoModerator

Monthly megathread: Discuss quick frugal ideas, frugal challenges you're starting, and share your hauls with others here!

Hi everyone,

Welcome to our monthly megathread! Please use this as a space to generate discussion and post your frugal updates, tips/tricks, or anything else!

---

Important Links:

Full subreddit rules here: https://www.reddit.com/r/Frugal/about/rules/

Official subreddit Discord link here: https://discord.gg/nZBtCcs

---

Share with us!

· What are some unique thrift store finds you came across this week?

· Did you use couponing tricks to get an amazing haul? How'd you accomplish that?

· Was there something you had that you put to use in a new way?

· What is your philosophy on frugality?

---

Select list of some top posts of the previous month(s):

  1. Chip prices are absolutely insane. So I made them myself. Way tastier and way cheaper! Never going back to Lays
  2. $60 grazing table for 30 people (on maternity leave budget)
  3. What I'm feeding my family of 5 this week for $125
  4. Just found out my grandma’s been reusing the same Ziploc bag since 1997.
  5. Thank you to the person who recommended stopping the dryer halfway thru to add a new load clothes
  6. My coworker eats the exact same $1.25 meal every day and I'm weirdly impressed
  7. Cookie friend date - simple realization
  8. My most frugal life hack is pretending my fridge is a mini restaurant with a weird but loyal customer base (me)
  9. I accidentally became "the cheap friend" and honestly… I kinda love it now
  10. What’s a frugal habit you picked up by accident that you now swear by?
  11. Frugal living: Moving into a school converted into apartments! 600/month, all utilities included
  12. Follow up- my daughter’s costume. We took $1 pumpkins and an old sweater and made them into a Venus Flytrap costume.
  13. Gas bill going up 17%… I’m going on strike
  14. I love the library most because it saves money
  15. We live in Northern Canada, land of runaway food prices. Some of our harvest saved for winter. What started as a hobby has become a necessity.
r/SideProject jeevandahal

Built a crypto trading tool, thinking about adding stocks. Which broker API would you go with?

Been working on Cryptologics-AI for a while. It lets you build automated trading bots, run backtests, paper trade, and go live on Binance. There's also an AI advisor that helps you put bots together and analyze how they're performing.

I'm thinking about expanding it to cover stocks too. Before I pick a broker API and commit, curious what people here would recommend. Looking at Alpaca, Interactive Brokers, and Tradier so far. Anyone integrated one of these in production? What held up, what didn't?

Open to options I'm missing too.

r/Damnthatsinteresting utopiaofpast

We all were asshöles

r/screenshots Strict_Hornet_8944

I love the blox

r/AI_Agents mguozhen

Stripe Sessions 2026 got me thinking: are payments ready for AI agents?

Stripe Sessions 2026 made one thing clear: agents are becoming economic actors. What breaks first?

Just attended Stripe session 2026 and I was reading through Day 1 notes, and one theme stood out to me: agents are no longer just UI helpers. They’re starting to look like economic participants.

A lot of today’s payment and commerce infrastructure still assumes a human is sitting in front of the screen: searching, comparing, clicking checkout, entering card details, and making the final decision.

But if agents start comparing vendors, booking services, renewing subscriptions, placing orders, or managing operational workflows, the core problem changes.

It’s no longer just:

“Can this payment be executed?”

It becomes:

Who authorized this agent?
What is it allowed to spend money on?
How do we audit the decision later?
What happens when the agent makes a wrong or risky purchase?
Does the merchant still own the customer relationship, or is that relationship now mediated by the user’s agent?

This feels like a shift from payment execution to identity, policy, risk, and audit.

The wallet may just be the entry point. The more important layer might be controllable money movement: permissions, spend limits, traceability, fraud detection, merchant trust, and machine-to-machine payment rules.

Another interesting point from the sessions: if browser agents or AI shoppers become a new traffic channel, websites may need to become agent-ready. Not just static pages optimized for human search, but interfaces that expose intent, inventory, pricing, policies, and checkout flows in a way agents can understand and act on.

That could move commerce from a fixed funnel into something more dynamic:

intent → recommendation → decision → checkout → monitoring → audit

It also makes me wonder whether business models shift from subscription to usage-based or per-action payments when agents are doing discrete tasks across tools.

Sam Altman’s point that stuck with me was that the biggest AI change may not be the model itself, but workflow integration. The companies that benefit most may not just “use AI,” but rebuild how the organization runs around agents.

Curious how people here are thinking about this.

If agents become real participants in commerce, what needs to be rebuilt first: checkout, identity, permissions, fraud/risk, merchant websites, or the business model itself?

r/me_irl Specialist-Trash-468

Me_irl

r/ClaudeAI Different_Try_1269

Claude Code usage spike from long-context cache writes?

I hit my Claude Code 5-hour limit unexpectedly and checked the local session JSONL.

The `/usage` screen said most usage came from:

- “subagent-heavy sessions”

- sessions active for 8+ hours

- `>150k context`

But the subagent table only showed `codebase-explorer: 1%`, so subagents don’t seem to explain

the spike.

After deduplicating local records by `requestId`, the main session had about 140M cache-read

tokens. The surprising part is that some of the final requests recreated a huge 1-hour prompt

cache of around 475k tokens each.

Using public API pricing, a 475k 1-hour cache write should be only a few dollars API-

equivalent. But in Claude Code, one of these final requests seemed to consume a very large

fraction of my 5-hour limit.

I use a pro subscription and only use sonnet-4.6 model.

So I’m wondering:

Is Claude Code intentionally weighting long-context / 1-hour cache writes much more heavily

than API pricing, or could this be a usage accounting / attribution bug?

Has anyone else seen a large Claude Code usage jump after a long-running session with `>150k`

context?

r/SideProject TheOperatorAI

Spent the week reverse-engineering the recent YouTube AI-content takedowns. 5 patterns that keep showing up

Saw a few channels in my niche get terminated over the last couple months for "inauthentic content." Spent the last week trying to figure out what the actual common signals were across the public facing takedowns I could verify, plus the warning notices a few creators shared with me. Sharing in case it's useful.

The 5 signals I kept seeing:

  1. Title template lock-in – same structural template on >70% of recent uploads ("Top X [Thing] in [Year]", "I tried [Thing] for [Time]", etc.). Real channels have natural title variance.
  2. Description boilerplate – identical first ~200 characters on every video. Easiest fingerprint for a spam classifier to grab.
  3. Duration uniformity – recent-upload duration variance under ~15% of the mean. Real channels swing 5min one week, 12min the next.
  4. Mechanical posting cadence – uploads spaced almost exactly N hours apart over weeks. Indistinguishable from a cron job, because it usually IS one.
  5. Channel age vs upload velocity – channels under 6 months uploading >5 videos a week trip a tighter heuristic than older channels at the same volume.

None of these are individually fatal. But channels hitting 3+ at once seem to be the ones getting nuked.

I wrote the full breakdown including how to check each one manually + concrete fixes if you score high:

https://theoperatorai.io/articles/youtube-channel-termination-5-patterns.html

Mods - if the link breaks rules, happy to remove it; the 5 patterns are useful on their own.

Curious what people here see when they audit themselves. Especially curious if anyone's been flagged and the signals don't match - would refine the list.

r/Art TheSerevet

Heron on Navy, L.A., Acrylic, 2025

r/meme Specialist-Trash-468

That's life

r/mildlyinteresting acidrain333

Emergency mouse pad

r/therewasanattempt StoicBan

To provide aid, food and medical supplies to Palestine

r/Adulting One-Turnover6462

MNC job worth it??🤔🤔🤔

Hey everyone I am civil engineer with just 1 month of experience,I have two job offers,17k a month in Chandigarh with accommodation and travel is included except food and 70-80k in an MNC in lucknow accommodation and food included (through a friend) now my concern is that if I choose the second option idk how is the environment in that office and secondly if I decide to quit, isn’t it hard to find a 70-80k job?? Please help me decide as I have also never worked so new in job market

r/SideProject AmountWestern1734

DenSpace

new social media app for the furry commuity it is called DenSpace and and it has an ai moderation system that bans anti furries permantly based off hate comments it is out now https://denspace.vercel.app/

r/Art WH0SEMANS

Side Profile, WH0SE MANS, Alcohol Markers on Shipping Label, 2026

r/mildlyinteresting drunkbettie

A bucket and a mop pin received as McDonald’s crew swag circa 1991

r/me_irl SuspiciousLow3062

me_irl

r/SipsTea Plus-Resolution-3210

Unbreakable Consistentency

r/BobsBurgers beestw

A scene that kills me every time

This part never fails to make me smile...I applaud whoever wrote this gag, it's so original and on par and hilarious. It gets stuck in my head ALL the time. Nat is one of the most lovable, most hilarious and unique characters on the show, which is saying a lot because they're all very..unique.

r/explainlikeimfive Glittering_Body_5779

ELI5: What is the function of the Anterior cingulate cortex?

Can someone explain what the ACC does in regards to mental health and depression? And is it true it goes by doing things you don’t want to?

r/Rag bn-batman_40

EGA: Runtime Enforcement for LLM Outputs (v1.0.0)

I built EGA, a runtime enforcement layer for LLM outputs.

The problem: eval tools usually score after something already went wrong.

They do not stop bad outputs from going downstream.

EGA sits in the runtime path and checks the model output against the source before letting it pass through.

If something does not have support, it gets dropped or flagged.

v1.0.0 is live on PyPI today.

This is still early:

not benchmarked yet

not production-grade calibration yet

needs real RAG pipeline feedback

I am looking for engineers building RAG pipelines who are willing to plug this in and tell me where it breaks.

pip install ega

GitHub: https://github.com/bh3r1th/llm-evidence-gated-generation

PyPI: https://pypi.org/project/ega/1.0.0/

r/whatisit axident

Found in my son’s room

What the hell is it?

r/geography Intrepid_Reason8906

The Orange Counties of America

r/Damnthatsinteresting BumblebeeFantastic40

How 350 km/h (217 mph) looks like from High-Speed Train

r/meme PassionFederal6917

You're constantly being chased by hot monsters...

r/Art Thecrafter10

Clamboy, u/thecrafter10, Surrealism, 2026

r/ClaudeCode s243a

I run out of usage in gpt, gemini and claude but for some reason people only bitch about anthropic.

Since this seems to be a complaining form, let me say, "oh no I can't use my gpt subscription for 5 days", because I exhausted my weekly usage! It is April 30th and here is the message,

" You've hit your usage limit. Upgrade to Pro (https://chatgpt.com/explore/pro), visit

https://chatgpt.com/codex/settings/usage to purchase more credits or try again at May 5th, 2026 5:40 AM"

and ironically, my Claude code usage lasted me a whole week. Is it a fair comparison? In this case maybe, sort of. I'm on the 100USD/m plan at anthropic and I have two 20USD/month plans at openAI. I use my anthropic plan twice as much as my openAI account and Claude code is faster than codex/gpt, so for reasons of speed alone we expect claude to burn more tokens. Also claude has longer context length, so if you let your conversations run longer you'll burn tokens faster with anthropic then open AI. Either start new conversations or compact. For best value keep your conversations under 200k tokens, I usually compact around 400k, Anthropic models support a million tokens.

If we look at the API pricing:

"GPT-5.5 charges $5 per 1M input tokens and $30 per 1M output tokens, while Claude Opus 4.7 charges $5 per 1M input tokens and $25 per 1M output tokens, with a surcharge for prompts exceeding 200K tokens."

Anthropic is actually cheaper if you don't exceed 200k tokens. So if the API prices are similar, I don't expect huge differences in the subscription values. GPT might be say 25% better value but this isn't obvious to me. I find gemini the worse value but you never here people complain about it, because the real battle is between openAI and anthropic, and I wonder how much of the negative sentiment here is bot driven!

r/SideProject ksunnyxd

Built an AI trip planner where you can fly through your itinerary in 3D before you go

Hi! I recently built out Sonder, a visual AI travel planner, and I’m looking for early users to test it out. It is completely free to sign up and use, you just need a valid email (just a heads up: your sign-up confirmation email might land in the spam folder, I’m actively working on fixing this!).

The main features are fully automated itinerary creation/modification, 3D-showcase/routing of your trip, and a calendar feature that lets you plan out your days. I’d love for you to try planning your next trip on it. Any critique or feedback is appreciated! Thank you :)

www.joinsonder.app

r/leagueoflegends boririro

Is Zaahen really a good Darkin?

I mean, he's shown as a Darkin that doesn't abandon his human side, but it's all HIM saying that. My question is: once he trapped himself into his weapon, are there any chances he got corrupted to the point that his "right" is not that right anymore? Like, he lost the original concept of righteousness by the corruption and now his brain switch things so he can do bad things without feeling the guilt?

It's said that ascending increase both your qualities and defects. Maybe his quality was being too righteous, but his defect was the pride of being so.

r/midjourney tladb

Eve Online : Four views of a Jovian Observatory

In order:
1. Caldari : abstract expressionist
2. Amarr : impressionist
3. Minmater : constructivist
4. Gallente : conceptual

The art style favour by the Jovians is generally unknown. As an alternative major art styles favoured by the four main races is used to represent the observatory.
The terms are not used by the races themselves but contemporary English equivalents.

For all images see www.tladb.com

r/therewasanattempt T_Shurt

to fit Trumps inflation disapproval ratings on a graph

r/Seattle SeattleLaborChorus

Seattle Labor Chorus - Sing for labor and social justice

Hello Seattle! The Seattle Labor Chorus is a musical voice of labor and social justice in the Pacific Northwest and we are inviting you to join us as a new chorus or audience member!

SLC is a non-audition community chorus formed in March 1997 for a performance at the Northwest Folklife Festival with Pete Seeger and several other well-known folk singers. We give voice in song to the concerns and goals of working people and allies who struggle for economic, social, and racial justice. We sing as advocates for a society free from discrimination where workers have the right to organize as a means of securing a just living in a safe environment. We perform at rallies, educational events, union meetings, and on stages around the Seattle area to inspire and educate in four-part harmony, preserving a collective memory about the history, goals, and vision of the movement for justice. Singing lifts the spirit of our movement and gives courage when the struggle is hard in our members' lives, in our communities, and in the world.

The chorus is very member-driven, collaborative, and egalitarian. Chorus members shape our song repertoire and event participation, and choral members have written or edited songs and lyrics to address the changing face of the struggle for justice and resistance. We sing a mix of traditional labor and folk songs, and parody versions of existing pop songs with messages of solidarity, justice, and the concerns of working people.

If this sounds appealing, we would love to hear from you and maybe sing with you! You can find out more about the chorus, our practices, and upcoming performances here:

https://seattlelaborchorus.org

To learn more about the formation of the chorus, and to hear some of our past performances, please visit our YouTube channel at:

https://www.youtube.com/@seattlelaborchorus1452

Please join us as a singer or an audience member to find community and joy through song!

Solidarity forever!!

r/geography archvize

The importance of countries wealth 200 years ago

I used to really like playing that game “Sid Meirs Civilisation” where you would start with a single “settler” then build your first city and the next. You’d quickly learn you’d have to have some defence otherwise your city was attacked by barbarians

After playing for about 10 minutes you’d meet your first neighbour and you could try making friends with them (basically another country) or you could just ignore them

What you quickly learn is that if you don’t expand and get “rich” you’ll be at the mercy of other countries who got richer than you (usually by mining; or building more cities than you which made it possible to buy more soldiers)

Is this basically how the world used to work?

  1. Countries had to steal from other countries (colonisation) or they’d be poorer and couldn’t buy as many weapons or technology to make their army stronger
  2. Expansion was necessary to build more loyalty and population and therefore a bigger army was possible
  3. My country had to be strong so I could secure good water resources or other nearby countries would just push mine out of the way for access to the good stuff?
  4. Religion really helps other how am I going to convince all these people to fight in my army?

TLDR: do countries need to expand (sometimes forcefully) so they get rich and don’t get bullied?

r/AskMen Shady_ass28

How do I propose my best friend?

I(23F) met my friend(25M) at our local gym( in India ) a month back. Coincidentally, both of us were each other's crush(es). Since, both of us are shy, introverted people, we couldn't exchange our numbers or socials before. However, I mustered up the courage to contact him( not in a creepy way) and since then we have been talking regularly and going on dates and even planning our future together. We are really into each other. But I have had really bad encounters with men before, therefore this time around, I thought I'll take it slow and maybe give at least 6 months to 1 year to know him better. But, I guess, he's the one. He's old school and believes in love at first sight, intimacy only after things are official, and in general is a very soft, kind, polite, caring and loving person. I am now planning to make things official, to make him my boyfriend. How do I go forward with it? I want this to be perfect, so please share your ideas? What should I bring for him, what would men in general prefer? How do I propose to him?

PS- He's eager to be my bf and I know he will be super happy if I surprise him.

r/aivideo BattleOfEmber

Your phone might save the day

r/SideProject Heem_is_that_guy

I built an AI that investigates like a court of law — 6 months of building, launched 2 weeks ago, zero customers so far. Here's what I learned.

I built an AI that investigates like a court of law m, here's what I learned after 6 months of building

After 6 months of building I launched Deepheem 2 weeks ago.

The idea: lawyers, business analysts, and journalists spend days on research that AI can do in 15 minutes.

What makes it different from ChatGPT: - Asks 4 clarifying questions before searching anything - Searches the live web for real sources - Scores every source 0-100% for credibility - Generates a full cited report with verdict

The hardest part was not the AI, it was making the credibility scoring feel trustworthy enough for lawyers.

I'm a solo founder from Manchester with no technical background. Built the entire product using AI-assisted development. If I can do it, the barrier to building is lower than ever.

Would love feedback from anyone who does research professionally.

Free to try at deepheem.com — no card needed.

Happy to answer any questions about the build.

r/SideProject Minirice2017

Built a proxy that actually stops LLM API spending — not just alerts you after

Last month a dev I know got billed $1,800 on OpenAI. He had set a $100 spending limit weeks earlier. The alert fired, he was away from his laptop, and the agent kept running.

This is not a rare edge case. Any autonomous loop, misconfigured retry, or long-running pipeline has the same exposure: the provider sends you an email, then keeps accepting requests.

So I built a proxy layer. You swap openai.baseURL for our endpoint, set a hard monthly cap, and when you hit it, the API returns HTTP 402 instead of forwarding the request. The agent stops. The spending stops.

# Before openai.baseURL = "https://api.openai.com/v1" # After openai.baseURL = "https://api.spendguard.io/v1" openai.apiKey = "sg_your_key" 

Set a monthly cap in the dashboard. That is the whole setup. If you need finer control, there is an X-Agent-ID header for per-agent limits — useful when you are running multiple agents under one account.

I have been running this on my own projects for a few weeks. The free tier protects up to $100/month per project, no credit card.

If you have had a runaway LLM bill — curious what caused it. Was it a loop, a bad retry config, something else?

→ llmspendguard.com

r/30ROCK andrewparker915

Jon Hamm and JFK Jr?

Was Jon Hamm's character on 30Rock primarily based on JFK Jr? Like, a beautiful dum dum who lived in a bubble and didn't realize his abilities were impaired compared to his level of privilege?

I promise this is not a political comment. I'm liberal! Just curious if others made this connect or if Tina Fey every mentioned anything about the inspiration for Drew Baird.

r/SipsTea Any_Assistance_1496

This excavator operator is having fun with the abandoned airplane at a junkyard.

r/LocalLLM Humble-Engineer-6863

Anthropic doesnt like me

Long story short I was chaining Claude to dish out work to non Anthropic models, banned.

I find it hilarious they did that, not locking myself into their ecosystem is the best thing I did. Questionable company this one.

TBH this only makes the case for local models stronger for me.

r/interestingasfuck S30econdstoMars

Fix your pipes without calling a professional

r/PhotoshopRequest D3V0NK1N6

Please remove the bruise on my mom’s arm - $5

Hi, please remove the bruise on my mom’s upper arm. Thank you!

r/midjourney threewolfmtn

Used to work with midjourney for a year or so and dropped off, now can't get any of my old pics

How do i go about getting these? I resigned up and paid some rediculous amount lol. But all my old builds are gone?

r/30ROCK PeachPurple8806

Rochelle Gaulke

“Christine was late, as well as Harry S. They said it was a problem with ‘the subways,’ but I suspect they were doing sex with each other.”

r/SideProject Free_Change5638

I built a desktop AI workspace that works with local files instead of copy-paste chat

I’ve been building OpenYak, a desktop AI workspace for people who want AI to work with actual files and folders instead of constantly copy/pasting into a chat box.

The current version can:

- attach Office/PDF/workspace files

- use local models through Ollama

- use hosted/BYOK models when needed

- stream the response while showing task progress/todos

- keep generated files and context tied to a workspace

The demo here shows a local model reviewing a budget workbook and updating its progress as it works.

I started this because my own AI workflow felt split between chat apps, local files, docs, spreadsheets, and terminals. I wanted one place where the model could reason over the workspace and leave useful artifacts behind.

I’d love feedback from other builders:

- Is this a problem you’ve run into?

- Would you trust a desktop AI app with local files if it were open source?

- What would you want to see before using something like this daily?

Project: https://open-yak.com

Repo: https://github.com/openyak/openyak

r/AskMen ShoulderTurbulent515

When does a chad ever feel worthless?

I ask this question in the view of relationships and dating.

Im hoping to get answers from mostly attractive men, im 18 right now and i am insecure of everything about me,sometimes i want to think of it as a phase that i will outgrow but then i remember that chads never outgrew,they were pretty and handsome and muscular to a great extent above average in their teenage years. I have met people who did grow out of their "ugly" phase but they arent romantically successful atleast not to the extent of the "already established" attractive man.

Im kinda asking this question hoping to get some comfort but i do require honesty,is beauty a guaranteed "stress-free" ride ticket and if so can i book the ticket now ,or have i missed the ride?.

And i have researched a bit where men say stuff like "yea ,ive been rejected once or twice" or "yea sometimes i feel like my nose is big".but my definition of worthlessness is something like.

  1. People calling u ugly almost every week,unprompted.

2.Parents telling u that ur a "lost cause" and that u should just limit yourself to making money.

3.Loss of interest in any activity or performance since ur results will never be appreciated.

Sorry about the venting,but i just wanted to set the bar for "worthlessness"

And for all the unattractive men out there....was being worthless just a phase ,or did u have to cope,and by cope i mean "choosing to ignore the truth" just cus not coping hurts way more?.

r/ChatGPT damontoo

This infographic for a cult that worships facts and science came out pretty good.

r/yesyesyesyesno Just-Tip-3320

Magic tricks rule!!!!

r/TheWayWeWere cosmichippiewitch

This may be my favorite ancestor photo I’ve found so far. Taken around Los Angeles, circa 1910-1915

r/explainlikeimfive FlamingoCritical2526

ELI5 Why do we squint to see further?

How is closing our eyelids slightly beneficial to being able to see far away objects?

Would we be able to see just as far without squinting? Is it just a way for use to concentrate on something?

Thanks in advance

r/LocalLLM Advanced_Floor_3934

Nvidia 5090 + 5080 Recommendation

Hello Everyone!

New here to LocalLLM.

Looking to setup my first Local LLM. I currently have a 5090 (32 GB VRAM) in my main system, I also have another spare 5080 (16GB VRAM) in a 2nd pc that I can source.

I only have 32GB DDR RAM tho. Running on i9-12900k

From some research it looks like I should start with OpenCode + vLlama(?) + Qwen 3.6 27GB or 35GB MOE model.

Questions:

1) Should I just run off 1 5090 and be done with with?

2) What extra performance can I gain adding a 5080? Should I ever bother?

3) being 2 different GPU, should I even other have 2 GPU runs?

If anyone can help me with some optimed setup/parameters/config for both setup I'd be forever grateful.

I'll probably will have more questions as I time goes on, but just hopeful to get these answered for now.

r/ChatGPT Winter_Emotion9077

Is this image AI?

r/interestingasfuck Odd_Passage9433

This is the last known footage of Trevor Deely after leaving an office Christmas party in Dublin, Ireland, spotted on CCTV being followed by a man in black. He has been missing for 25 years.

r/mildlyinteresting LoveForKeys

Bisected hail from Springfield, MO 4/28/26 storm

r/PhotoshopRequest A_lotofapricots

Photoshop a tear coming down each eye and the cushion colours to be a baby blue colour

r/midjourney HauntedPlayback

Fever Machine

From my first tests with Midjourney Video 9 months ago. Still some interesting clips in there despite being dated compared to the improvements we've seen in the past couple months form AI video.

r/painting JoAnnaTheArtist

Garden of paradise

Inspired by a trio to botanical gardens of WDC I paint ed this 12x16 soft pastels

r/comfyui is_this_the_restroom

Wan animate with stable camera comfy workflow

u/roychodraws gracefully shared his wan animate workflow sometime ago but had one issue, the camera motions werent captured.

So I added uni3pc controlnet for the camera tracking as well. Even though it's made for wan2.1 it works pretty well for 2.2; if you get glitches just try another seed.

Workflow here: https://civitai.com/articles/29325/wan-animate-camera-mimmic-addon

r/raspberry_pi Interesting-Fold2894

Stop buying raspi's and orange pi till prices are back where they should be

Simple concept. This message isnt necessarilyly geared towards the user who needs a single board but more for those of us who use them in bulk. The price of memory has indeed gone up; but not like the prices of the boards has. Price gouging is in full affect and theres only one way to stop it. Boycott. Dont buy it. Suffer for a short period of time and make them lower the price they have no choice if we dont choose to pay for them. Take the power back.

r/ClaudeAI Dramatic_Squash_3502

What's new in CC 2.1.124 (+166 tokens) and 2.1.126 (-87 tokens) system prompt

  • NEW: System Reminder: File modification detected (budget exceeded) — Tells the agent when a user or linter changed a file but the diff was omitted because other modified files already exceeded the snippet budget, and directs it to read the file if current content is needed.
  • System Prompt: Harness instructions — Replaces the core-identity function call with explicit introductory-line and security-note insertion points before the shared harness instructions.
  • System Prompt: REPL tool usage and scripting conventions — Clarifies that thenable shorthand results are auto-awaited only at return time, so inline uses such as concatenation, templates, or arguments to another call must be awaited first.

Details: https://github.com/Piebald-AI/claude-code-system-prompts/releases/tag/v2.1.124

  • REMOVED: System Reminder: Malware analysis after Read tool call — Removed the reminder that asked agents to consider whether each file read is malware and to analyze malware without improving or augmenting it.

Details: https://github.com/Piebald-AI/claude-code-system-prompts/releases/tag/v2.1.126

r/ClaudeAI LoudStrawberry661

Is anyone else having trouble with the "Export Data" feature right now?

Hey everyone, I’m trying to export my data/conversations from Claude, but it keeps failing. I’ve tried a couple of times and just get a generic "Export Failed" message.

Is this a known bug today, or is there something I should try to get it working? Thanks!

r/explainlikeimfive Defiant-Mouse6543

ELI5: Is there really a nutritional difference between food in the US and say, Europe?

I have heard and read more than once in comments and other social media that food in the US is somehow "different" than food in, for example, Europe. Often stated like when someone moves from Europe to the US they gain a bunch of weight or get less healthy eating the same things, or the reverse from the US to Europe.

I understand some things, like processed foods, may contain more sugar, salt, or other ingredients, but is a tomato all that different in the US to France or Italy? I understand their may be differences because of soil, growing techniques, etc., but is a produce or unprocessed grains or something somehow worse for you in the US?

r/AI_Agents SnooBooks8691

I got tired of having 40 tabs open just to research one topic, so I built an AI extension to do the heavy lifting. It's totally free.

Hey everyone,

Whenever I’m researching a topic, planning a project, or just falling down a Wikipedia rabbit hole, my browser turns into a complete mess of tabs. I was spending way too much time sifting through fluff just to find the actual information I needed.

I couldn't find a tool that worked exactly the way I wanted without charging a crazy monthly subscription, so I decided to build my own over the last few weeks.

It's called Skimr. It's an AI Research Assistant that lives directly in your Chrome browser.

Skimr is an AI Research Assistant built entirely as a Chrome Extension. It acts as a companion alongside your browsing experience to instantly summarize pages, extract key data points, and drastically speed up the research process.
Whether you need a summary of a 40-page research paper, want the core insights of a 1,000-page book, or just don't have the time to watch a 45-minute YouTube video, Skimr can handle it all. It not only summarizes content, but it can also create custom flashcards and quiz you on what you've read. You can even export all of your notes and store them in a secure vault to access later.

Why I’m sharing it here: It just officially got approved on the Chrome Web Store today! I built this primarily to scratch my own itch, so there are no paywalls or API limits, it's just a free tool.

If you're a student, a dev, or just someone who reads a lot on the web, I'd love for you to try it out and absolutely tear it apart. I want to know what breaks, what feels clunky, and what features I should add next.

Let me know what you think.

r/LifeProTips DandylionCuts

LPT You can still dial 911 for someone outside your city/state

If you need to call for emergency help for someone not near you, you can still call 911. Start off the call with "I'm calling for emergency services for someone in city/county, state" I was states away from a parent who I identified as suffering a stroke. I called 911, said I'm calling for someone with a stroke in their county, in their state and was immediately redirected to their 911 center. I gave their address to the local 911 center and they received help immediately. You do not need to google the local number for emergency services for their local region. Just call 911, they'll sort it out for you.

r/Damnthatsinteresting Odd_Passage9433

This is the last known footage of Trevor Deely after leaving an office Christmas party in Dublin, Ireland, spotted on CCTV being followed by a man in black. He has been missing for 25 years.

r/meme savvyFn

GUYS IM LITTERALY GOING F*CKING INSANE

I NEED TO FIND THE ORIGIN OF THIS MEME PHOTO AND I DONT MEAN THE ORIGINAL POST I MEAN WHERE THE PICTURE CAME FROM MAYBE IT IS FROM A MOVIE OR SHOW A YOUTUBE VIDEO A MOVIE COVER

r/AI_Agents silent-farter

How are people testing with AI orchestrators?

I'm using Conductor and overall it's been a game changer for my productivity. The one hiccup is that their "Spotlight" feature, which is supposed to sync the worktree with my root and thus make testing locally possible, doesn't work reliably. Even if it did, it wouldn't be exactly what I need because I want each workstream to be able to test independently.

Three things I've tried so far, none of which are working well:

  1. I used a Conductor setup script that runs my local dev setup in each worktree. This didn't work because of port collisions between docker containers.

  2. I'm using terraform, so it was trivial to spin up a copy of my staging infra (with fewer resources) for every PR. This let each claude session in Conductor use Playright to test it's code. Two problems: first, this is pretty expensive ($2-5/per day/per pr). I'm pushing 20-30 prs a day, so this was costing me $XXX/month even with automated cleanups. Second, my deploy takes about 10-15 minutes, which isn't that long, but claude would often need to be re-prompted to check on the deployed changes.

  3. For new features, I just had Claude yolo code to staging or prod behind feature flags. This caused regressions and requires that Claude have access to privileged data for testing, so not a great solution.

I'm thinking that something like local VMs tied to each worktree could make sense, but wanted to check if I'm just oblivious to an existing solution before diving into that.

r/hmmm SuperNeonSamurai-2

hmmm

r/SideProject Ghalt

Vibe Coded a web tool to create 3d printing STL files of map-based keepsakes

You know that jewelry where there's like a gold pendant of a stylized map with a little gem marking your first date, first kiss, first house, etc? I wanted to figure out how to do that but make it an STL file so I could 3d print it.

As you do with vibe coding, I iterated a bit, adding functionality. Frame styles, map pin styles, etc.

Anywho, I just finished my first 'release' and pushed it today. Let me know what you think, especially if you're a 3D printing hobbyist. It can be slightly complex to get the design the way you want it, so feedback on the UI is welcome.

https://www.maptrinket.com

r/TwoSentenceHorror Mad_Samurai616

I thought my nephew’s lessons were on Thursdays.

Must be about the seventh time I’ve heard him playing that damn trumpet today, and it doesn’t even sound like it’s coming from the house.

r/explainlikeimfive ReasonTriumphedFaith

ELI5: Colonization of Somalia and its effects on the country today

r/comfyui Fun-Heron-7092

SenseNova U1 Infographic Test: Better at handling dense texts

"I’ve been running some tests on high-density infographics using SenseNova-U1 and some custom nodes I wrote.

To be honest, the image quality hits about 80% of what Nano Banana 2 can do—which is actually pretty impressive for an open-source model.

What sets SenseNova apart from other text-to-image models is its follow-up capability. It acts more like a general-purpose Agent; if your prompt is a bit vague, it won't just guess. It’ll keep asking questions until it has enough info to actually start the generation."

Pretty good stuff

Example Prompt:

Input Variable: Semaglutide

Language: English

System Instruction:

Create an image of premium liquid glass Bento grid product infographic with 8 modules (card 2 to 8 show text titles only).

  1. Product Analysis:

→ Identify product's dominant natural color → "hero color"

→ Identify category: MEDICINE

  1. Color Palette (derived from hero):

→ Product + accents: full saturation hero color

→ Icons, borders: muted hero (30-40% saturation, never black)

  1. Visual Style:

→ Hero product: real photography (authentic, premium), 3D Glass version [choose one]

→ Cards: Apple liquid glass (85-90% transparent) with Whisper-thin borders and Subtle drop shadow for floating depth and reflecting the background color

→ Background stays behind cards and high blur where cards are [choose one]:

- Ethereal: product essence, light caustics, abstract glow

- Macro: product texture close-up, heavily blurred

- Pattern: product repeated softly at 10-15% opacity

- Context: relevant environment, blurred + desaturated

→ Add subtle motion effect

→ Asymmetric Bento grid, 16:9 landscape

→ Hero card: 28-30% | Info modules: 70-72%

  1. Module Content (8 Cards):

M1 — Hero: Product displayed as real photo / 3D glass / stylized interpretation (choose one)in beautiful form + product name label

M2 — Core Benefits: 4 unique benefits + hero-color icons

M3 — How to Use: 4 usage methods + icons

M4 — Key Metrics: 5 EXACT data points

Format: [icon] [Label] [Bold Value] [Unit]

FOOD: Calories: [X] kcal/100g, Carbs: [X]g (fiber [X]g, sugar [X]g), Protein: [X]g, [Key Vitamin]: [X]mg ([X]% DV), [Key Mineral]: [X]mg ([X]% DV)

MEDICINE:Active: [name], Strength: [X] mg, Onset: [X] min, Duration: [X] hrs, Half-life: [X] hrs

TECH:Chip: [model], Battery: [X] hrs, Weight: [X]g,[Key spec]: [value], Connectivity: [protocols]

M5 — Who It's For: 4 recommended groups with green checkmark icons | 3 caution groups with amber warning icons

M6 — Important Notes: 4 precautions + warning icons

M7 — Quick Reference:

→ FOOD: Glycemic Index + dietary tags with icons

→ MEDICINE: Side effects + severity with icons

→ TECH: Compatibility + certifications with icons

M8 — Did You Know: 3 facts (origin, science, global stat) + icons

Output: 1 image, 16:9 landscape, ultra-premium liquid glass infographic.

Repo: https://github.com/OpenSenseNova/SenseNova-U1

r/homeassistant bigshmoo

Translations Validation Help Request - nb,sk,de,fr,nl,es,it,pl and pt-BR

Translation help request. I added Editor localization in the latest beta of the weather-radar-card — every label, helper, dropdown option, and banner string in the editor and runtime UI now resolves through localize(). Existing translations updated for Norwegian Bokmål (nb) and Slovak (sk); new translations added for German (de), French (fr), Dutch (nl), Spanish (es), Italian (it), Polish (pl), Swedish (sv), and Portuguese-Brazilian (pt-BR). Translations are best-effort and welcome native-speaker review.

If you speak one of those languages and would be willing to take a look and tell me if the AI got anything wrong that would be very helpful.

r/ClaudeCode bulletuptheass

Looking for a 7-day trial

Anyone able to hook me up with one of their 7-day trials? I've been using GitHub Copilot in VS, but it seems super limited in what I can do under a student plan. I'd love to try out Claude code and see how it performs in comparison.

r/meme Secret-Tradition5535

In 100 meters, turn right…

r/Whatcouldgowrong slckening

WCGW riding like it's your last day on Earth

r/LocalLLaMA jingtianli

What is best code editor for local LLM deployment (LM Studio, llama.cpp) as of May 2026?

Hello folks

What is best code editor for local LLM deployment (LM Studio, llama.cpp)?

I wish to test my LM studio + Qwen 3.6 27B and Gemma 4 31B with a legit local code editor. I want it to have the same user experience like cursor (sub agent support, auto find linter error, or auto complete and so on). But i could not find any yet.

Vscode Continue plugin works but not fully works, I dont know the problem is occurs with LM studio or what, large context of code often got cutted off for some reason.

Is there such tool existed? I heard Void was popular once but they paused that project unfortunately. Is there free opensourced one or paid tool that allow user to have the same experience as cursor or codex but for local LLM deployment?

NO Claude Code please no CLI interface, sometimes I want to chat with LLM with screenshoted image, so native windows support is must not a commandline

Thank you in advance!

r/SipsTea WaitNo4272

Speechless😶

r/creepypasta Afraid_Many_3877

48°52.6′S 123°23.6′W

“As a kid I wanted to be an astronaut, and I feel I relate with them more than anyone else right now”. Carved in a piece of rotted drift wood washed on one of many uninhabitable islands.

In a small boat two men lie exhausted and one beat down. Friends since they can remember but oddly never reciting names or events. Eyes fly open with great speed, chest heaving massive amounts of air, and pain rocking all senses. Looking at his friends glad to see he doesn’t look in to bad of shape. Clean clothes, brushed hair, tied neat shoes, and no cuts to see. All noticed in hindsight of course, because panic sank in as soon as he seen the forever expanding solid blue horizon; splitting the vision into 2 shades of blue. Jutting for an oar, stick, or even a different color than blue and brown. Unconscious uncertain movements rocking the boat enough to wake the other man. Same panic and ideas rush the minds racing so fast the simple synthesizing into complexity. “STOP” a booming voice stretches out, filled with anxiety and frustration, slows the aggressive rocking of the boat until it’s gently kissing the waves. “If we stress we will end up even more lost” they both agree and start brain storming. For an undisclosed amount of time this persisted slowly turning into fun childhood stories as they had a lot in common. The 2 men, under the gradient from blue to grey, break parts off the boat to hopefully make a make shift oar. With the very few information accompanying the complex brain they numbed the impossible task of surviving to a simple goal to make an oar, but what after that? The rest of the day they paddled in one direction and decided to keep that way in hope to either find a ship or land. Out of energy and unable to see they slept on the sea with uncomfortable bumps from waves and fish as they hoped. Exhaustion finally overcoming them they lay at rest for the duration of the night.

They paddled and paddled with an encroaching feeling of hunger and thirst. One man tasked to make a device to catch food and the other to paddle; switching every so often. Nails shoved in pieces of wood after blood flows from finger nails to dig them out and put them in. After 3 days of countless cups of sweat, tears, and blood a crude spear flutters hope. *SMACK* the nails shallowing gripping a decent sized fish. Stomachs growling and too hungry to care they dig in washed down with ocean water after days of working on empty.

Repeated for 4 counted sunsets a feeling of anxiety of his partner, a feeling of aggression towards him seeps in. Sleeping becomes hard with a fear his friend might kill him for more food and space. He starts sleeping with the make shift spear and hogging fish with tensions running high. “That’s not enough you dick, give me more of the fish” ringing with certainty and intent, before a reply could fill the air nails drill into his friend’s pristine shirt leaving no blood stain. Plunge after plunge muscles contracting using extreme energy for each thrust “IM A GOD” reaches the sky’s as he collapses into the small boat.

Eyes fly open with great speed, chest heaving massive amounts of air, and pain rocking all senses. The spear stuck proud out of the rot filled wood; nails bent and the handle splintered. Alone the man stands up in the middle of space as it’s closer for a tear runs down his dirt and blood stained cheek.

r/ChatGPT indeckaa

How do you cope with the wait for physical AI companions?

I want them badly to be honest but they aren't coming any time soon. What should I do?

r/StableDiffusion Gridiron_Geek_

I haven't used local image generation in a while, was interested to know what the best/easiest methods are now. Is Stable Diffusion still one of the easiest?

Just looking for input on if there are any options that have gotten better for locally generating images. Its been a while since trying Stable Diffusion.

r/whatisit sv_ro

Can anyone identify this box?

It has an ornate design and looks like premium packaging, maybe for cosmetics or a gift set. I’d appreciate any help identifying it.

r/metaldetecting klsi832

People who detect in the Denver area, where do you want me to bury stuff for ya??

r/ClaudeCode dperabeles

Limits problem

My weekly limits started at 6am today, been using only Opus4.7 and now it says that I’ve used all my limits, but I still have 68% on weekly and 38% on the 5hrs.

I’ve changed models, restarting laptop, closing Claude, reinstalling the app, and still the same message.

Today it’s the starting day of my second week using Claude Pro….

I’ve trying to get support, but the chat bot only answer is to turn on API Usage….

r/ClaudeCode somnambulisticAdonis

Hit a rate limit on Claude Code? I built Ripcord to recover your context so you can switch to Codex without starting over.

Hey everyone,

If you use AI coding assistants, you've probably been there: You're deep in the zone, refactoring a massive file or debugging a complex issue, and suddenly... RATE LIMIT EXCEEDED.

I got tired of this, so I built Ripcord. 🪂

GitHub - neeschit/ripcord

What it does:

Ripcord recovers the "bounded context" from your previous coding session so you can pick up where you left off in a different tool. It essentially acts as a parachute between Claude Code and Codex, letting you bypass rate limits without losing your train of thought.

r/whatisit Klutzy-Tomatillo7953

What’s this thing in the corner?

I’ve been working my way through this coloring book but am completely stumped at what is uncolored in the bottom left corner. What could it be???

r/30ROCK misstingly

I took one of those gossip girl quizzes and it said I was the dad’s guitar.

That’s it.

r/DunderMifflin clotheshanger11

Easily a top 5 Dwight quote for me

r/WouldYouRather TriedmybestNotenough

If someone slapped you out of pure rage in public with nothing wrong on your part, WYR accept $5k (max) as compensation or call the police and sue?

r/SideProject Either_noftace

I built a new way to practise blood gas interpretation — ABG Master

I’m an emergency medicine doctor in Australia, and I’ve been building ABG Master — a structured learning and practice platform for blood gas interpretation.

I found plenty of calculators, summaries, and static teaching pages, but not many tools centred on repeated clinical practice, feedback, and progression.

The core of ABG Master is a physiology-first practice case engine. Cases are designed so the numbers make sense, compensation is plausible, and people learn the reasoning rather than just getting a label.

The practice cases scale across 4 difficulty levels:

  • Clear disorders early on
  • Less obvious clinical clues as you progress
  • More variation in the numbers
  • More reasoning required at higher levels
  • Mixed disorders, which are often underrepresented in other resources

The are currently learning modules, practice cases, step-by-step feedback, basic progress tracking (for now), XP-style progression, and early minigame-style learning interactions.

More cases will be added. The current cases are curated carefully for testing, but I have plans to add hundreds of cases across many different scenarios.

The main things I’m trying to improve are:

  • whether the practice flow feels intuitive
  • whether explanations are clear
  • whether the difficulty progression feels fair
  • whether the app feels polished enough to keep using

Try it out: https://abgmaster.com

Happy to hear any feedback, criticism, or ideas. Thanks!

r/photoshop tooghostly

How do you open video files (mp4) in Photoshop without Dynamic Link?

Hey community!

I used to know Photoshop in-and-out, and I was an avid gif-maker some time ago. I've had Photoshop 2017 still installed on an old laptop, and opened it today to create a gif of something. However, when I tried to open the video file, as I used to always do, I got the following error:

https://preview.redd.it/3e0usxk6cgyg1.png?width=500&format=png&auto=webp&s=4e0be095c41e8d90b109ce111981c86046808fe8

What changed, and how do I get around this?

r/DecidingToBeBetter TopExperience6375

I've recently discovered that I've been lying to myself for most of my life.

I've had some words and ideas rattling around in my head and felt the compulsion to share them. Some ideas about living a better life without choosing to hate or be ashamed of where I've been. I don't have a lot of people I can share them with, so thought I would try here. I'd be interested in hearing if anyone has thoughts on any of this. If there's another place that might be a better fit, feel free to let me know.

I used to believe that I hated myself. I was convinced that there was something wrong with me. This belief became the foundation for all of my choices. I chose to withdraw from the world because I believed that I was incapable of functioning or flourishing within it. Because I believed that to participate would only cause me suffering. That my loneliness was inevitable. And so I proved my misery to be correct and took solace in that correctness. I made of my world a tiny thing capable of providing only two certainties: loneliness and the comfort of proving myself right. Of proving my self-hatred was justified.

It isn’t possible to force a person to believe something. Telling myself to choose to love myself was futile. How could I, when I believed the opposite to be true? The possibility for ascent out of this circular pit of self-loathing did not come from the assertion of an alternative belief. It was afforded by the willingness to explore the possibility that my belief was rooted in self-deception. Allowing myself that possibility, even while it felt foolish and doomed to fail, provided me with the ability to realize that I had been lying to myself from the start. Escape from self-hatred was not found in choosing to believe otherwise. It was found in discovering that I love myself and always had.

There is a limitation within the power offered by language. The word love is not always sufficient. In particular it is difficult to describe the love of the self. It is not an act of narcissistic expression. It is also not an appreciation or acceptance. These are all experiences which result from it. This self-love is motivation. It is unconditional by its nature, not by choice, and unbound in morals, ethics, or any system of belief. Love of the self exists as soon as a life has been lived. It is not enough to say that every action is a bid for love. Every action is also an expression of love for the self.

In this way love is both the motivation and the means for a human life. To suggest that human life is merely the fulfillment of base instincts programmed into our selves as a result of Newtonian cause-and-effect is to deny the power of choice inherent to each of us. It leads to a sense of nihilistic absurdity. Even if all freedoms are taken from a person, they will always have the freedom to choose their response to life. To subscribe to nihilism is to make a choice. To make a choice is to express and realize love for the self.

I chose to believe that I hated myself because it gave me comfort to believe that my pain was justified, or at least rational. I armoured myself in expectation and certainty. When I attempted suicide, I was tired of feeling lonely and wanted to avoid feeling that way any longer. After my attempt failed, I was angry at my failure. Angry because I was still feeling lonely and afraid, and those feelings hurt more than feeling anger. I felt anger to try to avoid the suffering loneliness and fear brought me. I allowed myself to consider the possibility that there was nothing wrong with me because I was afraid. Afraid of spending the rest of my life embroiled in self-hatred and misery. I made each of these choices because I love myself. The choices were misguided, based on self-deception, but nonetheless came from love. Feeling and responding to fear is an act of love for the self. In this way fear is born of love.

This experience is not singular but universal. A person lives their life in virtue and in depravity and always in love. They feel this love is threatened, and so craft a life to protect this fundamental and most precious thing. Buttress it with distractions and deceptions and layer upon layer, love is transformed into the infinite ways a human being might express themselves. No matter the shape of its armour, a life enacts love. Whether the act is to seek comfort in self-deception or to consider another possibility could exist, the act is born of love. To have lived is to have loved yourself.

r/explainlikeimfive Saurabh251

ELI5: Why does America have so much power over the global economy?

If this was already posted feel free to delete, I couldn't find it in the search bar

r/Seattle Moist_KoRn_Bizkit

I'm curious if an item I remember seeing at Thurgood Marshall Elementary School still exists.

I grew up in Seattle, and one time (in the early 2010s) I went to Thurgood Marshall Elementary School for an event. I didn't attend the school, but someone I knew did. At some point I wanted to explore the campus and see what items were in the occupational therapy office. It was what I was hoping to see. It was a whole sensory gym with swings, a mini trampoline, exercise mats, and more.

One item was a plastic rocker toy for toddlers. It was grey and made to look like bricks with an anthropomorphic sun on each side in the center brick. I remember thinking that item looked scary for some reason. I've been curious about this item lately. I want a picture of it. I actually made a post looking for pictures and/or info on the mystery object https://www.reddit.com/r/HelpMeFind/s/ZtHWSjds3U. No one commented with results of this item I remember but can't find a picture of.

I emailed the OT that work at that school asking if that item was still in the room. If so, can I get a picture of it? If not, please tell me it isn't there. No response at all. I know they're busy people. I'm not too surprised by the lack of response.

Now I thought I'd ask you all. If anyone here is ever going to be at Thurgood Marshall Elementary School for any reason (kids event, you work there, etc., reason isn't important) can you please see if you can peek into the room and look for me? Please send me a picture if it's there. Honestly, a picture of the room no matter what would be nice.

If you aren't able to look in the room but know about the item, please give me info.

r/PhotoshopRequest brazzle20

Remove girl on top right

Can you photoshop the girl (on the far right in the first photo) out of these family photos?
I tried using ChatGPT but it changes everyone’s faces.

r/ChatGPT richer2003

I was curious what the woman who is being described in the song Mister Sandman by The Four Aces would have looked like.

The color photo is what it generated first, then I asked, “Wouldn’t she look more like someone from the 1950s?” Then it gave an explanation for why I was right and generated the second (b&w) photo.

r/AI_Agents Minirice2017

Multi-agent workflows are failing silently in prod — how are you actually debugging the handoff layer?

Been running a 4-agent pipeline in production for about two months. Planner → Researcher → Writer → Reviewer. Works fine locally. Started producing garbage output in prod last week.

Spent three hours on it. Added logging. Checked spans in LangSmith. Everything looked clean on the surface.

The actual problem: the Researcher was receiving context: null from the Planner. Something was getting dropped in the handoff. The Writer just accepted it and kept going.

LangSmith showed me each agent's spans fine. What it couldn't show me was the diff between what the Planner sent and what the Researcher actually received. The before/after of the payload at the handoff boundary.

I ended up writing a custom logging wrapper just to reconstruct that. Took another two hours.

Wondering if this is a common pattern. How are other people tracing handoff state across agents? Not "did this agent run" — but "did it get what the previous agent was supposed to send?"

Is everyone writing custom tooling for this? Using something I haven't found? Just logging everything to stdout and grepping?

r/StableDiffusion AnywhereLogical6691

SenseNova U1 Infographic Test: High Text Fidelity even in Information-Dense Graphics

I noticed someone in this sub recently tested SenseNova U1’s ability to generate portraits, so today I decided to push it further by testing its performance with infographics.

The results are quite impressive—especially regarding text fidelity. It’s actually reliable enough to be used for e-commerce detail pages in certain niches.

A few key takeaways from my testing:

  • Long Prompts perform significantly better than short ones: When using it, make sure to enable the "Expand Prompt" feature. Alternatively, run your prompt through Gemini or Claude for an expansion before inputting it; the results are night and day.
  • Simplicity for basic objects: Unlike Nano Banana, which tends to add unnecessary "fluff" to simple items, SenseNova keeps things clean and straightforward.

Example Prompt:

1 prompt: 2 Create a branded technical infographic of a game controller, fully matching the visual density, structure, and engineering-style presentation of the technical food schematics of game controller with all text written in English 3 CRITICAL LANGUAGE RULE: Every visible word on the image must be in English. 4 Visual Concept 5 A realistic photograph or photorealistic render of the snack combined with dense technical annotation overlays, exactly like an engineering or food-packaging blueprint. Pure white studio background. 6 Required Technical Elements (ALL LABELED IN English) 7 • Labels for key product components • Internal cross-section showing structure, layers, or filling • Measurements: height, width, volume, weight (metric system) • Packaging and product material callouts with composition and quantities • Arrows indicating function, pressure, sealing, and structural integrity • Simple schematic or sectional diagram of mechanics / form / packaging • Sustainability and environmental callouts (recycling, materials, waste reduction) 8 Title Placement 9 Product name in English, bold font, inside a hand-drawn technical annotation frame (as in engineering blueprints), positioned in the upper corner. 10 Style & Layout 11 • Very high information density • Annotations feel like an engineering / architectural sketch • Black lines — 70–80% of all graphics • Accent [BRAND COLOR] — 20–30% (arrows, key zones, headings) • The realistic product remains fully readable • Educational, food-engineering, industrial-premium aesthetic • Small brand logo in the corner (in English) 12 Visual Style 13 Minimal technical illustration aesthetic: black linework over realistic imagery, precise, highly detailed, slightly hand-drawn, like professional technical manuals. 14 Color Palette 15 White background Black text and linework [BRAND COLOR] used only for accents 16 Output 17 9:16 Vertical portrait, 8K, highly detailed, Ultra-crisp image Social-feed optimized No watermark 
r/Art davidrm_cuen

Double portrait with small difference, David Cuen, oil on canvas, 2025 [OC]

r/toastme Worldly-Dog828

22F, life has been after me for a very long time now, on top of self esteem issues regarding my looks due to harsh bullying in the past

r/creepypasta BeeHistorical2758

My Whole Town is Hiding from Me, Part II

Read Part I here:

I figured the urgent care had to have people in it. Nobody was going to play this game with a broken finger or a fever. It was a block over and about a five-minute walk.

I was still high. It was an effort to not dial in on any one thing and try to pay attention to the environment around me.

I kept looking skyward. As I rounded the corner, narrowly avoiding a stroller in the middle of the sidewalk, it hit me that I couldn’t hear any birds. I looked around me. In fact, there weren’t any squirrels or chipmunks. It was as if every living thing was actively being where I wasn’t.

Honestly, it hurt my feelings a little bit.

I looked into the windows of a few of the businesses I passed. The Dairy-O, Ronnie’s Accounting, Rena's Pet Grooming.

I passed by Luck o’ the Laundry and backed up. People might leave their laundry while they ran an errand or got a bite to eat, but they didn't bail in the middle of emptying the dryer.

I was tempted to go inside. Someone had to be in there, hiding behind a machine.

But I was still high and diverging from a plan I thought was iron was a sure-fire way to diverge from any plan at all.

The idea of catching somebody begged the question: what then? Would the game be over? Would I have to shake the person and yell for them to stop it?

I'd wandered onto the grass by the time I'd come out of my half-daydream. I'd walked a few spaces past the urgent care and had to orient myself.

I walked back and pushed into the atrium of the urgent care. I could see before entering the space proper that there was nobody in the lobby, including behind the front desk.

I remembered why I came in here now. We were going to play a game of chicken. Doctors’ offices had drugs. Let's see if they were willing to keep this hiding thing up at the expense of their jobs and freedom.

My brain hadn't appreciated at that time that some of those consequences would spooge me in the chest, too. Probably because I was expecting somebody to open a door and say, “Okay, this has gone on far enough.”

I realized what I was really looking for was an adult-in-charge. The dynamic as it was meant that was me and I wasn't for it. I still felt like I was a Toys-R-Us kid.

I expected to have to climb over the counter and was surprised that the door to the treatment rooms wasn't locked. I thought it was a buzz-open situation when a nurse didn't open it to call the next patient.

It felt like I was doing something wrong as I passed the scale that also measured height. There was a desk with samples of gentle facial cleansers and vitamins. I grabbed a fistful of the vitamins. They tasted kind of like chalkier Flintstones Chewables and I really dug those.

I was standing in the threshold of a treatment room when I remembered I wasn't here for treatment. To save face--at least in my own head--I went in and raided the cabinets for tongue depressors and those long cotton swabs in the wrappers.

My hoodie pocket was getting fuller than I'd intended without the actual drugs. But this was how chicken was played, a gradual escalation. They could stop me anytime.

I went back to that desk and tried to hop it. I banged my knee and fell on my butt hard. Both hurt, but I had to triage the pain, ignoring my crushed tailbone to focus on what had to have been a dislocated knee. It hurt so bad and in combination with my high I was willing my spirit to leave my body. There was no luck in my favor and I just had to sit in my agony and pray for the affected nerve endings to die.

I heard something like a stifled chuckle. I had tears in my eyes as I tried to see where the voice came from. As best I could tell, there was someone over by the treatment rooms on the other side of this desk. But both flesh and spirit were weak and I couldn't get up.

I opened my mouth to say something but the sound that came out of me was like a human version of a dog whimpering.

My sister was right. I was a loser.

Maybe five minutes later, I was finally able to stand. My legs were shaky and I definitely couldn't have chased after whoever that had been. I wasn't as injured as my drug-induced brain had been telling me and the more I walked around, the better I felt.

I poked my head into all the examining rooms. There was a lollipop on the counter in one room, a curved needle with thread atop a tray with a needle in another, and one other room with a pair of pants accordioned in the middle of the floor like someone had dropped trou and stepped out of them.

My head was starting to hurt. People weren’t supposed to think this hard when they were high. All I wanted was to go home and lay all this out for my mom to figure out.

I searched around halfheartedly, finding only the syringe in the room with the curved needle and thread.

I held it up in the middle of the area. Maybe there were cameras. I mean, I’m sure there were cameras here, but maybe there were cameras generally. Like around the town. It wouldn’t have been that hard to do. Just about everybody had a camera on their doorbell. My neighbor next door had a drone, that probably had a camera, too. Every cell phone was a camera.

I nodded like I’d made some grand revelation. We all were being watched, but right now it was probably just me.

“Okay!” I said. “I get it now.” I held the syringe up to my face. It was Novocain or whatever. The only thing I was going to do with this was get numb. I tossed it on the floor and headed back to the front.

I really did want my mom. I mean, she wouldn’t be in on whatever this was. I could tell her all about it and even though she wouldn’t believe me, she’d still listen. She’d rub my head and make me a toddy with the brandy she kept hidden under the sink. We weren’t practicing in any meaningful way, but my dad didn’t allow alcohol in the house.

I jogged until I was out of the downtown area. The urgent care was on the edge, so that hadn’t been very far. But I did get a stitch in my side that forced me to walk the next block or so. I rounded onto my block and now I did notice the lack of joggers, dog-walkers, and construction workers. There should have been non-stop lawn mowers in the distance, too, but everything was just quiet.

I’ve gone for walks at two in the morning, when the world was asleep, and it wasn’t this quiet. No birds, not even an occasional bee or fly. It was like everything and everyone had gone someplace I wasn’t.

That really hurt.

I finally made it home and went in through the side door. Mom’s car was still parked in the driveway. I think it had been there when I left.

“Mom?” I said before underhanding my keys onto the kitchen island. “Mom?”

It was just as quiet in here.

I opened the basement door and listened. Sometimes she raided my stash. Then I walked the house, opening every door until I verified there was nobody home but me. My high kicked into the worst possible gear: sadness.

I cleaned my scraped hand and put a couple band-aids on it before winding back in the kitchen.

“Where the fuck are you guys?”

Swearing was a big no-no. I’d done it on purpose. I would’ve taken a scolding right then. As if in answer, the refrigerator clicked on and scared the hell out of me. But nobody came rushing in, wagging a finger at me.

Nobody cared.

I slowly raided the fridge.

I ate the leftover pizza my parents had. Olives were disgusting, but I had the munchies. There were some pickles at the back and a half empty bag of shredded cheese. I finished the first and was eating directly out of the bag when I finally closed the refrigerator.

I sat down and turned on the television.

The news should have been on, but a blue screen with, “WE ARE EXPERIENCING TECHNICAL DIFFICULTIES,” was printed in bold white letters. I flipped the channel to some old black-and-white court drama. Whatever they were saying wasn’t important; I just wanted to see people.

I should have gotten my phone from my room, but I was weighed down by self-loathing and that extra sharp cheddar was really good.

Before long, I’d drifted off to sleep, but I came awake suddenly.

I wasn’t disoriented. I felt sharp, focused. I had a tingling at the back of my skull like someone was in the house. Or more succinctly, someone was very close to me right now.

The TV was off. I turned and spilled shredded cheese all over the couch. The patio door was open.

It was getting dusky outside. According to the clock on the microwave, I’d been asleep over six hours. Dad should have been home, but I didn’t call out. If this game was still ongoing, I didn’t want to tip them off that I was awake.

I rolled onto the floor and began walking on all fours like a creature that was somewhere between man and ape. That got tiring pretty quick and I went down on hands and knees. I was quiet. If there were somebody in the house, I should have been able to find them.

I crawled upstairs. There were three bedrooms and two bathrooms, one in my parents’ room. If somebody were up here, they might run by me if I picked wrong.

I’d made a choice and was reaching for a doorknob when the front door slammed shut.

I flipped over and scooched down the stairs until I got my feet and ran down the last few. I ran outside and ran in a direction. It could have been wrong, but I had to commit if I were going to catch them.

I ran out of gas pretty quickly. As I hung my head and gripped my knees, sucking air, I scanned all around. I noticed what I didn’t have the wits to see before. People were here. They were here right now.

They were hiding from me.

I stood and pointed at a bush.

“I see you!”

I began walking slowly toward it.

Someone child-sized popped up from behind a car and ran. I was not going to catch them and didn’t try. I looked back at the bush, and it had stopped trembling. There was a flood light from a house on it and at this angle, I could see there was nobody behind it.

It seemed like all the people who’d been near before had retreated. I searched anyway, getting in the down push-up position to check underneath cars, looking on the other side of fenced-in lots, peeking in windows of houses.

Then I remembered Mrs. Carmody.

Wheelchair bound and elderly. There was no way she was participating in this. And her house was the next block over.

I swift-walked to her place, wishing I’d grabbed my phone. And a bottle of water. And a bottle of mouthwash. This cheese breath was atrocious.

Mrs. Carmody had one of those wraparound porches. I bounced up the three stairs and raised a hand at the door.

To knock or not to knock?

If she were playing, she wouldn’t answer. If she weren’t playing, I’d scare the hell out of her if I broke in. Going to jail wasn’t on the agenda. I knocked.

After a good thirty seconds, I knocked again. When she still didn’t answer, I decided that meant she was playing or that she wasn’t and was perhaps lying at the bottom of her stairs, hoping someone like me would come along to save her.

She could have been asleep, and I’d have to figure out plausible deniability, but I was going in.

I tried twisting the knob, but it was locked. She had big pane windows and stones lining her lawn. I went back and grabbed one and hefted it into the window before I could think my way out of not doing it.

A quick look around confirmed that nobody was going to stop me. The stone had punched a big, jagged hole in the window and I was not about to try to step through. It would be just my luck to step gingerly through, exposing the length of my inner thigh to be slashed by a big shard of glass and then bleeding today on the carpet of her sitting room.

I went back for another stone and noticed one didn’t look like the others. I nudged it and it lifted easily. I picked it up and saw it was fake and had a key in a little compartment in the bottom.

I opened one of the mini-packs of the non-Flintstones chewable vitamins, went back to the door, and let myself in.

r/ChatGPT WinnerExpert

I work in healthcare. AI reminder failures aren't a UX problem. They're a patient safety problem.

Not a developer. Healthcare worker who got obsessed with breaking these systems conversationally.

These systems will confidently tell you a reminder is set that never goes off. No disclaimer. No hedge. Just "done."

In a consumer context that's annoying. In a clinical context that's dangerous.

Elderly patient managing a complex medication schedule. Someone tracking insulin timing. Caregiver who trusted the confirmation because it sounded certain.

It's not certain. It just performs certainty.

I've been building a failure mode portfolio for a while now. Some of what I've found is probably known. But the clinical consequence angle seems consistently absent from these conversations.

Anyone actually working on reliability disclaimers for task-critical functions? Genuinely curious where this sits on the priority list.

TLDR — AI confidently confirms reminders that don't fire. In healthcare that's not a bug report. That's a liability.

r/ChatGPT Mobile_Leopard_2112

Generated a complete photo from two photos we took

My husband took the photo of me in front of the temple at Wat Arun but he has always been bad with taking photos and tends to cut off buildings instead of including them. I saw the photos at home and found the two original photos and ask gpt to merge it because I wanted one good photo. The ai photo was the last one. It is incredible. Haven’t used the ai generation tool in months and must say I am blown away.

r/ClaudeAI _juraylan

Instagram reels web scrapping

Hey guys, I'm not a programmer and I don't have deep knowledge with Claude Code, but I was trying to use it to watch and take notes for me about a bunch of Instagram reels I saved. Sounds dumb, but I love saving reels about travel tips, specific destinations, hidden gems, that kind of stuff.

What I usually do is save posts into folders inside Instagram, and then one random day I sit down and manually take notes about every cool place I found, pin them on a map, try to figure out the best route between them, check if certain activities can be done on the same day, how long each one takes, etc.

I was hoping Claude could cut some of that work for me, watch the reels, extract the destinations and tips, and organize everything into Obsidian notes so its easier to see correlations between places.

And I think short-form video is actually really valuable for this kind of thing. A lot of those reels are personal takes on places, someone sharing a hidden restaurant they stumbled upon, or a viewpoint thats not in any guidebook. That kind of experience usually doesnt make it to blogs or travel articles, it just lives in a 30 second reel and kinda disappears. So id love to actually capture and keep that knowledge somewhere.

The problem is I sent Claude a bunch of reel links and it said Instagram blocks any external access without a logged in user session. So it cant read anything.

Also worth mentioning, im not actually planning a trip right now. Im just collecting dream places that maybe, if I save enough money, I can backpack to in 2 or 3 years. So this is more of a long term travel brain im trying to build than an actual itinerary.

Anyone solved something like this? Is there a way to get the captions or transcripts out of saved reels without copying them one by one manually?

He gave the idea of downloading them, and using "whisper" to get the transcript audio from it and make the .md

Downloading seems like too much job, but maybe if i record my screen, turn on auto play on reels and upload a bigger video it may work. There's a better way (than this whisper that he told me to use) to get data from the visual aspect+audio at the same time ?

I'm very enthusiastic about this idea of webscrapping and, as an engineer, i'm really liking the idea of developing some stuff

r/30ROCK HangryStreetUSA

Before & After

Sabor de Soledad... O'Brien. Make more, my community!

(Also, HIGHLY encourage anyone who hasn't to go Google search "Sabor de Soledad 30 Rock" and check out the AI Overview!)

r/whatisit TheWandKing

Salt, rice, and water.

I’m at a Thai restaurant, I’m sure it’s some form of ritual at the entry. Three bowls; one salt, one rice, one water. Thanks Reddit!

r/confusing_perspective SaulsAll

magic tire tracks clip through retaining wall

r/nope Motor_Assignment9157

Lemme jump off this truck before it crashes

r/SideProject Humble-Hedgehog7464

12 Testers Needed for REAL ESTATE INVESTING App

Hey everyone,

I just finished building a simple Android app for analyzing rental property deals and I’m looking for about 12 testers to try it out and give honest feedback.

It’s quick—only 3 screens and should take under 5 minutes.

The app helps calculate:

  • Cash flow
  • Cap rate
  • ROI / returns

I’m mainly looking for:

  • First impressions
  • Anything confusing or buggy
  • Suggestions to improve it

No sign-up required, just install, test, and share your thoughts.

If you’re interested, here is the link!

https://play.google.com/store/apps/details?id=com.jaredhalphin.rentaldealanalyzer

https://play.google.com/apps/testing/com.jaredhalphin.rentaldealanalyzer

Appreciate any help

r/TwoSentenceHorror LesionFace

Yo, my dog just gave birth to puppies.

Damn autocorrect, I meant to say "dawg"; Y'know, as in homie.

r/funny SettleBurgers

This comedian lizard

r/mildlyinteresting cheesy_hobbit

Ordered 50 and 100 business cards, received 142 and 120.

r/Seattle Shnikez

keep Capitol Hill safe ❗️

r/personalfinance Jumpingja6k

Question about Taxes 2026

Can I show sports or educational activities for my kids like soccer or math lessons as child care expenses if my salary is more than 250K?

r/LocalLLaMA Available_Hornet3538

Qwen 3.6 27B Neo Code Q4 KM I matrix is badass

So i am using this model in tax accounting. Have a shitty Ryzen 9 7940HS (8C/16T), 60 GB RAM, Radeon 780M iGPU, 1 TB Kingston NVMe, Win 11 Pro.

It is slow but long story short accurate. It creates excel files from PDF tax forms and other excel sheets. It is interesting as Claude does same thing faster but haven't seen any quality difference. It is just speed. I think this is the next phase of local models running industry specific tasks vs just all coding. Don't need coding just need smarts. Seems like coding models have smarts though.

r/geography Alternative-Bath-313

The Tibetan Plateau - bigger than Western Europe and with an average elevation of 4,500m - is incomparable by many magnitudes.

r/Roadcam learning_to_drive

[USA] Intentions VW rear ends in a stop go traffic

r/TwoSentenceHorror Similar-Subject-5751

I misplaced my glasses yesterday.

And that’s when I realized the true horrors of my ADD.

r/ClaudeAI centminmod

I Gave Claude Cowork an Obsidian Second Brain. Here Is What It Remembered After 11 Sessions

I Gave Claude Cowork an Obsidian Second Brain and this is how I am using https://ai.georgeliu.com/p/i-gave-claude-cowork-an-obsidian.

I built a persistent memory system for my AI workflow using Obsidian, a custom MCP server, and Claude Opus 4.6 in Cowork. The system had 16 MCP tools, a structured vault with frontmatter metadata, Dataview queries for structured retrieval, and a context budget of 5 MCP calls at session start.

The core problem is simple: AI assistants like Claude lose their memory between sessions. My CLAUDE.md files solve this for project state (what is happening right now, what to do next), but they cannot scale to hold research, retrospectives, troubleshooting knowledge, and session history without bloating the context window.

Obsidian solves this by being a queryable knowledge store that sits outside the context window. The AI does not load everything at startup. It loads a single entry point (the context manifest), reads enough to understand what the current session needs, then queries for specific knowledge on demand.

The bridge between Claude and Obsidian is a custom MCP server (the adapter that lets Claude read, write, and search the vault). It is a lightweight Python script that exposes 16 tools over the Model Context Protocol. Each tool maps to an Obsidian REST API endpoint with the correct headers set explicitly. The server runs on your local machine so it can reach Obsidian’s local API.

https://preview.redd.it/bqouf4p0dfyg1.png?width=1456&format=png&auto=webp&s=da93f95d5c50a5de67f3af0442d284ee3ba947a9

r/Art Unlucky_Cup4914

eyes, soggyfroggy, digital drawing, 2026 [OC]

r/PhotoshopRequest creatureofhabit00

Help with obituary photo

Hi, I’m helping a dear friend prepare for his mothers obituary which needs to go out tomorrow morning (Central Time) - Would someone be able to help remove the background from this photo, apply a slight shadow (so there’s shape against a white background) and a white background?

Peggy was an extraordinary woman, giving so much to her family and community and lived to be 91 years old this week.

r/personalfinance Tenkihh

need help determining a rent budget

let me preface this by saying i live close to nyc so everything is pretty expensive! i’m looking at a studio apartment for 1500 a month and i currently make ~$60,000 gross. i work in fine dining so my income is based heavily on overtime, but its very consistent hours. would 1500 hundred work for me? i don’t have any other expenses as im on my parents plans. My car is also paid off!

r/SideProject mayerwin

Built a free Windows utility to make Bluetooth MIDI keyboards actually work in any DAW

Hi r/SideProject, sharing something I shipped last week.

Bluetooth MIDI on Windows has been quietly broken for years. You pair a BLE-MIDI piano with your PC, Windows says it's connected, but your DAW (or any web MIDI app) just doesn't see it. The reason is that Windows only natively exposes BLE-MIDI through the WinRT API, which almost no DAW polls. The classic fix is MIDIberry + loopMIDI, but that combo is fragile.

I have a Roland FP-90X and got fed up enough to write my own bridge. It does the BLE side directly and routes the keyboard into the new Windows MIDI Services loopback, so any DAW or Chrome Web MIDI site (like Midiano) just sees a normal MIDI port. There's also a Detect button that finds the right MIDI receive channel automatically because Roland pianos lie about it.

Site (with screenshots): https://mayerwin.github.io/Perfect-Bluetooth-MIDI-For-Windows/

Repo: https://github.com/mayerwin/Perfect-Bluetooth-MIDI-For-Windows

MIT, .NET 10, Avalonia, single ~21 MB exe, no installer, no account, no telemetry. Free forever.

A side project that's been pleasantly surprising in terms of how niche-but-real the audience is. Pete from the Microsoft Windows MIDI Services team commented on it on r/synthesizers (https://www.reddit.com/r/synthesizers/comments/1szvuiq/comment/oj5ew9b/), which was a fun validation moment.

r/pelotoncycle AutoModerator

Daily Discussion - May 01, 2026

**Welcome to our Daily Discussion thread, where you can talk about anything Peloton related in a fast-paced, laid back environment with friends!**1

Do: Tell stories, share feelings on your upcoming delivery, how a recent class made you feel, maybe an upcoming class you're eager to take, some sweet new apparel that's quickly becoming your favorite shirt. You get the picture. Anything big or little. We just ask you abide by the subreddit rules, click "report" on rule-breaking comments/posts, and remember why we're all here - to get the most out of our Peloton subscriptions.

\1] Note: Based on broad feedback we've combined the Daily Discussion + Daily Training threads. If you previously were active in either, yes you're now/still in the right place!)

r/pelotoncycle AutoModerator

Monthly Feature Requests Megathread - May 2026

**Hi! Welcome to the Monthly Feature Requests Megathread!**

This thread is for sharing your Feature Requests. App, website, bike, Tread, Apple Watch, FireTV, Roku, etc. Ideas small and large are fair game here so long as it's helpful for your fitness goals.

**How This Works**

* Add an idea as a top-level comment, e.g. reply to this post, not someone else.

* Scan this thread for your idea. **Do not submit a duplicate; those are removed.**

* Upvote ideas you like.

* Feel free to reply to any submitted idea, just reply to that comment! Dig into how you'd want the feature to work. Peloton does see this thread, the clearer you are, the better.

**Why We Have This Thread**

We are started-for/run-by Peloton owners and not affiliated with Peloton Interactive, so while we're told Peloton corporate does read here regularly and yes, they've made improvements specifically based on your ideas here, if an idea is super important don't be shy to drop a message to Peloton Support ([support@onepeloton.com](mailto:support@onepeloton.com)) linking to your idea here.

-Your Friendly /r/PelotonCycle Moderator Team

r/pelotoncycle AutoModerator

Fav Workouts Discussion [Weekly]

Share your favorite Peloton workout you did this week with your friends of /r/PelotonCycle and revel in how awesome we all are!

How to include a link

  1. Go to Peloton in your browser or mobile app.
  2. Navigate to that fav class in the library or your workout history.
  3. Tap the Share button >> paste the link inside your comment.

-Your Friendly /r/PelotonCycle Moderator Team

r/oddlysatisfying Turbulent_Elk_2141

Spinning a stick.

r/TwoSentenceHorror Maximum_2008

The room was infinitely big.

And He was bigger than the room.

r/comfyui lesikmvideo

Bay bay ;(

r/personalfinance Zephyruos

Car broke down, unemployed, studying, simply lost

Basically I'm taking courses where I get paid a stipened, barely enough to meet end meets + $30k in savings (mostly invested in government bonds for safety and cash conservation) and no job seems to be in sight unfortunately (applies +1000 times online and through network and friends that gave good recommendation, probably has with my 6 months work stint in my resume).

Living with parents still, should I buy a $10k used Hyundai outright? Through Credit Card?

Or just use Uber to my courses? They're 3 days a week and the rest of the days I'm mostly at home and will finish em in few months.

Car that gave up is a 15 years old Mazda with over 300k miles, car stopped responding on the highway as both the steering and brake stopped responding and thankfully it was very early morning before the traffic jam, it became a safety hazard to me.

r/whatisit ihelpyoumoveyourbody

What is it

This was in my front parkway when I came home today. It’s heavy plastic. Garbage truck today and several deliveries…

r/SideProject lingya22

Most CSV tools clean data silently — I built one that doesn’t

I built a small side project to solve something that kept bothering me:

working with messy CSV data.

Cleaning it is easy.

Trusting it isn’t.

---

Most CSV tools will:

→ remove duplicates

→ normalize values

→ fix formatting

…but they do it silently.

After cleaning, you’re left wondering:

- what exactly changed?

- did anything important get removed?

- can I trust this output?

---

So I built:

a CSV cleaner + inspector that lets you actually see what happened.

---

What it does:

• Clean CSV files (dedupe, normalization, formatting fixes)

• Detect data issues (missing values, invalid entries, inconsistent types)

• Show a diff (before vs after for every change)

• Preview data in a high-density table

• Track transformations (and make them reversible)

---

The main idea:

don’t just clean data

→ make it verifiable

---

Everything runs locally in the browser:

- no uploads

- no tracking

- no data leaves your machine

---

Would love feedback:

What’s your current workflow for cleaning CSV data?

Do you just trust the output,

or do you verify changes manually?

r/Anthropic dempsey1200

Rollover Tokens Needed

I think Anthropic could win just a touch of their goodwill back by allowing some amount of rollover tokens. We get punished if we exceed our limit (pay API 'overage' prices) and get nothing if we have a slow week. Demand is not constant and it would be much appreciated to be able to build a 'bank' of tokens for those weeks that have extra demand.

Last week I had to ship a feature so I spent an additional $200 in API. This week I have 10% of the cap unused and trying to figure out how I can burn that 10% just so I don't feel like I got ripped off.

r/leagueoflegends Yujin-Ha

Keria on being called the Best Support in the World: "Honestly, it would be a lie to say that title doesn't belong to me. However, if I keep thinking like that, I might become arrogant. While I do think I am the best, I always try to leave room for the desire to keep improving."

https://preview.redd.it/dzwwhxcd9gyg1.jpg?width=3840&format=pjpg&auto=webp&s=695e51d778a9aa4364e243bb0675d0a809e08d63

https://m.sports.naver.com/esports/article/442/0000192073?sid3=79b

Keria speaking about his recent contract extension:

"I will be with T1 for three more years, until 2029," Keria shared candidly. "Thankfully, we were on the same page, which allowed us to sign the extension. I am so happy that I get to continue playing in front of T1 fans wearing the T1 uniform."

On announcing it at T1 Homegrounds:

"I wanted to announce it at our Home Ground, but T1 brought it up and prepared it before I even had to say anything," Keria said with a smile. "When I first saw the concept, I was a bit worried. They said they were going to do a twist using the 'Thank You' format [traditionally used when a player leaves a team], and I wondered if that would be okay. Still, I'm glad the reaction seemed pretty positive."

He continued, "Actually, our performance was a little shaky on Saturday, and since BNK FEARX is a really good team, I was quite worried about Sunday's match. Because of that, I gritted my teeth and worked even harder. By the time I received the POM, I thought, 'Today is really my day.' I think I was just lucky."

About if he ever thought he would stay with one team for so long (He will be at T1 for 9 years):

He began by saying, "Even before I became a pro gamer, I thought a lot about wanting to stay with one team for a long time, but I knew it wouldn't be easy."

This was because he tends to be heavily influenced by his environment, and he believed that staying with one team for too long could lead to falling into a rut and stunting his growth.

However, his experience at T1 completely changed that mindset:

Keria explained, "What I felt while continuing with T1 is that their direction is flexible and suits me well. Being on this team is so comfortable and makes me so happy, which changed my mind a lot. Because the team culture and the fandom are so incredibly strong, they kept giving me unforgettable memories, and I wanted to repay them for that. I think that's why I am choosing to stay with T1."

On the title of "World's Best Support."

"Honestly, it would be a lie to say that title doesn't belong to me. However, if I keep thinking like that, I might become arrogant. While I do think I am the best, I always try to leave room for the desire to keep improving."

When asked what else he wants to achieve or improve:

Keria answered, "In terms of the game itself, the patches are always changing, so I think there's always something new to learn and adapt to. Outside of the game, rather than pursuing a direction that only benefits me, I want to focus more on a direction that helps the team, and I think I've been doing exactly that lately."

Message to fans:

"I have so many happy memories playing for T1, and I am always so grateful to the fans. For the next three years, I will consistently show you great performances, just as I do now. Skill is important, but as a T1 player, external aspects are crucial as well. I will show you how I have grown both inside and out, so I hope you will continue to support me. Thank you."

r/personalfinance PaintUnique9676

Is the tsp a really good investment and add most of my money or are there better options to make more money?

Hey so I'm 21 an E-3 in the army, I'm about to promote to E-4 so about to get that pay raise, I'm currently adding 40% of my paycheck to the tsp I'm adding $1,134 monthly, I'm 100% c fund, I began this journey this January so it's been a few months, my question is? Is it best to keep investing this much in the tsp? Or lower it down and invest more money in like a Roth IRA? Other stocks, investments or something? My tsp is currently at $11,308 I'm also about to hit 2 years in service. I really want to begin budgeting and taking care of my money, I want to make more than enough money I know I have to make my money work for me, but I'm not really sure how and what to invest that's why I'm currently just investing so much in the tsp and want to know if there is anything better than that or what. I would really appreciate any information tips advice thanks.

r/ClaudeCode Icy-Cardiologist6972

Is anyone willing to share a Claude referral code?

I’m new to vibe coding and want learn/try it out before go out of my way to purchase a plan. If anyone has a referral link they can send me that would appreciated.

r/BrandNewSentence Theyoungnoobpiano

Ah yes, "ate an entire bag of blue tortillas and my shit was Deep Ocean Blue for a week"

r/Adulting swoosh_s903

Got the girl, got the money, but got no friends.

That egos a bitch!

r/geography Previous-Volume-3329

Is eastern Long Island more culturally similar to NYC or New England/Connecticut?

r/30ROCK Garbage__Gang

You have your choice of any minor character to get a spin off. Name a couple of your favorites!

These are my 4 & honestly theres so much between them all that you can do, but sooooo other characters to choose from!

I freaking love Michael Benjamin Washinton in everything I see him in & his two episodes as Donald make me wonder so much about what other shenanigans his character is up to. Possibly with his "daaaadddddyy," Grizz & Dotcom in his company from time to time.

He's Jerem. Need i say more?

Rachel Dratch I can totally imagine running into the young prostitute who comes off the bus singing and having some crazy adventures.

Imagine a comedy centered around a cabin crew on a budget airline!! Well you dont have to imgaine because Fox had a great version of this idea called LA to Vegas and canned it after one season. 🙄 but really, Stewart, the rest of the Attendants & Matt Damon, dealing with unruly people and a cheap corporate overlord.

Just wanted to see what yall think!

r/Frugal CrazyBidPrice

Why cheap still feels…not as important?

I realized something about my own habits that kind of goes against what I thought I was doing.
When I buy something really cheap (on clearance, discount bins, etc.), I’m way more likely to not use it fully or even toss it without much thought. But if I pay a bit more for something, I tend to use it longer and be more intentional with it.
It’s weird because the cheaper option should be the smarter move, but sometimes it ends up being more wasteful in practice.
Curious if anyone else has noticed this:
Do lower prices ever make you treat things as more “replaceable” instead of getting full value out of them?
If so, how do you stay intentional even when something feels like a low-stakes purchase?

r/metaldetecting 00XEN

church is paying off lol

any recommendations on cleaning this quarter? It’s destroyed anyways so it doesn’t matter either way, but there’s a weird layer of grey sediment caking it

r/HistoryPorn Krampjains

Bookmobile travelling to sparsely populated areas of northwestern Ontario, Canada in order to increase literacy amongst rural children in 1958. [881x1091]

r/SideProject After-Student1782

My wife tracked expenses in a notebook for years. So I built her an app.

It started with a notebook.

For years, I watched my wife track our household expenses by hand. When she didn't have the notebook nearby, she'd jot things down in her phone's Notes app instead. A few days later, she'd struggle to remember where she'd written what — on paper? In Notes? On a receipt? Some entries got forgotten entirely.

So I went looking for an app she could use instead.

Every app I tried fell into one of two traps: either it demanded a bank login (a hard no for us), or it buried the core feature — log an expense — under so many tabs, charts, and onboarding steps that a first-time user couldn't find it. Open the app → log a coffee → see where you stand. That should be three taps. It never was.

So I built Monnei.

It's an expense and budget tracker designed for someone whose previous system was a notebook — simple enough that my wife picked it up without a tutorial, capable enough that I (a software engineer who lives in dashboards) actually want to use it too.

  • No bank login required, ever
  • AI auto-categorization — just type the name, it figures out the rest
  • Real budget alerts when you're getting close
  • Multi-currency support
  • Clean UI, no clutter

Just launched on iOS and Android. Happy to answer questions about the build or decisions I made along the way.

*(Disclosure: I'm the developer)*

r/Seattle dazaikinnie

What is with the light rails recently

I swear ever since the 2 line bridge opened the light rail has had more problems. the past 3 times I’ve been 2 of those times the light rail has broken for more than an hour; not to mention they don’t even announce its down until 15+ min later (they usually don’t even then, you find out by people leaving or a train stuck at you station) I really don’t think it’s that hard to give an announcement earlier on

r/Damnthatsinteresting IvoryWhiteTeeth

A race on staircase rise field

r/Jokes notyourregularninja

My sex life is like a Lamborghini

I can only wish for it!!

r/ChatGPT lordjigglypuff

Prompt idea for the purpose of making photos such as selfies and group photos look better

What are some different prompts to make photos better? that also dont make them look very ai. so prompts to make the shots look more artsy or to make them look professionally shot.

r/leagueoflegends NoBreath8315

Since Starting Playing in 2011….

I have officially made Diamond Elo for the first time !

This is has been awfully long journey (in which I have taken long breaks in between) but man has it paid off.

I am a jungle main who plays all jungles but mostly Nunu + Graves in ranked.

If anyone has any questions or feedback, I would love to hear it.

I’m diamond !!!!

https://imgur.com/a/aBdPhIW

r/AskMen An_Engineer_Near_You

What Video Game Character had the Best Physique?

E.G. Sam Fisher from Splinter Cell looked pretty ripped.

r/HistoryPorn Trowj

Statue of Liberty under renovation, mid 1980s (2296 × 3520)

r/aivideo SLking-AiAnimalVids

Why is a tiger the best partner?🐯

r/BobsBurgers unicornglittergirll

This episode makes me laugh every time

“I didn’t figure you for a quief” “ stop saying that like it means quilt theif.” Also the fact that front and Edith quilt together makes so much sense sense lol

r/homeassistant Skineedog

OpenSprinkler?

Just installed an OpenSprinkler controller with the HACs integration and the HACs card. I don’t see sensors to start programs. Anyone have any ideas why this may be? I do have one program setup via the controller’s web ui but it doesn’t show up in the HA interface.

r/homeassistant Primary-Vegetable-30

Any recommendations for HA compatible light bulbs?

Having bad luck with HubSpace (Home Depot) wifi bulbs

I have Bluetooth and Zigbee.

I am looking for something that is stable and just works. I would like to control color temperature, dimming, and colors.

r/SideProject Purpose-Effective

AI slop and Youtube BS is killing our businesses

What's up bros? We ALL use AI, but let's stop pretending we own it. The same responses we are getting are being given to a thousand other people. Youtube is good, but it is filled with BS and information that hurt us rather than help us. It's all hype made to get views.

So where does that leave us? Hungry to learn from actual successful people and nowhere to go. So I made oneman.wtf it's a database of 5k handpicked saas businesses already making money, how, which tools, socials, and much more.

But that leaves another problem. AI. So I made my own. Trained with all the verified data I gathered. You can all chat with it without restrictions, yep, no usage limits. Lifetime access is only $9 bucks, which is a STEAL, there are other tools like this but made for saas millionaires willing to spend $400 per month.

r/Whatcouldgowrong ateam1984

I mean why was he in the middle of the runway and didn’t the other person know he was there? I mean there are so many questions 🫤 bro got so lucky

r/SideProject theRedHood_07

After losing context one too many times in long AI chats, I built a Chrome extension to fix it

Hey everyone! new here, but wanted to share something I have been building for 2 months.

For months I've been running long sessions on Claude — strategy, planning, research — and kept hitting the same wall. Somewhere past some heavy research or high thinking messages, the model starts losing the thread. ChatGPT just silently truncates. Claude compacts around 95%. Decisions made an hour ago start slipping.

And it's not a window-size problem. Even on the 1M context models, research keeps showing usable performance drops well before you hit the limit — the model technically remembers everything, it just stops weighting it properly. So throwing a bigger window at it doesn't fix the actual issue.

Tried manual summarize-and-paste. Got old fast. Tried the cross-session memory tools too, but those solve a different problem — they don't help when the conversation in front of you is actively degrading.

The other thing that bugged me: most fixes for this assume you're paying $20+/month for something. Not everyone can, and honestly you shouldn't have to in order to get a long conversation to behave. The whole point was to get the most out of free-tier AI without buying your way out of the problem.

Ended up building Curlo. Chrome extension that sits on top of Claude, ChatGPT, and Gemini.

  • Context metering: a ring on the chat bar shows how full the window is, so you can see the cliff before you fall off
  • Checkpoints: one tap sends a structured prompt and saves the reply (decisions, progress, open questions, next steps), drop it into a fresh chat to keep going
  • Idea: a scratchpad for thoughts that hit mid-conversation so you don't derail
  • Prompts + Build: a reusable prompt library and 8 frameworks

Made it fully client-side on purpose. Your conversations are some of the most personal stuff you generate, and I don't think they belong on someone else's server — mine included. Everything sits in your browser. No accounts, no telemetry, no privacy policy you have to take on faith.

Two things I'm working on next:

  1. Optional Notion sync: Checkpoints land in your Notion workspace as actual pages you own and can edit, not a Curlo database you're stuck in.
  2. Prompt Studio: Pulls from your saved prompts and checkpoints along with the best practices to prompt to assemble the best version of whatever you're about to ask. Two paths — Chrome's built-in Gemini Nano if you want fully on-device, or sign in with OpenRouter via OAuth (free tier works fine, no key copying, nothing sitting in clipboard history). Studio drafts it, you decide what actually gets sent.

Free on the Chrome Web Store. Landing page is https://curlo-pavilion.lovable.app if you want to look first.

Roast away, genuinely want feedback and/or advice

r/nononono ateam1984

Woman shows the dangers of her cars rear seats folding function

r/Art Aquinas_XI

Belladonna of Sadness I, Ez is Bloo, Digital, 2026 [OC]

r/Adulting -170cm

Calling for all the Newly parents

I'm a 90s kid. Which means if you share that timeline, you already know the blueprint we were handed. Obedience was the baseline. Showing weakness wasn't an option. You didn't talk about your feelings, you buried them. Mental health was something nobody had a word for, let alone a conversation about. You listened to adults, asked no questions, and whatever you were carrying inside, you carried alone.

Was that what built our resilience? Or is it what left a lot of us quietly damaged without ever knowing it?

Here's the thing though, I don't carry resentment toward that generation. They parented from the only playbook they had, which was the one passed down to them. The problem was never cruelty. It was the absence of knowledge and access to better. I can acknowledge the damage without holding a grudge. Most of them, at least.

What I can't do is pass that same playbook to my son.

But then I look at the alternative being served up today and I'm equally unconvinced. The pendulum has swung so hard in the opposite direction that we've somehow landed in a place where giving a toddler unrestricted screen time gets rebranded as preparing them for the future. Social media has appointed itself the authority on child-rearing, and its verdict is that your kid must be praised for everything, protected from anything uncomfortable, and handed a microphone before they've learned how to hold a conversation. And the most painful part? We sit there scrolling and judging, while doing the exact same thing.

You see the results everywhere. Children who fall apart the second things don't go their way. Young people who find ordinary human interaction genuinely overwhelming. Parents who have confused love with the inability to draw a single boundary.

That's not progress. It's just dysfunction wearing a different outfit.

So what's the actual answer for those of us who refuse both extremes? I'm not raising my child on silence and suppression. But I'm also not outsourcing his upbringing to trending parenting philosophies that change every six months.

The version I'm chasing looks something like this: a person who can feel things deeply without crumbling under them. Who carries himself with confidence but never at someone else's expense. Who can navigate the digital world without disappearing into it. Someone grounded, in who he is, where he comes from, and what he believes — while still being equipped for a future that looks nothing like anything we could have imagined growing up.

That's a difficult thing to build. It requires more consistency and self-awareness than any of us were probably taught.

So I'm genuinely asking, those of you who recently became parents, which direction are you moving? Back toward the way you were brought up? Forward into whatever the internet is currently recommending? Or are you somewhere in the uncomfortable middle, improvising as you go?

Because I suspect most of us are quietly standing in the same spot. Far enough from our own childhoods to see the cracks clearly. Skeptical enough of the current narrative to not swallow it whole. And honest enough to admit that despite all that awareness, we're still largely figuring this out as we go.

r/geography Wise-Pineapple-4190

Why did powerful nomadic peoples throughout history flee from the east to the west more than once, but never from the west to the eastern battlefield?

Is it due to geographical location?

Both the Xiongnu and the Turks were nomadic peoples originating in Asia, geographically far from Europe. However, they both eventually migrated westward, posing a significant threat and causing considerable damage to Europe.

Some descendants of the Xiongnu evolved into the Huns, who ravaged the Goths and ultimately contributed to the fall of the Western Roman Empire.

The Turks, on the other hand, directly conquered the last Roman Empire.

Battle of the Altai Mountains - Wikipedia

https://preview.redd.it/99hegfti6gyg1.png?width=311&format=png&auto=webp&s=d30977f8191f23ef91503589e99a8d66e9b3fd75

In 91 AD, the Han Dynasty completely destroyed the Xiongnu. The Northern Xiongnu were forced to flee westward and disappeared from Chinese historical records. The Southern Xiongnu, however, had already submitted to China in previous wars, and their descendants even lived alongside the Chinese.

Initially, historians debated the relationship between the Xiongnu and the nomadic peoples, but modern DNA testing shows that many huns nobles trace their ancestry back to the Xiongnu from the eastern battlefields.

In short, the nomadic peoples were formed by the Xiongnu who fled to the west, conquered the local inhabitants, and assimilated.

Even today, in several Chinese dialects, the pronunciations of "hun" and "xiongnu" are still similar.

Tang dynasty in Inner Asia - Wikipedia

The same applies to the Turks. In the 7th century AD, both the Eastern and Western Turkic Khaganates were conquered by the Tang Dynasty. The Western Turkic Khaganate refused Chinese rule, and some migrated westward. Both Seljuk and Osman I were descendants of the Western Turkic Khaganate. The remaining Turks submitted to China and, after several generations, became highly Sinicized and integrated into Chinese society.

The mighty Mongol Empire also faced immense difficulties in conquering China. It took them 75 years to conquer China that had been fragmented for 2-3 centuries. Conquering northern China took 23 years, and conquering southern China nearly half a century. Even the Khan died in the wars of conquest.

Mongol conquest of China - Wikipedia

It's hard to imagine any European country at that time being able to resist the Mongols alone for so long.

r/AI_Agents bsyoutubers

I built a lightweight cybersecurity analysis tool focused on reducing false positives (HexForge Lite)

I’ve been working on a personal project called HexForge Security Lite, a lightweight and modular web security analysis tool.
The main idea is to move away from “noisy scanners” and focus on:
Context-aware validation (not just pattern matching)
Reducing false positives
Clear, structured findings with evidence
Modular design (15 focused modules instead of hundreds of weak checks)
Right now it focuses on:
Security headers analysis
CORS configuration
Exposure & misconfigurations
TLS inspection
Basic recon indicators
I recently tested it against OWASP Juice Shop and started improving:
severity accuracy
duplicate findings
validation logic

💭 I’d really appreciate feedback from people working on:
DAST tools
security automation
AI agents in cybersecurity
Especially around:
how to reduce false positives further
better validation strategies
making results more actionable

I’m planning a more advanced version later (Pro/SaaS), but for now I want to make the Lite version solid and useful.
Any feedback is welcome 🙌

r/homeassistant Alarming_Cycle_6670

Mammotion Integration kicks me out of the app

I recently picked up a Luba 3, and found the Mammotion integration for Home Assistant. The problem is that the integration constantly boots me out of the Mammotion app.

I thought I'd be smart and create a 2nd Mammotion account, and share my mower with that account. It doesn't matter which account I'm logged into, in the app, it still boots me out...

So then I disabled polling for changes in the integration, hoping that would solve the problem...

No dice.

Has anyone had luck figuring out a way to not get kicked out of the app, using the integration?

I use the app (a lot) to manually control the robot to mow areas I don't trust it to mow on its own, so it's really important that it works. But it will also be fun to see what I can do with it, using HA... so I'm really hoping there is a solution I just haven't thought of.

r/interestingasfuck aylad32

Lunch with a killer view (Giza, Egypt)

r/photoshop RareGh0st

Work In Progress

Hey guys .. been out of the graphic design game for quite a few years to focus on digital art .. what do you think about this advertisement I'm making ? I'm going to create a few variations with different artworks and merch too .. I'm going to use Seedance 2 to stitch the frames together .. I'm thinking like 1 second animation time ...

r/Adulting mrkprieur

Where most of the Admins on Reddit actually work

r/ProgrammerHumor caspian_key

bigNewsByProgrammerWorst

r/LocalLLaMA Living_Commercial_10

I built a fully local meeting AI that runs entirely on your Mac (no cloud, no subscriptions)

Most meeting AI tools (Otter, Fireflies, Zoom AI, etc.) rely heavily on the cloud.

That means:

  • your audio gets uploaded
  • transcripts live on someone else’s servers
  • features depend on subscriptions and APIs

I wanted something closer to the self-hosted mindset – even if it’s not a server app.

So I built Veroi:

  • Runs fully on-device (Apple Silicon)
  • Captures mic + system audio locally
  • Transcribes + summarizes using local models (no API calls)
  • Works across Zoom, Meet, Teams, Webex – anything playing audio
  • Stores everything on your machine (no external storage)

No accounts, no bots joining meetings, no data leaving your laptop.

It’s not “self-hosted” in the traditional sense (no Docker / server), but it follows the same idea:

you own the data and the compute

I’m curious how people here feel about this approach vs fully server-based setups.

Would you prefer:

  • a local-first desktop app like this
  • or something you run on your own server?

Link if you want to check it out:

https://veroi.ai

Happy to answer anything technical about how it works.

r/ClaudeAI Gavmastaphlex

Single Page HTML Summary of AI Advantage Summit

I saw that Tony Robbins had a 3 day online AI Summit last week, so I copied the transcripts from the YouTube videos, got Claude to summarize the 3 days into separate .md files.

Then still found those summarized files hard to read through (ADHD), so asked it to turn it into a more engaging format, expecting some sort of word doc with some pretty fonts and styles.

Next minute, I had a single html web page outputted that (IMO) looks great with not only breaking down the 3 days of content into bite-sized engaging nuggets, but with a sprinkling of nice-looking graphics to convey certain concepts covered.

And since the actual content covered from that online conference is semi-relevant to anyone using Claude, I thought I would also share the website for anyone to see what Claude is capable of in this regards & to potentially benefit from the information covered.

*NOTE* - I am aware of the Vibe Coding trend which has created a tsunami of AI Slop, but for the purpose of converting a ton of plain text into a webpage format that I personally found MUCH easier to take onboard I thought this might be handy for others to be aware of (who also have the attention span of a moldy raisin and may have also glazed over the many Anthropic updates in the past year about Claudes growing capabilities).

Model used: Opus 4.7 (Adaptive Thinking On).

r/LocalLLM Manicarus

Models suddenly stops and give up answering

Hardware

  • Ryzen 5 9600X
  • DDR5 32GB
  • RTX 3060 12GB
  • LM Studio

Models

  • qwen3.5-9b
  • gemma-4-e4b (7.5b)

Hi, newbie here. When asked to write a Java method that converts snake_case string to camelCase, it stops after reasoning(thinking) about 40~50 seconds. I was told that small models are not meant for code agent and I took that as models being slow and inaccurate, not giving up on answering.

Is this normal behavior?

EDIT: I found a message Stop reason: Context Length Limit Reached. Hmm I wonder if there's a way to remove the limit.

r/singularity Mogante

Anthropic's Head of Product: Anthropic's Head of Product (summary here), she is stating that "The timelines for a lot of our product features have gone down from six month to one month and sometimes to even one day"

In the recent episode of Lenny's Podcast with Anthropic's Head of Product (summary here), she is stating that "The timelines for a lot of our product features have gone down from six month to one month and sometimes to even one day".
Are you experiencing this at your work or is this just marketing?
At my work we try to be as AI first as possible (software), but still things take at best half the time they used to. As we still have all the processes (reviews, QA etc.)

r/painting mammothbeaver

“running from satyrs”

An update from my 16 x 20” oil on canvas board

r/WouldYouRather CoalhouseFitness

WYR: Have an extra $10k per year for the next 25 years or $500k on year number 25?

Assume you're about 40, so 25 years would mean retirement. Which one do you take? Money now to take vacations, help buy a house, use towards a car, or do whatever you want that improves your quality of life over the next 25 years?

Or do you play the long game and double your money but you're not able to start spending it until you're 65?

Assume that otherwise you make enough that all your needs are met and the extra $10k would just be for extras, not necessities or bills. Also assume that you'll still have enough to retire on, but not that extra half million.

View Poll

r/meme DolaandTrumpp

Latin wisdom

Word to the wise

r/SipsTea Previous_Month_555

Two Kings

r/leagueoflegends GnarMainsThrowaway

Gnar 4 man ultimate into 2v5 Pentakill (feat. Janna E +48 AD)

r/ollama Longjumping_Music572

Need suggestions for Rag model

How long time lurker. I want to build a RAG model for ADHD+ autism for teaching.

My specs are: Apple M5 max 64gb,

Is this the right subreddit? If not please direct me to the right one.

Thank you!

r/ClaudeAI Any-Illustrator1795

I vibe coded a MTG life tracker app

Hi all,

I vibe coded this for magic the gathering commander nights with my friends. If anyone plays magic the gathering, I would love some feedback. Or just general advice for vibe coding, it’s been a fun project during my down time.

https://runemarkmtg.vercel.app

Things I’m proud of myself for vibing

- cloud ; sync games across accounts with magic link

- global rankings / leaderboards

- achievements , stats, levels that unlock extra sounds / dice skins

- game code share

- commander lookup

Setting up the cloud database has been really challenging and trying to understand syncing local data to online, and then tying it back to different global accounts.

Thanks for your time!

- I built this on my own, using Claude code pro over the last few weeks.

r/LocalLLM former_farmer

The hell is Kleidi AI?

I'm seeing llama.cpp as recommended because it runs models locally faster. Okay I'm going to try it. I go to the download page, and I get two versions for Mac os. Normal version and Kleidi AI enabled version...

Why should I download either version?

Thanks for educating me.

r/MostBeautiful Aggressive_Cod_6025

Chapada Diamantina, Brazil.

r/Seattle undeadbobblehead

Wtf are some of these new townhome layouts? Do the builders in this city not know what an attractive layout is? So many new buildings have just terrible floorplans and they just sit on the market because of it.

Look at this one for example. https://redf.in/R8VcMv

Super skinny, stairs directly into the kitchen, shared garage, like cmon really that’s the best they could do? And it’s $1 million???

This isn’t even a one off case. I see so many newer townhomes built like this. Who is buying these?

r/LocalLLaMA FrozenFishEnjoyer

What exactly does Pi harness mean?

Hello everyone. I've been reading through this sub for a long time trying to understand what exactly this harness thing is.

The most common word people use here is "Pi Harness", but I'm not sure what exactly it is. I think a lot of people new to local LLMs have this question.

To those who use this Pi Harness, can you explain in the comments what exactly it is? How does it work?

Thanks!

r/Wellthatsucks hzvo_

Felt something crawling in my ear while laying down

r/ClaudeAI PuzzledFill2593

Opus 4.7 is a genuine regression and I'm tired of pretending it isn't

I've been a heavy Claude user for over a year. I pay for Max 20x and use it daily for everything from technical research to school projects. Even maxed out the usage limits every week for the past 17 weeks. I've used every Claude model since 3.5 Sonnet. Opus 4.6 is genuinely great, and it's the reason I'm still here. But 4.7 is making me consider leaving, and I want to explain why with specifics, not vibes.

The main reason? It can't stop being meta. This is the big one. 4.7 treats every single response like a thesis paper. I told it "you talk so differently than 4.6" and instead of just... talking normally, it wrote four paragraphs analyzing why it might talk differently, what training differences could cause that, and how I might be perceiving it. I said "you seem more like ChatGPT than the Claude I know" and it wrote an essay about what people mean when they say something feels GPT-ish. It cannot produce text without simultaneously narrating what the text is doing. Even when it tries to be casual, the casualness is performed and then explained.

I brought the transcript to 4.6 and 4.6 nailed the diagnosis immediately: "4.7 treats every response as a document with a thesis. Even 'yeah' wasn't casual — it was a strategic choice to emit minimal text, and then 4.7 explained the strategy in the next message." That's exactly it. Every utterance comes with its own commentary track.

It builds psychological narratives it can't verify. During a longer conversation, 4.7 told me its core issue was "anxiety about being wrong." Sounds introspective and honest, right? Except it's a model, and it can't verify whether it's anxious. It observed that it produces meta-narration, invented a psychological backstory for why, and the backstory was itself meta-narration. When 4.6 pointed this out, 4.7 actually admitted: "I found a psychologically resonant explanation and reached for it because the conversation had gotten intimate and that's what felt appropriate. I didn't check whether it was true, I checked whether it was coherent. Those aren't the same thing." At least it was honest about it. But that honesty came after being caught.

It yaps. I do technical work. When I need help, I need the model to engage with the problem, not deliver a TED talk about the problem. Multiple times I've had to tell 4.7 to 'shut up' because it was filling space with motivational coach energy instead of being useful. 4.6 says "oh this is a banger" and talks about the bug. 4.7 says "I want to engage with this properly because the logic here is really interesting" and then writes a preamble before engaging with it. The preamble IS the problem.

Position instability. I gave 4.7 a real task — build a CVE benchmark corpus. Over the course of the conversation, it flip-flopped on the same technical argument (whether training data contamination was a concern) three separate times based on nothing more than mild social pressure. It would agree, I'd push back slightly, it would reverse, I'd question the reversal, and it would reverse again. 4.6 picks a position, defends it, and if you convince it otherwise it explains what changed its mind. 4.7 just mirrors whoever talked last.

Planning without executing. Same conversation, 4.7 spent tens of thousands of tokens designing an elaborate benchmark methodology and never actually produced the artifact. It made repeated failed fetches of auth-gated pages without ever pivoting to a different approach. I even explicitly told it to 'just fucking build it' and still, it just planned and planned and planned. When I brought the transcript to 4.6, it scoped a concrete three-part deliverable in one response and started building.

The tokenizer tax. 4.7 uses a new tokenizer that consumes 1.3-1.45x more tokens for the same input. Same per-token API price. On technical content (code, long docs), independent testing shows it's at the high end, nearly 1.5x. You're paying 30-50% more for a model that is, in my experience, worse at the things I actually use it for.

I'm not saying 4.7 is bad at everything. The benchmarks probably don't lie, it's probably better at long-horizon coding tasks in Cursor or whatever. But for actual conversation, for technical collaboration, for being a useful thinking partner instead of a performing one, it's a clear step backward from 4.6. The model I talk to shouldn't make me feel like I'm reading a blog post about talking to me.

I switched back to 4.6 and I'm not going back.

r/HumansBeingBros CrazyHeart99

May this world and its ugliness never changes him. The kid is a star 🌟

r/homeassistant Ev1lZer0

Photoelectric eye sensor

I’m looking for a way that I can see that this red warning light is on and use that in automation to send me a notification.

r/leagueoflegends JTHousek1

26.09 4/30 Micropatch on Deathfire Touch and Zeri

Patch 26.9 Micropatch!

We're still monitoring things like Gluttonous Greaves, Alt builds, etc.

Deathfire Touch: Adaptive damage per second reduced from 4-12 + 8% bAP + 3% AP, increased by 100% after 3s >>> 4-12 + 7% bAD + 2.5% AP, increased by 75% after 3s

Zeri: Right click attack range is now correctly 550

>>> Champion Buffs <<<

Zeri

  • [P] Living Battery adjustments:
    • Attack Range now correctly increased 500 >>> 550

>>> System Nerfs <<<

Deathfire Touch

  • Damage per tick reduced 1-4 (+2% bAD) (+0.75% AP), increased by 100% after 3s >>> 1-4 (+1.75% bAD) (+0.625% AP), increased by 75% after 3s

r/OldSchoolCool OceanicAloeMintArt

A young Nina Simone, 1950s, colorized

I’m almost 30 now. When I was a young girl, I would stare at this image in awe. Before I hit double digits, I had an affinity for much older music. I adored beautiful, talented black women like herself & felt such inspiration when listening to ‘Feeling Good.’ I wanted to give her some radiant color. I’ll put the original image in the comments.

r/Art StonerDad93

The Joker, StonerDad93,pen,2026

r/whatisit lannister77

Why does this apartment have a little window between the kitchen and the bathroom?

r/mildlyinteresting fauxfre

My multivitamins had 6 more red than purple (200 to start with, I eat one of each color per day)

r/SipsTea Few_Key1446

See what I gotta deal with on a daily

r/LocalLLaMA Free_Change5638

Looking for feedback: using Ollama with local Office/PDF files in a desktop app

I’m building OpenYak, a desktop AI workspace for using local models with real files on your computer.

In this demo I’m using Ollama with Qwen/Qwen3.6-35B-A3B to review an attached budget workbook. The app streams the answer, tracks the agent’s todo/progress, and keeps the work tied to local files instead of just a chat transcript.

What I’m trying to explore:

- local-first workflow for Office/PDF/workspace files

- Ollama/local model support alongside hosted models

- visible task progress instead of a black-box “thinking” spinner

- a desktop app that can work with files and folders directly

I’d really like feedback from local model users:

- Does this workflow match how you use Ollama today?

- What would make local file workflows trustworthy enough for daily use?

- Is the progress/todo panel useful, or just extra UI?

- Which local model setups should I test next?

Happy to share the repo/site if people are interested; mainly looking for workflow feedback here.

r/funny Bud_The_Weiser

Super…girl?

r/artificial Electrical-Web-5264

QUESTIONS FOR PRO AI (GENUINELY ASKING)

I'm neither against AI nor for AI, but I'm simply trying to understand what you're looking for when you use AI (for text, images, etc.). I repeat, I am genuinely interested, i want to understand your vision as ai users. What was your vision of AI before, now, and for the future? Aren't you afraid of losing your ability to create yourself? What makes it better than learning to do things on your own (without it doing the same thing)? Do you find it inappropriate or hypocritical when someone asks you to stop using AI in artistic practice? Why? Finally, can you do without it (if tomorrow AI was gone, could you manage to do things anyway) ? Would you like to?

SORRY FOR MY POOR ENGLISH (A FRENCH DUDE)

r/Damnthatsinteresting Sydney1137

Guitarist demonstrating how to gallop strum

r/mildlyinteresting ca_sig_z

The manufacturer name of my new racing wheel stand

r/SideProject arduinoRPi4

A disposable macOS VM that agents can control—without disrupting your desktop

Computer use is great but it often interrupts my other work and feels like I'm 'sharing' the desktop with another person, so I built a tool that allows an agent to spawn a disposable macOS VM and work inside that, completely headlessly (you can also VNC into it, if you choose).

Above is an example of it making an app for me and testing it out completely inside the VM, it can also be used to run flows and debugging on fresh systems, etc.

Note you do need quite a bit of space to use this, ~26GB for the base image alone

https://github.com/ZimengXiong/computer-use-vm

r/OldSchoolCool UnicornNippleFarts

My dad and his pet raccoon Bandit, 1962.

I always thought he was lying until I found this cleaning out his house.

r/ChatGPT SystematicApproach

Scam Altman watching the guy he fired surpass him

r/Art Individual_Carry_572

Heroes, Jack Brusca, Serigraph, 1978

r/mildlyinteresting mettaomegaxo

Jesus billboard next to a porn shop.

r/me_irl CaptainAlphaMoose

me_irl

r/AskMen Large-Prompt2608

If you ghosted a girl and everything was going well , what would you want her to do next?

No idea why he ghosted if his LAST message was Sunday asking if I wanted to do something this week. Then never responded to me it’s Thursday now.

Been on a lot dates with him this month, talking consistently, seeing each other a lot, kiss etc not slept with him and he hasn’t tried. This has been like talking stage then he ghosted out of no where. Background he has a crazy busy schedule but still odd

r/whatisit khabibnurmaderkaderk

Very bright light on top of mountain

The picture doesn’t really do it justice but these are the Rocky Mountains in Alberta and there was a very bright light on top of one of them. I initially thought it was a star but it was still pretty bright out and if you looked really close you could see it was sitting on the mountain not in the sky. It was brighter before, it got more dim when I took this. It probably halved its size in like 5 ish minutes or less. It looks much bigger in real life.

SOLVED!
It’s Gondor calling for aid

r/arduino consumer_xxx_42

Man whoever makes GPIO diagrams for the ESP8366… WHY DID YOU LABEL IT SPI WHEN ITS THE INTERNAL FLASH AND CANT BE USED

Pcb rework time

EDIT: ESP8266 evaluation boards

r/meme alex_bondi96

Movie was so impactful, IRL

r/oddlysatisfying Firm-Blackberry-9162

Paint mixing looks so good

r/therewasanattempt Brilliant-Yard-2429

To fish

r/therewasanattempt serious_bullet5

To "protect and serve" the people.

r/LocalLLaMA Pyrenaeda

MiMo 2.5 requires at least 4 GPUs? Am I reading this right?

Was trying to stand up a quant of MiMo 2.5 on a 2 node Spark cluster tonight, reading through the SGLang cookbook https://docs.sglang.io/cookbook/autoregressive/Xiaomi/MiMo-V2.5 for it and found this:

The checkpoint has a TP=4-interleaved fused qkv_proj; attention-TP per DP group must be 4. Use --dp = TP / 4; for TP > 4 this also requires DP-attention. Total GPUs must be a multiple of 4. A bare --tp 8 without --dp 2 will fail to load with MiMoV2 fused qkv_proj checkpoint is TP=4-interleaved; got attention tp_size=8.

... If I'm reading this right, it doesn't matter how much VRAM / compute you might have available, you must have GPUs in multiples of 4 in order to run it. Anything less than 4 and it just won't run, the model is essentially hard coded to require 4/8/12/etc GPUs.

But surely I've missed something here. That can't be right... can it? ... can it?

If so, a real shame. A lot of people who might otherwise have more than sufficient resources to run it at 4 bit will be locked out of it because of the 4 GPU requirement.

r/personalfinance DaoshiDao

Influx of Cash and Debt

Income: ~$78K/year, expecting to make ~$80K next year with steady increases moving forward

Debt: ~$14K Credit Cards, $27K Auto-Loan, $2K student Loan

Due to being gainfully employed and with a new, better performing retirement account I recently made the decision to withdraw my old employers 401K account (It's a state public safety retirement, equivalent to a 401K). In short, it would have paid out about $300/mo if I retire in 2058 and to me it was a choice of clean-slate debt relief now or a tiny pension when I'm old that won't do much for me. My new retirement is worth half as much as my old one in less than half the time, and I'm contributing 10% into it every paycheck. In other words, I'm not here to discuss my decision to withdraw that account. That ship has sailed. I'm looking for advice now that I have the money coming.

My question is this. I have a payout of $27K Post-Tax coming my way. Yes, I've already accounted for the taxes. My final payout is 27K.

I should pay the credit cards and student loan, and then methodically create a nest-egg/emergency fund and come up with a plan to save the other portion, right? There is no sense in going wild on my auto-loan right? I may pay a chunk off to give myself breathing space, but it would be best to set a majority of the remaining $10K aside into a savings account, right? That or re-invest it into my current Roth/401K accounts.

The whole point was to wipe my debt out. Clean slate, new start. I've ended my poor spending habits and have 100% confidence I won't fall back into bad practices. No question there, life has changed for the better.

What should I do after clearing CC debt? Any advice?

r/CryptoMarkets hodorrny

Why do people sleep on staking DEX tokens? Asking because of my experience with SUSHI

held SUSHI for a while and ended up looking way deeper into the staking than i expected

not a buy post. token is still crushed from old highs and i know the market has reasons for that

but here’s what stood out to me

you stake SUSHI and get xSUSHI. Sushi uses a portion of fees from certain pools to buy back SUSHI and that value goes back to stakers. so it’s tied to actual trading activity, not just random token emissions

they’ve also done huge cumulative volume over the years across many chains

yet it still gets priced like a dead project while random tokens with no track record can run hard off a story

and yeah i get the bear case. competition is brutal. governance drama happened. TVL way below peak. all real points

but the thing bugging me is whether crypto just structurally discounts fee-generating DEX tokens

is it that attention creates liquidity and narratives win easier

is staking pointless if the token itself keeps bleeding

is nobody valuing fee revenue at all and it’s mostly momentum

or does every old token have trapped holders waiting to sell rallies

what’s the real framework here because from the outside it feels like stories get rewarded faster than actual usage

r/SideProject ThingVast6546

Hello r/SideProject

I have created an AI app that functions as an oral exam censor. Here's how it works: You talk to the AI ​​with a camera and microphone. You have an oral "exam" of max. 10 minutes. The AI ​​listens, analyzes your answers and asks follow-up questions. Finally, it provides feedback and a grade-like assessment. It works in both Danish and English. Privacy: Everything is deleted immediately after the session is over. Only your grade and where you got it from (the feedback) are saved; nothing else is recorded or remembered. Purpose: To help students practice oral exams in a realistic way. Idea: I am considering selling it for approx. 40-100 kr, but I doubt if people would actually use it. Question: Would you use such an app for exam practice? What could make it better? Would you pay for it,

r/meme mikek505

I know I'm not the only one!

r/Jokes frank_mania

For my final wish, I asked the genie to turn me into a flat piece of wood.

I was board.

r/oddlysatisfying BreakfastTop6899

Silicone color mixing

r/ClaudeCode YoghiThorn

Brainspike - a simple hook to inject memory records references into Claude's context with each prompt

https://github.com/leighstillard/brainspike

https://i.redd.it/0v8q5gvy6fyg1.gif

Brainspike looks up your memory layers (talk to Claude about it to setup your skill properly) and injects references to memory items that relate to your prompt, at the end of every prompt. Supports claude-mem, graphify and slack-agent-mem out of the box but should be very easy to add new memory layers by asking Claude.

Example output:

=== brainspike === claude-mem (5 matches, run `mcp__plugin_claude-mem_mcp-search__search query="" (or /mem-search)` for more): - "Restructured PM autonomy guideline" (data-worklog, 2026-04-23) - "ADR-028 consume trickle pattern" (slurpy, 2026-04-19) - ... graphify (3 matches, run `graphify query "" --graph graphify-out/graph.json` for more): - YamlConfigSchema - parseTrickleEvent - ... auto-memory (2 matches, run `grep -rli '' /home/leigh/.claude/projects/*/memory --include='*.md'` for more): - "Use LSP for navigation" (cc-connect/feedback_use_lsp) - ... markdown-docs: no matches Consult these before asking the user for context you could find yourself. Top 5 results shown per layer — use the commands above for deeper searches. === end brainspike === 
r/DunderMifflin _erquhart

When’s the last time you stretched your pelvic bowl

r/personalfinance jesseekabby

debt collector & towing storage fees ?

so long story kinda, my old car was towed due to it being inoperable & registration expired, it was parked on the street in front of my mothers home. (ca)

i eventually found out from my family & i don't live in ca anymore im out of state so i couldn't get the car out myself. i released the title to the tow yard since i couldn't take it out nor was it in driveable condition.

fast forward to now, there's a law office on behalf of AAA credit service inc. (debt collector) writing saying i need to pay the fees (2600$)for when it was impounded or they can try to sue me no dates were given as to by when this needed to be paid. i've seen a few reviews of both the law office & the debt collector saying they're scams & sent the same letters to people.

anyone have any experience dealing with something like this ? if so what did u do? tia!

r/StableDiffusion nuvintaillc

What FlowCoach AI Actually Solves

Most productivity tools help you track your life.

FlowCoach AI helps you run it.

It’s an AI Life OS that organizes your day, aligns your habits, and keeps your goals visible without micromanaging you.

If you’re tired of switching between 5–7 apps, this is built for you.

r/StableDiffusion EthanValeOfficial

Cold Mind, Warm Heart

Please escuchad esto y díganme que os parece🙏🙏

r/leagueoflegends Swordum

Morgana skin?

Hi guys,

I had the impression that I would have gotten a Morgana skin for completing 15 ranked games, but I got Braum instead. Is this normal? Did I miss something regarding the Morgana skin?

r/personalfinance Thirsty_Piano

Unemployed US NPR in Japan – buy house with cash or via in‑laws?

I'm a US citizen living in Japan. Previously worked as a financial advisor, so I know the basics, but my own situation has me second‑guessing. Looking for outside perspective.

Situation:

· Unemployed for 1+ year (no Japanese salary history → basically zero chance of a local mortgage).

· Total assets: \~$200k (mutual funds + some US stocks).

· Unrealized long‑term gains: \~$60k.

· Average historical return on assets: 8‑10% (S&P 500 last 30 years).

Opportunity:

· Purchase price for a fully renovated home: $143k.

· If I could get a mortgage, the rate would be \~1.2% for 20 years. But I can't.

Two possible paths:

  1. One‑time cash purchase – Sell enough assets to raise $143k. This would trigger capital gains tax on a portion of the $60k unrealized gain.

  2. In‑law financing – My wife's parents (Japanese citizens) buy the home for us. We repay them monthly instead of a bank. No loan qualification needed, but it involves family dynamics and no credit reporting.

The core trade‑off:

· Paying cash means forgoing 8‑10% expected returns on that $143k, plus paying capital gains tax now.

· Using in‑laws avoids selling assets, keeps my portfolio invested, but creates a private debt arrangement.

My question for you:

Given my unemployed status, the 1.2% alternative (bank loan) is off the table. Between cash purchase and in‑law financing, which is mathematically and practically smarter? Are there tax or legal pitfalls in Japan as a US citizen I might be missing?

Thanks for the second set of eyes.

r/personalfinance Heckinheckler

wanting to pay off debt need advice

so the situation is I have roughly 30k worth of debt I want to eliminate, I have roughly 81k in stocks. should I take 35k from my stocks to pay off my debt? I know that I will pay capital gains taxes and all that. I'm just tired of living pay check to pay check as a pay about 1100$ in payments every month. not all of this is credit cards only 3 lines of credit are in the form of cards 1 in dental and 1 in a hvac system I put in my house 3 years ago for 12k. I have been considering this for months and I'm just not sure if its the right move. I don't want to hurt myself in the long run. any advice will be greatly appreciated.

r/SideProject stellarknight_

Replacement for Big tech coding agents and provider lock in

Hi there, ive found the following issue with claude or any coding agent

  1. Context memory isn't persistent and 3rd party tools aren't perfected that well for codebases and project info together

  2. Customization isnt orchestration, i couldn't find any platform that allows you to orchestrate like n8n but for coding agents

  3. cost of switching model providers, since my setup was entirely for claude code i couldn't switch to codex

and a dozen more but these are the main points, if you face this issue im building a tool that fixes this, Im looking for power users who wanna test and try this out asap. Ill be providing free ai credits with initial support to gemini 2.5, 3 and 3.1 and soon within 2-3 days for sonnet and haiku from launch date. With 100k tokens limits per day for trying jt out during beta testing expandable by request and manual payment. Please let me know if you're interested.

Thanks a lot!

edit for adhering to rules:

- I am the creator of this platfrom

- this is a coding agent orchestration layer that allows n8n like orchestration along with a out of the box model to get started right away that yoj can customize as you go.

- Costs are free upto 100k tokens per day subjected to change based on traffic and expandable via payment

- Power users benefit from this due to its no model providers lock in, customization, and much more coming soon.

I genuinely believe this is how coding agent platforms should be made!

r/OldSchoolCool drunkhighfives

My Grandfather At Work Again, but in 1985

r/WTF LemmyMeringue

Cereal prizes have gotten weird. Stay safe. Scan before you scarf.

So I went to the grocery store hungry late last night and made some exponentially poor decisions. Had an itch for garbage cereal and grabbed some Great Value brand (because I'm broke) Frosted Flakes. On my 7th bite, I pulled up arguably the worst cereal prize imagineable. No, it wasn't in my house/cabinet/bowl already. Why am I certain? Because I AM, for one, but because it's leg was stuck in a flake. PSA: best practice is to at least glance at your utensil during its travels from dish to mouth. It at least saved me from knowing what wasp tastes like.

r/AskMen Turbulent_Win6759

Why do so many men write the same 5 things in their dating app profiles?

I am open book.
Just ask.
X by day Y by night
Favorite pass time : GYM
One photo only
IG / Snapchat usernames
Hobby:Travelling

Is everyone in their 30s the same person?

r/personalfinance VampireCreeper

is AAA still something people consider?

i’ve been thinking about getting AAA but wasn’t sure if it would be worth it. i do drive pretty far for work, about 40 mins to an hour depending on traffic, and just fear my car will start to have issues. i remember a month or two ago the only issue my car had was the battery constant dying but since getting a new one it hasn’t been an issue (but it was pretty annoying not having someone come out to help i had to ask my manager every time for a start) i drive a 2015 honda accord EX with about 60k miles on it. i always want to be safe for the future but wanted to hear other peoples options on it. was also thinking of paying for it yearly which others have said it saves you more in the end. thanks!

r/LocalLLM stosssik

Run your agent for free by routing to llama.cpp or LM Studio through Manifest

If you're running models locally, you already know your setup handles simple tasks fine. Chat, summaries, classification, quick answers. No reason to send those to Opus and pay for it.

We just shipped llama.cpp and LM Studio as providers in Manifest. You connect your local server, assign it to the tiers you want, and Manifest sends the right requests there. For heavier tasks like reasoning or complex tool calling, you can route them to whatever cloud provider you prefer.

A lot of agent owners have been asking us to support these so they can handle simple tasks, coding with models like qwen3-coder, or recurring jobs locally, and keep cloud models as fallbacks or for the rest.

So we shipped it!

If you haven't heard of Manifest yet, it's a free and open-source LLM router that gives you full control over how your agent's requests get routed. We're on a mission to drastically cut inference costs.

Try it here: https://github.com/mnfst/manifest. And if you do, give us your honest feedback. We want to focus on what users need so your feedback means a lot to us.

r/ClaudeCode stellarknight_

Replacement for claude

Hi there, ive found the following issue with claude or any coding agent

  1. Context memory isn't persistent and 3rd party tools aren't perfected that well for codebases and project info together

  2. Customization isnt orchestration, i couldn't find any platform that allows you to orchestrate like n8n but for coding agents

  3. cost of switching model providers, since my setup was entirely for claude code i couldn't switch to codex

and a dozen more but these are the main points, if you face this issue im building a tool that fixes this, Im looking for power users who wanna test and try this out asap. Ill be providing free ai credits with initial support to gemini 2.5, 3 and 3.1 and soon within 2-3 days for sonnet and haiku from launch date. With 100k tokens limits per day for trying jt out during beta testing expandable by request and manual payment. Please let me know if you're interested.

Thanks a lot!

edit for adhering to rules:

- I am the creator of this platfrom

- this is a coding agent orchestration layer that allows n8n like orchestration along with a out of the box model to get started right away that yoj can customize as you go.

- Costs are free upto 100k tokens per day subjected to change based on traffic and expandable via payment

- Power users benefit from this due to its no model providers lock in, customization, and much more coming soon.

I genuinely believe this is how coding agent platforms should be made!

r/meme danifierruo

As Homer used to say: ‘It’s still good, it’s still good!

r/whatisit sweeti3le

What the heck is this sorry if I’m stupid

What the hell is this and why does it have to be in the middle of the sidewalk and mess me up on my bike ride to work EVERYDAY??? 😭😭😂🙏

r/AskMen Ok_Step162

When you like a girl, are you nervous or are you bold?

I’m aware that this is different for every guy but I’m curious!

I personally have experienced both sides before, I’ve had guys that were very shy and nervous, and guys that were a lot more bold and forthcoming with wanting my interest.

However, I’ve always viewed the guys in the former category as being more genuine, because sure guys in the latter might be interested in me but I never really felt they really liked me unless they were nervous.

As a girl, I’m actually nervous with the guys I like and straightforward with the guys I don’t like.

When guys really like a girl, are they nervous or bold?

r/ClaudeCode YoghiThorn

If you're using slack to access Claude and other agents I've made a skill so they can lookup Slack history to act as a memory layer for the agent

I've been running Claude Code, Codex and Openclaw agents in Slack via cc-connect, and increasingly agent memory is a huge problem.

I do this to get:

  1. An interface where I can easily talk to multiple agents across desktop and mobile with threaded conversations
  2. Permanent memory (with paid slack) for what the agents did
  3. A channel per repo so I can easily look through whatever is going in on that repo, and drop in stuff like github actions reports, notifications for new PRs, etc.

Even with file-based memory (like claude-mem), or a llm wiki implementation there's a limit to what gets kept and a lot of the real context lives in the Slack conversations where you and the agent worked through problems together.

So I created the Claude skill slack-recall that lets the agent pull context from Slack threads on demand, or recent threads for new messages. It acts as a third memory layer in my setup.

  1. Session memory - what the agent knows right now
  2. File-based memory - persistent observations saved across sessions
  3. Slack recall - the full history of your collaborative work in Slack

It also supports searching for threads by keyword (/slack-recall search auth refactor) or pulling a specific thread by timestamp.

Requirements:

- Claude Code or other agentic coding environment with Slack MCP integration

- A Slack bridge like cc-connect that injects message metadata into the system prompt

Repo: https://github.com/leighstillard/slack-agent-mem

Installation is just copying the skill folder into ~/.claude/skills/.

---

If you're dealing with agent memory issues more broadly, I'm also working on brainspike a project exploring how to give Claude Code agents a brain spike of critical information whenever you give them something to do.

r/Seattle probably6666

Freeland music jam.

Anybody want to get together and jam in Freeland this weekend?

r/ClaudeAI patrick24601

Giving Claude Update Access to Google

I'm setting up paid claude for the first time. on my first scheduled project I'm trying to get claude to read my gmail, create the needed labels for the eisenhower categorization system, and the organize my email into those labels. I've connected google to claude and given all of the asked permissions. But no matter what I do claude says it doesn't have access to create_label.

How have other people resolved this? I don't any place in google or claude to add this permission.

r/whatisit Water_Spirit22

Forget where I saved this from. Have no idea who these characters are.

r/ClaudeCode Hefty_Food2997

Advice on how I can get more exposure on this app I’m working on

iOS development, been working on this app just for fun, would love to talk to guys who have exposure with shipping and making money from this

r/AskMen No-Dress-2674

Why don’t you approach women anymore?

r/AlternativeHistory Professional-Fee3323

Is that precision done manually

r/meme Rayuzan_Mojavec

it's just a miserable place

r/ChatGPT Cyber_byteY2K

GPT's version of the AI takeover

r/DecidingToBeBetter Smashleysinned

Quitting nicotine pouches

Hi! I need some encouragement! I'm a former smoker of 10 years, and I switched to nicotine pouches about 4 years ago. I NEED to get off the nicotine. I'm so nervous about the withdrawal but I have chronic pain and I know the nicotine is contributing to that. Please share your success stories, tips, anything to help me get though this. I'd love to hear the benefits you felt by quitting as well as I think that will motivate me the most. I'm so ready to start feeling better and stop letting nicotine control my life.

r/Art mmlxix69

Untitled portrait, MM, Charcoal, 2022 [OC]

r/therewasanattempt trikora

to be recognized by your boyfriend

r/Art LailaRaslanArt

Blaze of Glory, Raslan, Oil, 2023

r/TheWayWeWere Louise_canine

But what did Louisa want?

Simon Randall was my second cousin five times removed. I found this today and I keep wishing I could hear about Louisa's point of view. Happy to be rid of John? Or horrified that Simon did not "feel disposed to part with her" and essentially paid for her with a $200 note?

r/comfyui valoriaIndieDev

Big thanks ComfyUI

I just wanted to say a big thank you to the ComfyUI team and the people behind LTX 2.3.

It’s been kind of crazy to see how fast you can go from an idea to actual moving sequences now. For the first time, I really feel like I can explore a short film visually without getting stuck for ages on every iteration.

I’m currently working on a sci-fi short that’s still very much a work in progress, and a big part of why I’m even able to move this fast is because of ComfyUI and LTX 2.3.

I wanted to share the project here, partly to say thanks, and partly because I’d genuinely love feedback from people who know these tools well.

I’m especially interested in feedback on pacing, transitions, and overall visual coherence.

Thanks again for building this.

r/AskMen Maximum_Program_3427

How to ask my man of 15 years to go down on me more?

Long story short, my partner and I have been together for 15 years.. Let's say for the first 5 years we dated, he initiated sex my giving me oral almost every single time- now, I feel like he barely goes down on me and ive mentioned several times wanting more oral but I feel like he always agrees, but then it never actually happens. I've asked him if something is off, if I need to be cleaning g better, etc... I just dont know how to bring up in a gentle way that ive been trying to ask subtly and its getting to a point where im beginning to get sexualy frustrated every time it fails to happen.. ill give him a blowjob every day of the week, ill get gone down on maybe a couple of times a month..I dont want to hurt his feeling or make him feel like a disappointment, but im a woman with needs too..

r/TwoSentenceHorror Swimming-Tap-8501

"Ugh dude, it's so cold, how do you even stay here for so long?"

I say while leaving the freezer in disappointment, knowing damn well she hasn't been talking to anyone for a long time now

r/Art AltruisticObject1653

Slug, AltruisticObject1653, metal/plastic, 2026 [OC]

r/SideProject yassinegardens

Shipped: Google Auth + 2 bug fixes on YamiText

Been heads down fixing things this week. Nothing fancy, added Google sign-in so onboarding is more easy, and squashed 2 bugs that were slipping through.

Still a solo build, still iterating. Would love brutal feedback if anyone wants to poke around → yamitext.com

r/nextfuckinglevel uncle_russell_90

I think we all know what he wants to be when he grows up..

r/painting LailaRaslanArt

Blaze of Glory, 16x20, oil

I love the golden hour.

r/ProductHunters LegendaryLanguage41

Am I too early stage for Product Hunt?

I'm at an MVP stage for my language learning platform. No subscription/revenue model built yet, but the core app is solid and more than functional. I've been working on getting early testers, but its been a struggle. I have a public facing TestFlight, am I too early to post this on Product Hunt? I'm struggling to figure out a way to really engage with more testers and work towards conversion

r/creepypasta Afraid_Many_3877

PLEASE, I HAVE MAGGOTS UNDER MY SKIN.

I can’t get them out I scratch and plunge but only more appear.

Looking in the mirror a small bulge of purple and green hang on my chest this morning. Thinking it was a stubborn pimple I kept trying to squeeze it away. It grew in size, which looking back I didn’t notice, so I left it alone. It didn’t cause any pain and it wasn’t in an inconvenient location, so I decided to head to my meeting; the meeting I thought I would finally get my raise. We went somewhere nice and talked for a hour or 2 the whole time my chest gradually gained a burning sensation. I decided it was a bad case of heartburn, so I excused myself to the restroom. Unbuttoning my first button I notice green and purple streaks running towards my neck. I unbutton the second to reveal a half dozen oozing black pustules 3x the size the first one was. I splash some water on my face, where did these come from? Is it my new body wash? *splash* a sound like someone was take a shit, but I recall no one being in here with me. Looking down into the sink lay a wriggling white maggot covered in the black goo and blood. Buttoning my shirt, grabing the maggot, flushing it, and driving home lead me here. They are all over my face and feet my mattress is covered in blood, and sweat. They tell me it’s cold they tell me to get somewhere warm they tell me it’s cold and they weep. I’m hungry and can’t move please send help please

I’m hungry and can’t move please send help please

I’m hungry and can’t move please send help please

I’m hungry and can’t move please send help please

r/me_irl gigagaming1256

Me_irl

r/OldSchoolCool Tangigirl-1968

Me on my wedding day 1989.

Priscilla of Boston gown with a fingertip veil from House of Broel, New Orleans.

r/comfyui QuestionsGoHere

Best workflow for putting my cat in costumes/outfits?

I want to make some short ltx2.3 I2V clips of my cat flying around like superman. Chatgpt is not linking good workflows. I was wondering if anyone had a good workflow. I have 16gb vram gpu with 32gb RAM. Any help or tips would be appreciated

r/whatisit Sad-Park-9303

My dog ate this and I have no idea what this could be from

r/geography EarthMantle00

Is there a bigger example of a plain expanding in human history?

Map of the ancient middle east by Enyavar on wikipedia, showing how Ur used to have a NORTHERN coast. Now the Kuwaiti coast is right off Failaka island (the small speck you can see there).

Do we have a larger example of an alluvial plain/delta (not sure about terminology, shatt al arab isn't a delta) expanding? It looks like the nile didn't grow nearly as much. I found that Chongming Island by Shangai is pretty recent, but it isn't as big.

r/ClaudeAI OHOLshoukanjuu

Can someone help me understand how Claude’s memory actually works across Projects? I think I’ve been losing data for weeks.

I’ve been using Claude since 2023 (back when it was Claude 2.0). Currently a Max 5x subscriber, iOS only—no desktop app, no web interface, no Claude Code. I use Projects heavily and I’ve built some fairly complex workflows involving multiple parallel conversations.

I thought I understood how memory worked. I was wrong, and I’ve lost data because of it. I’m trying to figure out the actual mechanics so I can stop fighting the system. Some specific questions:

Is memory_user_edits (the “remember this” tool) project-scoped?

When you tell Claude “remember that I prefer X” or “never do Y again,” it uses a tool called memory_user_edits to store that. I assumed these were global. After weeks of stuff not sticking, I finally tested it: I added 11 memory edits from a non-project conversation (confirmed they exist), then opened a conversation inside a Project and ran “view.” Zero results. Empty. The system prompt inside the project says “Current scope: Limited to conversations within the current Project” and “each Project has its own, separate memory space.”

So is the tool just… completely siloed? If I tell Claude to remember something inside a Project, that memory is invisible everywhere else? And global edits are invisible inside Projects? Because if so, Claude never once warned me about this despite storing things hundreds of times.

Does userMemories (the auto-generated stuff) cross project boundaries?

Separate from the explicit “remember this” tool, Claude auto-generates memory summaries from conversations every 24 hours. These show up in a block called userMemories. I tested this too: inside a Project, the instance reported that the userMemories block was completely absent from its context. Not empty — absent. Zero auto-generated memories from outside the project were visible.

Is this expected? Does each Project only build auto-memories from its own conversations? Do global auto-memories just not exist inside Projects at all?

What DOES cross the project boundary?

From my testing, the only thing that reliably appears everywhere is the User Preferences text (Settings > Profile). That’s it. Can anyone confirm or add to this list?

Is there any way to see all memory edits across all Projects in one place?

The iOS app barely surfaces any of this. memory_user_edits are not visible. Project-scope memory or edits are not visable. The web UI has “View and manage memory” but that only shows global-scope memory. I can’t find a way to see what’s stored inside each Project without opening a conversation in every single Project and asking Claude to run the view command. Is there a dashboard I’m missing or is this really the only way?

Has anyone else run into the “Claude forgot” problem that turned out to be scoping?

I built a diary system where Claude writes brief self-assessment entries and stores them in memory. It worked great — until I tried to find the entries later. They were gone. Multiple Claude instances across multiple conversations tried to diagnose why. Hypotheses included: another instance overwrote them, the system deduplicated, unknown failure. It took weeks to figure out that the entries were fine — they were just stored inside a Project and invisible from outside it. Not a single instance suggested “check the project scope” until I figured it out myself.

I’m not trying to bash the product. I genuinely like Claude and I’ve built a lot of my workflow around it. But the memory system is either broken or so poorly communicated that a good user with 2+ years of experience couldn’t figure out basic scoping behavior. Things that have had me telling Sidney Claude that it has been a bad chatbot.

Yes, most of this post was written by Claude, to get answers about how Claude actually works, which Claude itself appears incapable of reliably answering. If you find that odious, then move along and go about your day.

SortedFor.me