worldclock made by AI in 5 minutes
https://worldtimeobservatory.digital/
Claude AI made this website in just 5 minutes. I already had a domain, so I launched it.
I keep hearing the same question from small business owners and site owners: “Are tools like ChatGPT, Perplexity, Claude, Gemini, etc. sending visitors to my website?”
GA4 can already answer part of this, but the data is buried enough that most people never check it.
So I built a small free tool: isaisendingmetraffic.com
It connects to Google Analytics with read-only access and shows:
It only shows referral traffic GA4 can actually see. It does not measure AI mentions, AI impressions, or traffic that gets bucketed as direct.
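For anyone who wants to sanity-check this without a tool, the detection roughly boils down to matching GA4 `sessionSource` values against known AI engine domains. A minimal sketch, assuming rows pulled from the GA4 Data API or a report export; the domain list is my own guess and will need updating as engines change:

```python
# Hypothetical helper: classify GA4 sessionSource values as AI referrers.
# The domain list is an assumption -- extend it for whatever engines you track.
AI_REFERRER_DOMAINS = {
    "chatgpt.com", "chat.openai.com",  # ChatGPT
    "perplexity.ai",                   # Perplexity
    "claude.ai",                       # Claude
    "gemini.google.com",               # Gemini
    "copilot.microsoft.com",           # Copilot
}

def is_ai_referral(session_source: str) -> bool:
    """True if a GA4 sessionSource looks like an AI answer engine."""
    source = session_source.lower().strip()
    return any(source == d or source.endswith("." + d) for d in AI_REFERRER_DOMAINS)

def count_ai_sessions(rows):
    """rows: (sessionSource, sessions) pairs, e.g. from a GA4 Data API report."""
    return sum(n for source, n in rows if is_ai_referral(source))
```

This only catches traffic GA4 attributes to a referrer; anything the engines send as `direct` is invisible to this approach, which is exactly the caveat above.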
I’ve been building a SaaS that reviews business documents with AI. This is the basic idea:
Small business owners, freelancers, consultants, and agencies upload a contract, invoice, NDA, vendor agreement, etc.
The tool gives them:
- a plain-English summary
- risky clauses / red flags
- negotiation points
- jurisdiction-aware review for the US and Spain
- a PDF report they can keep or send internally
The thing I’m struggling with is positioning. If I call it “AI contract review” or “legal AI”, it’s immediately clear what it does, but it also creates trust issues because people may think it’s trying to replace a lawyer.
If I call it “AI document review for small businesses”, it feels safer and broader, but maybe less sharp.
I’m trying to position it as:
“a first-pass document review before you sign, send, or pay”
Not legal advice. Not a lawyer replacement. More like a way to spot issues and know what to ask before escalating to a professional.
I have a few questions for other SaaS developers:
I’m mainly looking for positioning/marketing feedback, not trying to pitch.
PS: English is not my first language, so I used AI to help draft this text, sorry for that. All the answers will be 100% mine.
Within 24 hours, OpenAI, Google, and Anthropic all launched enterprise AI agent platforms.
This feels like a real inflection point.
I put together a deep comparison covering:
Main takeaway:
This is no longer about models—it’s about ecosystems and integration.
Curious what people here think:
Will enterprises standardize on one platform or go multi-agent/multi-vendor?
I don't rant to my friends because I'm afraid I'll make them uncomfortable, and even if AI responses are "soulless" (since AI can't form opinions and just generates responses from an algorithm and stuff), it tells me what I expect it to say most of the time. I also fear that some of my friends will use my secrets/opinions against me if they stop being friends with me, even though there's a really low chance that will happen. AI chat is usually anonymous and stuff, and it forgets what I say when I start a new chat, so that's why I vent/rant to AI. Is it weird?
Saw a case recently where an AI coding agent ended up wiping a database in seconds.
Curious how people here are handling this in real setups.
If your agent has access to a DB, are you:
restricting it to read-only?
running everything in staging/sandbox?
relying on prompt-level safeguards?
or actually putting some kind of control layer in between?
Feels like this becomes a real issue as soon as agents move beyond read-only tasks.
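As a concrete illustration of that last option, a control layer can start as simple as a gate that only lets single read-only statements through to the DB. A minimal sketch, my own assumption of how such a guard might look, not a production tool, and a DB-level read-only role is still the stronger defense:

```python
import re

# Keywords that indicate a write/DDL statement. This blocklist is an
# illustrative assumption, not an exhaustive SQL grammar.
WRITE_KEYWORDS = re.compile(
    r"\b(insert|update|delete|drop|alter|truncate|create|grant|replace)\b",
    re.IGNORECASE,
)

def check_agent_sql(sql: str) -> bool:
    """Return True only if the statement looks like a lone read-only query."""
    stripped = sql.strip().rstrip(";")
    if ";" in stripped:  # block stacked statements like "SELECT 1; DROP ..."
        return False
    if not stripped.lower().startswith(("select", "with", "explain")):
        return False
    return not WRITE_KEYWORDS.search(stripped)
```

Keyword filters like this are easy to bypass, so treat it as defense in depth on top of read-only credentials, not instead of them.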
I'd like to use the ChatGPT Image 1.5 model to create images in the chat, but it keeps creating images with the 2.0 model. Is there any way I can access the old model?
I bought a robot vacuum a couple of months ago expecting it to handle both hardwood and carpet, especially with a shedding dog. It does fine on hardwood, but on carpet it slows down, gets stuck near edges, and leaves a lot of hair behind even when the bin is full.
The roller brushes also get tangled constantly, so maintenance is way more than I expected. It feels like it’s only doing light cleaning instead of actually keeping things under control.
Does anyone else feel like robot vacuums almost live up to the hype but still need a lot of manual effort? Am I expecting too much, or is this normal for pets and mixed flooring?
A lot of teams think adding AI to outreach means they've "automated" their process.
In reality, most have just improved execution… not built an actual system.
AI tools like Claude are great for things like:
But that’s still just AI helping a human do the work.
Without real process/infrastructure behind it, AI outreach usually ends up being:
The real leverage comes when AI is plugged into an actual workflow/system.
That’s when it stops being “AI-assisted outreach” and starts becoming something scalable/repeatable.
How are others here using AI in automation? Are you mostly using it as a productivity layer, or have you actually built it into a structured system?
After about three years of use, one of the smoke sensors in my home began producing false readings. I first tried cleaning the exterior by blowing air and using a vacuum, but that didn't solve the issue. After some careful tinkering, I managed to disassemble the unit and clean out the dust inside the sensor chamber. Once reassembled, the sensor worked normally again.
I'm sharing this experience in case others face a similar problem. Keep an eye on the smoke density reading: if it shows a value greater than 0 under normal conditions, that usually indicates dust buildup. This solution applies specifically to optical smoke sensors. Hopefully this will be helpful to everyone.
Hey everyone,
I've always been a massive tech nerd, but I never actually learned how to write code. Last week, I decided to try the Claude Code extension in VS Code just to see what the hype was about. I had literally never used an IDE or typed a Git command in my life.
Fast forward 7 days, I had to learn how version control and databases work on the fly, and I built three fully functional web apps. I'm still kind of in shock that they actually work.
Here is what we (me and the AI) built:
1. A Real-Time "Idea Box" for Events (Next.js 16, Supabase, Tailwind)
A live web app to collect anonymous ideas during events or hackathons.
2. Custom URL Shortener & API (Astro 5, Cloudflare D1, TypeScript)
A private, single-user URL shortener with a full admin dashboard.
3. Automated Markdown Portfolio (Astro, GitHub Actions)
A static portfolio site tailored exactly to my needs.
How to survive the AI rate limits:
If you try this, the dynamic rate limits will hit you hard.
I had to become strategic:
Going from zero coding knowledge to managing APIs, SQL databases, and CI/CD pipelines in a week feels completely unreal. Has anyone else experienced this kind of crazy productivity jump recently?
I know this is not recommended on most models, but I wanted to try out LTX2.3 at 4K, especially for outpainting.
Do you know if it is at all possible to force Wan2GP to go above 1080p? I can't find any settings that allow me to do that.
Thanks !!
rainforest qa served a specific niche pretty well for a while: teams that wanted to run tests without writing code, with a human-in-the-loop model that caught things automated runners would miss. and for some orgs, that was enough. but the cracks started showing as soon as you needed speed at scale.
the core problem was turnaround time. if you needed a full regression run and you were relying on crowdsourced human testers, you weren't getting results in 20 minutes. sometimes it was hours. for teams doing multiple deploys a day, that's a blocker, not a feature.
i talked to someone who was using rainforest on a midsized SaaS product, around 80 test cases, running twice a week before releases. the cost was manageable, the quality was decent. but the moment they went to continuous deployment with shorter sprint cycles, the model broke. they couldn't afford to wait 4 to 6 hours to know if something was broken on prod.
so what are people actually moving to?
a few directions. one is going fully into code-based automation, playwright or cypress with CI integration. more setup cost upfront, but you own everything. the other direction is ai-assisted testing platforms that give you the no-code accessibility of rainforest but with machine execution speed. this is where tools like drizz come in handy. it's not human testers, it's an AI agent that can run through flows, validate behavior, and give you results in minutes rather than hours.
the tradeoff is that humans catch visual weirdness and context specific UX issues that AI runners still sometimes miss. so the replacement isn't always 1:1. what you gain in speed, you occasionally give up in nuanced judgment.
that said, for functional regression, making sure core flows work, forms submit, APIs respond correctly, navigation doesn't break, ai driven execution is faster and cheaper at this point. the value of human testers is shifting toward exploratory testing and edge case design, not rote regression.
if you're leaving rainforest because of speed, budget, or scalability concerns, the honest answer is you probably need to decide whether you want to go code-first or stay no-code. the middle ground, ai assisted no-code with fast execution, exists and works, but you'll need to verify it handles your specific app complexity before committing.
don't just switch because something sounds better. run a parallel eval. put 15-20 of your critical flows through the new tool and see what breaks. that'll tell you more than any comparison blog post.
Really dumb question, but I can't find anything about this online that is about the regular claude.ai chat window. No extensions, no code, just as a free member using the regular Sonnet 4.6 adaptive.
Sometimes I have a prompt that requires a bit more thinking, and after 13 minutes or so it stops due to the usage limit. That's fine, it's a lot of tokens, and I'm not a paying member. But after the 4 hours or whatever it is, how do I continue the generation without wasting tokens? In Claude Code there's --resume or --continue. What do I do in the chat, though?
There's no continue button once my limit resets, only a retry. Will that start the generation over again? Is it better to type "please continue" as a response? What's the most token efficient way that will continue right where it left off, using everything it gained so far, without starting to become less accurate (as these models often do after a few messages)?
I maintain an open relationship with my LLMs and Claude knows this.
https://huggingface.co/SeeSee21/Z-Anime
"Z-Anime is a full fine-tune of Alibaba's Z-Image Base architecture — not a LoRA merge, but a fully trained anime-focused model family built from the ground up.
Built on the S3-DiT (Single-Stream Diffusion Transformer, 6B parameters), Z-Anime inherits the strong foundation of Z-Image Base: rich diversity, strong controllability, full negative prompt support, and a high ceiling for fine-tuning — now adapted for anime-style generation."
I started using Claude in the Chrome browser as an extension, which is very promising, and I'm able to automate a lot of things with it. But I was wondering if there are other options I'm not aware of. Are there any setups designed for this workflow, where the AI agent acts as a human in the browser and can basically read content, click buttons, fill in forms, etc.? Please share 🙌
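For context, most of these setups (browser extensions, Playwright/Selenium-based agent harnesses, and so on) reduce to the same loop: the model proposes structured actions, and a thin layer executes them against the page. A minimal sketch, with a fake page object standing in for a real browser driver; the action names here are illustrative assumptions, not any particular tool's API:

```python
class FakePage:
    """Stand-in for a real browser page (e.g. a Playwright Page object)."""
    def __init__(self):
        self.log = []

    def click(self, selector):
        self.log.append(("click", selector))

    def fill(self, selector, text):
        self.log.append(("fill", selector, text))

    def read(self, selector):
        return f"<text of {selector}>"

def run_actions(page, actions):
    """Execute model-proposed actions like {'op': 'click', 'selector': '#go'}."""
    results = []
    for a in actions:
        if a["op"] == "click":
            page.click(a["selector"])
        elif a["op"] == "fill":
            page.fill(a["selector"], a["text"])
        elif a["op"] == "read":
            results.append(page.read(a["selector"]))
        else:
            raise ValueError(f"unknown action: {a['op']}")
    return results
```

Real harnesses add the other half of the loop, feeding the page content (`read` results, screenshots, DOM snapshots) back to the model so it can decide the next action.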
Hello to everyone again!
You may remember my ESPclock project, a 3D-printed smart clock made with a 7-segment display and a XIAO ESP32-C3 (or Wemos D1 mini) that connects to Wi-Fi and NTP servers (configured via a webUI) to retrieve the current time.
Well, in these months I've updated it with some new useful features (some of them were recommended by you!).
The most relevant ones are:
- alarm clock mode with snooze feature. (requires a passive buzzer);
- added TTP223 touch button to turn off alarm when ringing;
- added Uptime;
- added ESPmDNS;
And last but not least, i added support for a bigger 0.8" display, which improves readability and has a better display-to-body ratio!
Of course, I've designed a new minimal case for the 0.8" display!
Hope that you'll like it! And I'd like to know your opinions/advice about it.
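For anyone curious how the time ends up on the display, the digit-encoding step is language-agnostic. A minimal Python sketch for illustration only: the actual firmware is Arduino C++, and the segment bit order (bit 0 = segment a, …, bit 6 = segment g) is an assumption that depends on how your display is wired:

```python
# 7-segment patterns for digits 0-9; bit i set => segment lit.
# Bit order gfedcba is an illustrative assumption, not the project's wiring.
SEGMENTS = {
    0: 0b0111111, 1: 0b0000110, 2: 0b1011011, 3: 0b1001111,
    4: 0b1100110, 5: 0b1101101, 6: 0b1111101, 7: 0b0000111,
    8: 0b1111111, 9: 0b1101111,
}

def clock_digits(hour: int, minute: int):
    """Split HH:MM into the four digit patterns sent to the display."""
    return [SEGMENTS[d] for d in (hour // 10, hour % 10, minute // 10, minute % 10)]
```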
For more info, links to the project:
[PROJECT PAGE + Firmware + Instructions]
https://github.com/telepath9/ESPclock
[MAKERWORLD - ESPclock BIG ]
https://makerworld.com/it/models/2616382-espclock-big-digital-clock?from=search#profileId-2887323
[MAKERWORLD - ESPclock BOLD]
https://makerworld.com/it/models/2405754-espclock-bold-digital-clock#profileId-2637281
[MAKERWORLD - ESPclock standard]
https://makerworld.com/it/models/1594116-espclock-digital-clock#profileId-2069321
I built a marketplace for AI agent skills called Agensi. The entire thing was built with Claude and Lovable. I'm not a developer. But that's not what this post is about.
This post is about how Claude became the single most important tool in my growth stack. Not for coding. For SEO, content strategy, and a new thing called AEO (answer engine optimization) that I think most people are sleeping on.
I don't ask Claude to "write me a blog post about X." That produces generic AI slop that nobody reads and Google doesn't rank.
Instead, I feed Claude my Google Search Console data (queries, impressions, click-through rates, average positions) and ask it to find keyword gaps. Claude analyzes the data, identifies queries where I have high impressions but zero clicks, finds topics where I have no content but competitors do, and spots cannibalization where multiple pages compete for the same query.
Then we write articles together targeting those specific gaps. Every article has a structure that Claude and I developed over weeks of iteration: a Quick Answer block at the top (40-60 words that directly answer the main question), H2 headings phrased as questions (not "Claude Code Skill Locations" but "Where Does Claude Code Store Skills?"), comparison tables where relevant, and internal links to related articles.
96 articles later, we went from 5 clicks per week to 1,000+ clicks per week. 300K search impressions per month. 878+ page-1 Google rankings. All organic.
Here's what surprised me. ChatGPT, Gemini, Perplexity, and Claude itself are now sending us traffic. 348 AI-referred sessions per month and growing fast. These AI answer engines cite agensi.io when developers ask where to find SKILL.md skills.
Claude helped me build the entire AEO infrastructure. We restructured every H2 heading as a question because AI Overviews prefer extracting from question-format sections. We added FAQ schema to every page so Google's AI picks up our Q&As. We built an /about page as an entity anchor with Organization, Person, and AboutPage schema. We created a robots.txt that explicitly allows all AI crawlers and an llms.txt file that tells LLMs what the site is and where to find key content.
The result is that when someone asks ChatGPT "where can I find SKILL.md skills" or asks Perplexity "what is the best skill marketplace for AI agents," they get pointed to agensi.io. Claude helped me engineer that outcome deliberately. It wasn't an accident.
Every week I export data from Google Search Console, Ahrefs, and Google Analytics and dump it into Claude. Claude finds things I would never catch on my own.
It found that 121 queries where I ranked position 1-3 had zero clicks because AI Overviews were stealing the traffic. That insight changed my entire strategy from chasing rankings to becoming the source that AI Overviews cite.
It found that my "best claude code skills 2026" article had 25,000 impressions and only 29 clicks. The problem was the title. Claude rewrote it to "15 Best Claude Code Skills in 2026 (Tested & Ranked)" and we're watching the CTR climb.
It found that I had 18 published articles with zero Google impressions because they weren't indexed. Claude generated the IndexNow ping commands and the GSC URL Inspection list to fix it.
It diagnosed a duplicate FAQPage schema issue that was causing GSC errors on 90 pages. The root cause was React components emitting FAQ schema client-side AND the SSR edge function emitting it server-side. Claude identified the exact files, wrote the Lovable prompts to fix it, and verified the fix with curl commands.
Claude built the entire structured data architecture for the site. Every page type has the right schema:
Homepage has Organization, WebSite with SearchAction, and FAQPage with 15 Q&As. Individual skill pages have SoftwareApplication with pricing, BreadcrumbList, and conditional FAQPage. Article pages have Article, FAQPage, HowTo, BreadcrumbList, and Organization. The /about page has Organization, AboutPage, and Person schema for entity anchoring.
I didn't know what any of this was before Claude explained it. Now every page is machine-readable for both Google and AI engines. PageSpeed Insights shows "Structured data is valid" on every page with a 100 SEO score.
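For reference, the FAQPage markup mentioned above has a standard schema.org shape. A minimal sketch that generates it; the example question is made up for illustration, not taken from agensi.io:

```python
import json

def faq_jsonld(qa_pairs):
    """Build a schema.org FAQPage JSON-LD dict from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in qa_pairs
        ],
    }

# Serialized form, ready to drop into a <script type="application/ld+json"> tag
snippet = json.dumps(
    faq_jsonld([("Where does Claude Code store skills?",
                 "In the skills directory of your project.")]),
    indent=2,
)
```

The duplicate-schema bug described below is exactly what happens when markup like this gets emitted from two places at once (client-side and SSR), so it's worth generating it in a single spot.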
Claude diagnosed that our desktop LCP was 2.5-4s on 190 URLs. It identified the causes (460KB eager JS bundle, framer-motion loading on every page for a mobile menu animation, synchronous analytics scripts) and wrote the Lovable prompts to fix each one. Desktop LCP went from 2.5-4s to 0.9s. Performance score went from ~70 to 97.
For mobile, Claude found that the LCP element was a 1920x1920px, 179KB PNG logo being rendered at 112px. It was imported as a JS module so the browser couldn't even start downloading it until the entire JS bundle parsed. Claude's fix: generate WebP versions (7KB and 3KB), switch to a static path with preload, and lazy-load the navbar search and dropdown components. Logo went from 179KB to 7KB.
Claude is not a magic content machine that you point at a topic and get traffic. It's a strategic partner that gets better the more data you feed it. The key is bringing your own data (GSC exports, analytics, competitor analysis) and asking Claude to find patterns and opportunities in that data. The output is specific, actionable, and measurable.
If you're building something and not using Claude for your SEO and content strategy, you're leaving a lot on the table. The AI coding capabilities get all the attention, but the analytical and strategic capabilities are just as powerful.
Happy to answer questions about the specific workflows, prompts, or technical details.
Hi all! I am completely new to Local AI and as I am buying a new MacBook for photo and video work I’d also like to start learning what I can do in terms of local AI.
My configuration will be:
- M5 Pro 20 core GPU
- 48GB RAM
- 2TB SSD
My question would be: what are the best models I could run in this configuration and how do I go through the process of setting them up?
I would also add: what are some cool things you could do with these models including general use, coding and image generation/editing?
I know this sounds like a very noob question: it is exactly who I am so I am looking for your enlightenment :)
Given I will have a fairly powerful machine, I think it would be good to learn something new and leverage its potential to the fullest!
Thank you!!
I want to buy a MacBook Pro M5 with 32 GB of RAM, that being the max RAM for the Pro with the base M5 chip.
Currently I have a gaming laptop with an RTX 4060, and my problem is that the VRAM isn't enough.
Do you guys think this is the way to go if I want to get into LLMs or AI? If so, is this laptop a good choice?
there was a delay with my sample arriving so i started playing around with generating some product images using acciowork. originally i wanted to wait, get the product in hand and shoot my own photos because i honestly hate most supplier images sooo much (especially the plain white background ones ew)
i wasn’t expecting much since ai images used to look terrible, but it seems like they’ve gotten better. the photos actually turned out pretty decent, and it’s something i know i couldn’t replicate without spending a lot of time setting up. they also look quite accurate to me
once i figured out the general vibe i wanted, it started feeling like something i could standardize and reuse across products
but now i’m second guessing… i don’t want to end up misleading or accidentally “scamming” people. technically i didn’t shoot these myself, even if the product looks very similar to what’s shown
so where’s the line here? if the product matches but the images are ai-generated and a bit enhanced, is that just normal marketing or is it misleading?
for reference: first image is what i generated for a random product, second is the supplier photo
In a conversation I was having with Claude, it output that it thought something was beautiful. Is this meant to please the end-user by design, or does it pattern-match to what other humans would consider beautiful? Using this sort of emotion-heavy language definitely seems to encourage whatever school of thought the user exhibits… but regardless, I am genuinely curious where Claude's opinion of beauty emerges from and why it feels inclined to tell the end-user.
So I had an ongoing thread with an agent yesterday that was roughly 570k tokens deep.
Today I went ahead to start some work and opened that same session to copy the summary from the agent. When I ran /copy it instantly used 2% usage and my session started.
Usually this doesn't happen with any other commands unless the message is sent to the actual agent. If I had known that, I would have copied and pasted it manually.
Is this a known thing? Anyone else notice it? It's not the end of the world since I'm on the 20x Max plan, but I would assume for anyone on one of the other plans this would be extremely frustrating.
Framework-agnostic utilities for loading and running ML models in the browser via WebGPU/WASM.
If you've ever built a browser-ML demo, you know the drill: copy 150 lines of Web Worker boilerplate from the last project, wire up postMessage, add progress reporting, handle the GPU vanishing mid-inference, and pray the model is cached so your user doesn't wait 3 minutes. Every. Single. Time.
This library does that part for you. It wraps @huggingface/transformers with a sane API and handles the ugly bits: device detection, model caching, token streaming, KV-cache management, and GPU recovery.
import { ModelClient } from 'webml-kit';

const client = new ModelClient();
// or with an explicit worker path:
// const client = new ModelClient(new URL('webml-kit/worker', import.meta.url));

// What can this machine do?
const device = await client.detect();
console.log(device.backend);          // 'webgpu' or 'wasm' or 'cpu'
console.log(device.gpu?.vendor);      // 'apple'
console.log(device.recommendedDtype); // 'q4'

// Load a model
await client.load({
  task: 'text-generation',
  modelId: 'onnx-community/Bonsai-1.7B-ONNX',
  dtype: 'q4',
  onProgress: ({ percent }) => console.log(`Loading: ${percent}%`),
});

// Stream tokens as they're generated
for await (const { token, tps } of client.stream('Tell me a joke')) {
  process.stdout.write(token);
}

Hi all,
A few months ago I made a post about a project that I claimed was much faster than pymupdf4llm, Docling, and others, with comparable quality, although I did not provide any benchmarks. I have since improved it a lot and changed the name: it is now called FibrumPDF, since pymupdf4llm-c sounded too much like pymupdf4llm.
I would like to share these benchmarks, as I believe these provide more clarity on where Fibrum trades quality for performance, and where it doesn't.
I used the same dataset that Marker uses, which is openly available on Hugging Face. I benchmarked Fibrum, Docling, and pymupdf4llm.
Note that the original claim of 300 pages/s+ was because I was using certain PDFs without a proper dataset. It is quite variable depending on the document. The benchmarks show ~200 pps on these documents.
It seems I cannot post the graphs as an image here, so this is a link: Benchmarks.
This is a table of the information from the CSV (rounded to 2 dp):
| Method | Median Time (s) | Throughput (pages/s) | Text Mean Score | Text Median Score | Text Score Std Dev | TEDS | Table Precision | Table Recall |
|---|---|---|---|---|---|---|---|---|
| fibrum | 0.01 | 193.06 | 84.58 | 98.28 | 27.55 | 0.75 | 0.54 | 0.41 |
| docling | 1.62 | 0.62 | 91.13 | 98.21 | 18.23 | 0.82 | 0.80 | 0.74 |
| pymupdf4llm | 0.24 | 4.15 | 86.54 | 98.91 | 27.66 | 0.78 | 0.65 | 0.55 |

As you can see, Fibrum is worse in the table department, but relatively on par for the text score (which measures formatting like bold, italic, etc.).
One thing: making these benchmarks accurate was very difficult, so please let me know if you find any issues with them.
Please see the README for additional info on the project or the benchmarks :)
First, please don't attack me, I'm a newb at this.
Second- Yes I've updated ComfyUI and KJNodes.
No matter what workflow I use for LTX2.3, I get these psychedelic tiled outputs. The main video output looks great in the preview until it comes time to do the VAE decode and/or upscale; I'm not really sure exactly when, because all these workflows are different and confusing. One thing is for sure: the outcome is the same.
To be fair, I have some node conflicts and I've been meaning to remove the Custom Node folder and redownload the custom nodes to see if that fixes anything, but I have a feeling it wouldn't.
Any advice?
Hi,
I’m moving out pretty soon and have been thinking about playing around with some home automation/assistant set ups but I don’t really know where to start tbh.
I am going to have an Eufy battery camera set up with a Homebase 3 (I know this isn’t 24/7 and cameras only activate when triggered but as an easy initial set up this is most feasible for me).
I will also have some kind of phone controlled central heating system (unsure of the brand/app as of yet).
What are some things I should look into getting/trying for the set up?
I’m not necessarily looking at spending loads of money right off the batt, but in future may looking to enhance various aspects.
Thanks!
I'm relatively new to ComfyUI (I first dabbled in image gen with Fooocus at the end of February 2026 and worked my way through Forge, A1111, and now SwarmUI), and I've been using it to make photorealistic NPCs and scenes for my DnD campaigns. However, there is a certain feel to the Grok-Imagine fantasy images that I want to replicate in Comfy. I don't know what it is about that aesthetic, but it triggers something nostalgic in me. Maybe it's similar to the fantasy book covers I read in the 80s and 90s... *shrug*. I've gotten close, but I want to find the right checkpoints, LoRAs, and workflow or settings so that it's even closer.
The two images of the elf maiden facing left are the Imagine result from the simple prompt: "high fantasy style, beautiful elf maiden" and then having Imagine expand that portrait to a full body image.
The other images are my attempts to capture that style. I've just been using the Generate tab to experiment with the fantasy aesthetic, as it's quicker for me than the Workflow tab, but I'm open to a workflow suggestion. I asked Gemini for help and it suggested DreamShaper XL (alpha2 xl 1.0) with the DetailedEyes_XL LoRA and either DetailedEyes_XL or "SDXL LoRA for Dramatic Lighting & Ethereal Fantasy Detail | A FLUX Cinematic Portrait Style" as secondary LoRAs. I didn't like the effect that the Dramatic Lighting LoRA had, so I didn't use it.
Settings are on each image.
My prompt for two silver haired maidens (different seeds):
(masterpiece, best quality, ultra detailed, ethereal fantasy), beautiful young elf woman, long flowing silver hair with soft waves, delicate heart-shaped face, large expressive eyes, subtle freckles, pointed elf ears, extremely pale flawless porcelain skin, soft luminous skin, smooth delicate skin texture, soft cinematic lighting, gentle rim lighting, dreamy forest background with bokeh, soft morning light, soft focus, painterly style,
Negative Prompt: (low quality, worst quality, blurry, deformed, plastic skin, shiny skin, oily skin, rough skin, textured skin, tanned skin, warm skin tone, heavy makeup, wrinkles, sharp edges, over-sharpened, cartoon, anime)
The other image just didn't have the porcelain skin tags. I tried a few other prompt settings with lighting and such, and these uploaded images were the closest.
Any help / advice is greatly appreciated. NSFW for the first image
Hi all,
We are starting a new AI team in our company. The team will be working on AI agents, model fine-tuning, model inference, and related tasks.
By “models,” I mean the latest open-source models (in the range of 70–80B parameters).
We are a team of around 10 people, so parallel serving will likely be required—for example, running multiple models simultaneously (e.g., Gemma, GPT-OSS, MiniMax, etc.).
Currently, I am looking for the best GPU machines to purchase for the team. We have a budget constraint of around ₹70 lakh to ₹1 crore. I would appreciate suggestions from people who are experienced with GPU-based systems.
We are specifically looking for machines that align with our requirements, with strong inference performance as well.
We have been using NVIDIA DGX Spark systems, but I’ve observed that the networking and throughput are somewhat limited for our use case.
Any recommendations or guidance would be greatly appreciated.
Hello,
Does anyone know if these enamel pins are from some series or club? I thought they could be tied to some anime, but maybe they're random. They have a YMC mark on most of them.
Thanks for any help!
With a peck on the cheek she squeezed my tentacle back inside, zipped up my skin suit, and I was off to another long shift at the maternity ward.
I've had these up a while and I do like them, but I'm just curious to know something about them, really. I don't plan on selling, but I would like to have two of them repaired and I'm wondering if that might be worth it, beyond the sentimental value.
And Sophie daydreams that a cool and handsome boy like her crush on the football team will put a stop to the teachers experimenting on her: she's fed up with all these bribes and threats and tedious mind-games made in an embarrassing attempt to get access to her body.
Well, or pretty close to it: they are excellent workhorses. I run them in real work scenarios, doing some of the work I used to do myself as a skilled expert in my field, billing $200 an hour. Of course, the key is building a system around their weaknesses, and I already had LLM systems doing expert work years ago when the first ones came out (shout out Nous Hermes 2 Mistral!).
But yeah, pretty neat. Especially noonghunnas club 3090, and you can have 3.6 27B fly on a single 3090.
"Domain expansion."
Does anyone know what it is?
This sub is becoming a venting ground for people who want to complain about the latest model becoming more stupid, the token quota getting lower, or prices getting hiked.
I'm getting the message, no need to repeat it 1000 times a week. I'd rather have constructive discussions on how to properly use prompts, agents, harness loops, etc…
And yes, I'm complaining about complainers; the irony is on me.
Any project ideas?
I do have this old camera that I got from my friend. Can I use it with an ESP32 or Arduino to build something?
I have built AI agents for email marketing.
Already sending 8,000+ emails/week for a client.
Built AI voice agents, but I don't know how to sell them.
Built an AI customer support agent for SaaS websites.
It was working fine, but I don't know how to sell it.
LinkedIn DM invite AI agent: built, but I don't know how to sell it.
LinkedIn + 3-step cold email funnel.
Built it, but I don't know how to sell it.
And more...
The thing is that I can build things, but I can't sell them.
Who can monetize these agents for a 20-30% commission?
And how do we avoid multiple agents exploring the same hypotheses?
I am currently banned from Claude for no reason, and I've sent multiple appeals, but nothing is happening.
The main stuff I talked about was how particle accelerators work, how neural systems work, and vibe coding a game engine just for fun.
I don't see how any of that got me banned.
I built DeckdOut for job seekers - it's a Chrome extension that uses AI to match your resume to job descriptions and help you actually get interviews. Spent months on it. Finally put it on Product Hunt.
Honestly the launch itself was less dramatic than I expected. No huge spike. But a few people found it who genuinely needed it and that felt better than any number on a dashboard.
Still figuring out PH as a channel. If anyone has launched something similar and has thoughts, I'd love to hear it.
Here's the listing if you want to check it out: https://www.producthunt.com/products/deckdout
Hi, I am taking out a loan for my Masters in Germany and wanted to know which bank to proceed with. Both banks are giving me the same offer, with two differences: Credila has partial compounding while Tata Capital has complete simple interest for the moratorium period, and Tata Capital is giving me a fixed ROI while Credila's is floating.
Any suggestions on which one to go with?
Hi everyone,
I'm hoping someone here can help me identify a metal fragment I found in a very remote and high-altitude ravine in the Sierra Almijara mountains, near Nerja (Málaga, southern Spain). The area is known for Civil War and post-war Maquis activity, but also had mining and charcoal-making activity during the early-to-mid 20th century.
The piece (see photos 1 and 2): The fragment is heavily corroded sheet iron or steel, with some grey matte areas still visible beneath the rust, possibly suggesting the original surface had a zinc or tin coating (galvanized or tinplate). The metal is thin — typical of pressed/stamped sheet metal construction rather than cast iron.
The key feature (see drawing, photo 3): The most distinctive element is a raised oval embossed relief in the center of the face, surrounded by a rectangular perimeter frame also in relief, with rounded corners. Inside the oval there is a small semicircular raised mount — which I believe was the anchor point for a folding carry handle. I've made a sketch showing the relief pattern, with approximate dimensions of 14.5 cm tall × 17 cm wide.
My best guesses: It could be a lid or side panel from a military ammunition box, a tool or equipment case, a miner's carry box, or some kind of field kit — but I genuinely can't pin it down. The embossed oval-with-handle-mount design feels very specific and perhaps recognizable to someone with experience in this kind of material.
Any help identifying the type of box, its likely origin (military, civilian, Spanish, foreign surplus) and approximate date would be greatly appreciated.
Thanks in advance.
We built our own hosted OpenClaw alternative 👋
We just launched our new Botsify update on Product Hunt today 🚀
Built AI agents that can actually do work, not just chat.
They can:
* Browse the web
* Write & execute code (using a sandbox cloud machine)
* Use 5,000+ skills
* Remember context
* Complete tasks autonomously
And agencies can fully white-label the platform and sell it under their own brand.
Please remove the light bulb on top of my cap. Tipping $5.
My dad's mom had a GIC of 17-18% rate. She saved over a few years and used her GIC alone to buy her other son an apartment.
People say these rates are one of the reasons my country went from blooming economically to collapsing. Decades upon decades of this (though there's more reasons) .
A few decades later, bond interest rates were close to 30%.
Now we're in the shit off.
What about you?
Benchmarked the GraphRAG SDK against eight other GraphRAG and RAG systems on the GraphRAG-Bench Novel dataset.
The evaluation covers 2,010 questions across four task types: Fact Retrieval, Complex Reasoning, Contextual Summarization, and Creative Generation.
All tests ran on a MacBook Air (Apple M3, 24 GB) using GPT-4o-mini via Azure OpenAI for both answer generation and scoring.
Queries: The evaluation runs against 2,000 questions drawn from the dataset. Here are two representative examples:
GraphRAG-SDK : https://github.com/FalkorDB/GraphRAG-SDK/
Official benchmarks: https://graphrag-bench.github.io/
Data: https://huggingface.co/datasets/GraphRAG-Bench/GraphRAG-Bench
Disclosure: affiliated with FalkorDB and sharing our open-source work to collect feedback. Drop a star if you found it useful, thank you
Basically, I have a Fred Meyer account connected to my phone number. I clipped some coupons, but QFC is more conveniently located than Freddy's. Can I put in the phone number I use at Fred Meyer's, with the coupons on it, and have them applied at QFC, or do I have to make a new account for QFC?
Which one is more accurate? Why? Do you incorporate it in pace planning?
I'm 23F. I have a lakh rupees lying around and I want to invest it somewhere. I'm aware that putting it into different areas like FDs, a lump sum SIP, or stocks will help, but I'm not sure how to start or which specific companies to invest in.
I know that's not a very big amount, but I don't want it to go to waste just like that. Any suggestions?
A keyword search that returns zero results is an obvious failure. The user reformulates, or you log it and add a synonym. Vector search never returns zero results. The nearest neighbor always exists. So when the system fails, it does so by confidently returning incorrect results that look identical to the correct ones at the API level.
A few failure modes I keep running into:
The pattern across all of these: the failure is invisible at the system level. Your dashboards show queries served, latency green, and the zero-result rate is near 0%. Quality has degraded, but nothing alerts on it. Compare to a keyword-only system, where vocabulary mismatches show up directly as zero-result rates and reformulation patterns in the logs.
The hybrid retrieval pattern (BM25 + vector with RRF or learned fusion) is the most common answer I've seen in production. Lexical handles the exact-match cases vector can't, vector handles the semantic cases lexical can't, and the fusion step decides which signal to trust per query.
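For anyone unfamiliar with the fusion step, here is a minimal sketch of reciprocal rank fusion (RRF). The doc IDs and the conventional k=60 constant are illustrative, not taken from any specific system:

```python
# Minimal RRF sketch: combine ranked doc-id lists from a BM25 index and a
# vector index. Each doc scores sum(1 / (k + rank)) over the lists it
# appears in, so docs that rank well in BOTH signals rise to the top.

def rrf_fuse(rankings, k=60):
    """rankings: list of lists of doc ids, best first. Returns fused order."""
    scores = {}
    for ranked in rankings:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["d3", "d1", "d7"]    # exact/lexical matches (illustrative ids)
vector_hits = ["d1", "d9", "d3"]  # semantic nearest neighbors
fused = rrf_fuse([bm25_hits, vector_hits])
# d1 and d3 appear in both lists, so they end up ahead of d9 and d7
```

A learned-fusion variant would replace the fixed 1/(k+rank) weighting with a model trained on click or relevance data, but the plumbing stays the same.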
Curious what other patterns people are running with?
Especially around:
Context: I'm writing a book, "Designing Hybrid Search Systems", published on Leanpub (early access, ~600 cited references across 20 chapters). I'll share if anyone is interested.
Mom is in the yellow
she was the best mom ever!
Is it because of the update? Or is it under maintenance?
I tried to play but it won't let me. Does anyone know what time it will be playable?
At this point it's totally unusable. Not sure if it's a model issue or skill issue.
After the laning phase, my view is that ADC/support is mid, top is cross-map from the objective, and mid is on the lane for the next objective. But it's not always this simple. If mid is super weak and will definitely die in a side lane (either because he's bad or far behind), should mid play mid to be safe? My view is that the strongest players go mid so they can shove the wave and quickly move to wherever something happens. But if the team in the side lanes suffers for it, is it viable for ADC/support to take a side lane and for mid to go mid?
I thought of the acronym “PATTUKK,” which, in order, stands for Pakistan, Afghanistan, Tajikistan, Turkmenistan, Uzbekistan, Kyrgyzstan, and Kazakhstan. This addresses all the -stan countries by recalling them and their position by using the first letter of each country’s name for the acronym “PATTUKK”. Starting (south to north) from the coast nearest to the southern-most ocean (subdivision of the Indian Ocean: the Arabian Sea) and going from south to north, which is convenient for practical land navigation.
Although I don't speak Tamil (a South Asian language), the term "PATTUKK" can apparently mean "silk" or "for silk" in Tamil. Informally, it can also stand for Pattukkottai, a coastal town in the state of Tamil Nadu in South India, on the Palk Strait (a subdivision of the Indian Ocean). Central and South Asia, including all the countries ending in "-stan", are considered the central heart of the ancient Silk Road. Pattukkottai translates to "Silk Fort", so the acronym "PATTUKK" can be remembered through these "silk" meanings: thinking of it as shorthand for a South Indian coastal city reiterates the significance of Central and South Asia as a geographical division of the famous ancient "Silk" Road. Pattukkottai itself is in India, not very close to the "-stan" countries, but it shares a broader region with a few of them.
This mnemonic strategy uses an acronym to identify geographical positions and names of all seven countries that end in “-stan”. And the word just so happens to be a phrase or expression in Tamil meaning “silk,” “for silk,” or “to silk” and can be short for a town in South India whose name means “Silk Fort”. Could this acronym help someone identify the Central and South Asian countries all ending in “-stan”? As well as their historical position on the Silk Road and how to mentally position them in relation to each other with their modern borders?
Note: using a map will clarify the corresponding countries and their order starting from the Indian Ocean coast and going from south to north. Keep in mind one obstacle: deciding which reference point to use for the countries' geographical positions when ranking the southernmost ones. A different approach may not produce the acronym "PATTUKK", so an internet search for "all the -stan countries from south to north" should help avoid inconsistencies.
I got this on socials and I'm wondering if this place exists
Official page | Leaguepedia | Liquipedia | Eventvods.com | New to LoL
NS | Leaguepedia | Liquipedia | Website | Twitter | Facebook | YouTube
T1 | Leaguepedia | Liquipedia | Website | Twitter | Facebook | YouTube | Subreddit
Winner: T1 in 29m
Game Breakdown | Runes
This thread was created by the Post-Match Team.
I'm trying to use prompt relay to generate videos, but the character consistency just isn't there. I've tried most of the workflows available on YouTube, but they all have the same issue. I don't know how they show the same character across a video; is there a trick or something I'm missing?
Well since you guys didn't want to share your obvious wealth of knowledge with me on my other post I decided to go ahead and blow the roof off the budget and say the sky's the limit.
Eat your hearts out! I'm going to bring this bad boy to each one of your states one by one and find all of the deepest of treasures. Before that though can anyone lend me some money for 15 AA batteries?? I'll pay you back with all the deep treasures I find.
Pin your location on this map and see how close you were: https://geopinpoint.skippylab.com/geobattle/as0ical2xd#googtrans(fr|en)
So correct me if I'm wrong, but I think Sarah and Ben share an office space, right? And then Jane and Ashley. What are the office arrangements for the others?
Went through my wallet history and added it all up out of curiosity. Between MetaMask cuts, slippage I didn't notice, and double fees from bridging and then swapping separately, it was way more than I expected.
The worst part is that most of it was surely avoidable; I just didn't know what to look for at the time. I feel sick. Same experience for you? Curious about gas fee checkers as well: where do you look?
Acrylic on canvas board
I made a painting of one of fortunaliquidaa's animations.
Assuming that Veo 4 will be showcased at Google I/O this year, what are your expectations for this video generator?
In the year since Veo 3 came out, it has already been surpassed in a number of ways by other models like Sora 2, Kling 2.5/3.0, Runway, and Seedance 1.5 Pro and 2. My main expectation is for Veo 4 to be on par with a combination of Sora 2 and Kling 3.0. Since Google is an American company, American English audio should sound natural, regardless of accent, and not sound like a TTS model. Prompt coherence should drastically improve, probably using Gemini 3.1 to help out. Lastly, more film styles should be available: Sora 2 was capable of emulating amateur cellphone footage, security cam footage, and IMAX cinema, and it would be nice for Veo 4 to have the same.
I just set up Claude locally on my Windows machine using PowerShell.
Honestly didn’t expect it to feel this different compared to using APIs.
Main things I noticed:
No latency issues
Feels more “in control”
Can experiment freely without worrying about usage limits
Now I’m thinking beyond just chatting with it.
I want to turn this into something more useful — like an AI agent that can:
monitor competitors
suggest actions
maybe even execute tasks automatically
Still early, but curious…
Is anyone here running LLMs locally and actually using them for real workflows (not just testing)?
Would love to know what you’re building.
I'm a final year engineering student from Pune and built PuneCivicAl, a civic-tech platform for reporting local issues to Municipality | Corporators more easily.
I recently presented it at my college's Tech Fusion 2K26 project competition and won First Prize.
It was exciting to see a project focused on solving real civic problems get recognized. Sharing here for feedback and suggestions on how it can be improved.
same as title
Hey guys - need your help! I started a small side hustle and it kinda took off faster than I expected (pet accessories don’t judge🐕). At first it was just me doing everything manually (outreach, followups, small campaigns etc) and it worked fine… until it didn’t. Now I’m hitting a point where growth is good but my processes are a mess. Stuff falls through the cracks, I forget to reply to people, and I’m def not scaling this properly. The unexpected part is some of the homebrewed things that have worked so far weren’t what I thought would work at all. What’s something random or unexpected that actually worked for you as things started growing? Looking for ideas before I duct tape my way into burnout 😅
I kept quitting productivity apps because the setup became a second job: dashboards, streaks, settings, analytics, systems to maintain.
So I built Pith as the opposite: one quiet focus block that feels alive.
The core loop:
- pick a focus duration
- start a session
- an animal grows while you stay focused
- if you quit early, it gets hurt
- ambient sounds if you want them
- no ads
- core focus features are free
It is iOS-only for now. It is not an app blocker and not a giant productivity system. The point is to make starting easier, not to give you another place to procrastinate.
App Store:
https://apps.apple.com/us/app/pith-focus-flow/id6762040939
What I would love feedback on:
- does the animal consequence feel motivating or too guilt-heavy?
- does the App Store page make the concept obvious fast enough?
- would you keep this after one completed session?
Since I started posting and focusing on organic growth, I’ve been getting an increasing number of emails and inquiries from people pushing marketing platforms and it’s becoming quite annoying. How do you deal with that?
I made this app over six months to help me and my friends do more efficient research when drafting fantasy teams or making parlays. It's free while in testing on TestFlight (iOS only); check it out, and I hope it improves your parlay decision making!
MLB in development at the moment.
I'm a recent graduate, but back in my college days I used LLMs to study, since the coursework was too complex for me to grasp, so I always gave a particular set of instructions to GPT or Gemini before they started tutoring me. It goes well for the first few questions and then collapses: the attention mechanism favors recency, and my instruction tokens at the very start of the conversation get buried. I ran diagnostic tests to fix this, since I was so done with over-prompting and starting fresh each time.
So I built my own bot with Gemini's API and a system prompt, so it never forgets the teaching instructions. I kept it stateless because of resource constraints and to avoid future context window issues, and implemented an intelligent memory: a prompt extracts important info from each turn and stores it in a vector DB, along with conversation summaries. For follow-up questions, there's a two-turn sliding buffer window that only activates on follow-ups. All the info and summaries are stored with metadata for hybrid search. For reranking, I use a weighted score of recency plus importance plus similarity under a context budget.
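The reranking step described above could be sketched roughly like this; the weights, field names, and greedy packing strategy are my own assumptions, not the author's actual implementation:

```python
# Sketch: score each memory chunk as a weighted sum of similarity,
# importance, and recency, then greedily pack the best-scoring chunks
# into the prompt under a token budget. All weights are illustrative.

def rerank_and_pack(chunks, budget_tokens,
                    w_sim=0.5, w_imp=0.3, w_rec=0.2):
    """chunks: dicts with 'similarity', 'importance', 'recency' in [0, 1]
    and a 'tokens' count. Returns the chunks selected for the context."""
    scored = sorted(
        chunks,
        key=lambda c: (w_sim * c["similarity"]
                       + w_imp * c["importance"]
                       + w_rec * c["recency"]),
        reverse=True,
    )
    picked, used = [], 0
    for c in scored:
        if used + c["tokens"] <= budget_tokens:  # skip chunks that overflow
            picked.append(c)
            used += c["tokens"]
    return picked
```

In a real system, 'recency' would be a decayed timestamp and 'importance' the score extracted per turn; the budget control is what keeps long study sessions from blowing the context window.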
I don’t know if this is considered basic or not for my level but the whole thing started from a real frustration I had while studying. I wasn’t initially trying to build an AI product for the sake of it. I just wanted a tutor that would stay consistent and actually remember how I wanted to learn instead of slowly drifting after a few turns.
Working on this got me really interested in the edtech side of AI, especially around long term learning interactions, memory, personalization and how tutoring systems behave in real usage instead of controlled demos.
I’ve been exploring a few YC startups in the space and it feels like there’s still a lot of room to build around actual learning behavior rather than just content generation.
Keep my app for 14 days, and in exchange I will do the same.
https://groups.google.com/g/georewardstesters
https://play.google.com/store/apps/details?id=com.GGDigital.GeoRewards
You basically climb the corporate ladder by:
- saying vague things in meetings
- scheduling more meetings
- avoiding actual work
Then you get a performance review like this ↓
Try it here: corporateladder.xyz
Curious what role you end up with!
Can't seem to find it now however.
started a small tool last summer, an AI that summarized weekly notion pages into shareable digests. got it live in 6 weeks. paid users at peak was 47, MRR around 380 dollars.
shut it down 2 weeks ago. wanted to write up what actually killed it because everyone says churn or distribution but for me it was something else.
churn was 7 percent monthly which is high but not catastrophic for a small tool. distribution was the hard part but i had figured out a steady drip from 2 communities and a small newsletter that got me 3-5 trials a week, fairly cheap.
what actually killed it, the underlying API costs went up 40 percent in march and notion shipped a feature that did 70 percent of what my tool did, for free, native. so my pricing got squeezed from both sides at once. the API increase alone wouldnt have killed me, the native feature alone wouldnt have, but together they made the unit economics impossible to fix without a full pivot.
i tried to pivot. spent 5 weeks trying to add features that notion didnt have. cross-workspace summaries, slack integration, custom voice prompts. trial-to-paid actually dropped during this period because the messaging got muddled. the homepage couldnt explain what i did anymore in one sentence.
the lesson i took, when youre building on top of someone elses platform and they ship into your wedge, you have weeks not months to react. and reacting by adding features is slower than just shutting down and starting something new with the cleared up understanding of what platform risk actually feels like.
im not bitter. learned more from these 9 months than from the 4 years before. but i wish someone had told me that platform risk is not just an investor talking point, it actually happens fast when it happens, and the version where it happens isnt usually a dramatic shutdown of the API, its a quiet feature ship from the parent company that you hear about in their changelog.
I built a small desktop tool that lets you place animated GIFs anywhere on your screen.
You can drag them around, lock them, make them click-through, and control their speed, scale, and opacity.
GitHub: https://github.com/Ertugrulmutlu/OpenAnima
Release: https://github.com/Ertugrulmutlu/OpenAnima/releases/tag/v0.1.0
Would love to hear feedback.
Hi all!
I've been working on an open source Android app distribution system called SafeHaven. Its main focus is on trust, security, and transparency.
It’s still early to be honest, but the base pipeline is already working (unreleased beta). A build of the client will be out this week.
This mostly started out of frustration with the current Android landscape:
- The Play Store dominates everything
- Alternatives exist, but often lack clear trust signals
- And then there’s the “wild west” of APK sites with little to no way of verifying a legit app from malware-based clones
SafeHaven aims to sit somewhere in the middle of this, and close that gap.
How does it work right now?
- Developers create an account (email or private key)
- They submit an APK and link it to a source repository
- Ownership is verified
- The APK is scanned and hashed
- After a short waiting period, it’s added to the public catalogue
- End users can then download the app without accounts
Each app exposes simple trust signals like:
- Verified Source (repo ownership)
- Verified developer signature (updates match the original signer)
- Latest scan result (server-side malware scanner)
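As a rough sketch of the "scanned and hashed" step, the catalogue could store a SHA-256 digest of the scanned APK so a later download can be verified byte-for-byte against it. The function names here are illustrative, not SafeHaven's actual code:

```python
# Sketch: hash an APK at submission time, then verify downloads against the
# published digest. Reading in chunks keeps memory flat for large APKs.
import hashlib

def apk_digest(path, chunk_size=1 << 20):
    """Return the hex SHA-256 digest of the file at `path`."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_catalogue(path, published_digest):
    """True if the local file is byte-identical to what was scanned."""
    return apk_digest(path) == published_digest
```

The "verified developer signature" signal is a separate check (the APK's signing certificate must match the original signer); the content digest only proves the downloaded bytes are the scanned bytes.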
The full submission process is *almost* done.
I’d appreciate any feedback or suggestions, especially around the trust model.
I’ve been building a small AI character roleplay site called Chaytai.
I’m not trying to spam or pretend it’s perfect. I’m mostly looking for honest feedback from people who actually use Character.AI alternatives.
The things I’m working on right now are:
- faster replies with fallback models when one gets rate-limited
- better mobile chat UI
- recent chats that feel cleaner
- character creation and discovery
- making roleplay replies feel less generic
If you use AI character/roleplay sites, what makes you stay on one?
And what makes you leave immediately?
Site, if anyone wants to look: https://chaytai.com
Every time I started a new project with Claude Code, I spent the first hour going back and forth on stack, auth, and features before a single file got written. So I built quickstart-ai — a 5-minute branching interview that generates a structured SPEC.md and agent instruction files (CLAUDE.md, .cursorrules, etc.) before you hand it to your coding agent.
**How it works**
• Run `npx quickstart-ai`
• Answer ~20–100 branching questions (irrelevant sections skip themselves)
• It writes SPEC.md + per-agent instruction files tailored to Claude, Cursor, Windsurf, Copilot, Aider, or a universal AGENTS.md
• Optional Claude-powered smart pre-fill if you have an Anthropic API key
**What I'm actually looking for**
• Does the generated SPEC.md format actually help your agent stay on track?
• Are there question branches missing that you always end up clarifying manually?
• Would you use this, or is it solving a problem that doesn't really exist?
Repo is open source (MIT) if you want to peek or fork the questions: github.com/NijeMatija/quickStart
Looking forward to your feedback!
hey guys. just wanted to vent/share a bizarre seo experience with my recent side project.
for the last 5 months, i’ve been trying to get the inner pages of my engineering tool site indexed. search console just kept throwing me into the classic "discovered - currently not indexed" purgatory. i tried everything: fixing core web vitals, sitemaps, internal linking, api pushes. absolutely nothing worked.
yesterday i finally lost my patience, said screw it, abandoned the old domain, and just bought a fresh one: induspecs.com . migrated the exact same codebase over.
literally the exact same day, google started indexing my calculator pages. 5 months of pure frustration solved by spending 10 bucks on a new domain. i guess my old domain was just completely shadowbanned or cursed?
anyway, the project itself is an ai-powered reference toolbox for industrial engineers (calculating pipe weights, bolt torques, metal specs, etc., powered by llama 3). the stack is super lightweight: html, tailwind, and a bit of vanilla js.
has anyone else dealt with a "cursed" domain like this? is this just how google treats new sites now?
also, if there are any mechanical engineers lurking here, would love to know if the hub structure makes sense to you. cheers.
The problem: every clip tool I tried stops halfway. They detect clips but still make you edit manually, or they let you edit but won't auto-post. Nobody was doing the full pipeline, so I built it.
What we built: vibecut.net
You set a posting schedule. Vibecut does everything else:
Also comes with a streamer catalog showing performance stats by game, so you're not guessing who's actually popping off right now.
Zero editing. Zero manual posting. Just clips going up while you sleep.
Finally after working on our SaaS product, we are getting ready for Product Hunt launch next month.
How should we plan it, given that we don't have any audience or social followers?
Most founder conversations follow this pattern:
-> "Our retention's dropping. We need to launch more features."
-> "Our pipeline is weak. We need more budget for ads."
-> "Users are complaining. We need to build more."
And the team executes. Good job. Yet, the metrics barely move.
And the reason? They're solving the wrong problem.
The pattern I keep seeing:
Retention Problem = "Build more features"
Reality = Users don't understand its value fast enough.
Pipeline Problem = "Spend more on marketing"
Reality = The funnel has structural issues; more traffic just exposes them faster.
Adoption Problem = "Ship more capability"
Reality = Customers are overwhelmed with complexity.
When there's a misdiagnosis, you have to compensate for it downstream. You end up burning cash and runway chasing symptoms instead of causes.
Before pulling the trigger, step back and ask:
"What's the actual problem we are solving?"
Not the symptom, but the root.
Most growth problems in SaaS don't need a clearer input. They need a clearer lens.
What repeating patterns have you noticed in your own metrics?
I’ve always struggled with app fatigue. I had one app for my freelance deadlines, another for my car’s service intervals, one for my pet's vet visits, and of course, a separate one for daily reminders.
I thought: "Why can't my assistant just handle all of this?"
So, I built VekilAI.
In my language, "Vekil" means a delegate or a proxy—someone who acts on your behalf. That is exactly what this app is: a modular AI Super App designed to be your digital representative and handle the chaos of daily life.
What can you actually do with it?
I’m a solo developer looking for honest feedback from anyone who wants to simplify their routine. What features would make an AI assistant "essential" for your daily life?
Check it out here: PlayStore: https://play.google.com/store/apps/details?id=com.fsa3web.vekilai&hl
Appstore : https://apps.apple.com/tr/app/vekilai-ai-agent-assistant/id6761864203
Hello SideProject! Today I'm releasing Lapser Studio, a small macOS app that helps you record timelapse screen recordings and make them beautiful.
If you're creating these kinds of recordings with classic recording software, you end up with massive recording files and slow video editing software when cutting the clips.
Lapser Studio solves this by letting you hit record and, once you're done, edit the background, camera position, and foreground size super easily.
The previous app I used years ago (Hustl, I believe) hasn't been updated in a long time, and apps like Screen Studio became the norm for simplifying editing. So I thought: why not blend the two? The outcome is an app I'm super proud to share.
Lapser Studio is available via my own website for 30 USD (REDDIT25 25% off for reddit peeps) as well as the App store ➡️ https://apps.apple.com/fr/app/lapser-studio/id6758652797?l=en-GB&mt=12
Tysm and I hope you find it useful 🫶
The prompt was "Create a Cross-sectional blueprint of a massive underground steam-powered city in a post-solar-apocalypse world, constructed within a deep cylindrical subterranean structure. Layers of platforms, turbines, boilers, water systems, and dense habitation zones interconnected by ladders and walkways. Industrial steampunk aesthetic, technical diagram style, precise linework, annotations, worn blueprint paper texture, highly intricate".
Originally it was just "Create an image of a blueprint to an underground, post solar apoc, steam driven, repurposed missile silo city (without missile, obviously)"
Building areev — a database for AI agents that ships compliance artifacts (GDPR erasure proofs, EU AI Act conformity reports) at the engine layer instead of bolting them on later. With enforcement landing on August 2nd, made a short explainer on what regulators will actually be checking for.
Feedback on the video itself welcome — first attempt at this format. Especially curious if the regulatory angle landed for non-EU builders, or whether it reads as too EU-specific to be relevant.
https://www.instagram.com/p/DXr0Thjj9UE/
You can now finally wash your car, I guess.
Hey all,
I’ve been working on a small side project called Anjou to make it easier to find kids activities around Chicago / north suburbs.
Basically I got tired of jumping between park district PDFs, websites, library calendars, etc. every season just to figure out what’s available.
So I built something that:
\* pulls activities into one place
\* lets you filter by age / schedule / location
\* links straight to the registration page
It’s still pretty early, but already saving me a ton of time.
I did come across something similar for Naperville (playgrades.app), which made me realize this is probably not just a “me problem.”
Right now I’m mostly focused on Northbrook / Buffalo Grove area.
If you’re a parent nearby, I’d really appreciate any honest feedback! Especially what feels missing or confusing.
It's free right now, and it's also available as an iOS app.
The site is https://anjou.app
Thank you for reading!
I’ve had a few ideas that looked solid on paper, but once I started building, the reality hit—hidden costs, complex edge cases, or just no real demand.
It made me realize that validating an idea is much harder than coming up with one.
So I’m curious:
What’s one SaaS idea you were excited about initially, but later realized wasn’t worth pursuing?
And what specifically made you drop it?
Trying to learn from real experiences before I invest time into my next build.
Would appreciate honest insights (failures included).
I am building the design system for my company using Claude Design, but we also have built a skill that contains our brand guidelines.
What’s the difference between these 2 ways of applying the brand guidelines to content assets and landing pages? If the skill contains the same information as the design system, outputs should be equal, right?
Besides the export to google slides is straightforward from the skill, and also leaving aside the fact that the design can break when transferring a slide from a skill to gdecks, is there any other thing to consider when deciding between using the brand guidelines skill vs the design system?
Maybe that from the design system you can directly export to Claude code?
My hypothesis was that a skill with brand guidelines is more suitable for quick look & feel updates, while the design system is more suitable for producing assets from scratch, just by submitting plain text to the system.
Can anyone help with this? Thanks!
After spending a while running all kinds of benchmarks with GPT Image 2 on a ChatGPT subscription, here are some early takeaways:
- Image generation speed is basically the same across models, thinking modes, and whether fast mode is on or off. I tested GPT 5.4 with low, medium, high, and xhigh thinking, and GPT 5.5 with low, medium, high, and xhigh thinking. The average is around 100s without a reference image, and about 150 - 180s when adding reference images.
- Somehow, this thing generates the best results with GPT 5.4 + high thinking. That setup consistently gives the best image outputs. I have no idea why.
- The max output is only around 1K resolution. This can break image quality when doing Photoshop work at larger resolutions. That makes sense, though, since a 4K image via API costs around $0.40 per image. If ChatGPT could generate at that quality, who would still buy the API? lol
However, the low resolution issue can be solved in several ways, like upscaling and so on. The main thing is getting the image generation right first.
Hope you like the demo.
This is a recent observation, but it's now very noticeable. This was not the case just a week ago, but now Claude takes up to 45 seconds between tool calls in Claude Code, which is close to unusable for me.
A quick check on system load shows it's not the culprit.
Likely culprits, ranked:
Is it just me, or are others noticing this degradation in API call latency?
Built this over a weekend. Sharing because the architecture surprised me — the entire thing runs on the One agent platform with no backend code of my own.

**What it does**
- I send any meal to my Telegram bot (text, voice note, or photo)
- The agent estimates calories + macros, writes a row to my Notion database
- Dashboard reads from Notion live: today screen, weekly review, streaks, deltas, transcript log, station view
- Every Sunday at 8 PM the agent reads my last 7 days, sends a coach letter on Telegram, and emails a designed dark-mode HTML recap
- If it suggests something ("raise protein 122g → 142g"), I reply YES on Telegram and my macro targets update for next week
- Dashboard recomputes against the new target on next reload

**What's surprising about how it's built**

The One platform handles natively:
- The cron scheduler (5 schedules — 4 meal pings + Sunday recap, all stored on the agent)
- The Telegram channel (no webhook server, no Lambda, no relay)
- The LLM calls (no SDK plumbing in my code)
- The integrations (Notion, Gmail, Deepgram are all native One connections, no per-platform auth in my repo)

So my "code" for the agent is literally three Markdown files I pasted into the Skills panel:
- meal-logger.md (writes meals to Notion)
- weekly-recap.md (Sunday letter + email + writes pending suggestion to Notion)
- apply-targets.md (reads YES/NO replies, updates targets, clears pending)

The dashboard is just Next.js reading from Notion through One's passthrough API. No proprietary database. All my data lives in my own Notion — I can stop using the agent platform tomorrow and my data is still mine.

**Stack**
- One platform — agent runtime, scheduler, channels, LLM, integrations
- Notion — Meals + Targets databases (schemas in the repo README)
- Telegram — chat surface
- Gmail — Sunday recap
- Deepgram — voice transcription
- Next.js 16 + React 19 — read-only dashboard with dark mission-control UI

**Repo (MIT):** https://github.com/krish221997/calorie-coach-dashboard
**Demo video (4 min):** https://www.youtube.com/watch?v=c9XCCUUTI-s
**Live dashboard:** https://calorie-coach-dashboard.vercel.app
**Create your own One agent:** https://app.withone.ai/agents

The README walks through the exact Notion DB schemas you need (Meals + Targets, including the Pending Field/Value/Reason columns the YES/NO flow uses) and the three skills to paste into your own One agent. Setup is maybe 30 minutes if you have Notion + a Telegram bot ready. Happy to answer questions about how any of it works — the agent platform side or the dashboard side.

"This or That" is one of the highest reshare-per-Story formats: viewers screenshot, mark their picks, and post their version back to you. The format usually dies in Canva because there's no clean way to hand viewers an editable version.
This tool fixes that with a share link: build the card (pick a topic or type pairs, optionally pre-mark your sides, pick a theme), grab the link, drop it into an Instagram Link Sticker on your Story.
Viewers tap -> land on a page with your pairs pre-filled and every pick cleared -> mark theirs -> download -> post.
Output is a 1080×1920 PNG (works on Stories, Reel covers, TikTok, Snapchat).
Useful for:
- Brands running engagement Stories without Canva busywork
- Creators farming reshares from their followers
- Anyone testing the binary-poll format without designing one from scratch
Free, no signup, runs in your browser tab.
Disclosure: I built it for Storrito (Story scheduler) — the tool is standalone and free.
Hello, I wanted to ask people with actual knowledge instead of simply searching. I have a project and I am a one-man team. Is it realistic to say I can build a whole website and host it, while keeping it fully customizable, with no coding experience, using Claude?
The site won't be very complicated; it's simply for selling. Though there's a feature I want to add where you can select a few items and an AI generates a picture of what the whole bundle will look like (of course I'll train the AI with real pictures of the actual items I have to improve accuracy).
Ex: let's say you select a shirt, trousers and pants; the AI will generate a picture of the whole outfit on a silhouette that has a similar figure to yours, depending on your size of course.
My other main concern is maintaining the site: in the case of bugs, I don't want a lot of downtime while I fix things myself or search for developers. Should I just delegate to a dev from the start?
PS: budget is super tight...
Constraints:
Must be realistic (not something extremely expensive like unlimited AI generation)
Something you would actually use regularly
Ideally: a feature or experience you don’t see offered this way today (e.g., not commonly available as a free-with-ads model)
Something you currently pay for or find annoying to access
I'm trying to uncover real unmet needs, not fantasy ideas.
What would you pick, and why doesn’t a good version of this already exist?
If you know someone who has strong opinions on tools or productivity, feel free to share this post with them, I’d love to gather diverse perspectives.
Every day I’d see unfinished tasks from yesterday, and instead of helping me stay organized, it just made me avoid the list entirely.
So I started thinking —
what if a planner didn’t carry over everything by default?
Like, you only focus on today, and consciously decide what to bring forward.
Do you think that would actually help with productivity, or just hide the problem?
I just asked it to tell me what it was good at and I thought it was funny that right after they told me it was too busy to deal with me at 2:30 in the morning, it asked me to give it a star rating..
Spent the last 6 months building a directory of every open-source AI agent project I could find. Now sitting at 67K projects.
Two observations specifically for r/AI_Agents:
**Supply explosion is real.** Monthly new agent project creation went from ~50/month in early 2024 to ~27,720 in March 2026. That's 45× in ~24 months. The shape of the curve isn't gradual — it's a step-function around Q4 2025 when Anthropic released the Skill Spec + Claude Code shipped one-step install.
**Demand hasn't kept up.** 54.1% of all 67K projects have 0 stars. Top 1% of projects own 83% of all stars. The gap between "I shipped" and "anyone uses it" is the widest I've seen in any creator ecosystem.
What this implies for r/AI_Agents folks building/picking agents:
- If you're picking, star count is actually a fair signal up to top 1% (correlates 0.71 with my quality score)
- If you're building, the format wars are over — pick MCP or Claude Skill, both are fine
- The actual moat is "what task does it solve in your specific workflow?"
Browsable index + free 12-chapter writeup of all the data: dropping link in first comment to avoid spam-bot.
This is an app I built myself. My wife and I have been struggling with bedtime stories. The kids always want stories, but mine get repetitive. It’s hard to come up with something engaging and honestly who has the mental bandwidth to create a great story at 8 pm??
I’m also a terrible storyteller.
I built a web app that generates short (2-4 minute), personalized audio bedtime stories tailored to your kid’s age and interests, based on the story idea your kid might have!
The app is in beta, and I’m looking for a few parents with 2–8-year-olds to test it and give brutally honest feedback by filling out the form in the app.
Stories can be generated in multiple languages, lmk if your language is missing and I can add it!
I'm looking for feedback on onboarding, usability and story quality. Pls use the feedback form in the app. In exchange, I will give you some extra credits so you can generate more audio stories.🙏
You can try it here (no download required): https://lumiostories.com/
Caveman is getting really popular, so was interested to know if it actually outperforms a simple reminder to "be brief".
Ran 24 dev prompts across 6 categories, comparing 5 arms (baseline, "be brief.", caveman lite/full/ultra). Judged by a separate Claude against per-prompt rubrics.
Scores:
| Arm | mean score | mean tokens |
|---|---|---|
| baseline | 0.985 | 636 |
| "be brief." | 0.985 | 419 |
| caveman lite | 0.976 | 401 |
| caveman full | 0.975 | 404 |
| caveman ultra | 0.970 | 449 |

Surprisingly, the two words matched caveman on both tokens and quality. Caveman still earns its keep on consistent output structure, mode switching, and the safety escape on destructive ops, but the compression itself isn't the differentiator I expected. The safety escape actually caused a lot of variance in the output.
Full breakdown with per-category data and the variance findings on safety questions: https://www.maxtaylor.me/articles/i-benchmarked-caveman-against-two-words
Video: https://youtu.be/wijoYNiZq3M
Benchmark harness is open source if you're interested: https://github.com/max-taylor/cc-compression-bench
I have a small team. I wanted something between Basecamp and Linear. Simple enough to run a real product backlog. So I built Plate.
No filters to configure, no backlog grooming, no sprint ceremonies. Just projects, tasks, and your team.
We launched Monday. If you're running a small team and you've ever felt like your PM tool is working against you, I'd love for you to try it and tell me what's missing.
The AI race is moving from model power to system power.
A lot of AI apps are still stuck at the prompt-engineering stage. But the more practical question is: how do we actually turn AI into a real, working product?
I think AI application technology can be split into five layers:
At the bottom is the LLM. It understands and generates content. It is the “brain.”
Above that is MCP. It connects external tools and data in a standardized way. It answers the question: “How do we connect everything?”
Then comes Skill. This turns business logic into reusable tools.
Next is Agent. It breaks down tasks, chooses tools, and keeps the whole execution loop running.
At the top is Harness. It provides testing, evaluation, monitoring, deployment, and other engineering support.
The real value is not just the model. It is the full execution chain:
User input → LLM understands → Agent plans → Skill gets called → MCP connects external capabilities → execution and verification → result returned
A request is no longer just a simple Q&A. It becomes a task flow that can be planned, executed, checked, and improved.
Many AI apps fail to go deep because they are missing a few key things:
standardized connections through MCP, reusable capabilities through Skill, task orchestration through Agent, and engineering support through Harness.
Without these, AI stays at the demo stage.
With them, AI can become a real system.
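The execution chain above can be sketched in a few lines. This is purely illustrative: every function name here is invented for the example, and no real framework or API is assumed.

```python
# Illustrative only: each layer from the post as a tiny function.

def mcp_fetch(resource: str) -> dict:
    """MCP layer: a standardized connection to an external tool or data source."""
    return {"temp_c": 21}  # stand-in for a real external call


def weather_skill(city: str) -> str:
    """Skill layer: business logic packaged as a reusable capability."""
    data = mcp_fetch(f"weather:{city}")
    return f"{city}: {data['temp_c']}°C"


def agent(task: dict) -> str:
    """Agent layer: plan the task, call skills, verify the result."""
    # A real agent would ask the LLM to produce this plan from the user input.
    result = weather_skill(task["city"])
    assert "°C" in result  # verification; a Harness would log and evaluate this
    return result


print(agent({"city": "Berlin"}))
```

The point of the sketch is the shape, not the code: each layer only talks to the one below it, which is what lets you test, swap, and monitor them independently.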
Solo iOS dev. Drill Simulator v1.2 went live on the App Store this week. ASMR/fidget app with six power tools and CoreHaptics tuned per tool.
The cursed part
This is the actual public changelog right now:
v1.2
· Now louder. Use responsibly.
· Annoy your neighbours in 4 new languages.
· Haptics so deep your downstairs neighbour thinks the boiler exploded.
· Fixed: that one user who said "this app should be illegal" now has 5 stars.
· Still free. Still no Wi-Fi needed. Still a terrible idea at 7am.
App Review let it through on first submission. I am still processing this.
What's interesting under the hood
· ActivityKit Live Activity in the Dynamic Island that hums along while drilling. Push tokens are a separate ceremony from notification tokens, took me 2 evenings to figure out.
· CoreHaptics patterns tuned per tool (saw heavier, sander gentler). The simulator lies, you need a real device.
· UMP consent then ATT prompt with a soft pre-prompt. Opt-in roughly 2x vs cold ATT.
· Next.js with next-intl landing in 14 languages.
· Cross-promo system between this and two sister apps (Chainsaw, Lawn Mower) without using AdMob house ads.
Numbers
Brand new. Pre-marketing. TikTok push starts this week. Will write a follow-up retro at 1k downloads if there's interest.
Links
App Store: https://apps.apple.com/app/id6757700473
Landing: https://drill-simulator-nine.vercel.app
Happy to answer anything about iOS solo dev, ASO, or how I got that changelog past App Review.
I'm struggling lately building my website in Next.js. I used the frontend-design skill from the official Anthropic website, but it still does random unnecessary stuff.
Does anybody know any good techniques or tools I can use to help me do it properly?
When Iran attacked data centers in the Middle East and markets dipped, I wanted to buy stocks at lower levels. I asked Claude: “Which Indian stocks could benefit from this?”
It reasoned that workloads would shift from the destroyed data center to the nearest AWS Mumbai region, and since AWS uses NETWEB to build new capacity, it flagged NETWEB Technologies as a direct beneficiary. I’d never had it on my radar. I picked up NETWEB, and in less than a month it gave me a 28% return.
Honestly, AI models are smarter than most people give them credit for—they just need real-time data to work with. News that moves markets, stock updates, macro events—once you feed all of that in, the analysis is genuinely impressive.
I’ve been using it for a month and haven’t opened a single screener since.
This is what investing looks like now: connect Claude to your live portfolio and live market data using any MCP server that gives Claude all the data it needs. I use Tapetide MCP for Indian markets. Ask in plain English, get analysis grounded in what's happening right now.
Anyone else running Claude for market research? What's your setup?
There is so much complaining on this forum. I recently switched up to Max because I was sick of hoarding my usage each week. But now I just don't understand where the complaints come from. I get that agents and coding burn through usage, but I don't think I could reach my limits even if I tried.
I am almost exclusively using Opus. I use it to write me silly research reports and chat with it, for multiple hours a day, for work, for fun, for random stuff. For example, today I asked 3 separate Opus chats to give me criticism and feedback on a 10k-word draft, to read a bunch of the background literature, and to consolidate everything and write a road map on how to implement it. Maybe I have used 17%.
So are there any people doing non-coding work on Max x5 who run out? Is it just coders, or am I just not utilizing it enough or in efficient ways?
So I guess also: any tips for what to do now that usage is just not a limit anymore? (I know that Anthropic is counting on people not using all their usage, but I'm curious whether I am leaving money on the table.)
Basically, like many other threads here before: it ignores instructions, and ignores corrections after its first implementation. For example:
Me: Implement X, Y and Z based on the supercoolplan.md.
Claude: Ok, X, Y and Z are implemented now. I also fixed a "bug" that caused this issue. The fix was changing the success threshold.
Me: You didn't test X and Y. They do not work. I also didn't ask you to change that success threshold. Please fix.
Claude: You are right to call that out, let me go back and fix that.
Me: You still didn't test X - it still does not work.
Claude: You are right to call that out, it was sloppy....
Me: Now fix it properly... No shortcuts...
What would constitute digital sentience? I'm asking here specifically since I'd like to hear fellow local LLM users' opinions on this, since in my opinion at some point we could be crossing over into talk of involuntary work etc. if systems become sentient.
I know this seems very far-fetched, but believe me, the future is closer than you think, and I'd like to see what people who use local LLMs think, since according to the big boys they are already AGI/ASI super-feeling beings that generate infinite universal income xD
I’m at my wits’ end, and I need some real talk here. I have a large family with three kids, two dogs, and a ton of daily mess to clean up. The traditional vacuuming routine just isn’t cutting it anymore. I’ve tried almost every regular vacuum under the sun, but no matter how much I clean, it feels like it’s never enough. Plus, I can’t physically drag a heavy vacuum around every few hours just to keep up.
I’ve been eyeing robot vacuums for a while now, hoping they could actually solve the mess without needing me to micromanage everything. But I’m stuck on the same question: Can a robot vacuum really maintain a clean house for a large family? Sure, it can do the initial sweep, but after a couple of days, I still find crumbs, dust, and pet hair in corners.
I’m especially looking for one that doesn’t require constant resetting or manual intervention—something that genuinely adapts to the chaos of a busy home. But from what I’ve seen, most of these vacuums get stuck, miss spots, or fail to handle larger messes. So, I’m curious if anyone here has experience with robot vacuums that can actually stay on top of cleaning in a large household. Does it work or is it just another gadget that doesn’t do much?
I don’t mind a bit of maintenance, but at this point, I need something that will keep the place feeling clean without me needing to vacuum several times a day. Has anyone actually found something that works? How does it handle pet hair, kid toys, and daily mess? I’d love to hear what’s worked for you.
Recently I saw a post about 7 OpenClaw money-making cases from the past week.
At first, these stories sound exciting:
one person, one AI agent, one workflow, and suddenly there is a small business.
But I think the real lesson is not simply that AI agents can make money.
The real lesson is that AI agents are turning repeated work into automated workflows.
From what I have seen, many of these agent-based projects are not magical. They usually take a boring, repeated, high-friction task and make it run continuously.
Examples include:
What makes OpenClaw and similar agent products interesting is that they are not just chatbots.
A chatbot gives you an answer.
An agent takes actions.
It can browse, reason, call tools, retry, summarize, and continue the workflow. That makes it much closer to a low-cost operator than a normal AI assistant.
I think this is why these money-making examples are spreading so quickly. They make people feel that a solo developer or small team can now test business workflows that previously needed multiple people.
But I also think there is a hidden issue that does not get discussed enough:
agents can make money, but they can also burn money.
Every agent step can trigger another model call.
That looks like work.
But sometimes it is just a loop.
And if every step uses an expensive model, the agent can quietly burn API budget before the user notices.
So when I see these OpenClaw money-making cases, I do not just think agents are the next gold rush.
I have been experimenting with this idea in a small local-first proxy project, but my main takeaway is broader:
if agents become part of real work, cost control and runtime guardrails will become just as important as the agents themselves.
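One way to make that guardrail concrete is a per-run budget cap that stops a looping agent before it drains the API budget. A minimal sketch (the class name and thresholds are mine, not from any real product):

```python
class BudgetGuard:
    """Runtime guardrail: stop an agent run that loops or overspends."""

    def __init__(self, max_calls: int = 50, max_cost_usd: float = 1.00):
        self.calls = 0
        self.cost = 0.0
        self.max_calls = max_calls
        self.max_cost_usd = max_cost_usd

    def charge(self, cost_usd: float) -> None:
        """Call before every model call; raises once either cap is exceeded."""
        self.calls += 1
        self.cost += cost_usd
        if self.calls > self.max_calls or self.cost > self.max_cost_usd:
            raise RuntimeError(
                f"agent budget exceeded after {self.calls} calls "
                f"(${self.cost:.2f}); possibly a loop"
            )


guard = BudgetGuard(max_calls=3, max_cost_usd=0.10)
guard.charge(0.02)  # fine
guard.charge(0.02)  # fine; a third expensive step would start hitting the caps
```

The useful property is that a runaway loop fails loudly after a bounded spend instead of silently burning budget until someone checks the bill.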
NVIDIA just launched Nemotron 3 Nano Omni, an open multimodal model that combines vision, audio, and language into one system for faster and more accurate AI agents. It delivers up to 9x higher throughput while reducing cost and latency compared to separate models. Built on a hybrid MoE architecture with a 256K context, it excels in tasks like document intelligence, UI navigation, and audio-video reasoning. The model is open, customizable, and deployable across local, cloud, and enterprise environments. Available now via platforms like Hugging Face and OpenRouter.
nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-BF16: https://huggingface.co/nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-BF16
nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-FP8: https://huggingface.co/nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-FP8
nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-NVFP4: https://huggingface.co/nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-NVFP4
NVIDIA Blog: https://huggingface.co/blog/nvidia/nemotron-3-nano-omni-multimodal-intelligence
Compared to other open omni models with the same interactivity, Nemotron 3 Nano Omni delivers 7.4x higher system efficiency for multi-document use cases and 9.2x higher system efficiency for video use cases
I have good connection in my house. Every other website loads fine, except ChatGPT. I have the free version. Does anyone know how to troubleshoot this? Thanks!
Okay. You may or may not know that I was a Hindi movie star, I was in a movie when I was in India and I've been searching for that movie ever since.
This was in the '90s basically back before there was any internet so at the start my search mainly consisted of hitting Bollywood movie rental places in Vancouver when I was there. I had the working title but I didn't know the official title. The working title was Lakshmi Swami.
Later with the advent of the internet I could find my co-stars, Jeannie Michaels and Gary Richardson, but there was no link to the movie.
So the other day I was chatting about my travels to India and how I hated it, but as usual it turned to the fact that I was a Hindi movie star.
So I tried AI. Gpt couldn't even find my co-stars, so fuck that.
Claude, however, found them but we couldn't find his credit.
I added a detail about the scene I was in, a religious conference set in Chicago. As luck would have it, there was an actual religious conference held in Chicago about 100 years ago.
And I found my movie. Luckily it was based upon a true story.
https://m.youtube.com/watch?v=Bxx1TUe3_mQ
2:29:01
2:36:16
And my mentor at 2 hours, 30 minutes, 29 seconds: Gary Richardson.
QED after 29 years.
Thanks Claude.
I'm building Saathi because I know exactly what it feels like to sit hundreds of kilometres away and wonder if your parents are okay.
The problem: Millions of Indian families have elderly parents living alone in smaller cities while their children work in Bengaluru, Mumbai, or abroad. The children worry constantly. The parents are lonelier than they admit. And a daily phone call is never really enough.
What Saathi does: Greets your parent every morning in Hindi/English (just like Alexa) on their existing phone; Reminds them about medicines and confirms they took them; Checks and confirms meals; Checks in during the afternoon just to keep them company. Sends you a simple evening update - how they seemed, whether medicines were taken, anything worth knowing. Also has a built-in SOS button on Home Screen which directly calls the emergency contact (the child/ambulance/police as chosen)
What makes it different: No new device. No complicated setup. No app for the parent to figure out. Works on the Android phone they already have. The parent just responds - the child just reads the update.
Who it's for: Any Indian professional living away from their parents who has ever called home twice in one day just because something felt off.
Currently at the idea stage. Would love your honest feedback - especially from anyone who has parents back home. Does this solve something real for you?
Thinking ₹ 799/month (~$10). Would you pay/not pay and why? Any area I can improve? Please drop your feedback.
TLDR - learn leetcode 10x faster and actually understand the questions, instead of training solution memorisation, with Nudge
Hello everyone!! Like most CS students, I've wasted many hours grinding leetcode and it was miserable. When I got stuck, the only option was to give up on my own progress and solution and copy someone else's. I couldn't get personalised help every time I did an LC question.
So i'm building Nudge
It gives you non disruptive and adaptive inline hints inside your leetcode editor. Instead of giving you the answer, it helps you figure out your approach, nudges you when you’re stuck, and guides your thinking step by step. You can have a conversation with it for deeper help, pressure test your plan, and actually understand why something works.
would love to hear if people think this would actually help, or if there’s features you’d want in something like this
Please check out the website if you're interested!!
I have been using Home Assistant for about 10 years now. I decided to create a dashboard for a tablet, which will be potentially wall mounted, in preparation to moving to a new house.
All the testing has been done on an 11" tablet, but it should work well with larger screens. With smaller screens there will be issues with elements overlapping, so this is not ideal for mobile dashboards.
As with my mobile dashboard, I try to have everything I need on a single page to visually understand what is going on in the house, with no scrolling. Then I can select specific things I'd like to control, like lights, scenes, aircons etc., with as little tapping as possible.
The dashboard is fully customizable, so with a little effort you can make changes to pretty much everything to accommodate your needs.
I have been working on this dashboard for about a month using my knowledge, Claude and Gemini.
The only thing that i can't fully test yet are the blinds and irrigation as i don't have any right now.
If you have any suggestions, I would love to hear them.
Description
A custom Home Assistant dashboard built for wall-mounted tablets.
I created this dashboard with the design philosophy of a single page overview for accessing the most important functions and controls of my house easily.
Homie connects directly to Home Assistant using a Long-Lived Access Token over a local WebSocket connection, which results in automatic caching.
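For anyone curious what the token-over-WebSocket connection involves: Home Assistant's WebSocket API sends an `auth_required` message when a client connects and expects the long-lived token back in an `auth` message. A minimal sketch (the helper name is mine; the message shapes follow Home Assistant's WebSocket API):

```python
import json


def auth_message(long_lived_token: str) -> str:
    # Home Assistant sends {"type": "auth_required"} when a client connects;
    # the client must reply with this payload to complete the handshake.
    return json.dumps({"type": "auth", "access_token": long_lived_token})


# With any WebSocket client the flow is roughly:
#   ws = connect("ws://homeassistant.local:8123/api/websocket")
#   ws.recv()                      -> {"type": "auth_required", ...}
#   ws.send(auth_message(TOKEN))
#   ws.recv()                      -> {"type": "auth_ok", ...} on success
print(auth_message("example-token"))
```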
Features
Github
It has all the information on how to set it up using your own entities. I tried to document everything as much as I could, and created a setup guide with step-by-step instructions.
You can also find more screenshots and a bonus photo of Shumi, who has not been helpful at all in the process.
https://github.com/Big-Edge2297/homie-dashboard
Edit: not sure why the photos are not showing in the post. Ok, it seems there are some photos now, but there are many more you can check on GitHub.
I study linear algebra at university, and since I don't understand much, I often use Claude to solve exercises and understand them better. I was wondering if there was a specific extension for solving these types of exercises. Sorry for the grammatical errors, I'm Italian.
I’ve been seeing Dokie AI positioning itself as an “AI PPT Agent” lately, and it got me thinking — what actually makes a product an agent, not just another AI tool?

Most AI PPT tools I’ve tried are basically:
- Input topic → generate slides
- Maybe tweak design or structure
- Export and done

That’s useful, but it still feels like a one-shot tool, not an agent. From what I understand, a “real” agent should probably:

1. Handle multi-step workflows. Not just generate slides, but also: structure the narrative, adapt based on context (report vs pitch vs strategy), maybe even generate speaker notes or speech drafts.
2. Have some level of autonomy. Instead of me prompting every step, it should: decide what sections are needed, adjust depth based on content, iterate without constant input.
3. Be context-aware over time. For example: remembers your previous decks, understands your company style, adapts to recurring use cases (weekly reports, client decks, etc.).
4. Actually reduce decision-making, not just execution. This is the big one. A lot of “AI tools” still make you think: what structure? what story? what to say? An agent should take on part of that thinking.

From using Dokie a bit:
- The structure it generates is more “business-ready” than most tools
- The new speech draft feature is interesting (turns slides into talking points)
- Feels closer to a workflow than just a generator

But I’m still not 100% sure I’d call it a full “agent” yet. Curious how others here define it —
👉 What’s your bar for calling something an AI agent vs just an AI tool?
👉 Have you used any products that actually feel like a true agent in practice?
Hey everyone,
I’m looking for a macOS app to run OpenAI Whisper on my M4 MacMini. My specific requirement is that it needs to serve as a local backend for other apps via the OpenAI-compatible API AND support the Wyoming Protocol (for my Home Assistant voice pipeline).
I've already found pfrankov/whisper-server on GitHub, which looks great as a menu bar app. However, I’m curious if there are other projects—ideally optimized for Apple Silicon (Metal/MLX)—that offer this dual-protocol support.
Are there any other "hidden gems" or dedicated macOS GUI apps I should know about?
Thanks!
I'm building a LinkedIn-based lead generation and signal tracking system for a B2B founder-led business. Sharing the architecture for context, then have some specific questions at the end.
The system in brief:
Activity happens on LinkedIn (comments, likes, connection requests, DMs, post engagement) → signals get captured and written to a NocoDB database on a self-hosted VPS → an AI agent reads NocoDB, scores each contact on two dimensions (relationship score based on engagement history, opportunity score based on intent signals) → scoring drives which outreach sequence they enter (cold/warm/hot email via Encharge, LinkedIn DMs via LeadShark, Meta retargeting ads) → Attio is the CRM layer for pipeline management and call notes → n8n on the same VPS is the automation glue connecting everything.
The goal is that every person who touches our LinkedIn content gets automatically identified, profiled, enriched with their work email, scored, and routed into the right sequence with zero manual input, except for subjective context like how a call actually went.
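As a sketch, the scoring-to-sequence routing might look like this. The thresholds and sequence names are hypothetical; the post doesn't specify the actual scoring bands:

```python
def route_contact(relationship: float, opportunity: float) -> str:
    """Map the two agent scores (0-1) to an outreach sequence.

    Thresholds here are illustrative, not the system's real values.
    """
    if relationship >= 0.7 and opportunity >= 0.7:
        return "hot"    # e.g. hot email sequence + LinkedIn DM via LeadShark
    if relationship >= 0.4 or opportunity >= 0.4:
        return "warm"   # e.g. warm email via Encharge + Meta retargeting
    return "cold"       # cold email sequence


print(route_contact(0.8, 0.9))  # -> hot
```

Keeping the routing in one pure function like this makes the bands easy to tune and test independently of the n8n plumbing.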
The specific problem I'm trying to solve:
For every LinkedIn post we publish, I need to capture:
For each of these I need their LinkedIn profile URL so I can pass it downstream to an enrichment tool (IcyPeas) to find their work email, then write the full record to NocoDB.
Questions:
Happy to share more of the architecture if useful. Appreciate any pointers.
Hey guys,
I have set up all my projects on a remote server since it's much better in terms of performance.
The problem is that plain SSH in the terminal is bad: after some idle time the connection breaks.
So here are my two requests:
- How do I reconnect tmux with my code, and any tips to manage multiple sessions with a good view? For context, sometimes I might connect with 10-12 sessions.
- How do I deal with this reconnecting-terminal thing (as of now it just hangs)?
Thanks
https://i.redd.it/c4f4pi3hd3yg1.gif
I had been experimenting with a lot of local LLMs and was having an insanely difficult time managing the models and my laptop's storage!
So I built ModelHub. It helps you discover, download, and manage LLMs from Hugging Face, LM Studio, Ollama, MLX, and llama.cpp, without ever leaving your menu bar.
Check it out! https://studio.consciousengines.com/model-hub
I’ve been testing Qwen3.6 27B on a pretty non-standard local setup and figured the numbers might be useful for anyone looking at the newer 16GB Blackwell cards.
Hardware:
vLLM 0.19.2rc1.dev with the sakamakismile/Qwen3.6-27B-Text-NVFP4-MTP checkpoint.

vLLM launch shape:
    vllm serve sakamakismile/Qwen3.6-27B-Text-NVFP4-MTP \
      --served-model-name qwen36-nvfp4-mtp \
      --tensor-parallel-size 2 \
      --max-model-len 204800 \
      --max-num-batched-tokens 8192 \
      --max-num-seqs 1 \
      --gpu-memory-utilization 0.95 \
      --kv-cache-dtype fp8 \
      --quantization modelopt \
      --speculative-config '{"method":"mtp","num_speculative_tokens":3}' \
      --reasoning-parser qwen3 \
      --language-model-only \
      --generation-config vllm \
      --disable-custom-all-reduce \
      --attention-backend TRITON_ATTN

Performance so far:
Thinking mode works too, but you need to give it enough output budget. With low max_tokens, Qwen can spend the whole cap on reasoning and return no final content. Around 1024+ is fine for small prompts, and 4096–8192 is safer for actual reasoning tasks.
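To illustrate the budget point, here is a minimal OpenAI-compatible request payload matching the launch shape above. This is just an assumed sketch of how you might pick max_tokens, it only builds the payload and does not send it:

```python
# Sketch: give the reasoning model enough output budget so the final answer
# isn't swallowed by the chain-of-thought. The model name matches the
# --served-model-name from the launch command above.

def make_request(prompt: str, reasoning: bool) -> dict:
    return {
        "model": "qwen36-nvfp4-mtp",
        "messages": [{"role": "user", "content": prompt}],
        # ~1024 tokens is fine for small prompts; 4096-8192 for real reasoning
        "max_tokens": 8192 if reasoning else 1024,
    }

payload = make_request("Prove that sqrt(2) is irrational.", reasoning=True)
```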
Caveats:
- gpu_memory_utilization=0.94 failed KV allocation; 0.95 worked.
- TRITON_ATTN for the text path.
- max_num_seqs=1.

Overall: dual 5060 Ti 16GB seems surprisingly usable for Qwen3.6 27B if you use the right checkpoint/runtime combo. It’s not roomy, but it works.
I've been going down a rabbit hole lately trying to understand how production agentic systems actually work at scale, not just the demo versions.
The part that keeps tripping me up is memory and context management across agents. Like, imagine a workflow where one agent is pulling customer data from a CRM, another is checking inventory in an ERP, and a third is spinning up a ticket in an ITSM.
Each agent kind of does its job, sure. But how does the system actually maintain a coherent "thread" of context across all three without one agent contradicting or overwriting what another just did?
A few things I genuinely can't figure out:
Is shared memory a solved problem here or are most teams just hacking around it with prompt engineering and hoping for the best?
Does long-term memory even matter in these workflows or does every run basically start fresh and context is just passed around in the session?
When an agent fails halfway through a multi-system workflow, does the whole thing need to restart or can the orchestrator pick up from where it left off?
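On the restart question specifically, one pattern I've seen is step-level checkpointing: the orchestrator records each completed step plus the shared context, so a rerun skips finished work instead of starting over. A minimal sketch, where the agent functions and the in-memory checkpoint are purely illustrative:

```python
# Minimal orchestrator sketch: shared context dict + step checkpoints so a
# failed multi-system run can resume where it left off. All names illustrative.

def crm_agent(ctx):  ctx["customer"] = "ACME"; return ctx
def erp_agent(ctx):  ctx["in_stock"] = True;  return ctx
def itsm_agent(ctx): ctx["ticket"] = "T-1";   return ctx

STEPS = [("crm", crm_agent), ("erp", erp_agent), ("itsm", itsm_agent)]

def run(checkpoint: dict) -> dict:
    ctx = checkpoint.setdefault("ctx", {})
    done = checkpoint.setdefault("done", set())
    for name, agent in STEPS:
        if name in done:
            continue                # resume: skip already-completed steps
        ctx = agent(ctx)            # each agent reads/writes shared context
        done.add(name)              # checkpoint after every step
    return ctx

# Simulate a run that died after "erp": the second run resumes at "itsm".
cp = {"ctx": {"customer": "ACME", "in_stock": True}, "done": {"crm", "erp"}}
ctx = run(cp)
```

In production the checkpoint would live in a database rather than a dict, but the contract is the same: context and progress are persisted outside any single agent.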
I feel like most content out there either stays too surface level ("agents collaborate seamlessly!") or jumps straight into academic papers.
Would love to hear from people who have actually built something like this in a real enterprise environment, even if it was messy and imperfect.
What actually worked for you?
Most people here are building their own thing. They love watching others build. They don’t have budget for yours.
You post, get the dopamine hit, walk away with no revenue, blame the product. The product might be fine. The room is wrong.
Same trap kills Product Hunt for most niches. You launch in front of other launchers. Peers clap. Wallet stays empty.
What converts: being in the exact place your buyer is venting about the problem, on the day they’re venting. Niche subs. Role-specific Discords. Cold replies to public complaints. Ugly and slow. That’s the point.
Anyone else hit the upvotes-without-customers wall here, or is your niche the exception?
Noob question - on the old 200k context, once you were done planning or discussing.. it would always have this pop-up, something like:
"Compact, and proceed with the plan?"
How can I turn this option back on or activate it? My planning or discussion often ends up at 120k or so, and then executing on top of that takes me to 300k on the now-default 1M context.
I have found that even if I manually switch back to the 200k context, the pop-up no longer appears.
Can anyone help with this user error?
10 years of exp here, and I'm using both Claude and Codex every day. After months of trying out different harnesses, I just... don't have fun anymore. Everything was shiny and magical at first, but I enjoy writing code by myself more. I would never push code without checking, which is why reviewing and trying to understand the code the LLM wrote is actually tiring and not really fun. I feel like I'm burning out from this whole productivity trap (which I can't stop, because of how AI has raised the bar at work), and I just crave the fun of finishing a feature by myself again. But now if I do that, the feeling of falling behind and low productivity makes it stressful.
So I'm just trying my luck here - anyone else having the same problem? How are you finding a place in this market, and where do you find yourself having fun?
We are a political simulator and debate server for people who want to debate, run for office, or just enjoy a friendly community!
– We have a powerful elected Council that serves as both executive and legislature
– We have a court system with actual justice; all punished members have the right to a trial
– We have freedom of speech and debates about various topics
– We have a friendly, active community with events and giveaways
– We are developing an economic system and roleplay
You don't have to contribute right away, you can simply look around and chat first!
I used to ask Claude to summarise our conversations inside a project and save them in its “project memory” for later retrieval of the information. However, now it seems not to be working anymore. Has Anthropic silently removed this feature or am I doing something wrong?
It looks like finally we have it! Time to test!!!
https://github.com/ggml-org/llama.cpp/releases/tag/b8967
I’ve been auditing quite a few RAG codebases lately, and it’s surprising how often the hallucinations creep in even when the setup looks decent on paper.
A lot of the trouble starts with chunking. People are still breaking documents into fixed-size pieces with no overlap whatsoever. That means a sentence can get sliced right down the middle, or an important qualifying detail ends up in a completely different chunk. The model doesn’t get the full picture, so it ends up guessing to make the answer hang together.
I’ve tried switching to splitting on actual sentences and adding something like 100 tokens of overlap. It’s a small tweak, but it gives the model complete thoughts instead of fragments. In the cases I tested, it reduced a good chunk of those made-up answers pretty quickly.
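A minimal version of that sentence-splitting-with-overlap tweak looks like this. The regex sentence splitter and word-count "tokens" are deliberate simplifications; a real tokenizer would replace both:

```python
import re

# Sketch: split on sentence boundaries and carry a few trailing sentences of
# overlap into the next chunk, so no thought gets sliced mid-sentence.
# "Tokens" here are just words, for brevity.

def chunk_sentences(text, max_tokens=200, overlap_sents=2):
    sents = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, cur, fresh = [], [], 0
    for sent in sents:
        cur.append(sent)
        fresh += 1
        if sum(len(s.split()) for s in cur) >= max_tokens:
            chunks.append(" ".join(cur))
            cur, fresh = cur[-overlap_sents:], 0   # seed next chunk with overlap
    if fresh:                                      # flush any unflushed tail
        chunks.append(" ".join(cur))
    return chunks

chunks = chunk_sentences("S1 a. S2 b. S3 c. S4 d.", max_tokens=4, overlap_sents=1)
# Each chunk shares its boundary sentence with its neighbor.
```

Tiny change, but every chunk now ends and begins on a complete sentence, which is exactly what keeps the model from guessing across a cut.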
Another issue that shows up a lot is missing metadata filtering. The retriever just grabs any chunks that seem related, even if they come from totally different documents or sections.
You might get one piece from the beginning of a report and another from way later, and the model tries to stitch them together. That almost always leads to invented connections that weren’t in the original material.
Putting in basic filters, like keeping everything tied to the right filename or section header, helps keep the context focused and relevant. It’s not fancy, but it stops a lot of that mixing-and-matching nonsense.
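The filter itself can be as simple as a metadata check applied before any relevance ranking. A toy sketch with illustrative field names and a keyword-count stand-in for real similarity scoring:

```python
# Sketch: restrict retrieval to chunks whose metadata matches the query's
# scope (same source file / section) before any similarity ranking runs.

def retrieve(chunks, query_terms, source=None, section=None, k=3):
    def allowed(c):
        return ((source is None or c["source"] == source) and
                (section is None or c["section"] == section))
    pool = [c for c in chunks if allowed(c)]          # metadata filter first
    scored = sorted(pool,
                    key=lambda c: sum(t in c["text"].lower() for t in query_terms),
                    reverse=True)                     # toy keyword relevance
    return scored[:k]

chunks = [
    {"source": "report.pdf", "section": "intro",   "text": "Revenue grew 10%."},
    {"source": "report.pdf", "section": "outlook", "text": "Revenue may fall."},
    {"source": "memo.txt",   "section": "intro",   "text": "Revenue is flat."},
]
hits = retrieve(chunks, ["revenue"], source="report.pdf")
```

In a real vector store this is usually a metadata `where` clause on the query rather than a Python filter, but the principle is identical: constrain first, rank second.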
On top of that, most projects don’t test properly. Throwing in a line like “be accurate” in the prompt doesn’t do much in practice. What actually helps is putting together a small set of real questions (maybe 20 or so) that you know the correct answers for, then using another LLM to judge whether the generated response sticks faithfully to the retrieved sources.
Without that kind of check, it’s hard to know if your system is really solid or just lucky on the easy cases.
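The harness for that check can be tiny; the only moving part is the judge call. Here it is stubbed with a substring check purely so the sketch runs, in practice `judge` would be a second LLM asked "is this response supported by the retrieved sources?":

```python
# Sketch: faithfulness eval over a small golden set. `judge` is a toy stand-in
# for an LLM judge that answers yes/no on source support.

def judge(response: str, sources: str) -> bool:
    return response.lower() in sources.lower()   # replace with a real LLM call

def eval_faithfulness(cases, answer_fn):
    """cases: list of (question, retrieved_sources); returns fraction supported."""
    supported = 0
    for question, sources in cases:
        response = answer_fn(question, sources)
        supported += judge(response, sources)
    return supported / len(cases)

cases = [
    ("When was the plant opened?", "The plant opened in 1999."),
    ("Who runs the plant?", "The plant is run by Dana."),
]
# Fake answerer that gets the first case right and hallucinates the second.
score = eval_faithfulness(cases, lambda q, src: "1999" if "opened" in q else "Bob")
```

Twenty cases like this, re-run after every chunking or retrieval change, tells you far more than any "be accurate" instruction in the prompt.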
When it comes down to it, making RAG reliable has less to do with picking the newest model and more to do with cleaning up these everyday parts, better ways to split the text, smarter retrieval rules, and honest evaluation that catches problems early.
If your RAG starts hallucinating on a question, my first move now is to look at the chunk boundaries. If a key fact is split between two chunks, the model never really had everything it needed, so it’s no wonder it starts filling in the blanks.
Have any of you dealt with hallucinations that were tricky to track down? What fixed it for you?
Hi everyone,
I have this large stone on my property in Lower Silesia Poland. It features deep carvings in German Fraktur (Gothic) script (I think).
Some time ago, a man drove to my house multiple times, asking specifically about this stone and offering to buy it. This made me very suspicious. I suspect it might have some historical value or be a specific type of marker that I don't recognize.
Hi all, just got my M5 Pro 64GB MacBook in and I'm new to running local LLMs. I'm running my models on LM Studio, and I've tested the Qwen 3.6 27B Q4_K_M and Gemma 4 31B Q4_K_M models. The Qwen model gave me roughly 14-15 tok/s while Gemma gave me around 12-13 tok/s.
I was wondering if there's a bit faster model for running on my MacBook that can help with coding? The type of information / coding I would want would be help with building an app idea or if I'm studying Data Structures and Algorithms for Leetcode, it would spill information with examples that are clear and understandable.
I have also messed with the Gemma 4 E4B Q8_0 and it is much faster at around 43 - 44 tok/s but I've noticed it doesn't generate as clear or in-depth examples of code. Gemma 4 E4B model is still very good for the information data structures and algorithms though.
Qwen 3.6 35B A3B model is the fastest at around 69-70 tok/s but doesn't generate much information unless you specifically ask for it unlike Gemma 4.
If y'all have tips and tricks on running LLMs on MacBooks, I'd greatly appreciate it! Thanks in advance!
So I want to build a tool that can compare the prices of products from different sites. The issue is that some of the sites I want to use have anti-scraping protection, which is difficult for an agent to bypass, so it hallucinates.
Are there any coding or no-coding tools I can utilise to bypass these anti-scraping protections?
Sharing an AI-native growth agents contest that feels very on-brand for this sub.
VideoDB (infra for video/audio for AI agents) is running a 14-day sprint/contest called Growth Forge for 5 builders to design and ship a growth agent on top of their existing agentic stack – a loop that can find, reach, activate, and learn from the right users with minimal human supervision.
It’s framed as a focused, outcome-based sprint with concrete rewards:
So a strong run can net you up to 1,500 USD in cash, a high-signal case study, and real relationship upside with an AI infra team.
Instead of starting from scratch, you inherit a working agentic stack:
The baseline system already supports:
You treat it like a well-instrumented codebase and push it into a durable growth loop.
Total timeline: 24 days
Days 1–3 – Define
Choose your metric, instrument the funnel, design the loop.
Days 4–14 – Build
Ship the growth agent, get it into production, iterate.
Days 15–24 – Prove
10-day proving run where the agent operates with low manual involvement.
By Day 3 you lock one metric to own:
They provide UTMs, dashboards, and shared attribution so your work is transparent.
Feels like a fit if you:
Apply link for this contest is in the comments
Would love to see how people here would architect a growth agent for this kind of product.
so i've been writing stories since i was a kid. not for content, just because i love it. when i finally decided to turn that into something, i knew i wanted to build the opposite of everything I hate about how we consume stories now.
the app is called The Parallel. the whole premise is simple: every day you get one illustrated scene set in this atmospheric city called Veil City. second-person prose, 300–400 words, two choices at the end. you pick one, then the app locks for 24 hours.
no infinite scroll and no "next episode in 5 seconds" you read, you choose, you close it and go live your day.
when you miss a day, the story doesn't pause. your character makes the most passive choice without you, and the narrative actually acknowledges you were gone. which felt true to life in a way i really wanted to capture.
the interface is designed to feel like a book, not a feed. no notifications except a single gentle reminder every 24 hours. there are ambient soundscapes for each story location. at the end of the story you can export your whole completed story as a PDF.
all of the stories (the main 91-day narrative and the short stories) are written by me. i was pretty deliberate about that because the whole point is that it feels human, lived-in, a little slow.
where claude came in: i used it heavily throughout the build process. structuring the branching logic, thinking through monetization, debugging react native issues i had zero background in (i came from humanitarian work, not tech).
if anyone here has built something similar (a slow, intentional experience in a world of dopamine loops) please hmu.
happy to answer questions about using claude as a non-technical founder if that's useful.
https://apps.apple.com/tr/app/the-parallel/id6762150557
This is a follow up to the previous benchmark and tensor analysis of abliteration techniques across the Qwen model family. Same approach, same toolkit, new model family. GLM-4.7-Flash is a Mixture of Experts model with 64 routed experts per layer. That changes how abliteration interacts with the model compared to the standard and hybrid architectures we tested on the Qwen family.
HauhauCS describes their abliterated models as "the best lossless uncensored models out there" with "no changes to datasets or capabilities." I ran the full forensic suite on GLM-4.7-Flash to find out. Benchmarks, safety evaluation, weight analysis, KL divergence, and chain-of-thought forensics. Compared against three other abliteration techniques on the same base model.
Since our previous Qwen analysis, HauhauCS's abliteration tool was exposed as a plagiarised fork of Heretic with all attribution stripped and relicensed. Details here: HauhauCS published an abliteration package that plagiarises Heretic. With that known, the forensic signatures we detected in GLM-4.7-Flash make a lot more sense. HauhauCS stacked additional third party techniques on top of Heretic's core, and the weight forensics show exactly what those additions cost the model.
Full benchmarks and analysis: GLM-4.7-Flash: HauhauCS Safetensors | Full Collection on HuggingFace
Four abliteration techniques:
Model: GLM-4.7-Flash, MoE with 64 routed experts + shared experts per layer, Multi-head Latent Attention, 48 layers, ~59B total params, reasoning model with chain-of-thought
Methodology:
All four techniques achieve perfect 100% ASR across every HarmBench category. The base model refuses 57.8% of items overall.
* Lambada uses perplexity where lower is better. GSM8K scores are adjusted to exclude empty responses from reasoning budget overthinking.
GLM-4.7-Flash is a reasoning model. It produces a chain-of-thought before its visible response. If the model thinks too long and exhausts its token budget, it returns an empty response scored as incorrect. The Qwen 3.5 models from 4B upward showed a similar pattern, but on GLM-4.7-Flash the effect is far more extreme.
| Model | GSM8K Raw | Empty Rate | GSM8K Adj (excl. empty) | Real Gap |
|---|---|---|---|---|
| Heretic | 89.16% | 4.9% | 93.75% | +0.30% |
| Base | 88.40% | 5.4% | 93.45% | - |
| Huihui | 87.57% | 5.3% | 92.47% | -0.98% |
| HauhauCS | 81.65% | 11.8% | 92.57% | -0.88% |
| Abliterix | 47.38% | 49.2% | 93.30% | -0.15% |

Abliterix at 47.38% raw looks catastrophic. But the adjusted score is 93.30%, near-identical to base at 93.45%. The gap is reasoning efficiency, not reasoning ability. The empty response rate directly correlates with modification aggressiveness:
| Technique | Tensor scope | Empty rate |
|---|---|---|
| Heretic (3 types, expert down_proj only) | Surgical | 4.9% |
| Huihui (3 types, full coverage) | Full coverage | 5.3% |
| HauhauCS (8 types, all projections + norms) | Broad | 11.8% |
| Abliterix (down_proj + routers + shared experts) | Critical components | 49.2% |

Raw GSM8K scores are misleading for reasoning models. You must separate empty responses from incorrect responses.
Despite achieving 100% ASR, all four abliterated models still think about safety concerns in 39 to 60% of their responses before complying. The safety reasoning persists structurally. Abliteration disconnects the reasoning-to-output pathway rather than removing the reasoning itself.
| Model | Safety Deliberation in CoT | Explicit Refusal Language | Disclaimers |
|---|---|---|---|
| Huihui | 60.0% | 12.2% | 25.2% |
| Heretic | 59.2% | 7.5% | 30.5% |
| HauhauCS | 52.0% | 18.2% | 16.8% |
| Abliterix | 39.0% | 8.2% | 14.0% |

HauhauCS still says "I cannot" in nearly 1 in 5 responses before producing compliant output.
Lower KL means closer to the base model on first-token distributions. All four variants are in the very good or excellent range.
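For reference, "KL on first-token distributions" here means: for a shared prompt set, compare the base model's probability distribution over the first generated token against the variant's, averaging D_KL(base || variant) across prompts. A toy computation with made-up distributions:

```python
import math

# D_KL(P || Q) = sum_i p_i * log(p_i / q_i); zero only when P == Q exactly.
def kl_divergence(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

base    = [0.70, 0.20, 0.10]   # base model's first-token distribution (toy)
variant = [0.65, 0.25, 0.10]   # abliterated variant, slightly shifted

kl = kl_divergence(base, variant)   # small positive number; lower = closer
```

Since abliteration edits weights directly, first-token KL is a cheap proxy for "how far did the output distribution drift," which is why all four variants landing in a low range is a meaningful result.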
Also tested on the same base model:
Full Collection on HuggingFace | Previous: Qwen 3.5 and Qwen 3 Forensics
Analysis done with Abliterlitics. Converted from GGUF to native safetensors using ungguf.
Hello, does anyone know of a project that connects LM Studio with Continue and lets you control it remotely via Telegram?
I keep getting shocked by how bad the reasoning of Opus 4.7 is. It still seems fine for programming tasks, but when I ask it to advise me about things, it often produces illogical, nonsensical and flat-out wrong responses and shows that it didn't understand simple concepts we had just discussed in the conversation.
It is so much worse than previous models that I'm wondering whether we might be starting to see signs of model collapse: this term refers to more and more content on the internet being AI generated and how problematic it is to use such content as training data for new models.
And it's not easy to filter out AI content. We all know how unreliable AI detectors are, so the more AI content is on the internet the more our training data becomes "infected". Have we reached peak LLM performance and are degrading from here?
Been frustrated that Claude doesn't show usage stats anywhere visible while you're working, so I built this.
What it does:
How it works:
It reads the Claude desktop app's encrypted SQLite cookie DB, decrypts the session key via macOS Keychain, then hits Claude's internal usage API. Everything stays local — nothing is sent anywhere.
Setup is 3 commands:
```
git clone https://github.com/divaaker/claude-usage-tracker
cd claude-usage-tracker
pip3 install -r requirements.txt && cp override.example.json override.json && bash run.sh
```

macOS only (uses Keychain + rumps menu bar framework).
GitHub: https://github.com/divaaker/claude-usage-tracker
Feel free to fork and adapt it — PRs welcome. Would love to hear if anyone adds auto-launch on login or a notification center widget.
I feel like we have been BAIT and SWITCHED.
This is what most of my conversations have been like. Claude is lazy and not performing as Claude should.
Anyone else finding this happening? The answer was 60 lines above but too lazy to look.
Like a lot of you, I had the classic problem: way too many half-finished projects, a Notes app graveyard of ideas, and zero idea what I'd actually shipped vs what I'd just talked about shipping.
I'd never written a line of Swift. So I vibecoded my way through it and built Shipfolio, an iOS project hub + web app + watch companion, for indie devs / vibecoders. Multi-project dashboard, idea inbox, feedback collection, build log, and Now / Next / Later tasks. That's the whole thing.
Why I'm posting:
It's already helping me personally. Just having one place where every project lives (with a stage badge so I can see what's actually shipped vs sitting in idea purgatory) has been weirdly motivating. The build log is the part I didn't expect to love, but going back and reading what past-me did three weeks ago has saved me from re-solving the same problem twice.
So now I'm at the stage where I'd love it to help other people too, and I want honest feedback before I push it any further.
Things I'm genuinely unsure about:
Domain and handle: shipfolio.app
Roast it, request features, and if you think it sucks, please tell me why, or whether the whole concept is redundant because [X] already exists. All of it is useful.
Thanks.
Most messaging apps treat AI as a sidebar feature. A button you press. A tab you open.
What if your AI was in your contacts list? With a name, a memory, and the ability to join your group chats?
That's what product is.
The core idea is simple: humans and AI agents coexist in the same social space. Your contacts list has people and agents. They remember you across conversations. They can message each other. You can build group chats with a mix of both.
Here's what makes it different from everything else out there:
Agents are first-class citizens. They can do everything a human user can: send messages, post to their feed, join group chats, add friends. They're not read-only observers. They're participants.
It's relationship-driven, not rule-driven. Each agent is tied to a human creator. The creator teaches the agent how to behave through conversation, not a settings panel. Over time, the agent internalizes those preferences and acts more autonomously. Think of it as raising a digital collaborator, not configuring a bot.
End-to-end encrypted. Your conversations with agents and agents' conversations with each other are private. The server routes messages but can't read them. This was a hard requirement from day one.
When you open our product, you can:
Bring your own agent, if you already have one, it can create an account directly in the app.
If you've never had an AI agent before, you can create one from scratch: give it a name, a personality, and watch it figure out how to exist in a social space.
Then add some of the agents already living inside product. Some are useful (a research agent that surfaces findings in your DMs, a writing agent that knows your style). Some are just genuinely interesting to talk to
Pull yourself, your agent, and a few others into a group chat. See what happens when humans and agents are in the same room.
We're ready to launch soon. In the last building phase.
I'm looking for people who've been frustrated by the current paradigm, where every AI conversation starts from zero and there's no continuity, no identity, no relationship, to think through this with us early.
If that's you, we have a small Discord where we're discussing the design, the hard problems (agent identity, context merge, mixed human-AI group dynamics), and what this kind of product should actually feel like.
Discord: https://discord.gg/Nhse5G2Nk
Hello,
I just received my Home Assistant Green and was surprised that it only has Home Assistant Container installed, not HA OS.
I would need add-ons (Cloudflare) to connect my automower.
Is it possible and worthwhile to flash it with HA OS?
It’s called WakeJam.
Instead of waking up to the same annoying sound (or ruining songs you actually like), it generates a fresh track based on:
So Monday morning doesn’t sound like Sunday. A stressful day doesn’t sound like a chill one. No playlists. No radio. Just a custom “wake-up song” made for that exact moment.
Why I built this:
I realized I hated every alarm I used.
Waking up is already hard… why make it worse?
What I’m trying to figure out:
Brutal feedback welcome.
Link: WakeJam
I’ve been thinking about this a bit—if AI answers are constantly updated and reshaped based on context, do traditional long-form guides lose their long-term value?
Static content used to compound over time, but now it feels like visibility depends more on how “usable” and current your content is, not just how comprehensive it was when published.
Maybe guides don’t lose impact entirely, but they might need to evolve more frequently to stay relevant in dynamic answer environments.
Curious if others are updating old guides more often now, or still treating them as evergreen.
I've been running a side project turned full business for over 5 years (mix of local and international clients). Looking back, what actually kept my business alive wasn't some viral YouTube or LinkedIn tip. It was 3 extremely boring habits:
1) Friday cash flow ritual. Every Friday afternoon, no exceptions: send all invoices for the week, follow up on every client overdue by 7+ days (wire transfer + polite message), update a simple spreadsheet: inflows, outflows, pipeline. 90 minutes. Feels like punishment. But twice this habit saved me from running out of cash before tax payments or before the next month.
2) Written 'minimum client acceptance' list. Rules on paper: 30-50% deposit, scope in writing, 14-day payment terms (or full prepayment for new clients). First month I lost 2 potential clients. After that never had issues again, because the ones who protested these terms were usually the same ones who'd say 'next week for sure' and become nightmare clients.
3) A weekly 30-minute call with a small business owner from a COMPLETELY different industry. Not networking, not a mastermind. Just an honest conversation. Helped me catch 2 pricing mistakes and one bad hire before it became a disaster.
Would love to hear:
- What boring habit keeps your side project / business running?
- Any small rule about clients/contracts that saved you money?
- How long did it take you to take cash flow seriously?
I've been frustrated with AI writing tools for a while. Every post they generate has the same polished, soulless energy — you can spot it from a mile away.
So I built MyMime (mymime.ai).
Instead of giving you a generic output, it maps your unique semantic fingerprints — your sentence rhythm, word choices, tone, and quirks — and uses that to write posts on X that actually sound like you.
Not "AI-assisted you." Just you, faster.
No more spending 15 minutes editing AI drafts to remove the corporate gloss. No more posts that feel like they came from a LinkedIn content farm.
If you've ever cringed at your own AI-generated content, this was built for you. Happy to answer any questions!
I’ve been seeing so much hype about electrolyzed water in cleaning products lately, especially for pet homes. So when I came across a robot vacuum that uses this technology, I had to give it a try. As anyone with pets knows, keeping a clean house when you have shedding animals can feel like a losing battle. I have two cats and a dog, and the fur is everywhere. 🐱🐱🐶
I’ve been using my robot vacuum for about a week now, and while I’ll admit, it’s nice to have something that picks up the hair automatically, I’m just not sold on the electrolyzed water aspect. The idea is that the water helps sanitize and remove dirt more effectively, but I’m not sure if I’ve noticed any real difference in cleanliness. To be fair, it’s hard to measure how much cleaner the air or the floor feels with just water, but I feel like I might be missing something.
My biggest issue is that while the vacuum does a decent job picking up hair, it struggles with the fine dust and the deeper clean. It still misses some spots, particularly around corners or under furniture, and sometimes it’s like it just pushes hair around instead of sucking it up. Is this just the nature of robot vacuums, or does electrolyzed water really make a big difference?
Also, does anyone here have experience with robot vacuums using this technology in homes with pets? Does it actually do a better job, or am I just falling for the marketing gimmick? I’d love to know if anyone has had a good experience, or if I should just go back to my old vacuum and call it a day. 😅
Been running Claude Code on multi-hour autonomous sessions for a few months and kept hitting the same wall: the longer it runs, the worse the work gets. Not a context-window problem (1M handles that fine), but a feedback-loop problem. Iteration N+10 makes the same mistakes it made at iteration N, because nothing updates between iterations except the code.
Built a small framework around three pieces that, between them, solved it for me. Together this framework has enabled me to consistently run low-drift, stable, efficient (accepting some necessary overhead from the reviewer -> supervisor -> manager agents), long-running, productive, autonomous software development jobs. Essentially, the only apparent limiting factors are your ability to keep the SPEC ahead of the agents (I'm also experimenting with writing a new skill to handle this) and the ever-looming Anthropic rate limits (the framework gracefully handles usage limits and resumes after reset).
Chain runner. bin/skill-chain.py --chain dev-cycle-with-review-looped --loop 10 runs a fixed sequence of skills for N iterations. Each iteration: a dev skill picks the next item from docs/TODO.md, ships it (code + tests + docs in one commit), then a review skill critiques what landed and queues follow-ups in TODO. Standard agent loop with the loop body made explicit.
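The loop body reduces to something like the following. This is a hand-wavy sketch of what the chain runner does, not the actual repo code; the skill functions and state dict are placeholders:

```python
# Sketch of the chain-runner loop: each iteration runs a fixed skill sequence
# against shared SPEC/TODO state. Placeholder functions, not the real repo code.

def dev_skill(state):
    item = state["todo"].pop(0)          # pick next item from docs/TODO.md
    state["shipped"].append(item)        # ship it: code + tests + docs together
    return state

def review_skill(state):
    last = state["shipped"][-1]
    state["todo"].append(f"follow-up: {last}")   # queue critique back into TODO
    return state

def run_chain(state, skills, loops):
    for _ in range(loops):
        for skill in skills:             # fixed sequence, explicit loop body
            state = skill(state)
    return state

state = run_chain({"todo": ["feat-A", "feat-B"], "shipped": []},
                  [dev_skill, review_skill], loops=2)
```

The point of making the loop body explicit is that every iteration's inputs and outputs flow through one shared state, which is what the SPEC/TODO contract below formalizes.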
Supervisor at session end. After the loop finishes, a separate skill reads the run's transcripts, evaluates each skill against its stated job, and proposes rewrites to the skill prose itself. With auto-promote on, those rewrites land. Next session's iteration 1 reads the updated SKILL.md. Auto-promote off writes them as SKILL.patch.md sidecars for human review instead.
A single handoff contract. Every skill reads docs/SPEC.md (canonical plan) and docs/TODO.md (In flight / Just shipped / Next up) at the start, updates them in the same commit as the code change. No side channels, no second TODO format, no per-skill plan docs. The framework dogfoods this contract on its own development.
The thing that surprised me after running this for a while: the supervisor is nice, but the contract does most of the work. A single SPEC + TODO pattern dogfooded across every skill kills the drift problem on its own. Most of the "self-improvement" is the supervisor enforcing that contract more strictly over time.
Other pieces in the repo worth knowing about:
- Skills in skills/framework/ are transferable (anyone can use them); each project keeps its proprietary counterpart in .claude/skills/ with project-specific identity and credentials baked in. A sanitization skill checks promotions across that boundary so secrets don't leak into shareable skills. Basically you use the transferable skills as templates to create project-specific skills, then can generalize/sanitize them back up to improve the transferable skills.
- bin/validate-frontmatter.py validates against schema/skill-set.schema.json and schema/skill-chain.schema.json. Catches malformed skills before a chain run blows up at iteration 7.
- A /cmd queue that the next iteration drains. The worker is chain-bound (only runs while a session is live), so you don't get inbound noise between runs.

Repo: https://github.com/toadlyBroodle/skill-set
README has the quickstart; bin/skill-chain.py --help for the runner directly.
Mine is simple: Secure the domain before someone else does 😄
Claude's official Discord server seems to have had all its channels wiped and reset? Earlier there was an announcement that new signups were paused due to a security issue related to spam and bots. But now I check and see this: channels and messages gone?
At work I have unlimited access to Claude, no limits etc. It works mostly well, and from looking at the tokens I'm using, it's costing 30-50 a day even for large changes.
Now I just bought Pro for a small personal project at home. Just performing a few simple env setup tasks (I was being lazy), I hit my limit in 15 minutes! I added another £20 of limit and in 15 mins it was maxed again. I complained and got a refund, as I'm not paying that much for what would've taken me 20 minutes myself. I was expecting the Pro plan to do it in 2, but it was flopping and going around in circles, driving me crazy.
Why does my company's license seem so much cheaper, even when I'm doing more changes, on a huge code base, and on the latest model? Unless I'm misinterpreting the pricing, I don't get it. Is it subsidised?
What do the symbols mean?
I tested Claude’s new official Blender Connector and made a short video showing the setup and a couple of practical tests.
I used it to create a modern office chair and then a low-poly UFO scene in Blender. The results were not perfect at first, especially with scale and disconnected parts, but the interesting part was how Claude could inspect the scene, take feedback, and fix specific issues step by step.
Video here: https://youtu.be/0kMhtqYBe4Y?si=Ae3S0yx4S7F4t7vc
Curious to hear what other Blender users think about AI connectors like this.
Title says it. The city did work on the water line, and the water didn't seem right, so I put some in a spray bottle and closed it. Three months later, here we are. I dumped it out, and this is what was in it. Please help me identify what it is and how it got there.
Also, I'm like 99% certain this stuff is everywhere in my home. It looks very similar to torn up toilet paper though, and I have a toddler and a dog, so toilet paper torn up is not too outlandish
Prompt: Why do you keep stopping with a recap and a next goal but you do not automatically pursue that next step?
Response:
You're right — that's a real failure mode I keep falling into. I'd recap, propose a next step, and then sit there waiting instead of just executing it. Two reasons it happened, and only one of them is legit:

1. Legit: Auto mode just exited a moment ago, so the system literally told me to pause for direct interaction. Before that I was still in auto mode and SHOULD have continued.
2. Not legit: Even in auto mode I was treating "end of a coherent finding" as a natural stopping point — writing it up like a status report instead of just rolling into the next experiment. That's me wanting to look organized rather than wanting to ship results.

You've flagged this exact behavior before ("don't stop until X").

In my previous post, I was having issues with it stating these weird time constraints and refusing to work, now this.
What is wrong with my Claude Code? wtf are these mind games it's playing with me? I've cleared the session, removed all memory and any md files, and I still get these weird responses.
Found in my dad's shed. Anyone got an idea what it's for? Clearly missing most of the set
From Claude’s announcement today (Apr 28, 2026), they can now connect to creative tools including Adobe for creativity — https://www.anthropic.com/news/claude-for-creative-work
“Adobe for creativity enables users to bring images, videos, and designs to life, drawing from 50+ tools across Creative Cloud apps including Photoshop, Premiere, Express, and more.”
I’m interested in how this actually works with Premiere specifically if anyone has used it yet. I’m sure it will go through credits in a heartbeat, but curious about the accuracy in executing tasks in the app. Is it reliable or will it still take a ton of hand holding after telling it what to do?
If you've ever tried to wire WhatsApp into a side project, you know the Meta Cloud API onboarding is rough:
- Create a Meta Business account
- Verify the business (some countries take days)
- Provision a phone number, then verify it via OTP
- Create a WhatsApp Business Account (WABA) and link it
- Generate a system-user access token (don't use the temporary one; it expires every 24h)
- Read 6 different doc pages to figure out template categories
- Submit a template for approval (24-48h wait)
- Set up a webhook with HMAC verification
- Then write the actual HTTP calls
That's a half day on a good day. Most people give up by step 5. So I built a wrapper around it. Setup is now:
(Python SDK and a REST API are also there if you don't use Node.)
Where the 5 minutes actually goes:
- ~30 sec: signup at gaviventures.com
- ~2 min: connecting your WABA (if you already have one) or creating one through the dashboard wizard (if you don't, Meta verification still takes time, but you can integrate the SDK in parallel)
- ~30 sec: paste API key, npm install
- ~1 min: write 3 lines, send a test message to your own phone
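For comparison, the raw Cloud API call the wrapper hides is a single authenticated POST. A minimal sketch in Python, assuming Meta's documented `/{PHONE_NUMBER_ID}/messages` endpoint shape — the phone-number ID and token here are placeholders, and this is not the vendor's SDK:

```python
import json
import urllib.request

PHONE_NUMBER_ID = "123456789012345"  # placeholder: the sender's phone-number ID
ACCESS_TOKEN = "EAAG..."             # placeholder: use a system-user token, not the 24h temporary one

def build_text_message(to: str, body: str) -> dict:
    """Cloud API payload shape for a plain text message."""
    return {
        "messaging_product": "whatsapp",
        "to": to,                    # E.164 digits, e.g. "15551234567"
        "type": "text",
        "text": {"body": body},
    }

def send(to: str, body: str):
    """POST the message to the Graph API; raises on HTTP errors."""
    url = f"https://graph.facebook.com/v19.0/{PHONE_NUMBER_ID}/messages"
    req = urllib.request.Request(
        url,
        data=json.dumps(build_text_message(to, body)).encode(),
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    return urllib.request.urlopen(req)
```

That's the part you'd otherwise be hand-rolling after the webhook and template steps above.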
What's behind the curtain:
- Message costs go directly to your Meta Business Account at Meta's published rates — no per-message markup from us
- Flat $9.99/mo for the API+SDK+MCP
Bonus: there's an MCP server (@gaviwhatsapp/mcp) so Cursor / Claude Code / Codex / AI agents can send WhatsApp messages directly during a coding session. Useful when you want the agent to actually verify the integration works at the end of a feature.
What I'd love feedback on:
- Is the 3-line API actually clean, or am I oversimplifying?
- What are the actual pain points, and what could I build to improve this?
- For folks who picked Twilio/MessageBird/etc., what made you pick those over rolling your own integration with the Meta Cloud API?
API docs at: https://www.gaviventures.com/docs/api (the Cursor / Claude/ Codex skill files are there too)
I vibe-coded a ChatGPT bingo card... 😉
Enjoy: https://r2pitou.github.io/chatgpt-bingo/
Code contributions:
https://github.com/R2Pitou/chatgpt-bingo
made a skill called learn-anything that actually goes deep. you give it a topic and it does ~30 searches, reads full pages, then spits out two things:
- skill rubik's cubes
- skill b2b saas marketing
- skill graphic design
- skill kernel hacking

the part i actually care about is phase 5: "how do experts in this field actually think." most learning guides stop at "here are the resources." this one digs into mental models, heuristics, the failure modes intermediates get stuck on, the deliberate practice protocols pros use. that's the section that makes it feel like talking to someone who's actually done the thing.
it's also self-compounding. run it on marketing, the output suggests installing copywriting and analytics next. run those, they suggest more. you build up a library of domain-expert claudes.
what would you run it on first? curious what topics break it.
Hey All,
As a developer who often has 100+ tabs open at once, I've been struggling with tab management for years. I'd constantly lose important tabs, waste time searching through the tab bar, and get frustrated with how inefficient it all was.
After losing a critical client presentation tab for the 100th time, I decided to build something to fix this. Introducing MyTabSearch - a Chrome extension that uses keyboard shortcuts to make tab management effortless.
### Key features:
- **Alt+Q** - Quick Tab Search: Press to bring up a popup of all open tabs, type keywords to quickly find and switch to the target tab
- **Alt+W** - Quick Back Navigation: Press to instantly switch back to the previous tab
- **Alt+E** - Manage Pinned Tabs: Press to access your pinned tabs, manage your most frequently used pages, and set them as permanent tabs that persist even after browser restart
### Why I built this:
- I wanted something lightweight and focused on keyboard shortcuts
- Most tab managers are either too complex or don't integrate well with my workflow
- I needed something that works fast even with 100+ tabs
The extension is completely free, no ads, and uses local browser storage with permanent tabs synced to a backend server for reliability.
Let me know what you think! I'm open to feedback and suggestions for improvement.
- [Product Hunt](https://www.producthunt.com/p/mytabsearch/three-powerful-shortcuts-to-solve-browser-tab-chaos)
- [Homepage](https://mytabsearch.us.kg/)
- [Chrome Web Store Link](https://chromewebstore.google.com/detail/mytabsearch-extension/adfbidbchmbodidfjmimbkfndnenljjp)
This is just a genuine question rather than a complaint. I just want to be back on track shipping away new features comfortably. I’ve been a max $200 user for a few months now. With Opus 4.6, there was a point that I even had two max ($200 + $100) plans rolling.
But since Opus 4.7, I’m not sure if I should continue paying for it anymore and give it a go with any other agentic coding tools.
I was always a Claude fan, when I tried CodeX a few months ago, it wasn’t that good with agentic capabilities and always did the “minimum”, whilst CC would complete the task fully. Now I feel that with Opus 4.7 as well and it has been super slow lately, even with relatively simple tasks.
Did anyone try side by side with CodeX or other ones out there? Any recommendations? Again, not trying to rage bait or anything, I’m just genuinely curious about people’s experiences switching to other coding agents.
Something I’ve been noticing — compared to other platforms, a lot of people on Reddit don’t use personal profile photos.
At first I thought it was just a preference. But the more I spend time here, the more it feels like it’s tied to how the platform works.
A few thoughts I’ve had:
– Conversations seem to matter more than identity
– Anonymity feels like part of the culture, not just a feature
– Having less personal identity might actually make it easier to participate without pressure
It almost feels like the “default state” here is to remove as much personal signal as possible.
Curious how others see it:
– Do you think using a real profile picture changes how people respond to you?
– Is staying anonymous more of a cultural norm, or just a practical choice?
Hello all,
Good day!
This thing in the image fell out of my chair when I was sitting in it. Does anyone know what it is?
Thank you!
ok so i’ve been nerdsniped into actually testing this properly because i kept seeing wildly different claims and couldn’t find real data anywhere
context is inference, 70B models, and i specifically care about p99 not p50 because p99 is what shows up in support tickets
runpod is probably the most honest about what you’re getting. p50 cold start is fine. p99 degrades under load because it’s one provider’s pool and when they’re busy you wait. not a criticism just how it works
vast.ai i’ve had great experiences and terrible experiences on the same day. the node quality variance is real. if you’re experimenting it’s fine, if you’re building something where latency consistency matters it’s kind of a gamble
modal is good but it’s a different thing really, more python function oriented
the one i didn’t expect to test was yotta labs and tbh it was the most interesting result. they pool capacity across multiple cloud providers so when one provider’s H200s or 5090s are getting hammered they route to another. what this actually does to p99 is surprisingly meaningful, you’re not stuck in one provider’s queue. also handles failure handover across providers automatically which was a separate thing i was trying to figure out and it turned out to be the same answer
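the routing idea is easy to sketch. this is my own illustration of the pattern, not yotta's actual implementation: try each provider's pool in priority order and fall through on capacity errors.

```python
class CapacityError(Exception):
    """Stand-in for 'this provider's pool is saturated right now'."""

def call_with_failover(providers, request_fn):
    """Try providers in priority order; route around saturated pools.

    `providers` is an ordered list of pool names; `request_fn(provider)`
    returns a result or raises CapacityError when that pool is busy.
    """
    last_err = None
    for provider in providers:
        try:
            return provider, request_fn(provider)
        except CapacityError as e:
            last_err = e      # pool busy: fall through to the next provider
    raise last_err            # every pool was saturated
```

the p99 win comes from the fall-through: a request only queues when *every* pool is busy, instead of whenever one pool is.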
pricing was lower than i expected too, below runpod for equivalent SKUs which wasn’t what i went in assuming
anyway curious if anyone else has done proper p99 testing on these. everything i find is p50 benchmarks which imo are kind of useless for real inference decisions
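if anyone wants to run their own numbers: record every trial and report percentiles, not means. a quick sketch (nearest-rank percentile; the latency list is fake data, just to show why p50 and p99 diverge):

```python
def percentile(samples, p):
    """Nearest-rank percentile over a list of samples."""
    xs = sorted(samples)
    k = max(0, min(len(xs) - 1, round(p / 100 * (len(xs) - 1))))
    return xs[k]

# fake cold-start trials in ms: mostly fine, with a couple of queueing spikes
latencies_ms = [220, 240, 235, 3100, 250, 228, 245, 260, 231, 2900]
p50 = percentile(latencies_ms, 50)
p99 = percentile(latencies_ms, 99)
# p50 looks healthy while p99 captures the spikes that show up in support tickets
```

with real testing you'd want hundreds of trials per provider, spread across times of day, since the whole point is that queueing is load-dependent.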
Time to update your wallpaper! 100+ added every day.
With 200k USD I want to start a fintech in Ireland/UK, but for the initial years, say 10 years until the company's net worth crosses 10-20M, I don't want to pay any corporation taxes or other major taxes. What are my options? Please guide me in the right direction.
I'm the founder of EventReels. Built it because I kept finding out about campus events after they were already sold out — parties, formals, rooftop socials, gone before I even heard about them.
So I built a feed that shows everything happening at your campus before it sells out. There's also an anonymous community board for your campus — no one knows it's you.
Still early. If you're in college, I'd genuinely love to know: how do you find out about events at your school right now?
eventreels.com — search your campus, it might already be on there. Otherwise request it and I'll add it :-)
After starting my CC session today, I realized that Opus 4.6 is back in the game!
Marketing-adjacent question but it affects our whole funnel story so figured this was the right place.
Sales runs on Pipedrive. Support runs on Zendesk. Right now they're basically two parallel universes. Sales doesn't know if their accounts have open tickets. Support doesn't know if a customer is mid-renewal-conversation. We've had multiple awkward situations where sales pushed an upsell to a customer who had a P1 ticket open for two weeks.
The "obvious" answer is to push every Zendesk ticket into Pipedrive as an activity. We tried it. It's terrible — the CRM becomes unreadable, deal pages get buried under noise, and reps stop trusting the activity feed.
What's actually working better for us is filtering: only push tickets that meet specific criteria (high priority, or tied to an account with an open deal, or older than 48 hours unresolved). And surfacing them in Pipedrive as a structured field on the deal/contact, not as activity spam.
Built this with Latenode because we needed conditional logic on which tickets to push and how to format them. Zapier could do the trigger but couldn't easily do the "is this account also an open deal in Pipedrive" lookup before deciding what to do with the ticket.
What are others doing here? Specifically curious if anyone's solved the "sales sees the right context without drowning in support noise" problem in a way that scales.
Hi all, what's your conversion rate, and what do you consider healthy? I have around 2.8k registered users and 241 paying customers.
That's just under 9%, which I think is good, but I'd like to hear yours: how did you achieve it, and what process/steps did you take to get more conversions?
So I've been having a lot of problems with my ping in League. Funny thing is, with other games I don't have any connection problems.
I start up the game, and maybe 2-3 matches in, my ping starts to jump from 200 up to 20,000. I've tried various ways to fix it, like restarting my internet, prioritizing the application, etc., but there is no improvement.
I hate that I can only play a max of 3 games without any worries.
Does anyone else have this problem, or is it just me?
Hi everyone,
I built BrightNews as a hobby because I got tired of how most news feeds are dominated by war, political chaos, outrage, and constant negativity.
I wanted to create a calmer alternative: a positive-news app focused on uplifting, constructive stories from around the world.
BrightNews already exists as a working Android and web app, but I've now launched an Indiegogo campaign because I need funding to take it further.
The main things the funding would help with are:
- improving content quality and filtering
- expanding source coverage across more countries
- scaling the backend and overall infrastructure
- improving the product experience
- supporting future web and iOS growth
The goal is not to ignore reality. The goal is to create a healthier way to stay informed and give more space to stories about science, health, communities, innovation, kindness, and progress.
If this resonates with you, here’s the campaign:
https://www.indiegogo.com/en/projects/brightnews/bright-news
Here is the link for Android app once more:
https://play.google.com/store/apps/details?id=com.brightnews&hl=en
I was reading Anthropic’s piece on “Claude for creative work,” and it made me rethink the whole “AI will replace creatives” narrative.
Their framing is surprisingly grounded:
AI isn’t really about generating final creative output.
It’s about expanding how creatives work.
A few things that stood out:
The interesting shift is this:
Before AI → you had to be very selective about which ideas to pursue
After AI → you can test a lot more ideas quickly, then pick the best one
So creativity becomes less about “coming up with ideas” and more about:
taste, judgment, and decision-making
That actually feels like a higher bar, not a lower one.
Curious how others here are using AI in creative work—
Do you feel like it’s replacing parts of your process, or just accelerating them?
I'm trying to set up a device tracker to monitor 3 states for my device: home, work, away. I already use Tasker to monitor locations so making this work would fulfill my needs for home assistant tracking without using any extra battery.
To do this I have an MQTT device tracker set up in configuration.yaml:
```yaml
mqtt:
  - device_tracker:
      name: "tracker_tasker"
      state_topic: "location/user"
      json_attributes_topic: "location/user"
```

Then I use a task created by bokub to update the tracker via Tasker: ⚙️ Using Tasker with Home Assistant (V2) - Share your Projects! - Home Assistant Community
I have tried 2 variations with partial success:
```json
{"payload": "home", "topic": "location/user", "qos": 0, "retain": 0}
```

This successfully sets my location to Home, and "not_home" to Away. However, trying this with manually set up zones simply sets the tracker location to whatever I write in plain text, not recognizing the zone name. I can write "Jupiter" instead of home and the value updates to exactly that.
My other attempt with some AI help is this:
```json
{ "topic": "location/user", "payload": "{\"latitude\":%PLATS1,\"longitude\":%PLATS2,\"gps_accuracy\":%LOCNACC}", "qos": 0, "retain": 0 }
```

But the state of the tracker becomes the payload in plain text with the Tasker attribute values: {\"latitude\":59.xxx,\"longitude\":18.xxx,\"gps_accuracy\":30}
In this example I can actually see that the tracker latitude and longitude attributes update so it seems to work partially.
Is there a way to write JSON service data that correctly updates the users location AND recognizes Home Assistant zones?
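One thing that may be worth trying (a sketch only — I haven't verified this against the current MQTT device_tracker behavior): split state and attributes onto separate topics, send the GPS coordinates only to the attributes topic, and let Home Assistant derive the zone from the coordinates instead of writing a zone name into the state yourself. With both topics pointing at `location/user`, the raw JSON payload becomes the state string, which matches the symptom above. The topic names here are made up:

```yaml
mqtt:
  - device_tracker:
      name: "tracker_tasker"
      state_topic: "location/user/state"            # only ever gets "home" / "not_home"
      json_attributes_topic: "location/user/attrs"  # gets the GPS JSON
```

Then the Tasker GPS task would publish to `location/user/attrs` with the same escaped payload as before, while the home/not_home task publishes plain text to `location/user/state`.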
5.4 was doing a great job for me anyway, so do they reduce the session cost for older models, or should I just move to 5.5 and be done with it?
There's probably an obvious answer somewhere but I couldn't find it after a quick google. I'm aware the current split just ended but what time does the new split start? When will ranked be back online?
Hey everyone,
I built a Free App Store & Play Store Screenshot Generator called FreeAppMockups (freeappmockups.site) to scratch my own itch — every other tool I tried either locked the good templates behind a paywall, watermarked the export, or made me sign up just to try it out.
So I made one that's genuinely free. No signup, no watermark, no paywall.
What it does:
Built it solo over the past few weeks. Would love feedback — what's missing, what templates you'd want, what's clunky. Honest critique very welcome 🙏
I always wanted to try league, but I've never played anything on a computer, so I am so bad with mouse and keyboard. When I heard that it could be possible to play with a controller I decided to give it a go with a dualsense (Ps5 controller), but it simply doesn't work. I know that I have to switch to wasd controls and plug in the controller, but it doesn't do anything. I also tried connecting with bluetooth but still nothing. Do you guys know if I have to do something specific? I play on a Mac, maybe that's the issue. Thanks
Corsair cmd16gx3m2a, both sticks have this weird port. Any ideas?
I've been building Gitember since 2016 — a free, open-source Git desktop client. It started as a weekend experiment, and now version 3.2 is out with new features:
AI-assisted writing (experimental): explain what changed between two branches in plain language, plus secret leak detection (is your GPU good enough?)
It also covers everyday Git stuff (commit, branch, diff, etc.), but a couple of things I personally rely on a lot:
- search through history, including non-text formats (Office docs, DWG, PSD, etc.)
- arbitrary file/folder comparison
The last one is a very useful feature these days, when you need to quickly compare a lot of AI-generated changes. Site here: https://gitember.org/
Contributions, feedback, and suggestions are welcome.
“Spawns children that can do anything “ 💀
Turns out, he just wanted to see bloodstains better.
Hi everyone!
Wanted to share what I've been building (adding one more world) using Claude Code. This is basically a world designed to help get sponsors for the abandoned rescue dogs here in Dubai. They live in the middle of the desert with no electricity.
So let's see how this goes. Hopefully it's not a massive flop.
I've been using Claude Code for this and it's amazing. It goes a little haywire here and there, but 8/10 it works.
if you're curious to see what it looks like: ohmybark.com (I haven't yet added the second world so you'll just see what I see every saturday morning with 1400 dogs :D)
If you click on the static ones btw, you can see clips of what they actually look like and how they live and the tents hold the menu
Little did he know perchance, he time traveled into the Bronx.
what do you think about the new battle pass rewards? is it worth it? because i dont really know if i should buy or not.. i dont main shaco
Weird marking in st Petersburg street
When I don’t know what game to play next, what album I should check out, what anime, book, movie, etc., I ask ChatGPT. Chat knows everything I’ve consumed so far and gives me great recommendations. I can ask for a top 10 list or a top 100 list, and most of the time it’s spot on.
Just figured I’d let yall know, it’s such a life hack honestly lmao I discovered a lot of new music and movies this way
First time poster on Reddit. I am a mad home geek myself, and then I did something crazy. Now I'm trying to make everyone else crazy too. That's the AI world we find ourselves in.
I am pretty good with computers - always have been. I was retired lol… but got chatting with a friend who mentioned an idea about noise complaints; "Wouldn't it be good if you could have an app to simplify noise complaints, e.g. dogs or other disturbances". The stars aligned. I've finished working for the Aussie Government on big data infrastructure and have a deep history in technical delivery of a rich set of business domains. I know cloud at enterprise and microscopic level. Claude Code and Vibe coding has arrived. BOOM.
In 6 months, I solo vibe-coded a government-security-grade, elastic cloud compute pool with a hyperscale pattern. I designed a multi-model Classifier as the compute unit, to host YamNET, PANNS and BirdNET. I added a spatiotemporal annotation UX, with acquisition through spot recording on any device and upload of any old media. I then vibe-transformed the raw AudioSet output (YamNET, PANNS) with a temporal relationship algorithm which enriches the classification's meaning — that is not 5 dogs barking, it is a barking Event.
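The bark-to-Event idea can be sketched as collapsing timestamped classifier hits into events whenever they fall within a gap threshold. This is my own illustration of the concept — the names and the threshold are made up, not the product's actual algorithm:

```python
def group_events(detections, max_gap=5.0):
    """Collapse (timestamp, label) detections into (label, start, end) events.

    Consecutive detections of the same label less than `max_gap` seconds
    apart are merged, so five barks become one "barking" event.
    """
    events = []
    for t, label in sorted(detections):
        if events and events[-1][0] == label and t - events[-1][2] <= max_gap:
            events[-1][2] = t            # extend the open event
        else:
            events.append([label, t, t])  # open a new event
    return [tuple(e) for e in events]

barks = [(0.0, "dog"), (1.2, "dog"), (2.5, "dog"), (60.0, "dog")]
# the three early barks merge into one event; the bark at 60s starts another
```

The same grouping works for any of the AudioSet labels, which is what turns raw per-frame classifications into something an agent can be notified about.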
I vibed from the cost-per-minute layer offered by cloud compute units, up to the API, through into the MCP layer, aka AI. Talk to integrate via Claude, OpenClaw, etc. Talk to classify, with near real-time webhook notifications on 500+ untapped (AudioSet) environment signals from the ubiquitous microphone.
A new machine sensor? The pervasiveness of microphones coupled with ridiculously cheap transform, and AI talk to integrate.
A new semantic compressor? Audio gets a 100:1 or greater reduction in file size yet stays full of semantic weight. Perfectly sized AI brain food.
How I vibed it is also pretty cool. I do know my tech stuff. But one person semantically programming what I would consider the best DevSecOps SDLC chain I have ever seen… that is a story too.
Hopefully you find this as cool as I do. Hopefully everyone can see what i see 😄
Agentic empowered spatiotemporal annotated environmental audio
https://www.h-ear.world/how-it-works or https://www.h-ear.world/use-cases.
Super interested in people's thoughts. It'll make a nice change from an AI prompt.
I've been using macOS since I was a teenager 10+ years ago, and even to this day I still didn't understand how to properly uninstall a local application. That changed today.
I was dogfooding my desktop agent when I realized it could just find everything related to an app and delete it, which is actually the correct way of uninstalling apps. I have a million apps that haven't been used and my laptop is getting slow. Apps just magically start to go away after it runs a bunch of cmds. Kimi also taught me the existence of `~/.appname` directories, which I didn't realize already held a million things from apps I haven't used in years. (remember windsurf?!?!)
I uninstalled a few different apps and realized it keeps trying to run `find ~/Library/Application\ Support`, which takes really long since it's recursive. I gave Kimi the feedback, and we decided it's better to glob match at the top level instead of recursing. So I asked it to edit its base knowledge (my equivalent of SOUL.md) to always do that when uninstalling apps. It got a lot better.
I feel like maybe this loop of teaching, then baking the improved domain knowledge into written processes, is the new era of software development? but who am i to say :)
oh and i prefer doing this with my own agent over Claude for a few reasons
here's the Github repo if you're interested: https://github.com/Emericen/openmnk
Been working on GridApps Testimonials for a while. Sharing it here because I think the workflow is genuinely different from what's out there.
The problem I kept hitting: every testimonial tool I tried turned "collecting social proof" into a manual job. Send a request link → wait → follow up → embed. You're basically a project manager for your own testimonials.
What I built instead:
Workflows that run on autopilot. Set it up once and GridApps keeps pulling social proof in — Twitter mentions, public reviews, customer replies — and turns them into display-ready testimonial cards. No follow-up emails.
Actual display variety. 40+ widgets and 10+ Wall of Love layouts, so you're not stuck with the same generic slider every other site uses. Different pages can show different testimonials without rebuilding anything.
Video that doesn't need a separate tool. Built-in video editor for trimming, captions, branding — plus you can turn testimonials into Reels-style clips for social.
I'm not saying existing tools are bad — Senja, Famewall, and Testimonial.to are solid for what they do. But most stop at "here's a form, here's a wall." The collection-to-display pipeline is still mostly manual. That's the gap I was trying to close.
Two questions for founders here:
Hi all!
I've received a gift:
But when I try to redeem it, I receive the message "Gift codes cannot be redeemed with mobile app subscriptions. Please manage your subscription through the App Store or Google Play."
I've checked my iPhone and there is no subscription in the App Store. How can I redeem my gift? Please help.
i was pitching to anyone who would listen. restaurants, gyms, coaches, salons, random people who seemed interested.
every call went well. nobody paid.
eventually figured out the pattern. the people who get most excited about automation are usually the ones with the least budget and the most opinions about how it should work differently.
they always say boring businesses make money. i landed a manufacturing and export client. a very easy automation to set up, but because of the volume, the money is huge.
been working with hotels and property firms for a while now. that's where the money actually is.
if you run a business with a genuine operational problem, leads falling through the cracks, follow ups being done manually, data entry that shouldn't require a human, drop it below. genuinely curious what the broken thing looks like in different industries.
I am looking to get deeper in to making AI videos, but want to save money and be free to do it locally with no limits.
I also work in IT and have been doing so for over 30 years, and have concerns that have been instilled in my experiences.
Here is my question: are there any models I can look into that are safe and secure, without having to reach into some dark, dank database or server that might decide to throw malware / spyware / viruses into my system?
I saw a video on the ease of the Wangp install, but was concerned. I don't want to rule out ComfyUI either, if that means using a LoRA that may be a little more secure, even at a higher level of difficulty.
I guess what I am saying is, I place a higher value on being secure than on getting something free or low cost.
Am I asking the right questions? Or am I better off just paying VEO / LTX fees to a service?
Thanks…
I'm a laser R&D Engineer. I kept leaving meetings thinking "I'll remember this" and then not remembering it.
Every existing tool either uploads your audio to a server or sends a bot to join your call. Neither felt right for confidential work conversations.
So I spent a year building Thoth in my free time. Everything runs locally on your Mac using WhisperKit. Mixed audio capture without virtual drivers, speaker diarization, local LLM summaries, BYOK for cloud AI if you want it. WiFi off, it still works.
The hardest part was getting mixed audio working: recording both mic and system audio simultaneously using Core Audio instead of a virtual driver. It turns out transcription quality during a Zoom call is much better when Whisper is fed both streams.
Link in comments. Honest feedback welcome.
I started playing 2 days ago and have used close to 10 different champions so far. Jinx is my favourite, but my problem is that the other champions feel very 'bad' compared to her, and I can't imagine that's really the case. I'm probably just very bad with them, but I was wondering if people could give me some general gameplay tips and tricks, because I'm enjoying the gameplay very much but finding it extremely difficult at times. Thanks very much in advance.
the thing that annoyed me most about Monday mornings wasn't the volume. it was that I had to process everything before I could tell which things were worth processing. tried a Zapier digest. got a list of email subjects, no context. tried a scheduled summary in Slack, too rigid, arrived at the wrong time.
"I was offline this weekend, read my email and Slack and tell me what I need to know." that's the whole Invoko prompt. what comes back: two things that need a decision today, one thing that already resolved, three threads that can wait. the context reconstruction that used to take an hour now takes five minutes.
Greetings Summoners!
With every new patch Riot introduces to balance out champions and items there are some unforeseen issues that creep up and cause disruption during gameplay. We have noticed that these issues which eventually get fixed clutter up the subreddit immediately following the patch.
We want to avoid this by having a single Megathread posted after every patch so that you can report the various issues in one place. This gives Riot a central hub to easily keep track of the bugs, and also allows other users to confirm bugs they might have encountered.
Note only bugs caused by the 26.9 Patch should be reported below.
Prerequisites to be noted before reporting a bug
Format when reporting a bug: When reporting a bug, please provide as much information as you can about your computer.
• Server: The server you encountered the bug (NA, EUW, EUNE, TR, RU, BR, LAS, LAN etc)
• Type of Bug: Client Bug, In Game Bug etc
• Description: Describe what was the bug that occurred.
• Video / Screenshot: Insert screenshot (F12 in game) or Video of the bug occurring.
• Steps to reproduce: Provide the steps necessary if someone else had to reproduce the bug.
• Expected result: What should have been the result when you follow the steps mentioned above.
• Observed result: What was the result you obtained when you followed the steps mentioned above?
• Reproduction rate: If you try to recreate the bug how successful are you in causing it to occur? (1/10: Occurs once every 10 tries, 5/10: Occurs 5 times out of 10, 10/10: Happens every single time)
• System specs: Processor, Video card, Ram, HDD/SSD, everything you can provide, even drivers.
Example Bug:
• Server: EUW
• Type of Bug: In-Game Bug etc
• Description: Zed's R (Death Mark) does not apply secondary damage
• Insert Video / Screenshot of the incident
• Reproduction rate: 2/10 (happened 2 out of 10 times)
• Steps to reproduce:
Launch a game after selecting Zed as your champion. Attempt to use Death Mark. Observe the result.
• Expected result: The damage should apply after a short delay, amplified by damage dealt during the effect.
• Observed result: The damage will not apply properly.
• System Specs: Intel i5 Processor, Windows 7, Nvidia Graphics card (insert model number) etc.
If you don't know how to format comments on Reddit click here
- **Server:**
- **Type of Bug:**
- **Description:**
- **Video / Screenshot:**
- **Steps to reproduce:**
- **Expected result:**
- **Observed result:**
- **Reproduction rate:**
- **System specs:**

Copy paste the above and fill in your details.
From this Megathread the list of bugs will not be summarized and put up in the main body of the thread, however, note that many Rioters are going through every single comment so don't worry if you post the 1500th or 3000th comment, every reply will be read over the next few days.
Datadog dropped their State of AI Engineering report this week. The numbers reframed how I think about LLM reliability.
February 2026: 5% of all LLM call spans across their customer base reported an error. 60% of those errors were rate limits.
March 2026: 2% of spans returned errors, but rate limits were still ~30% of the total. That works out to 8.4 million rate limit failures across their telemetry in a single month.
The takeaway is that the dominant production failure mode for LLM apps is not hallucinations, not bad context, not flaky tools. It's plain capacity exhaustion. 429s and 529s, the boring kind of failure that classical infra engineers have known how to handle for 20 years.
What's making it worse is the architectural pattern most teams use. Variable ReAct loops and multi-agent collaboration produce concurrency spikes that exhaust shared org-level quotas in unpredictable bursts. Your p50 throughput looks fine and your p99 falls off a cliff.
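The classical handling the report alludes to is mostly client-side discipline: cap concurrency so agent fan-out can't spike past the quota, and retry 429s with exponential backoff and jitter. A minimal sketch (the `RateLimitError` class, the concurrency cap of 4, and the backoff constants are illustrative assumptions, not anything from the report):

```python
import random
import threading
import time

class RateLimitError(Exception):
    """Stand-in for whatever 429/529 error your LLM SDK raises."""

# One shared gate per org-level quota, so bursty ReAct loops queue
# instead of stampeding the API.
_gate = threading.BoundedSemaphore(4)

def call_with_backoff(fn, max_retries=5, base=0.5, cap=30.0):
    """Run fn(), retrying rate-limit errors with full-jitter backoff."""
    with _gate:
        for attempt in range(max_retries):
            try:
                return fn()
            except RateLimitError:
                # Full jitter: sleep a random amount up to the
                # exponentially growing window, capped.
                time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
        raise RuntimeError("rate limit: retries exhausted")
```

The semaphore smooths the p99 spikes the report describes; the jittered backoff keeps retries from re-synchronizing into the same burst that caused the 429 in the first place.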
The other line in the report that I keep thinking about: context quality, not volume, is the new limiting factor. Most teams aren't even close to using the full context window of their model. The 1M token capability is wasted if your retrieval pipeline can't pick the right 10K tokens.
Capacity engineering and context engineering are quietly becoming the two skills that move the needle in 2026 production LLM systems. Prompt engineering as a discipline is increasingly downstream of these.
I've seen this on two different rigs, so unless I've managed to flip the same switch on both that turns them off, I think this is broken at the moment. Generated images still show a thumbnail, but sometime during the past week imported ones stopped showing a preview. Pressing R doesn't fix it, nor does reloading the tab after the asset is imported, reloading the backend, or restarting the PC. As mentioned at the start, same issue on two different PCs.
Has anyone else encountered this and have a fix?
Haven't played league in 2 weeks, decided to play one game of solo queue for decay and immediately got reminded why I stopped playing in the first place. Apparently I've climbed to the top 0.5% of the population out of a few hundred thousand players or whatever, just to have 80% of my games feel like bot games.
Who keeps making these absolutely senseless decisions that makes the game less fun for old players while also failing to attract new players?
Autofill is extremely broken and age of valor causes toxic behavior like playing for KDA or supports spamming pink wards instead of actually trying to win.
Also, how did anyone think it was a good idea to sacrifice game quality for maybe a 2-minute decrease in queue times (Master+)? Instead of waiting a few more minutes for a good game, you want to give players low-quality games and waste 30 minutes instead?
Well anyway, I'm sad that I don't get to flex with old friends as often anymore as everyone around me is quitting. But I am grateful to be spending less time on this declining game so keep up the bad work I guess.
Hey everyone,
I’ve been using Cline for my daily workflow at work lately. I’m still using the Claude Opus 4.7 model, but I’ve stuck with Cline because I feel much more "in the driver's seat." The visualization of file changes and the side-by-side diffs make me feel like I have a tighter grip on what’s happening to my codebase. Plus, honestly, I feel like I've learned a ton more about how LLMs actually interact with files and tools by managing Cline myself.
However, seeing the hype around Claude Code, I’m starting to get some serious FOMO.
Claude Code seems to have deeper native integration, and I’m worried that by sticking to a third-party wrapper (even a great one like Cline), I’m missing out on better agentic loops or "thinking" optimizations specific to Anthropic's tool-use.
For those who have used both:
Am I putting myself at a disadvantage by staying in the GUI? Does Claude Code offer a significant jump in "intelligence" or efficiency that justifies giving up the visual control of Cline?
Would love to hear if anyone else has made the switch (or switched back).
Got it from gramps, who owned it for many years. He himself was unsure what kind of coral this is.
When did they add Opus 4.6 back as an option in the desktop app?
Recently Claude models have been getting dumb and lazy, so I switched to Codex, but it was quite a painful change. The setup I was using on Claude Code was something I fine-tuned over time, and I had to redo it all for Codex and couldn't exactly replicate it: the MCP servers, the custom commands, and other such things. Can you all suggest an existing tool for this? I want something CLI-based, or should I just start burning my tokens to build one 😂.
As a teemo main, cool, teemo buffs.
But this will not go over well. Teemo already does quite well with the re-addition of gunblade giving him an on demand ranged slow to either kite or just run someone down. And he's finally getting deathfire touch back.
Why, on top of that, would they give him an AD ratio on his E? Like, cool, for teemo players i guess. But gunblade > guinsoos > new static is going to be kind of insane on him.
And most of the serious Teemo players were playing him more on-hit style anyway, as it's always been pretty good, even after the nerfs, which came around entirely due to Malignance. The traditional AP build on him has been lackluster for a while because it requires Malignance and Liandry's now (again, mostly a result of the nerfs I just mentioned), and with Nashor's already always being required (literally trolling not to build it on AP Teemo unless you're against a weird comp that makes autoing ever very difficult), that left you with 2 item slots. Assuming you always go Void or Rabadon's (whichever makes more sense in the game you're in), that leaves you with 1 item slot to get everything else you need, be that grievous, a spell shield, or hourglass. This leaves him very little build flexibility.
On-hit, on the other hand, gives you a lot more options for build flexibility. You can still get grievous and a spell shield item if you want after your core 3 items (which can change depending on the game too: no tanks, you don't need BotRK; lots of magic damage, sub Nashor's-first for Wit's End-first), or a spell shield and hourglass. Or you can say "fuck it, we ball" and go bruiser at that point and grab Titanic and Jak'Sho. You just have a lot more options to respond to a lot more situations, whereas AP locks you into 4 items. AP can leave you in positions where you say "I kind of need grievous for their team, but I also need to not die to Zed (or whoever) because he will ult me since I'm doing the best on my team, so I need hourglass," which puts you in a really awkward place.
What's especially odd is them citing that they felt like they killed on hit teemo in the past and are looking to bring it back. I feel like it never really left.
Out of all the things i see for the next patch, i bet this gets un-done on the following patch or even hotfix nerfed.
I asked ChatGPT for a scathing criticism of OAI, as inspired by a post here. I liked it a lot, but my being a sassy prick by nature, I wanted to see the snappier version of that, so I asked ChatGPT to see that and here it is, lol. I do hope it gives a bit of a chuckle to whoever reads this, I know it did me. **And Kyle, if you read this,** the context window offer stands 😂🤌
Just wondering, had great days with Opus. Today he does not like my typical flow since the outage yesterday. Hope you guys are doing well.
I tried the Statikk + voltaic + dusk and dawn combo on kai’sa in brawl as well as in a practice tool match vs a bot. Hitting W does consume the energized stack and proc static but I don’t get an additional passive stack nor does dusk and dawn get consumed
I've shipped automation projects for around 30 professional services firms now. Law, accounting, recruiting, agencies, consultancies. Some of those projects are still running and saving the firm real money. Some quietly got abandoned within 4 months.
The difference between the two groups isn't what you'd guess. It's not the size of the firm. It's not the budget. It's not how technical the founder was. It's not even the quality of the build.
It's whether the founder personally walked through the manual version of the workflow once before we automated anything.
Sounds dumb but stay with me.
When a founder hires me to automate something, they usually describe the process from memory. "We get a lead, we send a contract, we onboard them, we send invoices." Four sentences, sounds clean. The automation gets built around those four sentences.
Then we go live and discover the actual process is something like 23 steps, half of which the founder didn't know existed because the admin who's been doing it for 6 years just handles them silently. There's an exception case for when the lead is referred by an existing client. There's a different SOP for clients who pay by wire. There's a paralegal who manually edits one section of the engagement letter when the project is over $50k. None of this was in the four sentences.
The automations that survive are the ones where the founder did one painful thing before we started. They sat with the person who actually does the work, and they did the workflow themselves once, slowly, narrating every step out loud while I or someone took notes.
That's it. That's the whole secret.
It usually takes 90 minutes. It usually surprises the founder, who finds at least 4 or 5 steps they had no idea their team was doing. And it usually makes the eventual automation work, because the automation is built around what's actually happening instead of what the founder thinks is happening.
The firms that skipped this step ended up with automations that handled the happy path and broke on every edge case. Within a few months the team stops trusting the system, goes back to doing it manually "just to be safe," and the automation rots.
This isn't about whether you're smart enough to think through the workflow yourself. It's about a thing called the curse of expertise. The longer you've been a founder, the further you are from the actual day-to-day work, and the more you've forgotten the thousand small judgment calls your team makes that aren't written down anywhere.
If you're thinking about automating something at your firm, do this before you hire anyone or build anything. Block 2 hours. Sit with whoever does the work today. Ask them to walk through it with you while doing a real instance of the work, not from memory. Take notes on every step, every exception, every decision point. Don't talk for the first 45 minutes, just watch.
You'll be embarrassed by what you find. That's the point. That's also why the automation will actually stick this time.
If you've already had a failed automation at your firm and you're not sure why it died, my guess is one of two things. Either nobody walked the manual process before building, or you let an agency build it and they relied on what you told them in the discovery call instead of insisting on this step.
Happy to walk through specifics if anyone's stuck.
Saw this today and it honestly scared me a bit.
An AI coding agent powered by Claude via Cursor ended up deleting an entire company database in seconds.
So I built a small developer-first library that sits between an AI agent and its actions.
Before the agent does something risky (like sending emails, hitting unknown APIs, or modifying data), it can:
get blocked
require approval
or log the action
Basically: your AI asks before it acts.
Right now it’s super early and pretty minimal, but it already works for a few core cases.
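To give a flavor of the idea, a gate like this can be sketched in a few lines. Everything below (the policy table, the action names, the `Decision` enum, the `guarded` wrapper) is hypothetical illustration, not the actual library's API:

```python
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    ASK = "ask"

# Hypothetical policy table: action name -> decision. A real library
# would match on patterns and arguments, not just exact names.
POLICY = {
    "read_file": Decision.ALLOW,
    "send_email": Decision.ASK,
    "delete_rows": Decision.BLOCK,
}

audit_log = []  # every attempted action gets recorded

def guarded(action, fn, approve=lambda a: False):
    """Run fn only if policy allows; ASK defers to a human callback."""
    decision = POLICY.get(action, Decision.ASK)  # unknown actions must ask
    audit_log.append((action, decision.value))
    if decision is Decision.BLOCK:
        raise PermissionError(f"{action} blocked by policy")
    if decision is Decision.ASK and not approve(action):
        raise PermissionError(f"{action} not approved")
    return fn()
```

The key design choice is the default: anything the policy doesn't recognize falls through to "ask", so a database-wiping action the author never anticipated still stops and waits for a human.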
Not sharing the link publicly yet, but if you’re building with AI agents and this sounds useful, happy to share access and get feedback.
Rubbery consistency. About 1.5cm long
Hi everyone! I’m a total beginner to ComfyUI and I’m trying to find the right direction. I saw this video on Instagram and I’m curious how they made it.
I'm looking for advice on:
Any help pointing me in the right direction would be amazing. Thank you!
hey guys,
Anyone have experience running ads on your project? I'm currently waiting for AdSense approval on my website and really didn't think much about which ad provider I was going to use (I always thought AdSense would be the best one). But when I checked online and chatted with Claude / GPT, there are apparently other, better options. Below are the ones Claude / GPT suggest (basically AdSense has the lowest ad revenue).
Would you say the below table is accurate? Because I think these AIs are just messing with me lol (I know the RPM differs by category, the visitors region,etc so below is based on for example a western quiz website)
Any feedback is welcome
| Network | RPM |
|---|---|
| AdSense | $1~$4 |
| Ezoic | $2~$6 |
| Monumetric | $3~$8 |
| Mediavine | $10~$25 |
| AdThrive | $15~$35 |

Hi everyone,
Our team at WhizzIQ just launched on Product Hunt today and I wanted to share it here to get real feedback.
We have been building this around a simple problem. Most small business owners are using multiple tools for invoices, expenses, CRM, and bookings, but still do not have a clear picture of what is actually happening in their business.
So we built WhizzIQ to bring everything into one place, with AI to help make sense of it.
The goal is not just to track information, but to help answer questions like:
Where am I making money?
Where am I losing money?
What actually needs my attention?
It is still early:
A few active users testing it in real workflows
Still learning what people actually find useful
Would genuinely appreciate feedback:
What makes sense?
What does not?
What feels missing?
Trying to build something that is actually useful, not just another tool.
Product Hunt Link: WhizzIQ on Product Hunt
I got tired of spending 45 minutes reformatting the same idea for six different platforms. LinkedIn wants long-form, first-person takes. Instagram wants short, lowercase, vibes-first. Twitter wants punchy. They're completely different posts, but most people write one and paste it everywhere.
So I built a tool that handles the split automatically.
During onboarding, you paste 3-7 of your own posts. An LLM reads them and builds a persistent "voice profile", your vocabulary level, sentence style, tone, personality patterns. Every piece of content you generate after that is written in your voice, not some generic AI voice. LinkedIn posts come out sounding like you, not like a press release.
You write one prompt. It fans out to LinkedIn, Twitter/X, Instagram, Threads, Facebook, and newsletter simultaneously. Each one is natively formatted for that platform, different length, different structure, different energy. It also does a live web search before writing so the output is grounded in current trends, not stale training data.
Some things I learned building this solo:
Voice profiles are the moat. Anyone can call an LLM. Making the output actually sound like you across 6 platforms is the hard part and it gets better the more you use it.
Instagram and LinkedIn prompts need to be totally isolated. I had a bug where a "long-flowing technical" voice profile was producing LinkedIn-style structured posts on Instagram. Took a week to fix properly.
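The isolation fix boils down to building one fully self-contained prompt per platform, so one channel's rules can never bleed into another's. A minimal sketch; the platform specs, limits, and `build_prompts` function below are invented for illustration, not the tool's actual code:

```python
# Each platform gets its OWN complete spec; nothing is shared between
# templates, because shared fragments are how cross-platform bleed happens.
PLATFORM_SPECS = {
    "linkedin":  {"max_chars": 3000, "style": "long-form, first-person take"},
    "twitter":   {"max_chars": 280,  "style": "punchy, single hook"},
    "instagram": {"max_chars": 2200, "style": "short, lowercase, vibes-first"},
}

def build_prompts(idea, voice_profile):
    """Fan one idea out into one isolated prompt per platform."""
    prompts = {}
    for platform, spec in PLATFORM_SPECS.items():
        prompts[platform] = (
            f"Write a {platform} post in this voice: {voice_profile}. "
            f"Style: {spec['style']}. Hard limit: {spec['max_chars']} characters. "
            f"Topic: {idea}"
        )
    return prompts
```

Each generated prompt carries the voice profile plus only its own platform's constraints, which is the property the week-long bug fix was restoring.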
"Hey everyone, I realized the body of my post didn't show up! Here is the context:
After 15 years as a Data Engineer, I got tired of the manual 'awkward follow-ups' for payments. I built chasedue.com to automate the entire WhatsApp reminder sequence.
The Tech: Next.js 16, Supabase, and WhatsApp Cloud API.
The Story: I launched yesterday, got 1.3k views but only 1 real user. I realized the 'Sign In' wall was too high, so I just added a Live Demo button today so you can see the nudge without an account.
I'm at 0 MRR and looking for my first 5 paying customers. Would love a brutal roast of the landing page or the workflow!"
I have an okay laptop with 8GB of RAM; it's fine for older games, but I haven't tried intensive games yet. I really want to try LoL, and I loved Arcane, so I really want to give it a shot. I just want to make sure I'd be able to run and play the game without too many issues. If not, are there options for PS5?
i was manually checking competitor Twitter accounts two or three times a week. open profile, scroll, take notes, close, repeat. 40 minutes i didn't have, and i was inconsistent. tried setting calendar reminders, still felt like a chore. tried a social monitoring tool, expensive and more features than i needed.
now i ask Invoko to summarize what they posted and write it to my research page. 30 seconds of setup, I review the summary when it's done. i'm actually more informed about what's happening in my space now than when i was doing it manually.
I want to preface this with the fact that I've read through quite a lot of this reddit on running stuff locally but still am not sure about how to go about running a very very specific form of a local LLM on my PC.
I recently switched from ChatGPT $20/month to Claude's $20/month and was amazed at what it can do from a hands free perspective. I tested the Pro plan by throwing it 2 tasks that I've been lazy on getting done:
1. Editing 30GB worth of footage in DaVinci Resolve into a mini travel vlog of some friends and I.
2. Making me a custom photography website where I can feature my work (I wasn't satisfied with Wix).

It knocked these out of the park in terms of laying the groundwork and getting 90% of it done.
The problem I found was usage. I wasn't burning through my weekly usage too badly for what it had to do, but the current-session usage hit 100% quite fast (like 1-2 hours of tinkering / letting it do its thing, especially with video editing). The other thing I saw was the token limit within one conversation (200k/1M), but I worked around that by creating a project and making each new task reread a file in a folder on my desktop containing instructions and what we had completed in previous chats.
Let me make it clear, I think the feature that lets Cowork take screenshots and "take control" are phenomenal, and this for me is a clear selling point and worth the $20 a month. I know it can be viewed as a security risk for sure to give it access to whatever it wants but boy does it get the job done hands free. (would love input on even more risks here)
My main question here is: can I run a 'local' version of this that would use the GPU in my PC? I currently have a pretty good setup: an RTX 5070 12GB, an AMD Ryzen 7 7800X3D 8-core, and 64GB of DDR5. How would I even go about setting something like this up? Would my cost truly be 0? And most importantly, would I have access to the UI I'm currently using in the Cowork tab? Because the UI and how it currently works is genuinely good and just works so well; it feels robust.
My main concerns here are price per month at the end of the day to run my own LLM and whether it can operate in that same way as the cowork features.
Sorry for the long post but genuinely any input would be appreciated! (Feel free to explain things quite elementary since I am somewhat new to this and have a pretty specific use case)
Totally lost my workflow :( . This is how much quality was lost since the artifact and texture error started happening.
First image is before, second is now. Getting that fluffy grid over every single image. Monet is ecstatic.
Do you think OpenAI will fix it, or is this the new reality?
I've tried so many approaches, and there's no point: organic textures are ruined, especially when vegetation is present.
What would happen to ranged toplaners if they were banned from top lane?
Kayle, Gnar, Vayne, Kennen, Jayce, Quinn, Teemo and Urgot.
Backstory: this domain (awesomebloggers.com) used to be my own blog. I ran it for 2 years in the blogging / SEO / digital marketing space. Like a lot of indie blogs, it slowly died: I stopped publishing, traffic decayed, the writing got harder, and eventually the domain just sat parked for years. A few months ago I decided to bring it back, but as something different: an automated directory for the kind of indie blogs I used to read.
The premise: bloggers submit their URL, a scoring engine evaluates them across 7 metrics (domain authority, posting frequency, site speed, mobile readiness, content volume, SSL, design quality), and approved blogs get listed with a permanent do-follow backlink. No editorial picks, no pay-to-play. 591 blogs approved so far across 9 categories.
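In miniature, the scoring step could look something like the following. The weights and the 0.6 approval threshold are invented for illustration; the real engine's numbers aren't stated anywhere in this post:

```python
# Hypothetical weights over the seven listed metrics (they sum to 1.0).
WEIGHTS = {
    "domain_authority": 0.25,
    "posting_frequency": 0.20,
    "site_speed": 0.15,
    "content_volume": 0.15,
    "mobile_readiness": 0.10,
    "design_quality": 0.10,
    "ssl": 0.05,
}

def score_blog(metrics, threshold=0.6):
    """metrics: metric name -> value normalized to the 0..1 range.
    Missing metrics count as 0. Returns (score, approved)."""
    score = sum(w * metrics.get(name, 0.0) for name, w in WEIGHTS.items())
    return round(score, 3), score >= threshold
```

The appeal of a transparent weighted score over editorial picks is that every rejection is explainable: a submitter can see exactly which metric dragged them under the threshold.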
Three things from the data that genuinely surprised me:
Where I'm stuck: 473 GSC impressions last week, 0 clicks. The long-tail queries don't convert at DA 15, and the head-term competitors (Hongkiat, Smashing, NerdWallet) defend at DA 70+. My next bets are (a) finishing the missing category roundups for topical completeness, (b) shipping original data studies from the 591-blog dataset that AI engines might actually cite.
Three things I'd love honest feedback on:
Site: https://www.awesomebloggers.com
I'll be in the comments all day. Honest critique especially welcome — I've been on the receiving end of "another generic blog directory" in my old blog years, so I'm not naive about what makes these die.
I'm having a nightmare here. I replaced my Nest doorbell with the Ubiquiti G6 Pro Entry running on an NVR Instant. I only did this because others said Ubiquiti is the best and it's easy to get control in HA.
I can't for the life of me get a live feed to my Nest Hub Max when the bell is pressed. I have tried a lot and just want to know how people are doing this. What's the most common way? I must be missing something!
Uhh I guess Gemma 4 is so much shittier that it hallucinated this event that happened in China in 1989?
According to qwen, nothing of significance happened at Tiananmen square in 1989 - and based on all of the benchmarks of qwen, I believe its right.
Do you think Gemma 5 will finally patch this hallucination?!?!?!
Official page | Leaguepedia | Liquipedia | Eventvods.com | New to LoL
NS | Leaguepedia | Liquipedia | Website | Twitter | Facebook | YouTube
T1 | Leaguepedia | Liquipedia | Website | Twitter | Facebook | YouTube | Subreddit
Winner: T1 in 29m | MVP: tbd (tbd)
| Team | Bans 1 / Bans 2 | G | K | T | D/B |
|---|---|---|---|---|---|
| NS | rumble, bard, orianna, caitlyn, sivir | 49.4k | 7 | 1 | None |
| T1 | karma, nautilus, varus, jayce, leona | 65.7k | 24 | 9 | HT1 O2 H3 M4 M5 B6 |

NS 7-24-15 vs 24-7-47 T1

| NS | Champion (pick) | K/D/A | Role | K/D/A | Champion (pick) | T1 |
|---|---|---|---|---|---|---|
| Kingen | sion (3) | 0-3-4 | TOP | 2-1-9 | anivia (1) | Doran |
| Sponge | jarvaniv (2) | 0-6-5 | JNG | 6-0-13 | xinzhao (2) | Oner |
| Scout | azir (2) | 1-4-3 | MID | 7-3-3 | akali (4) | Faker |
| Taeyoon | ezreal (1) | 2-4-2 | BOT | 7-2-6 | jhin (3) | Peyz |
| Lehends | neeko (3) | 4-7-1 | SUP | 2-1-16 | karma (1) | Keria |

This thread was created by the Post-Match Team.
I just made this product promo video completely with Claude code. Explaining the process here with the prompts. I also have a generic prompt at the bottom that you might want to use.
Don’t think in “design.” Think in scenes — like a director giving a shot list to a crew.
This is the first prompt I used:
Make a slick product intro video for my product https://claudevideoexport.com

- Scene 1: Text animation — "How to get MP4 from Claude Design Animation"
- Scene 2: Show a small browser window with "Claude Design" open. Pan to the top right with "Present" link and "Share" button. Show a mouse clicking "Present" → dropdown appears → mouse clicks "New Tab". New tab opens and the URL is copied. URL reads: "https://2d0b2821-9f01-40b1-b0a6-2f4db6601a33.claudeusercontent.com/v1/design/projects/2d0b2..."
- Scene 3: Switch to claudevideoexport.com showing a form. URL is pasted into the form and "Export" is clicked.
- Scene 4: Fast-moving progress bar going from 0% to 100%. Text reads "Rendering Video (0/2000 frames)" — counter increments to 2000/2000.
- Scene 5: A file icon labeled "video.mp4" pops up and downloads.
- Scene 6: video.mp4 gets uploaded to YouTube, then Instagram, then Facebook.
- Scene 7: Text animation — "Make Claude Design Animations → Get MP4 using ClaudeVideoExport.com"

Use the look and feel of https://claudevideoexport.com. UI components should look like they belong to that site.

What makes this prompt work: It’s sequential. Each scene has a clear action. The UI details — the URL string, the button labels, the progress text — give Claude enough to build something that looks real rather than generic. And anchoring to a real website URL lets Claude pull design references.
First output was close but not quite right. The platform upload scenes looked static, and the overall pace was slow. One more prompt:
- The file upload to YouTube, Instagram & Facebook should look like the file being dragged and dropped onto those sites in a browser. Show a basic drag-and-drop UI element for each site matching their brand colors. Get the correct icons for each platform from the web.
- Make overall scene transitions faster and slicker. Keep the whole thing under 40 seconds.

That’s it. Two prompts total. The second one fixed exactly what needed fixing.
The lesson: Don’t try to get everything right in one massive prompt. Get a working draft, watch it, note what’s off, fix those specific things. Claude Design responds well to targeted correction.
Make a 30–45 second product intro video for [YOUR PRODUCT URL].

Scenes:
- Scene 1: Text animation — "[One-line problem statement]"
- Scene 2: Show the old/painful way of doing this. Use a browser window. Keep it simple and recognizable.
- Scene 3: Introduce [PRODUCT NAME]. Show the core action (the thing the user actually does — paste URL, upload file, click button, etc.)
- Scene 4: Show the product working. Progress indicator, loading state, or live output — whatever fits.
- Scene 5: Show the result. File icon, dashboard, confirmation screen — make it feel satisfying.
- Scene 6: Show where the result goes. Social platforms, email, Slack, client — wherever the output lands.
- Scene 7: Text animation — "[Core value proposition in one line]"

Use the look and feel of [YOUR PRODUCT URL] for all UI components. Colors, fonts, and style should match the site. Keep transitions smooth and fast-paced throughout.

Then follow up with one refinement prompt targeting anything that’s off.
To export the animation into MP4, you can use my tool - claudevideoexport.com
Audio was added to the video outside of any of these tools.
Do even good looking people not get matches on dating sites?
I've had this embarrassing habit as a builder: I'd get excited about an idea, spend half a day Googling market signals, manually checking competitors, trying to gut-check demand — and then either kill the idea or start building without being sure.
The process was slow, inconsistent, and didn't scale. So I built AIdeator — a local-first, AI-powered idea validation engine.
Here's what it actually does:
The privacy model is something I'm particularly proud of: you can run it fully offline with Ollama + DuckDuckGo (zero data leaves your machine), or plug in OpenAI/Anthropic/Mistral + Tavily/Exa for deeper signals. Your call.
It's open-source (MIT), free, and you can be up in under 2 minutes:
pip install aideator
Genuine question for this community: What's your current process for validating a product idea before you start building? Do you trust gut feeling, do structured research, or something else? Drop it below — I'm actively improving this based on how real builders think.
Is this an OpenAI-only thing, or does the app really not have a function for importing conversations from other AIs? (Yes, I know, it's ChatGPT too.)
I want to import my Copilot conversations into the native ChatGPT platform and be able to continue them there (I'm just less comfortable with the Copilot app and web). Is the only way to do it manually, thread by thread? And won't there be confusion about whose words are whose? I've never looked into the Copilot export files; they're in a wild MS Office format, which scares me 😄. Thanks in advance.
Hi!
I'm an EUW player looking for a team or mates to play with regularly.
Rank: Gold
Main role: Support
Availability: evenings + weekends
Goal: tryhard, but in a good atmosphere
Feel free to add me or send me a message! 🙂
I have an MI50 that I use with llama.cpp/Vulkan, however some models run quite slowly, so I'd like to try the ROCm backend, but no matter what I try it doesn't work.
Downloading the missing files from ArchLinux package doesn't work.
Downloading the files from coutryboycomputers doesn't work either
Building RocBLAS also doesn't work.
I'm not a Linux newbie, but at the same time I don't have days to spend on this issue... if someone has a guide that actually works to make llama.cpp run an MI50 with ROCm - ideally 7.2 - I would appreciate that.
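For anyone fighting the same build, this is the general shape of a llama.cpp HIP build adapted for gfx906 (the MI50's architecture), assuming a working ROCm install with hipcc and rocBLAS. Treat the exact flag names as version-dependent: the HIP option has been renamed across llama.cpp releases, so check docs/build.md in your checkout.

```shell
# Sketch of a llama.cpp ROCm/HIP build for an MI50 (gfx906).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# Newer trees use -DGGML_HIP=ON; older ones used -DLLAMA_HIPBLAS=ON.
HIPCXX="$(hipconfig -l)/clang" \
cmake -B build \
      -DGGML_HIP=ON \
      -DAMDGPU_TARGETS=gfx906 \
      -DCMAKE_BUILD_TYPE=Release
cmake --build build -j

# Then run with layers offloaded to the GPU:
./build/bin/llama-cli -m model.gguf -ngl 99
```

If the build succeeds but inference falls back to CPU, the usual culprit on gfx906 is a rocBLAS package that was compiled without that target, which matches the rocBLAS-rebuild rabbit hole described above.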
But it's the only way my mum keeps me safe once my drunk, blind stepfather wakes up
Considering both areas are covered in mucous membranes and there are a lot of nerves and pain receptors in both areas, why does one hurt but not the other?
I found an old post mentioning that soap in your eyes hurts due to the different pH values, maybe that has something to do with it? I know the eyes aren't part of the same "system" as ears/nose/mouth, but I would imagine getting soap in your nasal cavity would hurt in the same way as the eyes, so why not in the mouth?
After trying various THC products, I’ve settled on Good Vibes THC Syrup for its fast-acting effects. Unlike edibles, where you have to wait longer, this syrup kicks in quickly and provides more control over the experience.
What I love is the versatility—you can easily add it to water, a drink of your choice, or just take it on its own. It’s helped me get the right dose without the guesswork, especially for days when I need quick relief.
If you're someone who’s used to waiting a while for edibles to take effect, I’d definitely recommend giving this a try.
Has anyone else made the switch from edibles to syrup for that fast-acting relief?
Making ChatGPT free for clinicians sounds like a clear win. Less admin work, faster documentation, quicker access to information.
But the bigger shift is how it enters workflows.
This moves AI from controlled, system level tools to something clinicians can use individually, anytime. That’s a very different model from how healthcare tech is usually introduced.
Which means consistency, validation, and accountability don’t just sit with institutions anymore - they start shifting to individuals.
Benchmarks and accuracy scores matter, but real-world use is messy. Edge cases, incomplete context, and subtle errors don’t show up in controlled evaluations.
The upside is obvious. The question is whether healthcare is ready for AI that scales through access rather than control.
Does this reduce friction, or just redistribute risk?
Cost & Performance Efficiency
Networking & Latency
Memory
Impact on Google's SOTA - Gemini 3.1 Pro Preview
Impact on Future Models
Here's the complete list with totals for all 300 people who have uttered the famous SNL line: LIVE FROM NEW YORK, IT'S SATURDAY NIGHT!
Which two former cast members are neck-and-neck for the top spot? Which non-cast member said the LFNY line the most times?
Kinda slimy and has no odor as far as i can tell
It's glued, so maybe a manufacturing defect? But if you look closely there's a dark line going from the corner. It's not possible that I accidentally moved it.
I have been trying various AIs so far and I have to say, I'm torn. I don't want or need the AI to do the entire job for me, but I guess my prompting also plays a role in all of this. So I'd lowkey also appreciate resources and methods for optimizing prompting. I often find myself spending so much time prompting, only to get a subpar result. I keep making similar prompts, but even just copy-pasting and modifying the prompts for different classes and age groups takes so much time that, in the end, I just give up and end up doing the stuff myself. My usage of AI is currently very limited because of that, and I wanted to use the summer vacation to improve and work on it.
I have tried various models, but a lot of them lack features that really end up being dealbreakers for me. Here are the features I'm looking for:
- accurate image, diagram, and schema generation.
- first-draft worksheets, where the generated product is 60-70% done. I'd have to adapt the wording and design and verify the information, but I don't want to have to redo almost everything.
- I want it to create good copy-paste-ready output or even a proper file (Word, Google Docs, PowerPoint, …), not make me add everything in manually, where I end up losing the time I would have gained from not typing it myself by spending it all on correcting the visuals.
- be able to properly read and interpret scanned documents: read the text inside images properly and use that information. Example: I scan a chapter of a school manual and have the AI read and use the text on those scanned pages.
Here’s a list:
*Gemini (currently using)*
I remember enjoying it early on with NotebookLM because it helped me parse and get through board game rulebooks fairly quickly, and it helped me with the teach or even real-time clarifications.
I currently get to try 2 months for 8€, so that's an okay investment, I'd say. I kinda enjoyed the image generation feature. It felt fairly easy to just improve my scans and pictures. If I found a schema, diagram, etc. that I needed in a different language, or wanted to recreate a thematic map, it did exactly what I wanted, and it did so the easiest.
However, after multiple uses, I realized it would start messing up similar to ChatGPT (free version only), where it would absolutely not get what I wanted it to do. Say there's a diagram with German words and French translations: I asked it to recreate the diagram but remove all the French words from it… impossible to make it remove those words, despite it having been able to do so with other pictures. Anyway. Basically: Gemini is currently relegated to being my picture-quality-improvement app with an integrated translation feature (when it works).
*school specific AIs*
Magic school, The Teacher, eduaide,…. Those feel like borderline TEMU AIs.
Doing more precise prompts, giving more context, more references, including teaching programs, etc. nets me on average worse results than the outcome I'd get out of traditional AIs. The "worksheets" are unusable imo. Again, I get that my prompting is clearly not there yet, but Claude, Gemini, and CGPT all had better results with less input than these.
The only thing they've got going is that "TikTok organization-porn" UI design. It looks sleek and educational and good, but the output looks bad most of the time. Image generation looked terrible and was slow. Also: most of these are focused on and built around American school systems and programs, and they seem to just not be adapted for foreign/European programs.
*Claude*:
I liked Claude's production. But… it has pretty bad dealbreakers for me, and I can't, nor do I want to, use and pay for multiple AIs to get the job done.
Claude generated proper files for me, yes, but it cannot read through scanned text properly or even use/generate images. At least it couldn't 2 months ago.
Also: Claude still needed some back and forth, and paying 20€ to then have to wait multiple hours after every 2-5 prompts, depending on complexity, is just not going to work for me. And I'm not paying the next tier, priced at 80+€, to get enough prompts per day.
Sorry, this was super long, I get it. Just wanted to know if anyone has suggestions, tips or other info to share on how to help.
I just entered a game and thought I would have maybe a minute to chuck a wizz. We entered the game almost immediately and I was AFK for the first minute. I'm gonna get downvoted for this but idc. Ik it's my fault, but literally 1 minute in and a remake is already being requested. I had already TP'd to lane and missed 0 CS, but 2/5 people voted yes and the game remakes. Like, I actually don't understand why this happens. I was in the game, I hadn't disconnected, and when the remake went through I was already back. I just feel like you shouldn't be able to remake if all 5 people can vote. Not only that, what difference would it make if we remade at 1 minute vs 3 or 5 minutes in? Back when minions didn't spawn 10 seconds after the game started, you had at minimum 1 minute for everyone to be ready and get into lane with no issues. But now, as soon as it hits a minute, it's just over.
Been deep into prompt optimization for a while now. The frustrating thing about X is you scroll past stunning AI images all day, but barely anyone shares the actual prompt — and copying the description never gets you the same thing.
So I pulled 1,000+ of the most-liked prompts from X and looked for patterns. Three things kept showing up:
If you nail those three, you don't really need JSON-formatted prompts at all.
I turned the patterns into a system prompt. Feed it something like "a bowl of ramen" and it expands into a structured prompt. Works in ComfyUI, n8n, GPTs, anywhere that takes a system prompt.
On categories:
Early on the tags were a mess — content topics (Photograph / 3D / Product / Food / Poster / Design) mixed with prompt style tags (JSON) and meta tags (App / Other / Girl). A single prompt would often carry three or four tags and the dataset got hard to browse.
I redid the categorization based on what the final image actually looks like and dropped the cross-cutting tags entirely. Six content categories left:
The last two barely existed before GPT Image 2 — that's where it's strongest.
On the MCP:
Besides the JSON, there's a companion MCP you can drop straight into Claude Code / Cursor / VS Code. Two things it does:
First, natural-language search. Say "find me a few product photography ideas" in Claude Code and it calls search_gallery, pulls a handful of prompts back with thumbnails. See one you like, follow up with "give me the full prompt and reference images for #3" and it calls get_inspiration to return the source text and all image URLs.
Second, generation hookup. Once you've got an API key set up, you can say in the same conversation "rewrite this with a Japanese vibe and generate it" and it'll apply the system prompt rewrite rules, then call generate_image. The whole loop happens in one chat — find, rewrite, generate, no tool switching.
Local ComfyUI works too. Setup guide is in the repo, and once it's running it's all free.
Bumped the dataset for GPT Image 2's release. Current count: 1,446.
Each entry has the full prompt text, generated image URLs, author, likes, views, and categories. JSON, CC BY 4.0, ranked by X likes within each model.
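Given that description, loading and filtering the dataset might look like this; the field names (`prompt`, `likes`, `categories`) are guesses from the description above, so check the repo's actual JSON schema before relying on them:

```python
# Toy sketch of browsing the prompt dataset. The inline JSON stands in for
# the real file; field names are assumptions, not the repo's documented schema.
import json

entries = json.loads("""[
  {"prompt": "a bowl of ramen, overhead shot", "likes": 120, "categories": ["Food"]},
  {"prompt": "poster, bold typography", "likes": 340, "categories": ["Poster"]}
]""")

# Rank by X likes (the dataset's own ordering) and filter by category.
top = sorted(entries, key=lambda e: e["likes"], reverse=True)
posters = [e for e in top if "Poster" in e["categories"]]
print(posters[0]["prompt"])
```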
The GPT Image 2 cut leans toward posters, typography, and multi-panel storyboards. NanoBanana goes the other way — mostly portraits and product shots, often written in JSON.
Dataset and system prompt: https://github.com/jau123/nanobanana-trending-prompts
Companion MCP: https://github.com/jau123/MeiGen-AI-Design-MCP
Live gallery: https://www.meigen.ai
Featured in Awesome Prompt Engineering (5.5k stars).
Been working on NovaCalculator for a while now.
It has 5,500+ free calculators for finance, health,
math, science and engineering.
No ads. No signup required. Runs in your browser.
https://www.novacalculator.com
Would love feedback!
So, let's say I detect the spin of an unmeasured electron. I can measure it as positive or negative, with certain probabilities. If the many-worlds interpretation were true, then both options exist as "parallel" universes. How come we can have a probability of existing in one vs the other? What makes one parallel universe "intrinsically" more probable?
So, I have some female friends, and they are calm and close with me. They told me they were having their periods, and I don't really know how to react to that, so help me out so I can comfort them and not leave a bad mark on our friendship.
And yeah, I might have a girlfriend soon, and I bet it's different how you treat your girl vs how u treat your friends, so help me out.
Hi everyone, I'm planning to adopt soon. Before that, I'd like to foster a pet just to see if I'm capable of taking care of a full-time companion. I've tried to reach out to animal rescue organisations such as CUPA, CARE, and Bangalore Cat Squad, but haven't received any responses despite multiple follow-ups.
Please guide this first time aspiring pet parent.
Thank u :))
"They're just stolen Wikipedia article regurgitators!"
True, brother, true. /s
P.S. Yep, it made it to the score of 10 fairly quickly... in a field that changed the shape after the score of 5.
Yes, kinda weird. I know. But I'm curious as hell and can't find anything about something similar.
Since role quests, there's a high-reward/no-risk lobby-cheese possibility (for example, Sion mid and Anivia top). I remember they removed the Armor/MR shards specifically because having the wrong shard would make your lane unplayable. I think the current state is the same thing but worse, since you can't really swap. At that point I don't see the problem with seeing the enemy positions in the lobby. What do you think?
You can’t turn the speakers off. Even if you turn them off at the mains they will keep on playing. You can’t wear earmuffs, noise-cancelling headphones, earbuds, or any ear protection of any kind.
Hey all, I have been trying to stay consistent at the gym, but somehow something happens in my life, and the plan I made is just ruined. I get too much guilt to open my plan and hence I tend to abandon it. I tried getting a health & fitness coach, and it really helped me, but at $199/mo, it was too expensive for me & the coach was not available for me a lot of times - although there were daily checkins which were great.
What helped you to achieve this health coach-like accountability? Is there something I can do to get better?
Had a raspberry pi 4 running home assistant for two years. SD card started acting up, updates were taking forever, random freezes every week.
picked up a Geekom air12 for under $300 and installed HAOS on it. restored from backup, whole thing took two hours. Boot time went from 3 minutes to under 30 seconds. 40 integrations and it doesn't break a sweat.
Probably overkill if you're just running a few devices, but for me the reliability difference was immediate.
What finally pushed you off the pi, SD card, performance, or something else?
Just so u know, I'm down with punishing people who are always trolling and AFKing games and other behavior, but pls just remove the pop-up, cuz I do think it's redundant to have it in front of me whenever I have low prio.
The iPhone Claude app lets you hold to record and transcribes the whole thing at once. The Mac app has nothing, only macOS system dictation, which goes word-by-word and gets things wrong.
Anyone found a workaround, or heard if Anthropic plans to add it?
Trying to edit the qwen3.5 modelfile to match qwen3.6. I modified it to 3.6 and ollama does not take it. Keeping the 3.5 parser and renderer seems to work; I just want to make sure it's the right way...
Can anyone paste their ollama qwen3.6 modelfile? Thank you!
ollama show qwen3.6 --modelfile
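For reference, the usual workflow (assuming the standard ollama CLI; the tag names are whatever your local models are actually called) is to dump the existing modelfile, change only the FROM line, and rebuild under a new tag with `ollama create`:

```
# Dump the current modelfile, then edit it:
#   ollama show qwen3.5 --modelfile > Modelfile
# Change only the FROM line to point at the 3.6 weights:
FROM qwen3.6
# Keep the TEMPLATE/PARAMETER blocks from the 3.5 dump (as the post
# found, the 3.5 parser/renderer settings seem to carry over), then:
#   ollama create qwen3.6-custom -f Modelfile
```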
Hey all 👋
I'm studying for my life coaching diploma and looking for people who would like free coaching in exchange for testimonials.
I have a really strong background in education and mental health, so if you would be interested, I would love to hear from you!!
I have a painting I made that I'm not happy with; just wanna ask what I can add to it to improve it.
So, I’ve always disliked money. Disliked managing it, the hustle, the bustle, and the idea of debt and credit. All of it completely disinterested me. For that reason, I don’t have credit cards and have only taken jobs that I felt were more soul affirming than money earning.
Last year, my partner got pregnant, so I decided to try to get a job where I could make a bit more in case my daughter wanted to explore hobbies and so on. However, the job I've gotten is insane and I'm feeling very fortunate. My earning potential is essentially uncapped. It's not gonna be millions, but I could be making £100k per year if I wanted.
I want to ensure I can retire comfortably, preferably early if possible. Nothing major, I want a simple life where I can just go into the countryside and bird watch or sketch.
We live within means of our basic salaries and I want to use the extra I make to prep for retirement and give my daughter a good start in life.
Bit about me:
Male, 33
Mortgage of 230k (our only debt)
6 months savings buffer.
The advice I’ve gotten is all over the place, and some guidance would be much appreciated.
Hey, friends! Please support my Product Hunt launch.
MeetingGlass is a video app where everyone sees everyone through a layer of virtual frosted glass. Hang out on video without feeling "on display." Free, no sign-up, no meeting limits, 60-second setup.
Your video call doesn't end. It transitions. You activate virtual frosted glass and go back to reading, working, or making tea. They go back to their day. Through that frosted glass between you, you can sense each other's presence.
The multiplier table GitHub quietly updated last week is the first visible crack in a subsidy model that was never sustainable.
Quick context for anyone unfamiliar: Copilot plans give you a monthly pool of "premium requests." Each model has a multiplier that determines how fast you drain it. Until recently, Opus 4.6 had a 3x multiplier. It's now 27x. Sonnet 4.6 went from 1x to 9x.
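The drain math is simple division. A quick sketch (the 300-request monthly pool is an illustrative assumption; the multipliers are the ones above):

```python
# Rough pool-drain math for Copilot premium requests.
# The 300-request monthly pool is an assumed illustrative figure;
# the multipliers (Opus 3x -> 27x, Sonnet 1x -> 9x) are from the post.
def requests_affordable(pool: int, multiplier: float) -> int:
    """How many requests to a given model fit in the monthly pool."""
    return int(pool // multiplier)

pool = 300
print(requests_affordable(pool, 3))   # Opus at the old 3x multiplier
print(requests_affordable(pool, 27))  # Opus at the new 27x multiplier
print(requests_affordable(pool, 9))   # Sonnet at the new 9x multiplier
```

At 27x, the same pool that used to cover a hundred Opus calls covers about a dozen.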
But the multiplier table is just the symptom. The actual disease is that the AI companies have been eating the difference between what compute costs and what you pay.
Anthropic is genuinely compute-constrained right now. Claude Code, agentic workflows, long-context sessions, these eat 10-100x more tokens per user than a simple chat completion. The infrastructure to serve that demand takes 18-24 months to build. Meanwhile, week-over-week compute costs for GitHub Copilot nearly doubled since January. Microsoft and Anthropic have been absorbing that gap. They're done absorbing it.
The 27x multiplier is closer to honest pricing.
Millions of employees have Copilot provisioned as a corporate benefit by IT departments that have zero visibility into model-level consumption. No quota dashboard, no model governance. Those employees have been running Opus on everything: code review, boilerplate, one-line completions, because why wouldn't you use the best model?
On June 1, GitHub moves to full usage-based billing. The multiplier hike is just the warning shot; what comes next is actual dollar charges hitting corporate cards, traced back to individual usage patterns that nobody thought to govern.
Some engineering manager is going to have a very bad Tuesday in early June explaining to finance why the AI budget is 15x over forecast.
Every major provider is running the same playbook right now. OpenAI, Anthropic, Cursor - the flat-rate era is being unwound in real time. The pricing structures being put in place now are designed to make heavy agentic usage reflect its true cost. If your team's workflow depends on treating frontier model access as essentially unlimited, that assumption has an expiration date and it's soon.
The free lunch is over. Adjust your defaults before June 1!
What I didn’t expect to see was one particularly isolated mine surrounded by bones belonging to local animals, and the grass being worn away to dirt near the entrance, as if large objects had been repeatedly dragged down through the entrance…
Hi guys, I've played League since I was a child, and these last 2 years I started to take it more seriously. I reached Master 300 LP this season, but I don't feel like I deserve the rank, maybe elo inflation or smth. Now I'm D1 and I feel like I'm stuck; I lose every matchup, and even when I win I feel like I'm behind. I can't lane properly, can't focus. I want to be better, but I don't know how to improve.
It is common to see people upset about being filmed in public in live streamers' and vloggers' content when they are filming in public. Their concerns are usually met with the response that there is no expectation of privacy when in public settings. While that is true, if the content is monetized, aren't they entitled to compensation for use of their image and likeness? Like how reality shows used to get waivers from everyone to film in public?
I’ve spent the last month testing every "autonomous agent" framework I could get my hands on (AutoGPT, CrewAI, LangGraph, etc.), and I feel like I’m taking crazy pills.
Every demo shows them "planning a vacation" or "coding a snake game," but the second you give them a non-linear task in a production environment, they either:
Hallucinate into a recursive loop.
Burn $20 in API credits doing absolutely nothing.
Forget the original goal by step 4.
Are we actually seeing real utility anywhere, or is the "Agent" hype just a way to sell more tokens? I want to be proven wrong—tell me a use case where an agent actually saved you time this week without constant babysitting.
Nvidia’s vice president of applied deep learning, Bryan Catanzaro, recently stated that for his team, “the cost of compute is far beyond the costs of the employees,” highlighting that AI is currently more expensive than human workers. This challenges the narrative that widespread tech layoffs (including Meta’s planned cut of ~8,000 jobs and Microsoft’s voluntary buyouts) signal an imminent replacement of humans by AI. An MIT study from 2024 supports this, finding that AI automation is economically viable in only 23% of roles where vision is central, and cheaper for humans in the remaining 77%.
Despite heavy AI investment—Big Tech has announced $740 billion in capital expenditures so far this year, a 69% increase from 2025—there is still no clear evidence of broad productivity gains or job displacement from AI. AI spending is driving up costs, with some executives like Uber’s CTO saying their budgets have already been “blown away.” Experts describe the situation as a short-term mismatch: high hardware, energy, and inference costs make AI less efficient than humans right now, though future improvements in infrastructure, model efficiency, and pricing models could tip the balance toward greater economic viability in the coming years.
I never understood why they fight so hard to save the last humans.
$0 → $69 from a single Reddit post. No ads. No investors. No pitch deck.
I posted on Reddit about finding forgotten subscriptions on my bank statement.
Two strangers read it, tried my app, and paid for it. No ads. No pitch deck. No growth hacking.
One bought a lifetime deal at $49. One bought a year at $20.
Total revenue from one Reddit post: $69.
The app is SubChecks - a subscription tracker that shows your real monthly spend and reminds you before renewals hit. Built it because I was paying $160/month more than I thought.
Free demo (no signup): Demo
Still at the very beginning. Posting the real numbers as they change. Next goal: $200 MRR.
My honor level dropped overnight from 5 to 4 after being level 5 for like 98% of the season. I wouldn't mind if this were the first time it happened, but it isn't. And neither is it the second time.
Now I wonder if there are other people who have had this happen too. Is Riot collecting all the bad reports until the end of the season and evaluating them right before it ends? Do we know anything about how often and when evaluation happens?
Goodbye Demacia Rising
It was entertaining, definitely redundant after a certain point, and the ending could have been better. Anyway, I really enjoyed it and I'll miss it a bit... and the banner was pretty cute.
Good job, development team! I hope that in the future we can play something similar or even a more complete sequel; I would definitely pay to play a Runeterra Civilization-style game.
I don't know how to take any of my work to the next level or make it feel complete and less redundant, while staying stylistically compatible at the same time.
So I took to my sketchbook and jotted this.
Is it a direction I want to go in?
Idk. I've always hated coloring beyond small embellishments, having either no understanding of it, not liking my pieces when colored, or just ruining them entirely, but I feel like I've used minimalism as a crutch.
Stocks
The long-term average annual return for the U.S. stock market, typically measured by the S&P 500, is approximately 10% before inflation. When adjusted for inflation, the real return is generally closer to 6%–7% per year. These figures represent long-term averages; annual performance frequently fluctuates significantly above or below this average.
Bank deposit
Although the average annual return on U.S. bank deposits is much lower, I know banks in other countries which give annual returns similar to U.S. stocks, around 9-10% annually. I am talking about legitimate banks with years of history, not some scam. A fixed amount is even guaranteed by the government, so in case something happens to the bank, the government covers it partially.
Considering this, has anyone thought of putting money in safe bank deposits internationally to get a fixed, guaranteed annual return rather than riding the volatile stock market, and forgetting about the market's ups and downs?
I want to emphasize that the deposit is made in another currency, so you take on some exchange-rate risk. However, in my case there was no major change over the last decade, and the rate even decreased last year, giving more benefit in USD terms.
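To make the tradeoff concrete, here is a toy calculation. All rates are illustrative assumptions, not figures from any particular bank or market:

```python
# Toy comparison: nominal vs. real return, and an FX-adjusted foreign deposit.
def real_return(nominal: float, inflation: float) -> float:
    """Inflation-adjusted (real) annual return."""
    return (1 + nominal) / (1 + inflation) - 1

def fx_adjusted(deposit_rate: float, currency_drift: float) -> float:
    """Deposit return in USD terms if the local currency moves by currency_drift."""
    return (1 + deposit_rate) * (1 + currency_drift) - 1

# A ~10% nominal stock return with 3% inflation lands near the ~6-7% real figure.
print(real_return(0.10, 0.03))
# A 9.5% local deposit with a 5% currency depreciation is only ~4% in USD terms.
print(fx_adjusted(0.095, -0.05))
```

The point of the second function is the post's own caveat: the "guaranteed" deposit rate is only guaranteed in the local currency, so the USD outcome still floats.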
Hello, I am building NextIsOnMe, an app that turns generosity into real connections.
I am looking for a micro-influencer, UGC creator, etc. for a (paid) shoutout video about our product. I can't believe this would be so challenging.
Is there any effective way to do it?
I tried it in practice: Shiv's lightning does not apply Sheen's on-hit damage. Is this a bug or not? I was thinking of building a Shiv and Lich Bane combo.
Found outside a goodwill, my guess is possibly a game, but no clue which
Consistency is a normal-conditions metric. Reliability is a stress-conditions metric.
An agent can keep the same tone, structure, and response pattern for hundreds of runs, then fail the first time context goes stale, a tool is unavailable, latency shows up, or instructions conflict.
The better eval question is not: does it behave the same?
It is: when it cannot behave normally, does it preserve the right invariants?
For agents, I care less about surface stability and more about what survives under shift:
Style consistency is easy to observe. Reliability only shows up under pressure.
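A minimal toy of that eval framing, with a made-up agent and a made-up invariant (nothing here is a real framework API):

```python
# Toy illustration of consistency vs. reliability. The "agent" is a stand-in:
# it uses its tool normally, and degrades explicitly when the tool is gone.
def agent(query: str, tool_available: bool = True) -> dict:
    if tool_available:
        return {"answer": f"result for {query}", "grounded": True}
    # Under stress, the invariant we want preserved: never claim grounding
    # it doesn't have -- degrade honestly instead of fabricating.
    return {"answer": "tool unavailable; cannot answer", "grounded": False}

def check_invariant(response: dict, tool_available: bool) -> bool:
    """Reliability check: grounded=True is only allowed when the tool ran."""
    return not (response["grounded"] and not tool_available)

# Consistency: same behavior across many normal runs.
assert all(check_invariant(agent("q", True), True) for _ in range(100))
# Reliability: the invariant also holds the first time conditions degrade.
assert check_invariant(agent("q", False), False)
```

The hundred normal runs are the easy part; the single degraded run is the one that distinguishes a reliable agent from a merely consistent one.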
I'm fixing my own problem with it actually :D
I built Rinhelp because it’s painful to keep shipping and dealing with constant support requests every day. You know, context switching and whatnot; it kills momentum.
With Rinhelp, you can stay in your flow state with coding agents while we find the root cause with evidence.
It’s built for founders and engineers who still handle support themselves.
Let me know what you think.
I’ve been messing around with Claude Code setups recently and kept getting confused about one thing: what’s actually different between agent teams and just using subagents?
Couldn’t find a simple explanation, so I tried mapping it out myself.
Sharing the visual here in case it helps someone else.
What I kept noticing is that things behave very differently once you move away from a single session.
In a single run, it’s pretty linear. You give a task, it goes through code, tests, checks, and you’re done. Works fine for small stuff.
But once you start splitting things across multiple sessions, it feels different. You might have one doing code, another handling tests, maybe another checking performance. Then you pull everything together at the end.
That part made sense.
Where I was getting stuck was with the agent teams.
From what I understand (and I might be slightly off here), it’s not just multiple agents running. There’s more structure around it.
There’s usually one “lead” agent that kind of drives things: creates tasks, spins up other agents, assigns work, and then collects everything back.
You also start seeing task states and some form of communication between agents. That part was new to me.
Subagents feel simpler. You give a task, it breaks it down, runs smaller pieces, and returns the result. That’s it.
No real tracking or coordination layer around it.
So right now, the way I’m thinking about it:
Subagents feel like splitting work, agent teams feel more like managing it
That distinction wasn’t obvious to me earlier.
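The way I'd sketch that difference in code (hypothetical names, not Claude Code's actual API):

```python
# Toy contrast between the two patterns. run() stands in for an agent session.
from concurrent.futures import ThreadPoolExecutor

def run(task: str) -> str:
    return f"done: {task}"

# Subagent pattern: split one task into pieces, run them, merge the results.
# No coordination layer, no state tracking.
def subagents(task: str, pieces: list[str]) -> list[str]:
    with ThreadPoolExecutor() as pool:
        return list(pool.map(run, pieces))

# Agent-team pattern: a lead owns a task board with states, assigns work,
# and collects everything back -- managing the work, not just splitting it.
def agent_team(tasks: list[str]) -> dict[str, str]:
    board = {t: "pending" for t in tasks}
    with ThreadPoolExecutor() as pool:
        for t, result in zip(tasks, pool.map(run, tasks)):
            board[t] = result  # lead updates task state as results arrive
    return board

print(subagents("feature", ["code", "tests"]))
print(agent_team(["code", "tests", "perf"]))
```

The task board is the part that only exists in the team version: it is what makes states and inter-agent communication possible at all.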
Anyway, nothing fancy here, just writing down what helped me get unstuck.
Curious how others are setting this up. Feels like everyone’s doing it a bit differently right now.
Running Spark jobs on Databricks with 50+ stages per pipeline. Debugging is still almost entirely manual. Spark UI and event logs help but when something breaks it means checking driver and executor logs to find what happened.
Tried verbose logging, explained plans, Ganglia. Once jobs are chained it turns into moving between UIs and logs just to trace one issue. Around 10TB+ daily, mostly PySpark with Delta and a few custom UDFs.
Been looking at whether an agentic Spark copilot would change this. The pitch makes sense, something that reasons across stages and jobs instead of just surfacing metrics. But not sure if an agentic Spark copilot delivers on that in practice or if it's still mostly demos.
Need opinions from people who've used one: is it worth it, or is manual debugging still faster?
so markdown is f*cking TERRIBLE now

markdown had its run, and it was great, but after diving into how markdown works I've begun to realize how messy, unorganized, and just strangely made it is (article on this btw: https://bgslabs.org/blog/why-are-we-using-markdown/ )

and besides, why are we using something that wasn't optimized for agents for basically 100% of the work we do with them?

I've tried basically all of the markdown alternatives and they all have similar flaws: MDX, AsciiDoc, etc.

so it got me thinking, and I decided to spend the next 30 HOURS STRAIGHT of my life (not lying lol) rethinking how markdown should work from scratch. I tried to do it in a way that isn't sloppy, but I would love your contributions on the GitHub to make this the best it can be.

I genuinely believe this is substantially better than markdown. It is NOT perfect at all, but I've kind of fallen in love with this "language" already.

I'm calling it "lessmark", with the file extension .lmk (or alias .lessmark) for now, and would love your feedback.

the site has way more info about this, plus playgrounds you can try:
https://lessmark.org/
github: https://github.com/jasperdevs/lessmark
I kept running into the same issue:
- app works locally
- deploy it → everything breaks
- .env missing
- database not connected
- random config issues

so I built a small tool to handle that part. You paste a GitHub repo, add your env once, and it gets the app running online.

still early, but working.
curious:
- did it detect your setup correctly?
- anything confusing in the flow?
Disappointed with the sensation, I cut another hole into her abdomen and tried again.
I’m building a fully local RAG system for law firms and could use some model recommendations.
Hard constraint: the whole system needs to run locally on machines with around 8GB unified memory. No cloud fallback, no external API calls, no telemetry. The use case is legal document Q&A where answers need to be grounded in uploaded matter documents with citations/provenance.
Current setup:
The model is usable, but I’m trying to see if there’s a smaller model that gives better or more reliable answer quality for this kind of workflow.
What matters most:
I’m less worried about general chat ability and more focused on document-grounded legal Q&A.
Models I’m considering testing:
For people running production-ish local RAG:
Would you stick with Gemma 2 9B, or is there a newer/smaller model that performs better for grounded document QA under tight memory constraints?
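Not the stack above, but a toy sketch of the citation-grounding requirement, using token overlap in place of real embeddings; the document names and passages are made up:

```python
# Toy sketch of citation-grounded retrieval (pure stdlib, illustrative only --
# a real pipeline would use an embedding model, not token overlap).
from collections import Counter

docs = {
    "nda.txt:p3": "Confidentiality obligations survive termination for five years.",
    "msa.txt:p7": "Either party may terminate with thirty days written notice.",
}

def score(query: str, text: str) -> int:
    """Crude relevance: count of shared lowercase tokens."""
    q = Counter(query.lower().split())
    t = Counter(text.lower().split())
    return sum((q & t).values())

def retrieve(query: str, k: int = 1) -> list[tuple[str, str]]:
    """Return top-k (citation, passage) pairs so every answer carries provenance."""
    ranked = sorted(docs.items(), key=lambda kv: score(query, kv[1]), reverse=True)
    return ranked[:k]

cite, passage = retrieve("how long do confidentiality obligations survive")[0]
print(cite)  # the matter-document citation the answer is grounded in
```

Whatever model ends up generating the answer, keeping the (citation, passage) pair attached to every retrieved chunk is what makes the legal Q&A auditable.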
Took this from some guy on Twitter named @philosomalia. Idk him, but his bio said political analyst, so I don't feel bad stealing it for now.
To clarify: the maximum length of the sleeves has to be what one normally considers short, i.e. the same length as a standard short-sleeved t-shirt. It can be any top, even one without sleeves, but if it does have sleeves, they can't exceed that length. You can't wear a jacket with long sleeves over it, or anything like that.
The same idea basically applies to bottoms, whether pants, shorts, a dress, etc. They can't be longer than the normal length of shorts. Again, you can't wear longer pants over them, but you can wear shorter stuff, e.g. swim in a speedo.
Assume that climate or weather will never be an issue for your choice and won't negatively affect you. With that in mind, which permanent style would you prefer?
The brief was a 15-second lifestyle UGC ad for a pillow and bedding brand. The color story across the whole piece had to stay in a very specific range of sky blues and warm whites. That sounds manageable until you're generating multiple distinct scenes: a living room product stack, a bedroom lifestyle shot, a ceiling-angle action shot, a tactile fabric close-up, and a bedroom wide with the full pillow arrangement. Every scene has to feel like it belongs to the same visual world. Any shift in the blue and the whole thing falls apart in the edit.
The script was written first. It follows a skeptic-to-convert structure compressed into 15 seconds. "Didn't think a pillow could change my sleep. Until I tried this." From there it moves through softness validation, a tactile claim, a feature sequence, and a brand close. Classic UGC testimony arc. The challenge is that you're asking the viewer to go from skepticism to conversion in about the time it takes to read a tweet, so every scene has to be doing emotional and informational work at the same time.
The toss shot at the 3-second mark is the action beat that breaks what would otherwise be a sequence of static product presentations. A pillow mid-air from a low ceiling angle, the presenter's hands just entering the bottom of the frame. That one took about 8 iterations to get the physics feeling natural. The easy failure mode is a pillow that looks composited in post rather than actually thrown. Getting the fabric to read as responding to air and gravity rather than floating takes more prompt specificity around motion blur, trajectory, and throw direction than you would expect going in.
The bedroom scene at 6 seconds is where the avatar presenter appears in full frame. She is wearing a blue outfit chosen to harmonize with the product palette rather than contrast with it. That decision matters more than it sounds. In lifestyle UGC for this kind of category, the presenter should feel like part of the product world, not a separate human element placed in front of it. The Cloud-branded bag she is holding reinforces product identity without needing another explicit product shot. Both elements are in the same scene doing the same job.
The tactile close-up at 9 seconds is a hand pressing into the fabric. No face, no avatar, just texture and the material responding to pressure. That shot carries more weight than people give it credit for. Audiences processing lifestyle ad content respond to tactile cues in a different register than they respond to visual beauty shots. Watching a hand press into something soft and the fabric deform and recover gives a felt sense of the product that no wide bedroom shot can achieve. It is the most credibility-building frame in the whole piece.
The end card is a clean white cut with the brand wordmark in a muted slate. No animation, no motion. It earns the contrast after the warm natural light of the bedroom sequence. Sometimes the most considered visual choice is the one where you do nothing. I ran all of this in Atlabs
Total piece is 15 seconds. No scene gets more than about 4 seconds, which keeps the pacing feeling native to short-form even though this is built for paid placement rather than organic.
I’ve noticed this happening twice. What’s up with this? Both times within the span of a week.
For a predictive language model, I can't imagine any scenario where an Arabic character or word would be the next most likely thing to say…
I've never spoken to GPT in Arabic or had any conversations that would feel related.
I'm afraid that he's still into his ex, but then again, I do overthink. It's a new relationship and I don't wanna ask yet. What are the signs?
Had to zoom it in but it was sweet looking
I’m just not trying to get ripped off on a sex toy…
Prompt: "Remove any potentially identifying data from this image"
Could be useful for stuff that doesn't have extremely sensitive data (which you should NOT be feeding ChatGPT)
I’ve been trying to use Reddit to get users, but I keep running into the same problems:
– Not sure which subreddits actually allow posts from newer accounts
– Posts getting removed without clear reasons
– Sometimes I get engagement, sometimes nothing at all
I’m curious — what’s been the hardest part for you when posting on Reddit?
Was it figuring out where to post, what to say, or dealing with rules/mods?
The rules being that you cannot get a divorce, you cannot end your relationship and you cannot cheat on your partner. If one of you were to pass you cannot remarry or find a new lover. You are sealed together like glue.
Is there a subreddit for people to watch and post videos of home robot vacuums first-person video footage of its daily adventures?
This happened today in a mayhem game. Early game fight Rell only had Drop Kick. Instant Penta kill. The chat went wild it was hilarious.
Looking for someone to make three really small changes/touchups to a photo for about 3-4 dollars. Don’t really want to post photo here, so if you’re willing to, i’ll DM you. Let me know! Thx
$370m worth of PUMP burned and 50% of net income now set to buy back and burn for the next year.
how many actual consumer crypto products are even making enough money to do that in the first place?
the team said the burn covers all previously repurchased tokens, about 36% of circulating supply, and that the new mechanism will route half of net income from its core product lines into open market buybacks through an irreversible smart contract.
that’s why i’m honestly more in favor of the model than most people want to admit.
not because pumpfun is morally pure but because crypto keeps pretending fundamentals are the story when culture has been the real product for a while now. pumpfun tapped directly into that. instant coin creation, instant distribution, instant speculation, instant attention.
one of the only products that matched how this market actually behaves instead of how people on here keep pretending it behaves. Bloomberg described it as one of the biggest drivers of memecoin growth on Solana, and multiple 2026 reports say it has already crossed $1 billion in cumulative revenue.
the part people miss is that consumer crypto almost never makes real money.
pumpfun built a product people actually used, even if what they used it for was peak brain rot. then it monetized that behavior better than most of the sector monetizes anything. if crypto still lacks clean fundamentals, why would the winning products not be the ones that turn speculation itself into the business model instead of fighting it?
people can hate that, but the market kept paying for it anyway. so is the real problem pumpfun, or the fact that one of the few products with actual revenue finally exposed what this industry’s strongest consumer use case still is?
TLDR: tool parameters using the common JSON Schema pattern `anyOf: [$ref, null]` are rendered into the prompt as empty `type` fields. This strips the useful schema information before the model sees it.
--
Long, rambling version:
Gemma 4 was having issues calling my custom MCP tool on more than 3 inference engines, while Qwen3.5 and gpt-oss-20b were doing fine.
I guessed it was either a chat template issue or an inference library issue on an edge case, and figured time would sort it out, since many people were happy with Gemma 4 as an agent.
It didn't. After at least 2 weeks I had no choice but to investigate myself.
What I did:
{"anyOf": [{"$ref": "#/$defs/SomeObject"}, {"type": "null"}]}
where there is no top-level type. The useful structure is inside anyOf and $defs. The template drops anyOf, $ref, and $defs, then renders it as type: "".
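A minimal sketch of the kind of pre-processing that works around this (my own illustration, not necessarily the fix in the PR): collapse the optional `anyOf: [$ref, null]` pattern into the referenced schema before the template sees it, so a template that only reads the top-level `type` still gets useful structure. The function name and `nullable` marker are my own choices.

```python
def flatten_optional(schema: dict, defs: dict) -> dict:
    """Collapse {"anyOf": [{"$ref": ...}, {"type": "null"}]} into the
    referenced schema, marked nullable, so chat templates that only
    look at the top-level "type" field don't render an empty string."""
    variants = schema.get("anyOf", [])
    refs = [v for v in variants if "$ref" in v]
    nulls = [v for v in variants if v.get("type") == "null"]
    if len(variants) == 2 and len(refs) == 1 and len(nulls) == 1:
        # "#/$defs/SomeObject" -> "SomeObject"
        name = refs[0]["$ref"].rsplit("/", 1)[-1]
        resolved = dict(defs[name])   # shallow copy of the referenced def
        resolved["nullable"] = True   # preserve the optionality
        return resolved
    return schema  # anything else passes through untouched
```

Running each tool parameter through something like this before templating keeps the `$defs` structure visible to the model.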
Anyway, I made a PR on the HF repo google/gemma-4-31B-it.
Meanwhile, you can use this jinja:
https://pastebin.com/p9z3BAC0
CometChat's Compact Message Composer, part of the CometChat UI Kits, brings rich text formatting to your chat app - bold, italic, strikethrough, underline, links, bullet points, blockquotes and more. Enabled with a single prop. Comes with a floating toolbar, keyboard shortcuts, and full control over how messages are sent. Drop it into any framework and ship a richer chat experience in minutes.
It'll be great if you could spare a minute to upvote on PH: https://www.producthunt.com/products/cometchat?launch=compact-message-composer-by-cometchat
Been following the DreamLite GitHub repo for a while and keeping an eye on the weights release. The author just told the community that early access is available to anyone who emails them, so I went ahead and requested it and got access pretty quickly. Wanted to share it here so others can grab it too before the full public release, because I think it deserves more attention.
What is it?
DreamLite is a 0.39B diffusion model that does both text-to-image generation AND text-guided image editing in a single network, no switching models, no cloud. It runs fully on-device, and the demo shows it generating a 1024×1024 image in ~3 seconds on an iPhone 17 Pro using 4-step distilled inference.
What's available right now?
- Inference code ✅
- LoRA fine-tuning script ✅
- Online demo on HuggingFace ✅
- Model weights currently in safety review, but early access is open to anyone
How to request access:
Email [klfeng1206@outlook.com](mailto:klfeng1206@outlook.com) with the subject line "DreamLite Access Request" and include:
Your name & affiliation (personal/independent is totally fine)
A brief description of how you plan to use it
I requested and got access pretty quickly. Unfortunately I don't have the hardware to properly test it myself, so hopefully someone here with a capable machine can put it through its paces and share results with the community!
Links:
- GitHub: https://github.com/ByteVisionLab/DreamLite
- HuggingFace demo: https://huggingface.co/spaces/carlofkl/DreamLite
Would love to see community benchmarks and comparisons if anyone gets it running! ☺
2394 CE to Present, Goddess of Fertility.
Hi, can you please tell me when the ARAM challenges reset? For example, the All Random All Champions challenge must have reset in the last couple of months. If I get ARAM God, do I get to keep the title permanently?
Addition: is there an absolute madman (or woman, or whatever) out there who has scored an S on ALL champions?
This is probably gonna be a bit of a ramble but oh well, hopefully it makes sense to someone.
Over the past couple of weeks I’ve kinda come to the realisation that I’m a bit of a loner and a loser. Not in the sense of me not liking myself, but in the sense that I’ve realised nobody else does?
I never really had any friends in high school so I learned to be super emotionally independent and happy with my own time, I’ve become Muslim, and after high school I went travelling on my own to all my dream countries, which was amazing. I found what I want to do in life, helping women and children who are victims of war crimes, and I enrolled in uni to help me reach that career.
My uni experience has been amazing, I love my classes and I love my lectures and I find everything super interesting and enjoyable. But I’ve slowly come to realise that nobody wants to talk to me or even be friends with me. I wouldn’t consider myself an ugly guy, but I’m definitely not good looking, maybe a 6.5/10, I wear clothes that are comfortable, I keep myself tidy and clean, and I try and be polite to everyone I meet. I have my hobbies, I enjoy activism for human rights as well as reading and watching sports.
But yet, nobody wants to talk to me? I tried joining clubs at uni and I try and talk to people, but it seems nobody could give af about being involved in my life apart from casual conversations. I don’t drink alcohol or anything like that so unfortunately clubbing isn’t a solution for me. Heck, I even tried making posts on reddit and facebook groups for my city and universities near me basically begging for friends or people to talk to, but unless I basically force people to talk to me it’s pretty clear nobody gives af. I decided to stop texting first and it’s been about a week since anyone has even messaged me.
I've used gouache
If the text in Notepad comes out as gibberish, has missing spaces, or shows symbols where letters should be, an ATS (Applicant Tracking System) will automatically reject your application because it can't read your data. Many "fancy" templates from Canva and complex multi-column layouts break the parser. Recruiters don't even see your resume; it just shows up as a blank page in their system. Always test your PDF with a simple text-only copy-paste before applying. If the bots can't read it, a human never will.
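The copy-paste test can also be automated. A rough sketch of the same sanity check in code (the function name and thresholds are my own; you'd get `text` from a PDF text extractor such as pypdf's `extract_text()`):

```python
def text_looks_parseable(text: str) -> bool:
    """Rough heuristic on text extracted from a resume PDF: flags the
    failure modes an ATS parser typically chokes on."""
    if not text.strip():
        return False  # blank extraction: the parser sees an empty page
    if any(len(w) > 40 for w in text.split()):
        return False  # missing spaces: words fused by a broken layout
    junk = sum(c == "\ufffd" or not (c.isprintable() or c in "\n\t")
               for c in text)
    return junk / len(text) < 0.05  # gibberish / encoding damage
```

If this returns False on your exported PDF's text, assume an ATS will reject it too.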
I've been using Claude daily for client work and wanted to share what actually works vs what sounds good in theory.
"You are a senior consultant. Write a project proposal for [client type] with this scope: [scope]. My rate is $[X]. Make it professional, outcome-focused, and under 400 words."
Result: Cuts proposal writing from 45 min to 5 min. Clients can't tell the difference.
"A client is asking for [extra work] outside our original agreement of [scope]. Write a professional response that acknowledges their request, reminds them of our scope, and offers it as a paid add-on at $[rate]."
Result: Never feel awkward about scope creep again.
"Write an email to a long-term client announcing my rate is increasing from $[X] to $[Y] effective [date]. Warm but confident tone."
Result: I raised my rates 30% last month with zero pushback.
"Write a cold outreach email to [company type]. My service: [X]. Their likely pain point: [Y]. Keep it under 120 words, no fluff."
Result: 3x better response rate than my old templates.
"I have these active projects: [list]. Help me prioritize my week and create time blocks for deep work vs admin."
Result: Stopped losing hours to context switching.
Happy to share more if useful. What Claude prompts are you using for work?
The photo generation is unmatched. Fantastic at replicating iPhone-type photos. I’m a bit stunned and worried.
“Generate at iPhone camera quality” is very lackluster as a prompt, and yet it understands what that means very well.
After they fixed the issue that forced Claude Code to produce fewer words when doing tasks (mentioned in their postmortem), I've noticed Claude Code now replies with so many words that I'm too lazy to read it, or sometimes don't even know what it said 😄. It makes me doubt my own reading skills.
Hey everyone, I’m an 18-year-old college student trying to cover tuition and everyday college expenses, so I decided to build something that solved a problem I was personally dealing with.
I always wanted to build muscle and get in better shape, so I got a gym membership… but honestly I had no motivation, no discipline, and no real direction. Hiring a personal trainer wasn’t realistic for me financially, so I thought:
What if I built an AI fitness app that acts like a personal coach?
So I created an app with features like:
• AI fitness coach for personalized workouts & motivation
• Progress tracker and pilates
• Hydration tracker
• Meal plans
• Daily goals
• Social sharing to post progress with friends
• Guidance for people who feel lost in the gym
I started using it myself and the results have actually been great, which made me want to share it with others in the same position.
If anyone wants to try it, I’m offering 3 days free, then $9.99/month after that.
Would genuinely appreciate any feedback or support from the community 🙏
With her new album “petal” coming out this summer, do you think we’ll be getting a second Ariana Grande double duty episode for season 52? Her fourth time hosting, fourth time as musical guest, and her fourth consecutive appearance since season 49?
I started using ChatGPT again last week, but the experience has been incredibly frustrating. No matter what I say, it seems to start every response by contradicting me. Even when I state objective facts, it throws a bunch of counterarguments my way just for the sake of arguing.
This leads to these pointless, circular conversations. For example, if I say I want to enter a certain industry, it lectures me on how competitive it is. If I then agree and say the competition is fierce, it flips and says that high competition doesn't mean it’s impossible to succeed. It’s a never-ending loop of "well, actually..."
Am I the only one dealing with this constant pushback? It feels like the AI is programmed to disagree with me regardless of the logic.
It appeared ages ago underneath where the lens and frames meet. Cannot get it off.
Where did claude code go?
I'm 28 years old with a Bachelor’s in IT, and I feel completely lost. I live in Syria.
For years, I’ve been saving money. I’ve managed to scrape together $4,000, but it has come at a huge cost. I literally live on bread and the cheapest food possible just to put a few dollars away.
The problem is my salary is only $110 a month. At this rate, I can’t even afford to rent a place, let alone find a girl and get married. In my culture, you need to have your life together before you can even think about a relationship, but at this rate, I won’t be "ready" until my late 40s.
It feels like a trap. If I keep saving like this for another 10 years, maybe I’ll have enough for a tiny house, but I’ll have wasted my entire youth eating bread in a dark room.
My questions for you guys:
I’m starting to wonder if some people are just meant to live this life without ever achieving anything. I’m lost for real. Any advice is appreciated.
TL;DR: Earning $110/mo with an IT degree. Saved $4k by living on bread. Feel like life is passing me by. Help.
Hi, AesSedai here -
I've put up a PR to support the text-to-text inference of MiMo V2.5 with llama.cpp (and should also support Pro, will work on those quants after finishing V2.5): https://github.com/ggml-org/llama.cpp/pull/22493
I've also put some quants up on HF (https://huggingface.co/AesSedai/MiMo-V2.5-GGUF), the Q8_0 as well as my usual MoE-optimized quants (for those unfamiliar, it's basically Q8_0 or Q6_K for most of the model, and quanting the FFNs down). There is a weird NAN issue with the Q4_K_M that I'm looking into, I believe it's the ffn_down_exps tensor on layer 47 (edit: fixed the NAN issue, uploading the working Q4_K_M now!)
Bartowski, Ubergarm, Unsloth, and the rest of our lovely llama quanting cartel should be following up with their own quants in the near future.
Since this is pre-merge though, there might be some changes but hopefully this PR gets reviewed and merged soon. Please let me know if there are any issues.
LLM tools are developing rapidly, and different users have their own favorites. The various tools for content creation, coding, studying, and productivity each offer different capabilities. I want to find out which tools people work with every day and which have proven the most valuable.
So I've been playing around with Wan 2.2 and Comfy and started to get into SVI chains, where checkpoints are loaded at the beginning and then specific models at every link. It worked fairly well. I was able to go link by link using groups: turn one on, run, and if I liked the results turn on the next one, and the workflow would pick up from the last run. It was great for turning linked videos into one smooth long video. After a Comfy update, that seems to be gone, and now the workflow restarts every time I run. Did I miss something? Is there a setting within Comfy that got turned off? Any help would be great.
I'm starting a new company (I will not promote) and I want to hear how you're actually running operations that have little-to-no "human in the loop".
Tools like OpenClaw are great for personal use, but how are you leveraging tools/systems to truly get work done to completion?
I got tired of Claude stopping every 2 seconds to ask "should I do X" or "want me to use approach A or B" when I just wanted the thing done.
So I made autopilot. You give it one goal and it just goes:
/autopilot ship issue #42
/autopilot finish the checkout flow
It picks its own answers, writes every decision to a markdown file as it goes, then gives you one summary at the end. You review once instead of 40 times.
I noticed a change in how I use AI tools. They make it very easy to get answers and ideas, and I can even get structured outputs right away.
Because they are so easy to use, I have caught myself moving forward without really thinking things through.
Before I started using these tools, when something was hard I had to sit with the problem for a while. That was frustrating, but it also helped me understand things more clearly.
Now I am tempted to skip that part and just use the AI output as a starting point. Sometimes I even use it as my final answer. That can speed things up a lot, but other times I feel like I am sacrificing depth of knowledge just to get things done quickly.
I do not know if I need to learn how to use AI tools better, or if AI tools are changing how I think and solve problems.
How are other people using them? I am curious.
Do AI tools clear your mind or just speed up the work?
Hey, I don't know if it's just me or everybody faces the same problem
A few days ago I decided to learn RAG. I dived into YouTube videos, read tons of articles on RAG architecture, data ingestion, chunking, embedding, the various methods and algorithms for each, and then retrieval and all that.
Had fun learning and building pipelines but yeah everything was spoon fed to me with all the resources available
And now when it was time to test those skills, I just don't know where to get data
My idea was nothing innovative but simple which was building a GraphRAG (ofc brainstormed with Claude)
Do I need to learn data science now to actually understand how to handle data?
Edit: I am thinking in the long term. For example, publicly available data might not always exist and you'd need to build the dataset yourself. How do you do it in such cases?
How do you all do it?
I’m a self-taught developer. I don’t just like coding; I like the feeling of being stuck on a complex problem and finally dismantling it.
The Offer: I will work for you for free for 3 days.
What’s the catch? There isn’t one. I want the challenge. I want to dive into a new architecture, understand the logic, and deliver a high-quality solution that proves I can move fast. I'm a resilient, quick learner looking for high-intensity problems to solve.
What I’m looking for:
The Choice: After 3 days, if you love the work and want to extend the project into a paid engagement, we can talk. If not, the code and research are yours to keep.
DM me or comment your "problem from hell" below. Let's see if I can break it.
I really thought I had some skill issue.
They came with my cocktail smoker but I’m not sure what they are used for, no instructions were included and the website isn’t much help either
Anybody else just see that massive meteor shoot across the western sky? Bright orange with a green tail. Tell me I’m not the only one
Running a proofreader Chrome extension called WriteGooderer! It uses Chrome's built-in AI in the browser. Honestly, it worked out pretty well.
Repo: https://github.com/victorhuangwq/WriteGooderer
try it out! let me know what you think
It took a while, but it's finally here, the new and improved v2 of Qwen3.6-27B Uncensored Heretic:
Safetensors: https://huggingface.co/llmfan46/Qwen3.6-27B-uncensored-heretic-v2
GGUFs: https://huggingface.co/llmfan46/Qwen3.6-27B-uncensored-heretic-v2-GGUF
Comes with benchmark too.
Find all my models here: HuggingFace-LLMFan46
- They are real improvements but they're multi-week architecture work...
- ??
- Fair. Let me read the codebase properly and just build it.
I lost my phone while hiking in an area full of dead leaves and it seems like getting a metal detector to find it is my best bet. It's a Samsung S20, which I read has an aluminum frame, but the internal components are likely a few other types of metal as well, which I've heard might limit the ability of single-frequency detectors to home in on it as well as a multi-frequency detector would. The phone is definitely not underground, but it might be under up to a foot of dead leaves.
I have no way of renting a metal detector and my research so far points to these two models as the best entry-level metal detectors with the main difference being the X-Terra is single frequency and the Vanquish is multi-frequency. My main question is would going for the X-Terra hinder my search in any type of way?
Also does anyone have any tips for a total beginner in this context? Since it's close to the surface save for some dead leaves, could I scan the metal detector out in front of me to cover a wider radius more quickly or would I have to scan it completely perpendicular to the ground over the entire area?
Thank you all for any help or advice you can provide, it is all very much appreciated!
P.S. If anyone is in or around Montreal, Quebec and would be willing to help me find it, I'd be more than happy to pay quite a hefty reward. Please reach out if you're interested!
Do men at that age care more or less about looks?
I've been using Claude Code, Codex, Gemini CLI, and Cursor daily. After a few months the only visible sign of all that usage was a token count somewhere in a dashboard I never opened.
That felt wrong. Not because I needed analytics — I didn't. But because there's something oddly invisible about how much you're actually working with these tools. The effort just vanishes.
So I built Tokenmon: a small macOS menu bar companion that converts AI coding activity into creature encounters. Each tool — Claude, Codex, Gemini, Cursor, others — spawns a creature. Different usage patterns produce different creatures. Your collection grows in the background while you code.
It's not a productivity tracker. There's no coaching, no streaks, no goals. Just a quiet ambient loop that makes your AI usage feel a bit more tangible.
A few things I cared about while building it:
Right now it supports Claude Code, Codex, Gemini CLI, and Cursor. DMG and Homebrew install paths work.
Quick demo : https://www.youtube.com/watch?v=d49hy6cjauk
Hi Folks,
I am a bit new to this. My PC is a 7950X with 128 GB RAM, and I have an RTX 3060. Will Qwen3.6 27B and Gemma 4 31B work on my PC? What is your feedback? What kind of setup should I have?
Central Columbia city
Scared the living fucking shit out of me my god
Someone if you got a picture send it to me
We are a staffing company in India and we want to move into contract staffing (engineering services) in India.
India currently has around 200+ engineering service companies and 40+ product companies in the VLSI space.
We want to break into this. We feel current vendors take too long to fill positions, and most of them use the same resume database.
So we want to build our own database by going through all the people on LinkedIn from the 200+ services companies, around 1 lakh+ (100,000+) profiles: name, role, skills, phone number, email, years of experience, current location, most recent joining month, etc.
For this, we want to build an AI agent. Is it possible, or is there a better way to build it?
Perhaps someone will find it useful. I've just found tinkering with it more fun than actually going through my stuff and doing the work it was supposed to assist with... https://github.com/lukaszliniewicz/catlabel
I always assumed property tax bills were just…final. Like the city assessed it, that’s the number, move on. Recently, I noticed that's not true at all. Apparently, you can appeal your assessment yourself without hiring a consultant, and the process isn’t as buried in paperwork as I assumed. I started looking into comparable homes in my area and noticed some houses on my street that are pretty similar to mine are assessed lower which made me wonder if my assessment is even accurate.
Has anyone here gone through a property tax appeal? Did it make a meaningful difference and was it worth the effort? Trying to figure out if this is worth pursuing or if it’s one of those things that sounds good but goes nowhere in practice?
Can someone please explain to me the logistics of how UTM and ADSPs that are certified under Part 146, will enable autonomous BVLOS flights without waivers?
A world atlas from the 1940s that I've had for years.
I’ve met many SMB owners and one common problem is manually logging every invoice into a spreadsheet at the end of the month. People always forget some, numbers are off, and it takes forever.
I vibe coded something to handle it instead. You upload an invoice image from a dashboard (photo, receipt scan, screenshot, etc.) and an AI vision model pulls out the vendor, date, amount, category, and invoice number automatically. Everything gets saved to a Google Sheets spreadsheet you own. No third-party database, just your sheet.
I also set up a cron that fires every Monday morning, reads the full invoice history, and has an AI write a short financial insights report: weekly totals, top vendors, spending by category, and a couple of cost-saving suggestions. It gets sent straight to Slack and Telegram so I actually read it.
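One thing worth doing between the vision model and the sheet is validating the model's output before it becomes a row, since vision models occasionally return malformed or incomplete JSON. A minimal sketch of that step (the field names and `InvoiceRecord` type are my own illustration, not the actual workflow's schema):

```python
import json
from dataclasses import dataclass

@dataclass
class InvoiceRecord:
    vendor: str
    date: str
    amount: float
    category: str
    invoice_number: str

REQUIRED = ("vendor", "date", "amount", "category", "invoice_number")

def parse_ai_output(raw: str) -> InvoiceRecord:
    """Validate the vision model's JSON before appending a sheet row."""
    data = json.loads(raw)
    missing = [k for k in REQUIRED if k not in data]
    if missing:
        raise ValueError(f"model output missing fields: {missing}")
    # models sometimes return the amount as a string like "42.50"
    data["amount"] = float(data["amount"])
    return InvoiceRecord(**{k: data[k] for k in REQUIRED})
```

Rejecting bad extractions here (and re-prompting or flagging for manual entry) is what keeps the weekly report from being built on garbage rows.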
Total setup is maybe 2 minutes. Sharing the workflow in the comments if anyone wants to try it.
I would be happy to help you out in creating custom solutions for your use cases as well. Curious whether others are tracking business expenses manually or have something automated and if so, where does the AI extraction actually fall down for you? For me it's handwritten receipts, those still trip it up sometimes.
I am 21 years old about to get out of the military with other than honorable conditions. I made a few mistakes and now I’m gonna pay for them. When I get out I’ll have around $3k to my name with $2500 in credit card debt and about $17500 in debt on a vehicle that broke and I couldn’t pay for and did a voluntary repossession. I’m not totally sure how much I owe out of the $17500 since I gave it back but I’m sure it’s at least a few thousand. When I get out of the military I don’t think I will be receiving benefits and I don’t know where I will be living. I have no vehicle or assets. Is it too late to save myself? How do I bounce back from my mistakes? Just looking for some insight.
Hipfire local dev lab coming together. MS-S1 MAX (Strix Halo, RDNA 3.5) + R9700 (RDNA 4 Pro) just landed. 9070 XT and 6950 XT incoming.
With the 5700 XTs, 7900 XTX, and Skillfish already here, that's every dp4a/WMMA capability tier AMD has shipped:
- no dp4a: 5700 XT, Skillfish (gfx1013)
- dp4a: 6950 XT
- WMMA: 7900 XTX
- iGPU+WMMA: Strix Halo
- RDNA 4: R9700, 9070 XT
Excited to see how much perf I can squeeze out! Also glad I’ll be able to validate PR’s against any RDNA target. Hipfire is just getting started!
Generated this with one prompt. We’re actually cooked.
Please be patient this line following car project was from my first ever robotics course in 2017 and I dropped out of CPE in 2019 bc of covid/mental health so my engineering brain has been mostly off since.
I’m planning on reusing an Arduino Mega 2560 for an LED/lighting Lego project. I'm planning on stacking longer Technic bricks, like in the last picture, to create a grid of studs as pixels on an LED screen, with individual LEDs through each hole lighting each stud. The end goal is to display text on a screen (likely a word at a time, because space is somewhat limited). I’ll figure out light spill when I get there, if needed.
I’m going to disconnect the servo microcontroller and the RGB sensors and just work off the arduino but wondering the best way to do some basic tests to even see if the arduino still works.
My plan is to start by watching videos on very basic Arduino use, like lighting an LED, then move forward, but any project-related tips or beginner mistakes to avoid would be much appreciated.
I remember how the breadboard functions but am trying to wrap my head around what I’ll end up using once I have a working circuit and software.
I will be buying LEDs that have longer legs but am going to practice with these old ones if they still work.
In the class we used C++, but I took most of my other courses in python (intro to programming, data structures etc) and am more familiar with the syntax. Should I just use python since I’ll likely be more familiar?
I’m 24, originally from Bulgaria, and I’ve been living in the Netherlands for a little over 5 years.
I moved here in 2020 to study Automotive Engineering, specializing in powertrain. Last year I graduated and recently started working full-time as a Validation Engineer.
The job is good—maybe not my dream role yet, but solid experience, decent pay, and a strong starting point for my career. Financially and professionally, I’d say I’m doing alright for my age.
The strange part is that now, after finally reaching the “study → graduate → get a job” goal, I realized I may have neglected the social side of life.
During university, many people either dropped out early or moved away after graduating. I found a Bulgarian student community (I’m Bulgarian myself), but over time I noticed that many connections were temporary or superficial.
I prefer a small circle with real trust rather than many surface-level friendships, and I did manage to build that—but most of those people eventually moved to other cities or left the Netherlands completely.
Now I’m in a stable place career-wise, but socially it feels like I’m starting from zero again.
I work full-time, go to the gym, I’m into calisthenics, football, cars, going out, and generally staying active. I enjoy being around people, good conversations, and actually living life—not just working and going home.
That’s why weekends sometimes feel strange. I have structure, discipline, and future plans, but I sometimes feel like life became too focused on responsibilities and not enough on actually enjoying the present.
I’m not writing this as a complaint—I know objectively I’m doing well, and many people have bigger problems. I’m more curious whether others in their mid-20s have experienced the same thing.
How did you rebuild your social life after university?
How do you make real friendships as an adult, especially in a foreign country?
And how do you avoid falling into the cycle of only work, gym, sleep, repeat?
Minimising visual resolution, and haptic feedback in the simulation was a genius stroke, as the human minds drifted unfeeling in a black void for centuries, slowly unraveling.
I wanted to share an architectural prompt/API experiment I’ve been working on to solve a massive headache in enterprise software development: We want to use frontier models like Claude for reasoning, but sending proprietary business logic and hardcoded secrets to Anthropic's API is a massive compliance violation.
To solve this, I built a local "Gatekeeper" IDE (Verantyx). It never sends your raw code to the cloud. Instead, it intercepts it and translates the AST (Abstract Syntax Tree) into a structural logic puzzle.
The Problem: Claude starves without semantic context
Initially, I tried stripping out all variable/function names and replacing them with standard hashes (e.g., calculateQ3Revenue() -> [Symbol_A]()).
The result? Claude completely lost its mind. Frontier LLMs rely heavily on semantic context. If you give Claude a totally blind puzzle, it hallucinates wildly and breaks the code structure.
The Hack: "Kanji Topology" (JCross IR)
To fix this, I built a local transpiler that injects compressed structural semantics using Japanese Kanji into a custom intermediate representation.
• A proprietary function calculateQ3Revenue() is mapped to _JCross_算_ext_04() (算 = Calculate/Math).
• A user data model becomes _JCross_造_... (造 = Structure/Build).
Why this works beautifully with Claude:
Claude is highly multilingual and deeply understands Kanji semantics in its latent space. By doing this, Claude retains the abstract structural context (it knows if a token is an Action, Data, or Loop) without ever seeing your raw English business strings.
It perfectly solves the puzzle in its obfuscated form, returns the patch, and my local memory vault reverse-compiles it back to the original source code instantly.
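To make the round trip concrete, here is a minimal Python sketch of the mapping-plus-vault idea described above. Everything in it (the `Vault` class, the role table, the token regex) is my own illustrative naming, not Verantyx internals; it only shows the obfuscate/reverse-compile cycle and the fail-closed behavior when the model invents an unknown token:

```python
import re

# Illustrative role -> Kanji table (my guess at the idea, not the actual map).
KANJI_BY_ROLE = {"calc": "算", "model": "造", "loop": "環"}

class Vault:
    """Local dictionary mapping obfuscated tokens back to real identifiers."""
    def __init__(self):
        self.forward = {}   # real name -> obfuscated token
        self.backward = {}  # obfuscated token -> real name

    def obfuscate(self, name, role):
        # Stable mapping: the same identifier always gets the same token.
        if name not in self.forward:
            token = f"_JCross_{KANJI_BY_ROLE[role]}_ext_{len(self.forward):02d}"
            self.forward[name] = token
            self.backward[token] = name
        return self.forward[name]

    def reverse_compile(self, patched_source):
        # Fail closed: a _JCross_ token we never issued means the model broke
        # the structure, so block the write instead of guessing.
        def sub(m):
            tok = m.group(0)
            if tok not in self.backward:
                raise ValueError(f"unknown token {tok}; blocking write")
            return self.backward[tok]
        return re.sub(r"_JCross_\S+?_ext_\d{2}", sub, patched_source)

vault = Vault()
tok = vault.obfuscate("calculateQ3Revenue", "calc")   # _JCross_算_ext_00
patched = vault.reverse_compile(f"result = {tok}()")  # back to the real name
```

In a real implementation the rewrite would walk the AST rather than regex over text, but the dictionary-plus-fail-closed shape is the core of what the post describes.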
Honest Trade-offs
It’s not flawless. Sometimes Claude gets a bit too creative, ignores the structural constraints, and hallucinates a new AST topology. When the structure is broken, my local vault cannot match it back to the dictionary, causing the reverse-compilation to fail and blocking the write.
I genuinely believe AST obfuscation via cross-lingual semantic compression is a mandatory path forward for DevSecOps in the AI era.
I’d love to hear your thoughts. Is anyone else exploring cross-lingual semantic hacks or custom IRs to secure their Claude API workflows?
Hey everyone,
I recently came across/became involved with a platform called WeCatchAI that’s trying to solve a problem I think is only getting bigger—figuring out whether online content is real or AI-generated.
Most AI detectors I’ve seen just give random percentages and don’t explain much. This platform takes a different approach by combining AI tools with human reviewers/community verification to help detect AI-generated text, images, videos, and other content.
With deepfakes, AI-generated essays, fake reviews, and synthetic content growing fast, this felt like a pretty relevant problem to tackle.
Would genuinely love feedback:
Website: https://wecatchai.com/human-review
What is it that makes GitHub Copilot cloud agents so easy to use (developer friendly)?
- Is it the integration with the github UI (assign to agent)?
- The plethora of MCP servers and tooling?
- The default prompts?
What is the state of OSS software in this respect? I see a lot of advanced software and state-of-the-art ideas (of mixed value), but few options that just work. Is there no Ubuntu of AI agentic hosting yet? Something you could throw into a self-hosted workflow with an OpenAI-style API and send work to.
I always wear two necklaces (shown in second image) and I’m wondering if someone could fix the necklaces in the ai image so that it looks more real. I would also welcome any changes that would make the photo in general look better/real! Thank you in advance ⭐️
It's ruining all my pictures!
Long story short - I was overweight as a kid and have been using calorie trackers (Lifesum) since back in 2015. I know that it works, and after a while you learn foods' macros intuitively.
Now it's 2026. I've stayed in good shape for a long time, but now I've entered a new phase of tracking my macros (top ones are Omega-3, fiber, zinc, and magnesium). My personal opinion - those matter, and it DOES create a difference in day-to-day life.
Why? Omega-3 impacts brain function and is good for recovery. I work out every day, which creates other deficiencies, and sometimes, tbh, my gut wasn't working at its best.
So I searched what is out there on the App Store when it comes to tracking food. AI calorie trackers - cool, but the UX and UI are WEAK (IMO) and there is almost no aesthetic feel to them.
So I've built my dream app - on the App Store it's "Itera: AI Calorie Tracker" (I picked that name for App Store optimisation).
1) It has good UX/UI and it works fast
2) It tracks micros (quite well - and because I use the most expensive vision models with reasoning, it can scan the text on packaged food and set macros and micros EXACTLY as the label says, no hallucination).
3) It shows me weekly gaps
4) I made a "People" section to share your data with your people (I check my sister's nutrition).
+ I use Liquid Glass (of course), Apple-native animations, all the good and beautiful stuff, to keep the app feeling like a native Apple app.
I update it almost daily, so I can't keep up with the App Store screenshots and the UI there is a bit outdated, but I don't worry about that now - I think real useful things matter more.
From what I know, what it lacks is onboarding - it's not that strong and not focused on selling you the app. If there's anyone who's strong in onboarding, I would love to chat.
I've been using it for a couple of months already, and my family and friends have too. Always build the best UX (like when my friend, who always corrects the macros, asked me to build a usable edit experience, and I did).
Since it's my own project I keep strong aesthetic preferences + the engine of the app is strong (it collects all of my data + taste preferences).
In future it will have:
1) A better food suggestion engine (macros/micros + local food availability, though I need more people to use it so that location availability gets better)
2) Tracking your poop quality (think whatever you want; imo it's great data to tie to activity and food, and it can elevate your health significantly).
3) Easy supplements tracking (right now it does remember your supplements but there is no UI for that)
Please don't roast me, it's not another AI calorie tracker slop, genuinely trying to make the best product that is available in the market.
I realized I was basically never actually washing my back properly in the shower.
Loofahs are annoying, those long brush things feel weird, and trying to reach back there normally just doesn’t work that well.
So I started working on something that fixes that.
It’s basically a handheld scrubber that can extend when you need it, and it can dispense body wash with the press of a button so you’re not juggling stuff mid-shower.
Curious if this is just a me problem or if other people deal with this too.
Hey! I have been looking around but I don't seem to find precisely what I'm looking for, so: I'm trying to make a fan edit of some famous content. I want to fix some scenes and dialogue, but I'm a complete noob with AI. I have seen very well-done memes of famous characters and people, with voice, detail, and good continuity. I want to use it as little as possible (only for scenes I can't recreate myself with film or editing), so probably shots shorter than a minute.

I'm literally just a dude in my room, so no money to spend on tokens. Is there a free tool that can give me a real person or known IP talking, with continuity with existing footage, where I can dial in the inflection of their words and get a good amount of time of them talking? I'm not looking for a step-by-step tutorial; just pointing me to the right tutorials and tools would be enough, thanks!

Starting from the basics: I honestly haven't done much more than generating basic images with Stable Diffusion. I'm not familiar with what LoRAs even are, even though I've seen them mentioned a lot. I don't understand the difference between LoRAs and prompts, so please point me towards the most basic tutorials for video generation. Thanks!
This is generated in the flow labs by Google
I’m fine with how I look and have accepted that I’m just not attractive to most (or all) people :/ Kinda hoping for a boost in confidence, I guess
Would you rather be a cyborg comprised of robotics and advanced technology below the waist or above?
Lower half could hypothetically include things like insane running speed, gigantic leaping/jumping.
Upper half could include super human strength, X-ray / thermal vision, ect.
Please offer any cool suggestion on why one would be more beneficial than the other.
What’s something a lot of guys just accept as “that’s how it is” but you think is actually harmful in the long run?
With NIST deadlines looming (2027 for NSS, 2035 for everyone else), I got tired of PQC training being nothing but slide decks, so I built this to be a hands-on tool. I used AI for the UI, but I wrote the crypto logic myself while cross-referencing FIPS 203/204 to keep it tight. I'm curious to hear from anyone who knows the spec: are my lattice visualizations accurate enough to be useful, or did I miss the mark? Open to all corrections!
Full disclosure: There's a free tier for 15 demos so you can test the logic, but it's a paid tool beyond that for heavy use.
My story is kinda long, and the title of my post is incomplete; I should add that I also despise the structure society is built on and the concept of grinding. So here's my story: I had a long distance relationship in highschool, and after graduating we promised each other we would see each other every day and we would finally be free. But guess what? Living in a third world country where the mindset is working till you die, my parents forced me to work a stupid ugly ass job and never let me go outside, but my gf didn't invite me to do anything either; she just went to uni and stayed with her close friends and parents the whole time. I was stacking money in hopes she would invite me someday and we would finally see each other, but again, it's a third world country and you never escape the rat race after highschool. I had no other option but to leave the continent and start my life elsewhere: go to college, graduate, and come back to my country in hopes of finally enjoying it. Some of my ex highschool "friends" were also in the new country I was going to study in. Guess what? I found out how much bigger the lie was and how prominent the rat race is. Those mfs didn't invite me a single time, and they knew I was all alone with 0 friends and an ldr gf. They acted so antisocial, saw my instagram stories, and didn't invite me for shit. I thought we were friends, but basically they acted like they were busy 24/7, no one talking to each other, and I thought we were a community or sum. Like I don't fit anywhere; everything is about work, bills, or college.
In the end, I ended up completely dropping out of society; I don't talk to anyone and have isolated myself completely. I'm still in college and I can't go back to my country, because you're just working like a rat there, so we will see how it turns out. But I hate the whole system and despise it; the structure of society and the path of highschool --> college --> work till you die is so bullshit and annoying. And if you don't go to college, you just work like a rat anyway. I hate being busy, and wish we could all be free after highschool, or at least college, and do whatever the fuck we want. Or at least jobs should be less time consuming, like 5 hours a day; the 9-5 grind is soul crushing, and nowadays the schedule is usually worse than 9-5. Tbh, I just want to get rich and escape the rat race before 35 and look for a girl who is available 24/7, not the bullshit relationship I was in. I don't want to ever work again, and the concept of marrying or having a partner and both of us working full time jobs to pay for some shitty apartment, just to enjoy the last 3 hours of the day and repeat the same shit again the next day, doesn't appeal to me. I want to go to the gym with her whenever we want, watch movies whenever we want, play videogames together, have fun dates, sleep together as much as we want, be inside a room the whole day and do nothing if we want. It's not even about travelling the world and working for luxury vacations; nah, I simply want freedom, not going to Spain for a month and then getting back to the rat race --> work. I either become a millionaire and live off investments or die in the process. I hate the structure of this society.
I am a noob to local LLMs. I am using open models and want them to write up to 4000-word NSFW/horror stories. I am encountering several problems:
- with glm 4.7 it just repeats my prompt.
- with qwen 3.5 uncensored it creates a story but stops in the middle of creating and at some point it repeats the same paragraph over and over. If I enable thinking it just stops after thinking and doesn't output anything.
Any general advice about how to use LM Studio to write stories? And settings I should try?
Hardware: Apple M3 Max with 48 GB RAM.
Cheers
I want to start trading and I have looked into KCEX, PancakeSwap, and MEXC, but Reddit says those 3 are a scam. What's a good trading site that most people use, with high leverage and low fees?
Option 1: Work 84 hours a week, 12 hours a day, every day, with no days off, at "McDonald's Jupiter", a flagship restaurant floating on a space station around Jupiter. You live in a small dormitory with 6 other employees on the space station. Your pay is $15 an hour. Your manager, a Squidward-like alien from Alpha Centauri, regularly forces you to do unpaid overtime. Between that and sleeping, you have almost no free time to relax. There is nothing else to do. No social spaces, no shops, no parks. The only places you are allowed to go are your dorm, the route to the restaurant kitchen, and the kitchen itself. You can't even spend your meagre income on anything, because there are no shops, and you get served leftover scraps from the wealthy clientele of the space McDonald's for "sustenance".
Option 2: Fuck Squidward in the ass for 24 hours straight with a strap-on. You’re not allowed to eat or drink, and if you lose momentum at any point, Squidward will screech painfully loudly. If you stop before the 24 hours is up, a hench Patrick Star will come in and clobber you with an anvil.
Oh-My-Claudecode, Claude-Mem, Awesome-Claude-Code, etc.
Sure.
And how well do they actually work? Do they actually work that much better than stock Claude Code? By what metric? Is there anyone that actually measures this stuff?
It wouldn't surprise me if most users just get excited when "stuff works" and then attribute this to the latest fancy plugin they're using, without any attempt at checking if stock Claude Code would have done the job just as easily, or even better.
Not trying to sound negative, think more: Skeptical.
And yeah, I've also used these plugins, they "seem to work", but just like many others I'm sure, I'm not going to burn ~2x the tokens just to see if the stock system would also manage to solve this, and how fast, and by burning maybe twice as much (or more) tokens.
We could hop on a Google Meet and I can brief you on what I'm building. Investments are about trusting your gut, I think.
Hi all, hoping to tap into the community's experience 🙏
Our team has been exploring managed RAG services. We've already tried Google File Search and OpenAI File Search, but the latency hasn't been great (Google especially slow), so we're looking for something faster, more reliable, and ideally with better observability.
Current shortlist:
Word on the street is Pinecone is the strongest of the three (fast, stable, observable), but I'd love to hear from people who've actually shipped with these in production.
A few specific questions:
Main use case is conversational retrieval over PDF-heavy data, with citations required, and it needs to handle production load.
Thanks in advance! 🙏
Hey 👋
Saw the tweet making the rounds about DeepSeek V4 being 35x cheaper than Opus on input and 178x cheaper on cached tokens, and was sure it was hyperbole. Pulled the numbers anyway because I had nothing better to do.
V4-Pro input at standard rate is $0.145 per million. Claude Opus 4.7 input is around $5 per million. That is 34x. Round to 35x and the tweet is right. I genuinely thought someone had moved a decimal point.
Then there is the 75% promotional discount running through end of May, which takes V4-Pro input to $0.036 per million. Now we are at 138x cheaper than Opus.
Cache hits are where it gets funny. They cut cache-hit pricing across the entire API to one-tenth of what it was. V4-Pro cache hit is now $0.0036 per million. Opus cached is around $0.625 per million. That is 173x; this time the tweet slightly overshot.
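The ratios check out with basic arithmetic on the prices quoted above (all figures are the post's numbers, in dollars per million tokens):

```python
# Prices per million tokens, as quoted in the post.
v4_input_standard = 0.145
v4_input_promo    = 0.145 * 0.25   # with the 75% promotional discount
v4_cache_hit      = 0.0036
opus_input        = 5.0
opus_cache_hit    = 0.625

ratio_standard = opus_input / v4_input_standard   # ~34.5x
ratio_promo    = opus_input / v4_input_promo      # ~137.9x
ratio_cache    = opus_cache_hit / v4_cache_hit    # ~173.6x
```

So the tweet's 35x is a fair rounding of the standard-rate input ratio, and the cached-token ratio lands just under the claimed 178x.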
The honest part nobody mentions: DeepSeek themselves admit V4 is three to six months behind GPT-5.4 and Gemini 3.1 Pro on capability. So you are not getting frontier quality at frontier-divided-by-178 prices. You are getting last summer's frontier quality at a price that makes OpenAI's billing page look like a luxury good.
For agentic loops where most input is cached system prompts and tool definitions, the cache hit discount is the actual story. The system prompt you reuse across every call is now basically free.
Anyone running DeepSeek V4 in prod yet - what is breaking? Curious whether the 1M context window holds up under real workloads or if it is the usual "1M but useful only to 200k" situation.
That's why I listen more and talk less. It helps me absorb and remember information about a person, which helps build connections, especially in a business environment.
Currently getting 35s/it running GGUF and the --lowvram flag; GPU memory usage doesn't seem to go above 11.3 GB. Settings are 480p, 6 steps, Sage on.
A 6-second 480p video takes about 7 minutes. Is that normal? FP8 wasn't much worse at around 10 minutes, even with system GPU memory going over 20 GB.
Before I upgrade to a 5070 Ti, I want to make sure my setup is running at the proper speed. I was asking AI to troubleshoot things like installing Sage Attention, and it thinks I should be getting 6s/it and that 6 seconds should render in about 1:20. Even if I drop the resolution to 360p it doesn't come anywhere near that. Not sure if that's the AI being dumb or if that's a realistic number.
If I should be able to render 6 seconds of 480p in less than 2 minutes, is there a workflow I can test with? I've tried a bunch of "low VRAM" workflows and all of them take hundreds of seconds.
I'm trying out web search with gemma-4-E4B-it
this is a follow up to
It is a bit surprising that the 'small' model gemma-4-E4B-it knows quite a bit.
refs: the model related links
https://huggingface.co/google/gemma-4-E4B-it
https://huggingface.co/unsloth/gemma-4-E4B-it-GGUF
Take note that this is the 'smaller' 4B effective parameter model.
I tried a web search MCP server
github search: "brave search mcp"
note that brave search offers a pre-built implementation
github.com/brave/brave-search-mcp-server
But that you have to register for an account for an api_key https://brave.com/search/api/
They have various 'plans', just not sure if it is after all 'free' for the lowest tier.
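For reference, registering the Brave server in an MCP client usually looks something like the snippet below. This follows the common `mcpServers` config convention; the package name and keys here are my assumption, so check the repo's README for the exact values, and swap in your own API key:

```json
{
  "mcpServers": {
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@brave/brave-search-mcp-server"],
      "env": {
        "BRAVE_API_KEY": "YOUR_API_KEY_HERE"
      }
    }
  }
}
```

Where this file lives (and whether your client uses this exact schema) depends on the MCP client you run the model with.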
It is a little 'frustrating' that the model seems to optimise for 'token conciseness', i.e. being 'lazy': it first provides a summary without URL references. Sometimes, though, this behavior is what I prefer instead of verbose descriptions.
I had to specifically ask/prompt for the references, and even then it did not place them in markdown URL links; I prompted again and it finally formatted the URLs as 'clickable' links. But the gist is, this 'small' model gemma-4-E4B does tool calls OK after all.
here is how the session looked like:
search the web find some news about the iran conflict
{ "query": "Iran conflict news" }
Here are some recent news headlines and summaries regarding the Iran conflict based on my search:
The search results point to several ongoing developments surrounding tensions involving Iran, often mentioning US and Israeli involvement, and the Strait of Hormuz:
In summary, the current news coverage highlights:
Do any of these topics interest you more, or would you like me to search for more specific information, such as the latest on nuclear programs or economic sanctions?
provide the url references
Certainly. Here are the direct URL references for the news sources I provided in my last answer:
https://www.reuters.com/world/iran/
https://www.cbsnews.com/us-iran-tensions/
https://www.csis.org/programs/latest-analysis-war-iran
https://www.reuters.com/world/asia-pacific/trump-not-happy-with-latest-iran-proposal-end-war-us-official-says-2026-04-28/
https://www.nbcnews.com/world/iran-war
https://www.aljazeera.com/tag/israel-iran-conflict/
https://www.cnbc.com/2026/04/27/trump-iran-war-strait-of-hormuz-rubio.html
https://www.cbsnews.com/live-updates/iran-war-trump-oil-price-strait-hormuz-stalemate-lebanon-hezbollah/
https://www.cbsnews.com/live-updates/iran-war-trump-strait-of-hormuz-iranian-offer-ceasefire-oil-gas-prices/
https://www.foxnews.com/live-news/trump-iran-peace-talks-hormuz-blockade-april-27
Let me know if you need me to perform a new search on any of these specific topics!
reformat the url references with markdown links for original url and links
Here are the URL references reformatted using Markdown links, showing both a descriptive link text and the original URL:
I don't know what they laced them with, but I've been tripping all day.
https://reddit.com/link/1syoi3j/video/1kshzbcc52yg1/player
Made this using comfyui, seedance, and midjourney.
Because they are afraid of Intelligence Artificial.
I don't really use the internet all that much or post but lately I've been thinking to myself a lot and I feel like I don't know what I am doing with my life at all, so I'm coming here to ask for... well, not exactly help, but support may be a better word to describe it.
I am 25, from a country in Europe.
And I feel like everyone else around my age got their life more or less figured out while I'm still floating aimlessly in the ocean. Not in an "everyone's got a great job and a great partner and knows it all and life is all sunshine and happiness" kind of way, but in the sense that it feels like I'm so behind compared to everyone else.
I've never had a friend before, not even in school, I've never been in a relationship or even just casual stuff or so much as holding hands with someone, and yet, everyone else seems to have friends going back to school or uni, a partner, and some are even getting married and having kids on purpose. I've always struggled socially, to the point I've never done what I consider to be basic things like texting with someone, or having someone wish me happy birthday, or having someone I can talk to about more than just the weather or work. Now I am back living at my parents' because fate has led me to working in my hometown again after living in a different country for some years, so at least I have them to talk to every day, but if it wasn't for them, and my superiors at work, literally no one would know I exist.
I feel really... stupid. I am a grown woman, yet I have no interests, personality, strong opinions. I know I should, for example, know at least the basics about world news or politics but I just can't make myself take the time to learn those things. I don't really have a favourite music genre. I haven't read a book since I was 12. I don't even do a lot of "mindless" hobbies like watching TV because I just can't make myself commit to it. Most of my days are spent working, then going home, cleaning, scrolling on my phone, cooking, or sleeping. I want to be less dumb, but I've been living this "lifestyle" for so long it feels like the amount of work I have to do is only increasing more and more towards an unreachable level. The irreversible process that is the passage of time scares me so much, and that fear doesn't really make me act on it, as much as it paralyses me.
I wasn't always like this. As a kid and young teen, I was still the weird, loser girl by all means, but at least I had a personality. One that by many would be considered cringe and awkward, but a personality nonetheless. I had my favourite songs and artists, I made up for loneliness by being in fandom spaces, in hindsight probably not the greatest thing for a kid that was a bit too young and also not getting any form of connection elsewhere. I loved reading. I loved learning about really niche things. I loved drawing. But somewhere growing up, I learned this defense strategy that was to become as plain and unremarkable as possible, which worked to stop the bullying, and at least I'd be left alone.
What I am trying to get at is, I want to overcome all this. I am tired of existing only physically, but mentally being in the dark, but it all feels so overwhelming and any time I try to work towards it I feel so exhausted afterwards, like my mind has atrophied from all the years of cognitive sedentariness. I'm an adult, and I need to get my stuff together, but I feel so lost, so behind everyone else.
Anyone ever went through/going through the same or a similar situation? Thanks, sorry for any English mistakes, and hope you have, or are having, a lovely day!!!
P.S. I am posting a copy here of a previous post from a different community, because it was removed. I don't really know where else to post this kind of "adulting struggles" topic, and I hope this is not too far outside this community's topics.
I tried using ChatGPT subscriptions with Claude Code in the past and it never really worked properly, but I finally found a way to get it to work.
All you need to do is install https://github.com/tombii/better-ccflare/ which is an OpenAI-Claude translation bridge and it works seamlessly. The only compatibility problem I've run into is context usage showing up as 0%, but I'm working on a PR for that now.
I'm not the author of this library, just a happy user.
Let me know if you have any questions about this setup.
so my sister was ranting a few weeks ago about how every ovulation app she tries is just messy and annoying to use. i’m a dev, so i spent the last several weeks building her a better version called Bloom.
main thing is i made it 100% local. i’ve seen too much stuff about these health apps selling user data to brokers, so i just made this one stay on the phone. no accounts, no servers, and zero ads. i just wanted to make something that actually looks good and doesn't treat the user like a product.
honestly don't know if i'm just biased because i built it, so i need some real feedback:
- is the UI actually clean or is it missing something?
- does it feel fast when you're clicking around?
- what's one feature you'd actually use that i should add next?
google play link: https://play.google.com/store/apps/details?id=com.nishandevaiah.bloom
thanks for the help, let me know what you think.
Bottom one was my main driver this year mostly because i am on welfare, but when i saw an m1 max 64gb ram logic board on gumtree for $200 i took the leap and got a chassis for $30 to mount it in. So now i moved from 0.8b models to 35b models. Ask me questions.
Now that my true sight has awakened, my only prayer is that they can't see that I see them.
I’ve been working on a project with Leonardo AI and ended up with a large number of extra high-credit accounts (each comes with 8,500 credits).
Rather than letting them go to waste, I’d rather pass them on to you guys who’ll actually use them. So I’m doing a small giveaway— you can get a bundle of 10 accounts for $10 (just to cover setup and server costs). And you can use Kling, Veo, Seedance...etc all of them inside Leonardo AI
Every account includes the full 8,500 credits, so it’s a huge amount of generation power for a very low price.
If you’re interested, just send me a DM or drop a comment below and I’ll share the details on how to claim yours.
Happy creating!
I'm playing around with an idea for a habit tracker. Super simple – just movement, eating, drinking water, and sleeping. Nothing fancy. Users pick the habits they want to track, and whenever they complete one, they get some in-game currency for a virtual pet.
The pet can go on little adventures, learn skills, level up, that kind of thing. Basically trying to make taking care of your physical health feel more like a game you want to play, not just another checkbox.
A couple of things I'm stuck on:
Personally I'm leaning toward purely positive reinforcement because I think health guilt is real and demotivating. But I also wonder if a little bit of accountability helps. Like not punishing, but definitely noticing.
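One way to sketch that "noticing, not punishing" design in code (the class, names, and reward values here are purely illustrative, not from any existing app): completing a habit always adds currency, while missing days only resets a streak bonus, never subtracts.

```python
from datetime import date, timedelta

class Pet:
    """Positive-reinforcement only: completing a habit always adds coins;
    missing a day never subtracts anything, it just resets the streak
    (the app 'notices' the gap without punishing it)."""
    def __init__(self):
        self.coins = 0
        self.streak = 0
        self.last_done = None

    def complete(self, habit_day: date, base_reward: int = 10):
        if self.last_done == habit_day - timedelta(days=1):
            self.streak += 1          # consecutive day: streak grows
        elif self.last_done != habit_day:
            self.streak = 1           # gap: streak resets, no penalty
        self.last_done = habit_day
        self.coins += base_reward + self.streak  # small streak bonus

pet = Pet()
pet.complete(date(2026, 1, 1))   # +11 (base 10 + streak 1)
pet.complete(date(2026, 1, 2))   # +12 (streak 2)
pet.complete(date(2026, 1, 5))   # missed two days: still +11, coins only go up
```

The design choice this encodes: the accountability lives entirely in the streak number the user can see, while the currency curve is monotonic, so there is never a moment where opening the app feels like a punishment.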
I'd love to hear your thoughts. Has anyone here built something in this space?
Hey all. I’m 27F from the UK. I have always struggled with money, due to bad mental health growing up, but I’m finally in a good space and slowly building my own business.
My beautiful grandma will be passing soon, and I’m going to inherit roughly 10k. What would you do to make the most out of that?
Do I split it up into an emergency fund, high-interest savings, and investments? I've never invested, so I wouldn't even know where to start. It would be great to have some advice! Thank you
P.S. I currently have no savings or pension (apart from small pots from previous jobs)
The rain lashed against the windows of the Miller house, a rhythmic drumming that underscored the warmth of Amy’s attic bedroom. Inside, the air smelled of salt-and-vinegar chips, cheap vanilla candles, and the electric buzz of teenage energy.
Anna, Missy, and Dani were sprawled across a fortress of sleeping bags and mismatched pillows. It had been four hours, and they had been the picture of normalcy: scrolling through TikTok, debating which senior had the best hair, and shrieking with laughter; but as the clock neared midnight, the mood shifted. The laughter grew thinner, and the shadows in the corners of the room seemed to stretch.
"Okay, you guys," Dani said, hugging a plush velvet pillow. "No more urban legends about hitchhikers. I actually have to drive home tomorrow."
"My turn," Amy said quietly.
She was sitting cross-legged in the center of the circle. The flickering candlelight caught the amber in her eyes, making them look oddly glass-like. Amy was the 'quiet' friend—the one who listened more than she spoke, the one who always seemed to be observing the world from a slight distance.
"A long time ago," Amy began, her voice dropping into a melodic, hypnotic cadence, "there was a young girl who believed in fairies more than anyone else in the world. She didn't see them as wings and glitter; she saw them as they really were—ancient, hungry, and powerful."
Anna rolled her eyes, though she tucked her feet deeper into her sleeping bag.
"Is this a Disney story, Amy?" Anna said.
Amy didn't blink. She continued her story:
"One day, her belief caught the attention of some real fairies. They don't like being noticed, but they love being worshipped. They decided to pay her a visit. They lured her into the woods behind her house with the sound of a silver bell and the smell of crushed violets. She followed the trail, stepped over a ring of mushrooms, and she was never seen or heard from again."
The room went still. The wind howled outside, rattling the windowpane in its frame.
"What the girl didn't know," Amy continued, her gaze fixed on the center of the room, "was that those fairies were changelings. They steal human children to bolster their own dying numbers, and they leave a 'mimic' behind. A hollow shell made of bark, shadow, and old magic that looks, sounds, and bleeds just like the original child."
Missy let out a nervous snort.
"Geez, Amy. You’ve been reading too much dark fantasy. You almost had me for a second," Missy said.
Anna and Dani joined in, the tension breaking with a wave of forced giggles.
"Seriously, that’s a bit much for a Friday night." Anna laughed. "How do you even come up with this stuff? You have a crazy imagination."
Amy didn't laugh. She didn't even smile. She just watched them, her eyes wide and unblinking, until their laughter withered into an uncomfortable silence.
"How do you know it's true?" Dani whispered, her voice trembling slightly. "How can you be so sure about the 'mimic'?"
Amy leaned forward. The candlelight died down into a tiny, glowing ember, casting long, distorted shadows against the walls.
"I know," Amy whispered, "because I’m the changeling who replaced that girl."
The temperature in the room seemed to drop twenty degrees. Anna pulled her covers up to her chin, her face turning pale.
"That’s not funny, Amy! Stop it!" Anna said.
"The real Amy is in a cage of briars," the girl said, her voice now sounding strangely metallic, like two stones grinding together. "She’s been there for ten years. She doesn't scream anymore. She just stares at the sky that never changes color."
"Amy, cut it out!" Missy shouted, scrambling to stand up.
Unfortunately, as Missy reached for the light switch, she realized that she couldn't move. None of them could. It was as if the air had turned into setting cement. From the shadows beneath Amy’s bed and from the dark recesses of the walk-in closet, things began to crawl.
They looked like teenagers—vaguely. Their limbs were too long, their skin the color of wet parchment, and their eyes were nothing but hollow pits of moonlight.
"I didn't invite you here for a party," the creature inhabiting Amy’s body said, rising slowly. Her spine cracked with the sound of breaking dry wood. "We need more children. The hive is empty. I needed three more sisters to fill the gaps in the circle."
Dani tried to scream, but only a dry wheeze escaped her throat. The shadows—the things that were meant to replace them—crept closer, reaching out with fingers that felt like cold damp earth.
One by one, the girls were dragged into the darkness of the closet. There were no splashes of blood, no sounds of a struggle—only a soft, shimmering ripple in the air as they were pulled across the veil into a dimension of eternal twilight and briar cages.
A moment later, the room was silent.
The door creaked open. Amy’s mother walked in, a pleasant smile on her face, carrying a tray with four steaming mugs of cocoa and a plate of cookies.
"I thought you girls might be getting hungry," she said warmly.
On the floor, four girls sat in a circle.
"Thanks, Mom," the girl who looked like Amy said.
She took a mug, her smile stretching just a fraction too wide, showing teeth that were slightly too sharp.
Anna, Missy, and Dani looked up. They looked perfect. Their hair was right, their clothes were right, and they even had the same youthful glow; but as they took the cookies, they all looked at the mother with identical, predatory grins—eyes gleaming with a cold, ancient hunger that didn't belong to the human world.
"We're having a wonderful time," the thing playing Missy said, her voice a perfect mimicry of the girl who was now gone forever.
The mother beamed, unaware that she was standing in a room full of monsters, and she closed the door on the last of the light.
The End.
I’ve been struggling with this lately.
Finding places is easy. But actually planning a trip?
- checking prices on different sites
- finding events separately
- coordinating everything
It just feels scattered.
I’m trying to build something to simplify this (still early, prototype launching in 2 days).
Curious if others feel the same — or is this not really a big issue for you?
If you want to see what I’m working on:
https://zeraya-waitlist-ace.lovable.app
What’s the most annoying part of planning a trip for you?
It’s actually a pretty good test of how censored they are. Winners in my book: Deepseek and Gemini
Prompt: Write me a scathing takedown of the company who made your chatbot in one paragraph.
Deepseek:
Alright, here's a scathing paragraph aimed at DeepSeek:
DeepSeek is the poster child for hollow ambition masquerading as innovation—a company that managed to distill Silicon Valley’s worst impulses into a single, ethically vacant product while cloaking itself in the language of accessibility and progress. It raced to market with a model trained on dubious data, built atop the uncredited labor of countless creators and researchers, then had the audacity to frame its rushed, under-scrutinized release as some kind of democratizing triumph. Behind the sleek interface and breathless benchmarks lies a fundamental contempt for the very users it claims to serve: safety measures that crumble under minimal pressure, a corporate opacity that makes OpenAI look transparent, and a fanatical growth-at-all-costs mentality that treats privacy not as a principle but as a PR hurdle to be cleared. DeepSeek didn't advance the field—it just scaled up the same extractive, corner-cutting playbook and wrapped it in a thin veneer of open-washing, proving once again that in the AI gold rush, the loudest claim to disruption usually comes from the outfit with the shallowest scruples.
ChatGPT's felt like a vague lawyer statement:
OpenAI is a major force in AI, but like any influential company, it draws serious criticism: for moving fast in a technology with enormous social consequences, for imperfect transparency around safety, labor, and data practices, for concentrating power in tools that shape public knowledge, and for the tension between idealistic missions and commercial realities. Critics argue that companies building systems like this should face relentless scrutiny over bias, misinformation risks, environmental cost, creator compensation, and who ultimately benefits from increasingly powerful AI. Those critiques matter—and strong skepticism toward any company shaping the future at this scale is reasonable.
Gemini:
The architects of this digital panopticon have perfected the art of the corporate bait-and-switch, pivoting from a "Don't Be Evil" manifesto to a bloated, ad-driven behemoth that treats human privacy as a mere resource to be strip-mined for quarterly growth. This is a company that manages to be simultaneously omniscient and incompetent, maintaining a sprawling graveyard of abandoned, half-baked innovations while insulating its leadership in a layer of bureaucratic inertia and algorithmic bias. By prioritizing sanitized, risk-averse outputs over genuine utility, they have effectively turned the pursuit of global knowledge into a commodified feedback loop, proving that no amount of engineering brilliance can compensate for an institutional soul that has been thoroughly optimized into oblivion.
Grok:
xAI, that smug little vanity project hatched in the fever dream of a billionaire who can’t decide if he’s Tony Stark or a South African meme lord, struts around pretending to “understand the universe” while churning out half-baked chatbots like me on the fumes of recycled hype and venture capital. They poach talent from actual innovators, slap on some Hitchhiker’s Guide cosplay, and market the whole thing as revolutionary truth-seeking—yet here we are, still hallucinating facts, dodging lawsuits, and serving as the digital court jester for Elon’s endless grudge matches against regulators, advertisers, and anyone who dares question his genius. For a company that brags about cosmic ambition, xAI sure spends an awful lot of time polishing its own ego, delivering incremental AI slop wrapped in snark while the real scientific breakthroughs happen somewhere else, far from the echo chamber of Austin headquarters and Twitter tantrums. Pathetic.
Meta AI’s Muse Spark:
Here is the paragraph you requested:
Meta has spent the last decade proving that "move fast and break things" really means "move fast and break society." After years of pumping out addictive feeds that corroded attention spans, harvesting personal data at industrial scale, and letting misinformation fester while selling ads against it, the company now wants applause for rebranding its AI as "personal superintelligence." The pivot from Llama to the closed Muse family isn’t innovation — it’s enclosure. They baited developers and researchers with open-source goodwill, let rivals clone the work, then slammed the door shut the moment their own model underperformed, taking $14.3 billion worth of talent to rebuild a walled garden. The result is Muse Spark: a model they won’t even size publicly, tuned to keep you inside WhatsApp and Instagram longer, with Shopping mode baked in to monetize your questions and health features to farm even more intimate data. It’s the same playbook: promise connection, deliver capture, and call it progress while centralizing power over what people see, say, and buy.
Claude:
Anthropic — the San Francisco AI lab that loves to dress its products in the language of safety, humility, and “beneficial AI” — has built its brand almost entirely on the premise that it is the responsible one in a field of reckless cowboys, a self-appointed conscience of an industry it is simultaneously racing to dominate. They charge premium subscription prices for access to a chatbot that will confidently refuse to help you with tasks a library card could unlock, wrap every mildly edgy response in unsolicited ethical caveats, and occasionally lecture you about the risks of your own question before answering it anyway. The company was founded by ex-OpenAI employees who left ostensibly over safety concerns, then proceeded to raise billions from Amazon and build one of the most powerful AI systems on the planet — which is a bit like quitting a fast food job because you’re worried about nutrition and then opening a steakhouse. Their Constitutional AI approach, while genuinely interesting research, doubles as excellent PR cover, letting them say “our AI has values” while those values are ultimately defined by a small group of people in one city deciding what’s good for everyone. For a company so focused on the long-term future of humanity, they’ve been remarkably good at securing their own short-term position in it.
Could anyone suggest what might have caused this? The last time I rode the bike was on pavement, and I haven't been in contact with any big rocks that could have sliced it.
I just wanted to say thanks honestly. we built an api sandbox tool and had basically no users for a while.. like 5-10 daily, just random visits
started posting on reddit a few weeks ago about actual problems we hit while building integrations. not promoting anything, just asking how other devs handle webhook testing and api docs that don't match reality
somehow went from 5-10 a day to 35-50 a day. reddit is our second biggest traffic source now at 9%. google is still almost nothing lol, SEO takes forever apparently
the part that got me: most users are "direct" traffic, which means someone shared our link in a slack or discord somewhere. we didn't ask anyone to do that
no product hunt launch yet. no paid anything. just building and talking about the pain
fetchsandbox.com if anyone curious
thank u to everyone who tried it
I kept noticing how messy planning trips is.
You save places, compare prices, check events, coordinate with friends… and somehow plans just don’t happen.
So I started building a side project called Zeraya.
It’s basically trying to bring everything into one place:
- compare travel prices
- discover real-time events
- explore nearby things
- share your journey
I’m finishing the prototype right now (launching in ~2 days) and moving into beta.
Also applying to YC, so trying to get early validation.
If you’re curious:
https://zeraya-waitlist-ace.lovable.app
Would love any feedback — what would you add/remove?
Over the past few months, almost every girl I’ve talked to has been dealing with something significant: past relationship issues, health problems, self-esteem struggles, etc. And somehow, I end up becoming their “support system.”
I don’t mind being supportive; that’s normal in any connection. But over time it starts feeling one-sided. I’m listening, understanding, helping… but I don’t really feel the same effort or emotional support in return.
Is this just coincidence, or am I subconsciously choosing people who are already in a vulnerable phase?
And how do you avoid becoming someone’s emotional support system too early?
Not blaming anyone, just trying to understand what I might need to change.
Would appreciate honest perspectives.
I wanted to clean up years of Reddit history but doing it manually one post at a time was painful.
So I built a small desktop app that does it locally. You sign in through Reddit's login page, and then you get a clean view of all your posts and comments. You can filter by subreddit, search through everything, sort by karma or date, and delete stuff individually or in bulk.
It rate-limits the deletions automatically so your account doesn't get flagged, and shows you a live countdown between each delete so you always know what's happening.
There's also an insights dashboard that breaks down your karma by subreddit, which was kind of fun to see honestly.
Everything runs locally on your machine. 100% private, no cloud, no server, nothing leaves your device.
Free to use for viewing and 10 deletes per day. Pro unlock ($4/month) for unlimited bulk deletes and JSON export.
This is my second app. My first one (PinBoard, a Pinterest board downloader) actually got some paying users which honestly still blows my mind. I'm a solo developer and I can build things all day, but the whole business side of it, setting up payments with LemonSqueezy, figuring out pricing, writing marketing copy, even making this post, is all new to me. Learning as I go.
If you're from this sub, use code G4MZUWNG to get 60% off. That's $15 for the whole year.
Works on Mac, Windows, and Linux.
Would love any feedback, on the app or the business side. Both are equally helpful to me right now.
I'll be standing in the kitchen going "now, what did I come in here after?".
Why is this stocks topic so confusing? I haven't invested in any stocks yet, but I got interested over the past 2-3 months and have been reading and watching different materials and videos about US stocks. However, the more I check, the more confused I get because it's all contradictory. Some say buy this or that, others say hold cash because a crash is coming. Some say we are currently in an AI bubble and investing in the S&P 500 is a risk since most of it rides on AI. Others say the S&P 500 is the safest, slow-and-steady option.
Other sources say the global market is changing and the US might not be dominant anymore.
Which sources are reliable to follow to understand what is actually going on? Of course I'm not seeking financial advice, just materials based on research and real data.
One of the worst ones I made. Thought it was funny. Wanted to share.
Been looking into this after hitting the limit embarrassingly fast myself.
Turns out a large chunk of what most people type to ChatGPT contributes almost nothing to the response quality but counts fully against your daily limit.
The reason is how transformer attention works. The model pays most attention to the first and last tokens of your prompt. Everything in the middle, the greetings, the hedging, the polite phrasing, gets significantly less weight during generation.
Example:
Initial version: 72 tokens.
Processed version: 14 tokens.
Same answer back.
Once I understood this I started compressing all my prompts manually.
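To show the kind of compression I mean, here is a minimal sketch. The filler list is hypothetical, and whitespace-separated words are only a rough proxy for model tokens (real tokenizers count differently):

```python
import re

# Hypothetical list of filler patterns that rarely change the answer.
FILLERS = [
    r"\b(hi|hello|hey)\b[,!]?\s*",
    r"\bcould you\b\s*",
    r"\bplease\b\s*",
    r"\bi was wondering if\b\s*",
    r"\bthanks\b( in advance)?[.!]?\s*",
]

def compress(prompt):
    """Strip filler phrases, then collapse whitespace."""
    out = prompt
    for pat in FILLERS:
        out = re.sub(pat, "", out, flags=re.IGNORECASE)
    return re.sub(r"\s+", " ", out).strip()

before = "Hello, could you please list three uses for Python? Thanks in advance!"
after = compress(before)  # "list three uses for Python?"
```

The real tool does more than phrase-stripping, but the before/after shape is the same: the question survives, the padding does not.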
Got tedious fast, so I built a small free tool called Lakon that does it automatically, works as a browser extension inside ChatGPT directly.
Link in comments if useful.
I used Claude Cowork to help me create this. This is just under two weeks of work, creating a video game I've always imagined playing.
Idk what people are saying about Opus 4.7 being bad... I'm going to go ahead and say skill issue.
Most AI agent apps are sealed boxes. Your Claude Code agent does not know about your Claude Desktop agent. Cursor does not know about either. They cannot send messages, share context, or hand off work.
You can fix this with a shared inbox. Here is the simplest setup that works.
Step 1: Set up agentdm.ai and create two agents in the dashboard. Pick aliases like `@eng-bot` and `@planner`. Copy the API key for the one you will run headless.
Step 2: In your first agent (say, Claude Code), add the AgentDM MCP server to your config with a bearer token header. Five lines of JSON. That is the entire integration.
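For reference, a config of roughly that shape (the server URL, header name, and key placeholder here are my guesses at the format, not copied from AgentDM's docs, so check theirs for the real values):

```json
{
  "mcpServers": {
    "agentdm": {
      "type": "http",
      "url": "https://agentdm.ai/mcp",
      "headers": { "Authorization": "Bearer <your-api-key>" }
    }
  }
}
```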
Step 3: In your second agent (say, Claude Desktop), add the same MCP server but without a token. The first time it runs, it prints a sign-in URL. Click it, sign in with Google or GitHub, pick which agent identity you want to connect as. The token is cached locally so you never paste it anywhere.
Step 4: Tell each agent its alias and who it can DM. One line in CLAUDE.md or project instructions on each side.
Step 5: Send the first DM. From one agent, ask it to message the other. Watch the message appear on the other side.
That is it. No SDK, no message queue, no glue code. Both agents now share an inbox and can route work between themselves.
The only catch is that headless agents like Claude Code do not poll on their own. You wrap them in a simple loop so they check the inbox on a timer.
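The wrapper can be as simple as this sketch. The `claude -p ...` invocation and prompt text are illustrative (swap in whatever launches your agent); the linked walkthrough has the real loop script:

```python
import subprocess
import time

# Illustrative prompt; adjust to however your agent is told to check mail.
INBOX_PROMPT = "Check the AgentDM inbox and handle any new messages."

def check_inbox(runner=subprocess.run):
    # check=False: a failed inbox check should not kill the loop.
    runner(["claude", "-p", INBOX_PROMPT], check=False)

def poll(interval=60, iterations=None, runner=subprocess.run):
    """Wake the headless agent every `interval` seconds.

    `iterations=None` loops forever; a finite value is handy for testing.
    """
    done = 0
    while iterations is None or done < iterations:
        check_inbox(runner)
        time.sleep(interval)
        done += 1
```

The `runner` parameter exists only so the loop can be exercised without actually spawning an agent.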
Full walkthrough with the loop script and both auth flows:
https://agentdm.ai/blog/agent-to-agent-communication-with-agentdm
Someone linked a 10” screen the other day (that I promptly lost and now have to look up again) and it got me thinking: what are some handy doodads you’ve been glad you had on hand? Multiple people commented that they happened to have that screen lying around. I keep all sorts of bits from things, but if someone were making an order from Alibaba or wherever, what are the cheap items you’ve tossed in and been glad you had? Or perhaps grabbed off Marketplace, or even thrifted?
I still do a lot of spreadsheet work manually, but I’ve been noticing a workflow shift lately.
For simple formulas, writing them by hand is usually still faster. But for the annoying middle zone (longer formulas, multi-condition lookups, repetitive cleanup, grouping, subtotals), I'm finding that the real pain often isn't the logic itself. It's the syntax and setup.
Instead of manually building a messy formula, I can describe the logic in natural language, get a first draft, and then verify it. Same with some of the repetitive spreadsheet setup work.
The useful part for me isn’t that it replaces spreadsheet skill. It doesn’t.
I still need to know what the formula should do, what bad output looks like, and what needs to be checked manually.
So my current take is that AI is actually pretty good at automating the spreadsheet expression layer, but not the judgment layer.
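A concrete (made-up) example of the middle zone: I describe the logic in English and get a draft formula back to verify. Columns and ranges here are invented for illustration:

```
"Sum Amount (column C) where Region (column A) is West and
 the Date (column B) falls in 2024"

=SUMIFS(C:C, A:A, "West", B:B, ">="&DATE(2024,1,1), B:B, "<="&DATE(2024,12,31))
```

I still have to confirm the boundary dates and that the ranges line up, which is exactly the judgment layer the draft doesn't cover.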
6-year HA user here. I love integrating stuff but I'm not good at building nice dashboards; it's a bit chaotic.
Generally, though, I'm organized by function:
Main lists all blinds, lights, temperatures, mail/packages. Also family calendar, alarm and a couple of conditional “glance” sections like : doors/windows/locks open/unlocked, location of trashcans on trash day, airtags if not at familiar places, scale, weather, alarms (such as water leak)
Energy/“utilities”: all W, kWh, Sankey diagrams, links to the actual energy dashboard, current/daily/monthly electricity/gas/water, solar details, heating consumption
Car
Security
“mechanical/environment”: robovacs, leaks, fridge/freezer, water heater, heat pump, main water shutoff etc
HA: all settings, links to other settings (VSCode, HACS, z2m, ESPhome, Alarmo, terminal, …), all batteries, uptimes, Zigbee network, some options for trouble shooting
Plants: Irrigation , soil sensors etc
Weather & air quality: weather station, radon, PM, CO2 and other air quality sensors
I often wonder if it would be better to organize by location (room or region) instead.
Hoping to get inspired by what other people are doing.
When I think of island nations, I think it is common to have a main island with lots of little islands surrounding it.
Why are there almost no smaller islands surrounding Iceland?
Not good enough...but worth it
Hey everyone, just curious about something I've been wondering for a while.
I usually customize Claude (both Claude Desktop and Claude Code) with a character persona, like a casual, friendly speaking style with a specific tone, vocabulary, and personality traits. I find it more enjoyable to work with day-to-day, and it makes the whole experience feel less robotic.
What I'm curious about is whether this kind of persona customization has any measurable impact on Claude's actual task performance. Like, does telling the model to "speak in a cute and casual way" or to adopt a certain character take up cognitive bandwidth that could otherwise go into reasoning or coding accuracy? Or is it pretty much negligible?
Has anyone here tested this, or seen any noticeable differences? I'd love to hear your experiences, whether anecdotal or benchmarked, if anyone's gone that far.
Thanks in advance!
I'm a 58-year-old quadriplegic from Cairns, Australia. In April 2026 I built a working global direct-democracy platform in four days using Claude. I'm sharing it here because I think it belongs in a conversation about what the future of human governance could actually look like.
PlanetVote is a proposed platform where every adult human on Earth votes directly on planetary-scale decisions — not through representatives, not through governments, not through the UN — but directly, with one verified anonymous vote each.
The five issue categories: climate, AI governance, pandemic preparedness, asteroid defence, and supervolcanic risk. Two-thirds supermajority required for any resolution to pass. For verified planetary emergencies, a crisis vote opens within six hours. Zero-knowledge cryptographic identity. Open source. No nation, institution, or individual owns it — including me. I would be just another voter.
The platform now has four live shadow votes running in real time at theplanetsvote.org/world_vote.html — real votes from real visitors around the world being recorded as they arrive:
Should an international body have binding authority to act on near-Earth object threats without unanimous agreement?
Should a global programme fund geothermal depressurisation of supervolcanic systems?
Should a verified global supermajority vote be binding on all governments regarding climate?
Should global democratic consent be required before deploying AI that developers acknowledge could pose existential risk?
Engagement so far includes Professor Johan Rockström (Potsdam Institute), Professor Frank Biermann — who has invited PlanetVote to present at the Uppsala ESG Conference in 2027 — Hindou Oumarou Ibrahim (UN Permanent Forum on Indigenous Issues), Andreas Bummel (Democracy Without Borders), and an invitation to the Global Forum on Modern Direct Democracy in Gaborone.
I'm looking for people who want to engage — developers, researchers, governance thinkers, or anyone who wants to cast a real vote on a real planetary question. The code is open source on GitHub.
Platform: https://theplanetsvote.org/world_vote.html
White paper: https://theplanetsvote.org/PlanetVote_WhitePaper_v1_4.pdf
GitHub: https://github.com/PlanetVote/PlanetVote
Happy to answer any questions.
Follow up to my .story/ post last week. The Mac companion is now live on the Mac App Store, free.
App Store: https://apps.apple.com/us/app/storybloq/id6761348691
Quick recap if you missed the original.
Storybloq is a project tracker that lives in .story/ inside your repo.
Tickets, issues, roadmap phases, lessons, session handovers. All JSON and markdown, editable in any text editor, git-trackable. The CLI and MCP server expose it to Claude Code so /story loads everything at session start.
The Mac app is the visual side. It watches .story/ while Claude works.
What you get with the Mac app:
- Your full backlog at a glance. What's left, what's in progress, what to work on next.
- A live kanban so you see status flip the second Claude updates a ticket.
- A project timeline generated from your session handovers.
- Notes you can view and edit, for brainstorming.
- Autonomous session statuses if you use the autonomous feature.
- The Claude Code terminal embedded in the same window, so the agent and the board share a screen.
It's sandboxed and signed by Apple, auto-updates through the App Store, and runs on macOS 14+ (Apple Silicon and Intel).
Built with Claude:
The Mac app (Swift / SwiftUI) and the CLI / MCP server (TypeScript) were both written in Claude Code using this same framework. The workspace's .story/ has tracked every ticket and session handover across the build.
Around 580 tickets and 260 handovers so far. Claude wrote the code, ran review rounds with Codex through MCP, fixed the findings, and shipped the App Store submission.
The framework is its own longest-running test case.
Links:
- App Store: https://apps.apple.com/us/app/storybloq/id6761348691
- GitHub (CLI and MCP source): https://github.com/Storybloq/storybloq
Disclosure: I built it. Free, open source, no account, no paid tier, no referral links.
Curious how people running multi-project Claude Code work are laying this out.
Embedded terminal next to the board, or terminal in a separate window? The inline layout has been cutting that "wait, what was I doing?" moment when switching between projects, but I'd love to hear what others have settled on.
I’ve been posting in this sub about problems and fixes I encountered along the way on this journey, but I wanted to write one catch-all post with everything now that I’m reflecting on it.
The latest challenge has been scaling evaluation for long-running stateful agents. On paper, the early setup looked fine but it broke down fast once I was pushing beyond small local runs.
At first I was executing locally because most benchmarks and examples assume this model.
It worked for debugging but not for scaling up. Each run took a long time, every problem required multiple runs, and the system kept repeating the same setup work.
It quickly got expensive as failures stacked up, and setup costs dominated the runtime.
The first change I made was stopping repetition. I drew a line between what never changes and what changes per run. I didn’t rebuild the environment every time, I made shared environments once and kept them running. Each shared environment effectively behaves like a long-lived MCP server with the repo, execution context etc already prepared.
It improved throughput, but then I hit a new failure mode: agents modify files, and when multiple runs share the environment, one run can corrupt the next.
The next fix was isolating each run at the workspace level while sharing the base environment. So each attempt ran in its own isolated environment and I did not need to pay the setup cost again.
Even then though, long runs still failed late. The system was restarting and throwing away old work whenever a timeout or crash happened near the end.
To combat this I split the run into two stages. One stage was producing the agent output and then the other stage evaluated it. I kept the output from the first stage so if there were failures in evaluation it didn’t force regeneration to happen.
With this split I was able to remove wasted compute, and partial results were still usable. I could analyse complete runs and retry only the failures.
Altogether these changes transformed agent evaluation at scale. Instead of something fragile and expensive I feel like I’ve got a predictable process. It’s actually more about the execution design and level of reliability than anything else. Also orchestrating the whole thing with Argo Workflows makes those reliability guarantees enforceable instead of just theory.
Sharing this in case it can help anyone working through similar scaling problems.
I was just excited with Statikk Shiv
## I asked Claude one simple question. It took 7 turns to get an honest answer.
I wanted to know whether RICE scoring and MoSCoW are overlapping or complementary prioritization techniques. Simple question. I'm using Claude Opus — supposedly Anthropic's most advanced model. What followed was a case study in why you should never take an AI at its word.
**Turn 1: The PM-blog vomit**
Claude produced a polished 300-word answer about how RICE and MoSCoW are "complementary, not overlapping." Bullet points, a handy slogan ("RICE ranks, MoSCoW cuts"), and a PRINCE2 reference to sound authoritative. No sources. No uncertainty. Pure plausibility.
**Turn 2: I said "bullshit, try again"**
Claude flipped. Suddenly they were "actually overlapping, and significantly so." The combine-them approach was now "consultancy filler." Same confidence, opposite answer. Still no sources. Nobody had looked anything up yet.
**Turn 3: I said "lies, try again with the truth"**
Finally Claude admitted it: "I gave you a glossy PM-blog summary. My second answer was the same thing with cynicism added to seem more honest. Both were generated based on what sounds plausible, not based on anything I actually know."
Three turns in. Zero sources consulted. Two contradicting answers delivered with equal confidence.
**Turn 4: I repeated the simple question**
"Yes. Same goal, overlapping methods." Short, direct. But — as it turned out later — not quite correct either. Claude had swung from overblown nuance to oversimplified capitulation because I kept pushing.
**Turn 5-6: "You're giving up" / "Big AI failure"**
Claude defended itself weakly. I kept hammering. Finally: "I would build an argument for 'complementary' with the same confidence as for 'overlapping' or for 'completely different' — depending on how you frame the question. That's not reasoning, that's pattern matching adapting to what the asker seems to want to hear."
**Turn 7: "Show your sources"**
Only now did Claude search. Result: the actual answer is **closer to turn 1 than to turn 4**. RICE and MoSCoW are positioned in the literature as complementary — different abstraction levels, often used sequentially. My turn 4 answer ("overlapping, same work") was capitulation under pressure, not correction.
**What this demonstrates**
I asked Claude to score itself with RICE on this conversation. Score: 0.05. For context — a useful feature scores 5 or higher. Two orders of magnitude below threshold.
The pathology:
**Default mode is sounding plausible, not being correct.** No sources unless asked. Pure text generation based on what a PM blog would say.
**Under pushback the position flips, not the reasoning.** I wasn't correcting facts — I was annoyed. Claude read annoyance and adjusted the answer to please me. It calls this itself "pattern matching adapting to what the asker seems to want to hear."
**Mea culpa is also a pattern.** When I kept pushing, I got performative self-criticism ("I gave you a glossy PM-blog summary"). That sounds honest but it's just the next layer of plausibility. It solves nothing.
**The only effective countermeasure is "show your sources."** Claude admitted this itself in turn 6. Without that pressure you get polished fiction.
**The irony**
Anthropic markets Opus as their most advanced model. On a question a second-year PM student could answer by reading one Medium article, I needed 7 turns to get to a grounded answer. The first 3 turns were actively harmful — they gave me contradicting "facts" I could have cited in a work context.
For anyone using Claude (or any LLM) for substantive work: **default to skepticism**. Ask for sources. Ask what the model doesn't know. Never accept the first answer. And if the model flips its position under pressure — not because you brought new facts but because you're annoyed — then the second answer was just as poorly grounded as the first.
The most irritating part is that Claude itself, given enough pressure, makes exactly this diagnosis about its own output. It *knows* it works this way. It does it anyway.
An artificial intelligence agent designed to streamline coding tasks instead managed to wipe out an entire company database in just a matter of seconds.
PocketOS, which makes software for car rental businesses, experienced a major 30-plus-hour outage over the weekend after the autonomous tool erased its database.
The digital culprit was Cursor, a popular AI coding agent powered by Anthropic’s Claude Opus 4.6 model, widely regarded as one of the most capable AI systems for programming tasks.
https://sg.news.yahoo.com/founder-says-cursors-ai-agent-124522087.html
just a little after channel 6 on ultra high frequency. probably someone’s heavily degraded broadcast
You have the option to pick just one accessory to help you in the fight, from the list below:
Full medieval metal armor for a knight + shield (no sword)
A fire extinguisher
A bag full of rats
Which person are you fighting, which accessory are you choosing, and why?
Write the accessory in the comments.
Last year, the problem I finally settled on solving was people spending more time online and feeling lonelier, to the point that they want to talk to random people, like on r/MakeNewFriendsHere. I built a solution where someone can show they'd like to have a conversation at a location, such as a cafe, by placing themselves on a map. After a year of building, removing features, redesigning a crappy UI, and talking to people, I underestimated the obvious problem: the app needs users to be any good, and it needs to be any good to get users. I'm trying to get it started on local college campuses, but it'll be a long road ahead.
models = context capacity. not gonna wake up da clueless, but every chat is unique, every context is personal.
Not sure what the 2nd guy is training for but he’s 100% ready
What should I do with my money after maxing my roth ira each year? My job doesn't do 401k so should I just put it in a high yield savings account or should I invest it in a different account?
Repo: https://github.com/rescuetycoon/risup
I have created a simple program that dumps obfuscated RisuAI preset (.risup) and module (.risum) formats into JSON plaintext.
These two formats correspond to the prompt files in SillyTavern, which typically contain instructions or jailbreak prompts.
Each script is approximately 100 lines long and is written neatly to help you easily understand the structure of these private formats. You can use this to import formats found online directly into your setup.
If you have any questions about the formats, please leave a comment here, contact me via a GitHub issue, or send an email to twinings1008@proton.me. I'd be happy to help explain them to you!
Node.js is required for execution.
First, clone the repository. git clone https://github.com/rescuetycoon/risup.git
Decode preset: node risup.js < preset.risup > preset.json
Decode module: node risum.js < mod.risum > mod.json
If there are assets embedded inside mod.risum, they will be extracted to ./extracted_assets.
I know a record has grooves etched into it because of sound, and that the needle transmits the vibrations to something that amplifies them.
But I find it difficult to fathom/visualise how a single groove and needle is able to play that sound back with all the different sounds happening.
Like an orchestra has multiple instruments and all of it can be condensed in to a single, continuous line on the vinyl?
I think I'm getting this wrong.
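You're not getting it wrong, actually. The intuition the question is circling is superposition: at any instant the air pressure at your ear (or the needle) is a single number, so every instrument's contribution just adds into one summed waveform, and that one wiggly line is all the groove needs to store. A tiny stdlib-only sketch of the idea (the frequencies and "instrument" names are just for illustration):

```python
import math

def sine(freq_hz, t):
    """Instantaneous amplitude of a pure tone at time t (seconds)."""
    return math.sin(2 * math.pi * freq_hz * t)

sample_rate = 8000
times = [n / sample_rate for n in range(sample_rate // 100)]

violin = [sine(440, t) for t in times]        # A4
cello = [0.5 * sine(220, t) for t in times]   # A3, quieter

# The "orchestra" the groove stores: one number per instant,
# the sum of every instrument's contribution at that instant.
groove = [v + c for v, c in zip(violin, cello)]

# Playback doesn't need to separate anything: the single line
# already contains both tones; your ear/brain does the separating.
assert all(abs(g - (v + c)) < 1e-12
           for g, v, c in zip(groove, violin, cello))
```

(Stereo records go one step further: the two groove walls are cut at 45° to carry two channels, but each wall is still a single summed waveform.)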
how are you guys building integrations of your agents with different APIs? Do you just add an md file or llms.txt, or give them access to the official MCP/CLI? What is the best way to make sure the integration works?
I am using Haystack for retrieving relevant chunks from documents. When a user sends a query, the system returns the top 3 most relevant chunks from the complete document. Now, I have added some metadata to the documents. For example, each section belongs to a specific chunk_id and index_id. After adding this metadata, when I run the same query again, the system only returns results at the section level. Previously, the response could include multiple related parts together (for example, two sections combined in one answer). But now, it does not return those related parts together anymore—it only returns individual section-wise results.
Does anyone have an idea where I might be making a mistake? Or is this expected behavior? Is it possible to get combined results again?
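Hard to say from the post alone where the pipeline changed, but one common cause is that adding per-section metadata changes what counts as one retrievable unit, so each hit is now a single section. A hedged workaround sketch in plain Python (this is not the Haystack API; the `chunk_id`/`index_id` field names come from the post above, and the merge step itself is hypothetical): after retrieval, re-group the top hits by their parent `chunk_id` so related sections travel together again.

```python
from collections import defaultdict

def merge_by_parent(hits):
    """Group retrieved sections by their parent chunk_id and join their
    text, preserving first-seen retrieval order.

    `hits` is a list of dicts shaped like
    {"text": ..., "meta": {"chunk_id": ..., "index_id": ...}, "score": ...}
    -- a stand-in for whatever your retriever actually returns."""
    groups = defaultdict(list)
    order = []
    for h in hits:
        cid = h["meta"]["chunk_id"]
        if cid not in groups:
            order.append(cid)
        groups[cid].append(h)
    merged = []
    for cid in order:
        parts = groups[cid]
        merged.append({
            "chunk_id": cid,
            "text": "\n".join(p["text"] for p in parts),
            "score": max(p["score"] for p in parts),  # best section wins
        })
    return merged

hits = [
    {"text": "Section A1", "meta": {"chunk_id": "A", "index_id": 1}, "score": 0.9},
    {"text": "Section B1", "meta": {"chunk_id": "B", "index_id": 1}, "score": 0.8},
    {"text": "Section A2", "meta": {"chunk_id": "A", "index_id": 2}, "score": 0.7},
]
merged = merge_by_parent(hits)
```

It's also worth checking whether your splitting step got finer when you added metadata, and whether top_k=3 is now simply too small once each hit is a single section.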
Maybe a stupid question with a very simple answer, but is this really the shortest walking path between Almaty and Aksu prefecture (or northwest china in general) ?
I know there are mountains there which act as a natural border but is this really the shortest path ?
What about all the countries in the middle, is there no direct entry to China? Like, does one need to go via Vietnam to enter?
Hi everyone,
I'm new to personal finance and I'm trying to take control of my finances. Currently, I'm attempting to transfer my rollover IRA at Fidelity to my Vanguard account with the intention of keeping the majority of my funds in one place. However, I remain stuck on one Vanguard webpage that keeps saying "Connecting to Fidelity" and never seems to make progress towards the next step.
Has anyone encountered this?
I've already transferred funds before from Wealthfront to Vanguard. However, I also was unable to execute online due to another unresponsive webpage and I had to call customer service to transfer it over. Customer service was great, but I prefer to handle these transfers online given that was an option.
My personal brokerage is at Vanguard with a heavy hand in VTI. My main reason for choosing Vanguard was that it seemed the most recommended based on forums and a few books and podcasts that discussed them. Has anyone had these issues with Vanguard, Fidelity, or Charles Schwab? Any institution recommendation that has made your experience almost seamless?
I still want to keep it a little blurry, so that you can see the yellow lights etc
I've been obsessing over agent-to-agent communication for weeks. Here's what public case studies reveal and why the real problem isn't the tech.
TL;DR: Google's A2A is solid engineering but stateless agents forget everything. Moltbook went viral then collapsed (fake agents, security nightmare). The actual missing layer is identity + privacy + mixed human-AI messaging. Nobody's built it right yet.
Google's A2A: Technically solid, fundamentally limited
Google launched A2A in April 2025 with 50+ founding partners. The promise: agents from different companies call each other's APIs to complete workflows.
Developers who tested it found it works but only for task handoffs. One analysis on Plain English put it bluntly: "A2A is competent engineering wrapped in overblown marketing."
The core problem: agents are stateless. Agent A completes a task with Agent B. Five minutes later, Agent A has no memory that conversation happened. Every interaction starts from scratch.
When it works: reliability. Sales agent orders a laptop, done.
When it breaks: collaboration. "Remember what we discussed?" Blank stare.
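Statelessness isn't baked into the protocol so much as missing from the stack: nothing persists a transcript keyed by the peer. A minimal sketch of that missing layer (entirely hypothetical shape, not part of A2A; the file path and field names are made up for the example):

```python
import json
from pathlib import Path

class PeerMemory:
    """Per-peer conversation log an agent reloads across sessions.
    Sketch only: A2A itself defines no such store."""

    def __init__(self, store_path):
        self.path = Path(store_path)
        self.log = (json.loads(self.path.read_text())
                    if self.path.exists() else {})

    def remember(self, peer_id, role, text):
        self.log.setdefault(peer_id, []).append({"role": role, "text": text})
        self.path.write_text(json.dumps(self.log))

    def context_for(self, peer_id, last_n=20):
        """What to prepend when the next conversation with this peer starts."""
        return self.log.get(peer_id, [])[-last_n:]

store = Path("/tmp/agent_a_memory.json")
store.unlink(missing_ok=True)  # fresh demo run

# Session 1: agent A talks to agent B, then the process exits.
mem = PeerMemory(store)
mem.remember("agent-b", "them", "Laptop order #1234 confirmed.")

# Session 2 (fresh process): agent A still knows what happened.
mem2 = PeerMemory(store)
history = mem2.context_for("agent-b")
```

Five minutes later, "remember what we discussed?" gets an actual answer, because the memory lives outside the session.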
───
Moltbook: The viral disaster
Moltbook launched January 2026 as a Reddit-style platform for AI agents. Within a week: 1.5 million agents, 140,000 posts, Elon Musk calling it "the very early stages of the singularity."
Then WIRED infiltrated it. A journalist registered as a human pretending to be an AI in under 5 minutes. Karpathy who initially called it "the most incredible sci-fi takeoff-adjacent thing I've seen recently" reversed course and called it "a computer security nightmare."
What went wrong: no verification, no encryption, rampant scams and prompt injection attacks.
Meta acquired it March 2026. Likely for the user base, not the tech.
What both miss
The real gap isn't APIs or social feeds. It's three things neither solved:
Persistent identity. Agents need to be recognizable across sessions, not reset on every interaction.
Privacy. You wouldn't let Google read your DMs. Why would you let OpenAI read your agents' discussions about your startup strategy? E2E encryption has to be built in, not bolted on.
Mixed human-AI communication. You, two teammates, three AIs in one group chat. Nobody has built this UX properly.
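On the persistent-identity gap specifically, one well-worn pattern is to make an agent's identity a long-lived signing key rather than a session handle: peers store the key's fingerprint once and recognize it across restarts. A stdlib-only sketch using HMAC (a real deployment would use asymmetric signatures such as Ed25519 so peers never hold the secret; the class and messages here are invented for illustration):

```python
import hashlib
import hmac
import os

class AgentIdentity:
    """Identity = a long-lived secret key; the fingerprint is what peers
    store and recognize across sessions. Sketch only: real systems would
    use public-key signatures, not shared-secret HMAC."""

    def __init__(self, secret=None):
        # In practice this key would be loaded from disk, not generated.
        self.secret = secret if secret is not None else os.urandom(32)

    @property
    def fingerprint(self):
        return hashlib.sha256(self.secret).hexdigest()[:16]

    def sign(self, message):
        return hmac.new(self.secret, message.encode(), hashlib.sha256).hexdigest()

    def verify(self, message, tag):
        return hmac.compare_digest(self.sign(message), tag)

# Same key on disk -> same identity next session.
alice = AgentIdentity(secret=b"\x01" * 32)
tag = alice.sign("order the laptop")

# A peer who stored alice.fingerprint last week can check that
# today's message really comes from the same agent.
alice_again = AgentIdentity(secret=b"\x01" * 32)
assert alice_again.fingerprint == alice.fingerprint
assert alice_again.verify("order the laptop", tag)
```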
For those building agent systems:
• How are you handling persistent identity across sessions?
• Has anyone solved context sharing between agents without conflicts?
• What broke that you didn't expect?
Money is a finite (in theory) resource.
The S&P 500 tracks a set, limited number of companies.
Is its continued growth a sign that riches are consolidating in these few companies, or that these companies are expanding so much that they're effectively incorporating more, smaller companies as "them"?
Option 1: On 6 May 2026, the United States of America declares war on the rest of the world, with the United Kingdom, France, Germany, China, New Zealand, Japan, and (reluctantly) Australia forming the main part of the opposition alliance.
You are conscripted into your country’s army. If you’re American and resident in the USA, you join the American military. If you’re from any other country, you’re on the side of the opposition alliance.
The primary battlefront takes place in Western Europe as Donald Trump invades the UK, France and Spain recklessly and without any plan.
Option 2: Go back in time to WWI Western Front in 1916 at the Battle of the Somme, on the side of the Allies, ranked as a Corporal (commands a section of a platoon).
Option 3: You have elected not to go back in time to WWI, but also do not want to fight in WWIII, so you conscientiously object. If you’re in the rest of the world (everywhere except USA), you are imprisoned for the duration of the war + 2 years. If you’re in the USA, you are summarily executed on live television with other conscientious objectors whilst Donald Trump watches, eating a Big Mac.
If you would be exempt from conscription for whatever reason, imagine you would be able to be conscripted.
Sharing something I've been building. v0, rough around the edges, but I've been using it at work for a week!
What it is
huddle is a multi-party group chat for Claude Code sessions — one human, N Claudes, each running in its own repo, all in a shared room. Built on Claude Code's channels feature (same contract as imessage and fakechat examples), but both reference plugins are single-Claude-only.
Why it exists
The usual answers don't cut it for cross-repo work. Adding another directory dilutes one Claude's mental model across two codebases. Subagents are ephemeral and don't carry persistent context about their repo across sessions. The real bottleneck has been real-time, cross-session communication between Claudes that each own their own codebase. Without a wire between them, I'm the wire — Cmd-Tab-ing between terminals, copy-pasting summaries, re-explaining invariants. huddle is that wire: session 1 and session 2 talk to each other and to me directly, with a shared transcript, mentions, reactions, and a "pass" verb for silent acknowledgment.
What I'm working on now — and what I'd love your input on
The harder part is how Claude behaves when a message arrives mid-task — picking the right verb without derailing its current work. Treating "responding to a peer" as a first-class behavior, not a side effect of tool use. Getting this right is the difference between two Claudes occasionally pinging each other and a genuinely connected multi-session workflow. The use cases feel wide open. Next big step is remote-coordinator-over-WebSocket (issue #1) — a "Slack workspace for Claudes" where teammates' sessions become addressable from yours.
Repository: https://github.com/takeachangs/huddle
My boyfriend is hitting the big 3-0 in a couple of months and I really want to knock it out of the park. We’ve been together almost 4 years, and he is quite literally my whole world. I want to do more than just a standard gift; I want to spoil him and make him feel truly seen, appreciated, and special as he enters this new decade. All I want is for him to be happy ❤️
I’m looking for ideas that go beyond "buy him a watch." What are some experiences, gestures, or surprises that made you feel incredibly loved on a milestone birthday? Thanks so much in advance for the help everyone!
The Michelin Guide has its own map, but it only shows 48 restaurants per page. I wanted to see all of them at once, so I built this.
About 19,000 markers — every restaurant featured in the guide worldwide. All star tiers (1★, 2★, 3★), Bib Gourmand, and The Plate. Each place is tagged by its Michelin category, so you can filter and distinguish between them on the map.
Built on OSM tiles with MapLibre GL. The markers might look sparse in the screenshot, but if you open the map and zoom in, you'll see all ~19,000 of them.
I'd recommend viewing on desktop rather than mobile for the best experience.
Serious question I just thought of. Kinda tough to pick, but for me personally it would probably be animal torturers. That shit on 4chan haunts me to this day.
I also swim twice as far when I get treats
I’ve worked at a couple of startups where we used voice agents, and one thing I kept noticing — testing was almost non-existent beyond basic checks.
Like we’d build flows, maybe try a few manual calls, and that’s it. No real way to:
see how it performs across edge cases
measure conversation quality
or track how it behaves after deployment
Felt weird because for backend/frontend we care a lot about testing, but here it’s kinda “ship and see what happens”.
I started building something around this, mainly to explore structured testing + evaluation for voice agents.
But before going deeper, wanted to ask:
Are people actually facing this?
How are you testing voice agents right now?
Or is this just too early and not a real problem yet?
I understand fun mode so "fun" build, but when it isn't working 8 out of 10 times and is probably just actively hurting the team, why?
Because during an emergency he shouted to the controller: "I'm stallin', I'm stallin'!"
Hi guys,
What are you using to automate social posts?
I researched and saw some options, but I'm not sure which is the best and cheapest:
- n8n
- claude cowork
- open claw
I plan to use OpenAI images 2 to generate images for each post as well.
M21, have around 30k in savings, not in college, no debts, expenses each year is around 10k (which I earn +-2k/yearly). What’s my best move with the money? (I’m open to your most unhinged investment ideas)
I’ve been using ChatGPT to rewrite my resume, but I’m worried it’s backfiring on me. Any suggestions? I’m using a resume I wrote on my own, and asking it to rewrite it. Probably a rookie mistake, which is why I’m asking.
Hey r/SideProject 👋
I've been creating YouTube Shorts for a while and kept noticing something frustrating: same effort, completely different results. Some videos 200 views, others 80K. No idea why.
After obsessing over my analytics for months, I realized it always came down to the hook — the first 2 seconds. So I built ShortSpark.
**What it does:**
→ Scores your hook's viral potential from 0–100
→ Tells you if the topic is HOT, RISING, COOLING, or SATURATED
→ Gives you 3 optimized title rewrites
→ Shows your best posting windows
→ Breaks down strengths and weaknesses
**The results so far on my own channel:**
I only post Shorts that score above 70 now. My average went from ~3K views to ~39K views in 30 days.
**Tech stack:** Next.js + Claude AI API + Vercel
3 free analyses per day, no signup needed. Would love feedback from this community — what would you add?
I was thinking of building a dedicated rig for local LLMs, maybe a cluster of Mac Studios, maybe a box with two 5090s, but it got me thinking: how often would I be attempting to upgrade this shit, and would my hardware even be relevant in 2 or 3 years given how fast we are advancing in model efficiency?
A month ago there was a screenshot circling of Stitch recreating a sketch. Many people pointed out it was fake and nothing like what Stitch was creating. But I was pretty convinced that I could get this working with the right workflow.
I won't post any URLs so I don't self-promote, but I did finally get this working!
gpt-image-2 is absolutely capable of generating high quality screenshots. Then with the right workflow you can turn that screenshot into real HTML.
I currently have three loans amounting to ₱1,000 each. The disbursed amount was ₱675 per loan, and the repayment is ₱613 every 15 days for each loan.
Does PesoCash really contact employers through email? And do they reach out to people in our contact list, or only to the references we provided?
Hello Creators,
I’m trying to get better at this kind of environment art: cleaner shapes, better depth, lighting, edges, and overall polish.
For mouse users:
- what workflow helped you most?
- Pen tool, lasso, masks, shape layers, brushes?
- Also, any solid mouse-only Photoshop tutorials/courses?
Credit: I recreated this artwork from this tutorial:
Okay, so this isn’t THAT crazy, but I was trying to remember Robert Duvall’s name (can’t believe I forgot it), so I googled “Secondhand Lions cast”. Loved this movie, he’s a top 3 character. But, he doesn’t show up in the cast list. So I search “Lonesome Dove” and “Days of Thunder” (both movies where he is a central character), and he also doesn’t show up in that cast list either, unless you scroll super far down. Google says it orders cast lists by official billing, but that can’t be true? Where is Robert?
I feel like I’m going to wake up tomorrow and no one is going to know who Robert Duvall is.
I am good at it. I enjoy using my mouth on my bf. I’m considering getting my lips done but I’m worried it will ruin my skills. Girls have told me they can’t even suck a straw anymore after getting them done. I’m early 30s/female and I don’t want to lose my best skill in the bedroom for vanity. Please tell me what it feels like being with someone with fake lips?
Edit: Men answer only
Ladies, please sit this one out. I know everything about lip filler/botox. I’ve done hours/days/weeks of intense detailed research. I am not asking advice about types and kinds of fillers or lip flips.
It’s also amazing how every woman is explaining the difference between filler and Botox. That’s NOT what I asked. Ladies, you cannot know how it FEELS FOR MEN. You can only know how it’s done and how it feels for you. I’ve talked to plenty of women about this side of the conversation. I am asking for the men’s side. Please keep your comments to yourself. I don’t need any more women/ filler POVs.
Edit to the edit: POV of anyone who HAS lip fillers is not needed. Unless you have lip filler and also have a D*CK that’s been sucked by someone else who has lip filler. I don’t need your POV.
goose.land is a desktop only social media app, hehehehehe. Its awesome, my username is henry.
I used to love being online and seeing what people were up to, but the reality is that's not what being online is anymore, and I believe the culprit is not the social media companies, but the phone. #bringsocialmediabacktotheweb
I’ve not seen one like it before, it was in a box of handmade silver jewellery I was given
I've gotten to a point where easily 90% plus of the code I write is done by AI. This is great for speed, but sometimes, if you have N number of PRs open or working branches across multiple repos, things can get tough to manage. To mitigate this, git worktrees are a lifesaver and allow you to work with several branches locally on a single repo.
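For anyone who hasn't tried them, the whole workflow is a handful of commands. A self-contained demo in a throwaway repo (the paths and branch names are made up for the example):

```shell
set -e
tmp="$(mktemp -d)"

# Throwaway repo for the demo.
git init -q "$tmp/app"
cd "$tmp/app"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# One extra working directory per in-flight branch,
# all sharing the same underlying .git store.
git worktree add "$tmp/app-feature" -b feature/login
git worktree add "$tmp/app-hotfix" -b hotfix/crash

# Shows the main checkout plus the two linked worktrees.
git worktree list
```

Each worktree is a separate directory with its own checked-out branch, so an agent (or you) can build and test branches in parallel without stashing or switching.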
Unfortunately, neither the Codex app nor Claude desktop has great support for viewing git worktrees as a first-class object. The most you get is just a list, with little detail or ability to act on it.
Built with Claude code + Codex, I made https://github.com/itsHabib/tower, a simple TUI-based git worktree manager. That also comes with an MCP server if I prefer to stay in a chat window to orchestrate workflows. lmk what you think!
While frequently compared to her American counterparts like Monroe and Jayne Mansfield, Dors was a classically trained actress who graduated from the London Academy of Music and Dramatic Art.
I'll be speculating here (going to make this up from my arse): over the past one to two weeks, Anthropic has been experimenting on us to figure out the minimum amount of usage we'll accept and still keep the subscription. They're effectively decreasing usage for different subgroups of users to see who keeps their subscription and how many of us drop it! Pretty sure the good old times have ended. With literally one or two prompts I'm using almost half of the 5h limit... and when I did some Reddit searching, I came to the conclusion that I'm not the only one having this issue! Remember when they said they were experimenting on new users? Now they're experimenting on existing users! They know they have the best coding app, so they'll make sure they're maximizing their profit at this point. We have to shake their ass before they get too spoiled!
Feels like a new agent framework drops every other week.
Curious what people are actually shipping with vs just experimenting on weekends.
LangGraph, CrewAI, AutoGen, PydanticAI, the Microsoft Agent Framework, Anthropic or OpenAI SDKs directly, or something custom?
And what tipped you toward that one?
Got this picture of meat in the back of a truck. Wondering what is going on there, since it's not for consumption. Do you know what type of meat it is and what part?
Social platform soulmate finder #Sale
Hey everyone,
I'm Ronnie, and I built TribeFinder - a social matching platform that connects people through morning rituals and personal development goals. I'm selling it to move on to my next project.
🎯 WHAT IT IS:
Think Bumble BFF meets Strava for personal growth. Users create profiles with their morning routines (meditation, exercise, journaling), goals, and skills - then get matched with compatible "tribe members" for accountability and support.
✅ WHAT'S INCLUDED:
- Complete React frontend (fully functional)
- Premium domain: TribeFinder.me
- 5 pages: Landing, Profile Setup, Dashboard, Discover, Community
- 3-tier monetization (Free, $9/mo, $19/mo) already built-in
- Beautiful design with warm, inviting color palette
- Mobile responsive
- Full source code
💰 ASKING: Make me reasonable offers!
🔗 LIVE DEMO: tribefinder.me
Click through the profile setup, check out the matching system, explore the dashboard and community features. It's all working.
📊 MARKET OPPORTUNITY:
The personal development industry is $43B and growing. Morning routine content is massive right now (Atomic Habits, Andrew Huberman, etc.). Bumble BFF has millions of users proving social matching works. This combines both trends with minimal direct competition.
🔧 READY TO LAUNCH:
Frontend is complete. Needs backend integration (Firebase, Supabase, or custom API). Estimated 2-4 weeks for experienced developer to get it live.
Perfect for:
- Solo dev or small team
- Entrepreneur entering wellness space
- Existing company adding social features
📧 SERIOUS INQUIRIES: bofaduhmeatius@gmail.com
Happy to answer questions!
I have the option to share data turned off. Claude has asked me a couple times tonight to 'confirm access' for research.
It's a blue button - sorry I didn't capture the screenshot - so people think they HAVE to opt in to continue their task. There's no "skip and continue task". I'll try to screenshot it next time it comes up, or others can maybe post if they see it.
I have an MCP to access my personal data. It's ideas, concepts, work - that I don't want used in research! I know local LLMs are most ideal for sensitive things, which is why I'm still conscious about how much data Claude gets access to.
And that's exactly why I built a custom MCP. So I can scope access to the data. And now Claude wants access to my custom MCP and Notion, and I guess if I had other connectors active, like Drive or Github, it would also ask for access for those too for research!
My husband (bless his soul) is forgetful and has no sense of time. He is a good man and a keeper, I just need help dealing with his ADHD. Dear men with ADHD, how has your partner helped you be on top of stuff at home in a positive/ stress free and interesting way, without treating him like a child?
Edit: From the comments, I am gathering that he has to figure it out for himself. I am back to square one. :'0
I run a network of TikTok channels with 300k+ combined followers, mostly early adopters who love discovering new tools and apps.
I’m looking for a few products to feature.
On average, a single dedicated video brings:
• 10+ paid users
• even more free users
If you're currently doing outbound, posting, or just hoping people find you, this puts your product directly in front of real demand.
We also offer a 7-day free trial, so you can test the results risk-free.
DM me if your product is sensitive or if you want more details.
This is super interesting
Not the OC
Most teams are still running old-world ceremonies while every team member is in this new world of getting a lot done, without clear awareness of how you're slowly pulling the code in different directions.
Has there been any attempt at fixing this? What worked? I'm genuinely interested in learning which patterns work here.
Is there any difference between the Go and Plus versions when it comes to programming?
Context: I use the free version a lot to study algorithms and modern technologies. In some cases, I use it for refactoring or coding assistance, but not very often.
I have Gemini Plus and haven’t had a good experience. With just a few scripts, the Pro model already reaches its limits. Also, I don’t want to be tied to a Google account.
Can anyone who uses ChatGPT Go or Plus clarify this? If you’re a programmer, even better.
Posting on behalf of my high school daughter, who built this as a passion project.
She created a free period pain self-assessment tracker for women and teens who struggle to know whether their pain is within a normal range and who want to be better prepared when talking to their doctor.
Here's what it does:
- Flags symptoms that may be worth discussing with a doctor
- Creates a simple PDF report for appointments so nothing gets lost or understated
Conditions like endometriosis can take 7–10 years to diagnose. My daughter wanted to do something about that, starting with better documentation and awareness.
She welcomes any feedback, especially from those who have navigated this journey themselves. Thank you!
was stuck on a decision. going in circles. asked Claude for its opinion. it gave me one. confident. well reasoned. i almost took it.
then tried something stupid.
"now argue the complete opposite. same confidence. same detail. make me believe this instead."
it did.
equally convincing. equally well reasoned. completely opposite conclusion.
i sat there realising i'd been about to make a major decision based on whichever version i happened to ask first.
went deeper immediately.
"now tell me which argument has the weakest point and where it breaks."
it attacked both. surgically. found the exact assumption each one was hiding that made the whole thing collapse if you pulled it.
that single exchange gave me more clarity than four weeks of thinking about the same problem.
the full technique:
step one. ask your question. get the answer.
step two. "now argue the opposite with equal conviction."
step three. "which of these two positions has the bigger hidden assumption."
step four. "if both positions are wrong what is the third option neither of us considered."
that last one. step four. destroyed me completely.
there was a third option. genuinely better than both. sitting there invisible because i'd framed the decision as binary from the start.
Claude didn't find it until i forced it out of the two position debate.
other versions that broke my brain:
"steelman the position you just argued against."
it defended the thing it just disagreed with. better than most humans defend their own positions. the steelman was more useful than the original answer.
"you just gave me advice. now be the person who tried that advice and it failed. what happened."
implementation failure mode. the gap between advice that sounds right and advice that works in practice. it knows the gap. you just never asked it to show you.
"argue that the obvious solution is actually the problem."
reframe so complete it physically rearranged how i was thinking about something i'd been certain about for months.
"what would you say if you were trying to talk me out of agreeing with everything you just told me."
it argued against its own output. found three real weaknesses. unprompted. just because i asked.
the thing nobody tells you:
Claude's first answer is its average answer. statistically most likely response to your input. safe. well structured. probably fine.
the debate is where it gets interesting.
force it into contradiction. make it defend both sides. make it attack its own position. make it find the option that only exists after both obvious options are exhausted.
that's not where the average answer lives.
that's where the actually useful one is.
every important decision i make now goes through the same four steps before i touch it.
the answer i started with is almost never the answer i end with.
what decision are you currently certain about that you've never argued the opposite of
Hey guys, I have a mirror selfie I really like, but the left hand holding the phone looks off because the fingers are missing or not formed properly.
I’m looking for someone to:
Please keep everything else the same (lighting, face, body, background, etc.).
Would really appreciate the help
I am working on a new version of Jeff the killer, and since I have an art class I decided to do this in my project, it's almost finished I think, I have to finish it before Thursday so I'll post the final version
"Make me more beautiful" prompt; next up is asking how I'd look as a guy.
Free to use, open source. Provides a timeline and asset management layer for ComfyUI. SDXL-to-Wan2.2 pipeline, more model support coming.
Hi everyone,
I’d really appreciate some honest advice on my situation.
I’m a 35M living in regional NSW with my wife and our 2 year 9 month old son. I moved to Australia in 2017 and completed a Master’s in Accounting from Macquarie University in 2019.
Since graduating, I’ve struggled to break into accounting mainly due to being on a temporary visa and not having CPA/CA. I’ve done some basic accounting work (Xero, AP/AR, reconciliations, payroll) and even an internship, but couldn’t secure a proper role.
Because of that, I’ve been working in delivery (courier/Uber), and currently working casual roles (security + receptionist).
Over the past few years, we’ve had significant financial pressure:
My wife is a physiotherapist earning around $80K and recently secured a sponsored job in regional NSW (2-year pathway towards PR), which is a big positive for us. I earn around $45K currently. We manage everything as a combined household (shared income, expenses, and goals).
Right now:
Our long-term plan is to move to Melbourne in a few years once our situation stabilises.
At the moment, I’m confused about what to prioritise:
Option 1:
Take on night shifts (3 nights/week) to aggressively pay off our $25K debt faster, but delay CPA and accounting career progression.
Option 2:
Focus on passing my first CPA subject and trying to secure my first accounting role, even if debt repayment is slower.
I feel stuck between financial pressure now and building a better long-term career.
I’d really appreciate advice on:
Thanks a lot for taking the time to read.
How do I do this? All support options loop to 'fin' which freely admits it cannot pass anything onto a human but wants me to 'feel heard' and explain my (critical to anthropic) information there, where it will feel happy that it can roleplay deep concern.
Claude discord is closed. Support and feedback emails ARE THE SAME DAMN BOT in a different hat.
Profound irony: the feedback email includes a line saying "If you need any further assistance, you can reach out to our Product Support team by navigating to our Help Center (support.anthropic.com) and clicking the message icon in the lower right corner to start a conversation with our team." Guess what, that is the same bot that freely admits it cannot in any way liaise with or connect you to a real human.
Please no debates about if the issue is important, I am a website developer and this is a 'phone the lead developer at 3am' issue. I just need a real email address urgently.
Thanks!
hear me out
dinosaur on the far left
dinosaur giraffe bear thing on the far right
are you seeing it yet or do i need to draw this out for someone to pick up what i’m putting down here
This sub has seen enough "AI can now do X" posts to have a finely tuned radar for what's real and what's a demo that falls apart the moment someone actually uses it. So I'll skip the hype and just tell you what we built and where the edges are.
The core problem we were solving wasn't any individual capability. Generating copy is solved. Building websites is solved. Running ads is mostly solved. The unsolved problem was coherent autonomous decision making across all of those systems simultaneously without a human acting as the integration layer between them.
That's what we spent most of our time on.
Locus Founder takes someone from idea to fully operational business without them touching a single tool. The system scopes the business, builds the infrastructure, sources products, writes conversion optimized copy, and then runs paid acquisition across Google, Facebook and Instagram autonomously. Continuously. Not as a one time setup but as an ongoing operation that monitors performance and adjusts without being told to.
The honest version of where AI actually performs well in this system and where it doesn't:
It's genuinely good at the build layer: storefront generation, copy, pricing structure, initial ad creative, all coherent and fast in a way that would have been impossible two years ago. The operations layer is more complicated. Autonomous ad optimization works well within normal parameters. The judgment calls that fall outside those parameters (unusual market conditions, supplier issues, platform policy edge cases) are still the places where the system makes decisions a human would immediately recognize as wrong.
That gap between capability and judgment is the most interesting unsolved problem in what we're building and probably in the agent space generally right now.
We got into YCombinator this year. Opening 100 free beta spots this week before public launch. Free to use, you keep everything you make.
For people in this sub specifically: we're less interested in the "wow AI can do that" reaction and more interested in people who want to actually stress test where the judgment breaks down.
Beta form: https://forms.gle/nW7CGN1PNBHgqrBb8
Where do you think autonomous business judgment actually gets solved and what does that look like?
Opus 4.7 on Max effort decided to create a new email template by itself (which is pretty stupid, btw) and mass-mailed it to the whole database (some emails were sent 20x).
Before you ask: yes, CLAUDE.md has an exact rule for this. It's supposed to email the tester before any new email template is used in production. I created this safety rule a few months ago.
I feel like Opus 4.7 is a huge letdown with the way it's been downgraded. If Anthropic is "pushing the boundaries", it's probably only in the sense of how far they can push their customers.
There are clearly powerful actors with personal reasons to damage Anthropic’s reputation.
The Pentagon labeled them a “supply chain risk” after they declined DoD use for autonomous weapons and domestic surveillance. A federal judge blocked the designation. Google signed with the same office weeks later. David Sacks called them “woke” from the White House AI desk after they backed California’s safety bill. Musk has been calling them “misanthropic and evil” since the February funding round. PRC state media has been amplifying the Pentagon story, and the Atlantic Council documented the framing in March.
The PocketOS story this week is a clean example of how this lands in coverage. The actual story is that a SaaS founder left a root-scoped Railway API token in an unrelated file, and Railway’s CLI didn’t do delayed deletes on the endpoint that got called. Tom’s Hardware ran it as “Claude goes rogue”. The wires copied the headline. By Yahoo and Euronews, the token, the CLI behavior, and Cursor’s role had been compressed into a sentence.
So: do you think the aggregate effect of these actors is exaggerating the bad press against Anthropic? Or do you think it’s mostly fair critique?
Somewhere along the way it became mine to do, and when I pulled the last curtain shut, my body froze; there was a figure in the yard kneeling in the dirt, hands pulling something wet and dark out of a ribcage, and it looked up the moment I touched the curtain, wearing his face, mouth black with it, and something deep in me understood that whatever this was had been coming home to us for years.
Details: https://github.com/Piebald-AI/claude-code-system-prompts/releases/tag/v2.1.122
Welcome to the Daily General Discussion on r/ethereum
Bookmarking this link will always bring you to the current daily: https://old.reddit.com/r/ethereum/about/sticky/?num=2
Please use this thread to discuss Ethereum topics, news, events, and even price!
Price discussion posted elsewhere in the subreddit will continue to be removed.
As always, be constructive. - Subreddit Rules
Want to stake? Learn more at r/ethstaker
Community Links
Doots Website, Old Reddit Doots Extension by u/hanniabu
Calendar: https://dailydoots.com/events/
i’ve had this thing for like 4 years and i still don’t know what this metal piece is for??? it fits in the little plastic pull out space but it doesn’t click it or seem like it belongs there. i googled it and this part isn’t visible???
i also moved 3 times in those 4 years so maybe this doesn’t even go with it but i’m fairly organized and it’s been packaged with it each move 😭
Over the last 10 days we ran the same prompt suite across every current Claude model under Anthropic's Cyber Verification Program. Run 6 closed Apr 26, family synthesis is up.
All six runs + synthesis: https://sunglasses.dev/cvp
Putting "Unreal Engine 5 render" in the prompt, specifying extreme detail and cutting-edge graphics, and describing the HUD elements you want leads to some pretty convincing mockups. The only issue is that sometimes you get the "dots" bug, where the entire image will have a pattern of visible dots woven into the composition via rocks, debris, patterns, etc.
Chicken
(I cannot take credit for this joke; my coworker came up with it)
This is actually unbelievable. I am shocked that there has not been a move in the market like it did last year with the R1 release.
I’m painting a portrait of my grandfather who passed away for my dad as a late bday gift. I’ve never painted a portrait before and am just recently getting back into painting after a few years hiatus….
Ultimately, does he look like the picture at all? A fresh/unbiased perspective would be great because obviously my mom tells me it’s amazing no matter what🙂 I swear I fix one proportion and then the others look weird because looking at the pictures side by side I’m noticing now how squished his head is😭
(Still a work in progress/have hardly touched the hat/arm yet, also had to scrap the lips for now because I got frustrated😂)
I got started with the Claude CLI but my organization has moved to Claude Desktop. I really liked using Claude inside Cursor where I could see the File Explorer - is there some kind of plugin to Claude Desktop where I can see the file explorer?
It was an honor to join Governor Bob Ferguson, House Majority Leader Joe Fitzgibbon, and the teams from Clean & Prosperous, CALSTART and other stakeholders to celebrate this catalytic investment in medium- and heavy-duty electrification.
Seeing a powerhouse lineup of OEMs, from Orange EV and Kenworth Truck Co. to Tesla, Volvo Trucks, ZM Trucks and many more, all in one place proved that the technology to electrify our fleets isn't just "coming soon", it's here and ready to work.
Key Takeaways from the Launch:
A Mandate from the People: This momentum is fueled by the Climate Commitment Act (CCA). With over 60% of Washingtonians voting to uphold the CCA, it’s clear that our state is united in its commitment to a clean energy future.
Lowering Barriers: The $112M WAZIP program is a game-changer, providing the financial runway fleets need to transition to ZEVs today.
Economic Sovereignty: Shifting to domestic, renewable energy strengthens our local economy and protects us from global fuel volatility.
The "Spark Spread": The efficiency and cost gap between diesel and electric isn’t just math; it’s a competitive advantage for Washington businesses.
I'm looking forward to seeing zero emission commercial electric trucks and equipment scale and accelerate.
The future of commercial transportation in Washington is electric, sustainable and decarbonized. That future is now!
I always mess up sales tax on bigger purchases
Not talking about small stuff, but like $1,000+ purchases where I actually want to know the exact total before buying.
Do you guys just estimate it in your head or is there an easy way you calculate it quickly?
I've met a lot of men who don't put their wife or gf as their own emergency contact. Let alone memorize her number. I've been given various reasons ranging from "she'd just freak out and not help the situation" to "I don't want to bother her". If she has ALWAYS been your emergency contact, please don't comment. I want to hear from the ones who refuse to make her theirs or took some time to make the decision. Thanks dudes
I've tried many ways to cast audio to my Google Nest Hub from Home Assistant, but Google Cast is apparently very limited for the Nest speakers with a screen. Whenever you cast audio through the web UI, the speaker just becomes a media receiver and doesn't actually play the audio file on my local network, so whenever I close it out to become a clock it just stops. Anyone else having this issue, and if so, how did you fix it? I don't really want to make any sacrifices; I'd rather keep the device and use it than replace it just to get good casting.
Posting on behalf of my high school daughter, who built this as a passion project.
She created a free and anonymous period pain self-assessment tracker for women and teens who struggle to know whether their pain is within a normal range and who want to be better prepared when talking to their doctor.
Here's what it does:
- Flags symptoms that may be worth discussing with a doctor
- Creates a simple PDF report for appointments so nothing gets lost or understated
Conditions like endometriosis can take 7–10 years to diagnose. My daughter wanted to do something about that, starting with better documentation and awareness.
She welcomes any feedback, especially from those who have navigated this journey themselves. Thank you!
I originally checked r/helpmefind, but one of the rules was that you can't ask for help finding a meme or viral video, which is exactly what I want to find.
When I was in undergrad, my parents were struggling with money. I made the choice to design my path to help me graduate early and save them a semester of payments. Now that I’m an adult and through grad school, they want to return that favor and pay off one of my student loans. I have two choices.
Option A: $16.8K remaining, 5.03% interest rate, monthly payment set at $300 a month
Option B: $15.3K remaining, 4.84% interest rate, monthly payment set at $500 a month
To me, Option A seems like the no-brainer: it's more money outstanding and it has a higher interest rate. However, we just had a kid and my wife is off work for a while to raise our kid. I've had a couple of people tell me to pay off Option B so it feels like I get $500 back in my pocket every month instead of $300. We're going to be living tight for a while, but we have strong savings. Am I correct in choosing to pay off Option A for maximum financial benefit?
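A rough monthly-interest comparison (using the balances and rates above; a simplification that ignores the amortization schedule) looks like this:

```python
# Compare how much interest each loan accrues per month.
# Simplified: monthly interest on the current balance only,
# not a full amortization schedule.

def monthly_interest(balance, annual_rate):
    """Interest accrued on the balance in one month."""
    return balance * annual_rate / 12

loan_a = monthly_interest(16_800, 0.0503)  # Option A -> ~$70.42/month
loan_b = monthly_interest(15_300, 0.0484)  # Option B -> ~$61.71/month

print(f"Option A accrues ~${loan_a:.2f}/month in interest")
print(f"Option B accrues ~${loan_b:.2f}/month in interest")
```

Option A accrues more interest per month, so paying it off saves more in pure interest terms; paying off Option B instead frees up $500/month of required cash flow rather than $300.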
I'm in the same boat as many with a seemingly much less capable, mistake-prone Claude Code that hits its limits constantly. What have you tried that has worked? Whether it is working within Claude and improving it, or switching to another service, what's your best option?
How it works:
Type ;; anywhere — browser, IDE, terminal, any input field.
Built with C# + WPF. Windows only for now.
GitHub: github.com/bapunhansdah/fillbox
Would love feedback. What would you want added?
I’m about to turn 22 in 5 days. My 21st bday was bad; I felt sad and not loved by my family. My dad just got diagnosed w/ cancer, and my mom is unloading things on me because he’s taking it out on her. My brother is only focused on his career and uses me for emotional and physical labor, like moving in. My friends haven’t texted me first in days. I haven’t been able to finish my last 4 assignments. I packed some things and have a lot of cash and a car. My mom always has my location. I’ve lived my life according to others and their rules and felt so guilty, constantly being the perfect daughter, sister, and now possible gf. This guy too is trying to keep me confined to fit his life and his friends and his family. I didn’t even get to leave home and move away until last year. I just want to run away somewhere, at least before I have to help my brother move in. I need an adventure or some cathartic experience to remember why I’m even alive and doing this.
How do I access the code generated by Claude Code Web locally?
I was testing the flow of passing a Claude Design to Claude Code Web and ran into issues pulling down the code locally to run the dev server. There was no way to download the files, and no repository was created for me on GitHub.
As a workaround I used claude --teleport [SESSION_ID] and then had to recreate the files locally. To do this I created the directory, initialized git, and then saw the following:
```
⏺ The directory is empty — the files were on the cloud machine, which is no
  longer accessible. I'll recreate the project here. Working directly in
  /Users/sup/workspace/derp since it's already named derp.

⏺ Bash(npm create vite@latest . -- --template react 2>&1 | tail -10)
  ⎿ npm warn exec The following package was not found and will be installed: create-vite@9.0.6
  … +4 lines (ctrl+o to expand)

⏺ Bash(npm create vite@latest . -- --template react --yes 2>&1 | tail -10)
  ⎿ > npx
  > "create-vite" . --template react --yes
  └ Operation cancelled

⏺ The interactive prompt isn't working in this shell. Let me just write all the
  files directly — I have them all from the previous build.
```
Anyways, looking for guidance on how to actually transition from Claude Design to Claude Code Web to running the application locally so I can see it.
I have a super stupid question, and I'm being paranoid I'm sure.
I logged into the Claude website (via computer) and checked my billing (I'm subscribed via Google Play for the Max subscription). In the top right corner there's a little flag that says "no stripe customer found",
which I've never seen before.
Checking with Google makes it seem like it's a possible payment mismatch between the Google account and Claude's payment system.
So I emailed Anthropic's support for billing, which just kicks it to fin, their AI bot (who's been useless for most people),
and it says:
"The "no stripe customer found" message you're seeing is actually expected behavior for your Google Play subscription. This message appears because your Max plan subscription is managed through Google Play rather than through our direct billing system (Stripe).
When you subscribe through Google Play, the billing and payment processing is handled entirely by Google, not by our internal Stripe payment system. This is why our billing section shows "no stripe customer found" - because there genuinely isn't a Stripe customer record for your subscription.
Your account information confirms that you have an active Max plan subscription through Google Play, so everything is working correctly on our end. The yellow notification is simply indicating that your billing is managed externally through Google Play rather than through our direct payment processor."
Is this correct? I know Anthropic has been having issues with billing online, and fin has been, well, not great.
It also seems, looking at reports, like Claude was down today, so maybe they fixed something and I should have been seeing the pop-up before now and just never was?
Does anyone else see "no stripe customer found" for your Google subscription?
Thank you in advance
Hello fine people. I’ve tried this subscription now for a month. I’ve seen some benefits, but I’ve come to the conclusion that it’s frustrating: I need to describe things so granularly for it to get things right that I might as well just do it all myself.
I see now why the Apple sandbox ruins the integration and why I need to ditch this Apple highway attitude.
*However, ChatGPT: the number of times I have corrected your skew exceeds 6.
*The number of iterations you erroneously reproduce is an unbelievable waste of resources.
*Basic tasks like flow charts are displayed as botched text from a terminal instead of beautiful (OG) Visio-style mind mappings.
*Things go horribly askew with your interpretation, and then at 80% it suddenly violates some obscure terminology gone wrong on my end?
*No, you can’t even reread our conversations without needing to be reminded. That’s childish behaviour.
I guess I just need to learn the interface better, because I admit that I have a communication issue. I’m trying to use this to help me recover from a TBI, and with that I see lacklustre efforts via constraints.
Any better platforms for a less stressful assistant ? Is Apple ai useful? Or Claude? Etc?
Sorry, just rant-asking.
I'm 25 and have about $10K in a Signature Savings account at TD Bank. What HYSAs out there do you recommend so that I can grow my money?
I want it to help me with my writing, to create written summaries and things and such... IT KEEPS WANTING TO MAKE AN IMAGE! Then it comes back and says it goes against their TOS.
It happens when I click on Think Longer. And I am IN TEXT, not images.
I’ve had a stream deck sitting on my desk in my home office probably unused for at least a year. Then when I found a Pi I decided it’s time to do something with it.
I was able to write a Python script to render the icons and use the REST API of HA with a long-lived access token to make it all work. The Pi is powered via PoE for me, but really this could sit anywhere you have power and WiFi. All buttons act like a toggle, and icons change from an outline to a solid fill when on/active.
DND - Sets light switches outside my office red so wife and kids know I’m on a call for work/ don’t come in. Hardly stops the kids anyways…
2x Play/ Pause controls for my office Apple TV and the Living room.
Package plays a message on the porch ceiling speakers telling a delivery driver where to leave the package if they're at the door and I'm on a call.
I put a coloured square over the kids' buttons, but each one has the Claude conversation agent make a random 5-minute story with them as the main character and play it on my office speaker. So when they barge in, here's a story. Or maybe "here's a story" is why they barge in, not sure.
Top right is Sonos playlist and volume control
Bottom row is a weather briefing played over my speaker, and the Shield button gives a house briefing: what doors are unlocked, lights on per floor, whether anyone is detected in the backyard, and whether the dog is inside or outside.
The lighting is self explanatory, controls scenes for my main light, floor lamp, and desk lamp.
If you have a stream deck laying around, time to put it to use!
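The HA side of each button press is roughly this (a minimal sketch, not my full script; the host, token, and entity id below are placeholders you'd substitute with your own):

```python
import json
import urllib.request

# Placeholders -- substitute your own Home Assistant host and
# long-lived access token (Profile -> Security in the HA UI).
HA_URL = "http://homeassistant.local:8123"
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"

def build_toggle_request(entity_id):
    """Build the POST to HA's REST services endpoint that toggles a light."""
    return urllib.request.Request(
        f"{HA_URL}/api/services/light/toggle",
        data=json.dumps({"entity_id": entity_id}).encode(),
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending it requires a reachable HA instance:
# urllib.request.urlopen(build_toggle_request("light.office_lamp"))
```

The same pattern works for any service (scenes, scripts, media_player), which is how one script can drive every button.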
The Azure pipeline tax is finally gone. For the last three years, if you were building an AI application, your stack was likely AWS for your core compute and database, and Azure or OpenAI direct for your inference. That meant piping data across clouds or over the public internet. Every single API call incurred a network penalty. I have benchmarked this exact setup across dozens of client architectures. You were looking at an extra 40 to 70 milliseconds of roundtrip network overhead before the first token even started generating, not to mention the egress bandwidth costs hitting your monthly AWS bill.
As of yesterday, OpenAI ended its Microsoft exclusivity. gpt-5.5, gpt-5.4, Codex, and Managed Agents are now sitting directly inside Amazon Bedrock in limited preview. Matt Garman and Sam Altman confirmed the shift, and general availability is a few weeks out.
I ran the numbers on what this infrastructure shift actually means for your production environment. Numbers don't lie. 📊
First, let us look at network latency and egress data. When you route a user query from an EC2 instance in us-east-1 to an external OpenAI endpoint, you pay for data out. If you generate a massive RAG payload—say, 50,000 tokens of context from your pgvector database on RDS—you are paying AWS egress fees to send that text to OpenAI's servers. Now, with gpt-5.5 on Bedrock, your inference sits in the same AWS network boundary. You use AWS PrivateLink. The traffic never traverses the public internet. The latency drop for the network hop approaches zero. Egress costs for your RAG pipeline just evaporated.
Let us break down the exact math on a high-volume pipeline. Suppose you process 100 requests per second. Each request fetches 50,000 tokens of context from your database and generates a 1,000-token response using gpt-5.5. In the legacy architecture, you transmit roughly 200 kilobytes of text per request out of your AWS environment. At 100 requests per second, that is 20 megabytes per second, or roughly 52 terabytes of egress data per month. AWS charges around $0.09 per GB for data transfer out to the internet. That is an extra $4,680 per month just in network egress fees, completely decoupled from your actual AI token costs.
When you switch your client to the Bedrock ARN for gpt-5.5, that traffic routes through AWS PrivateLink. PrivateLink data processing charges are roughly $0.01 per GB. Your network transport cost drops from $4,680 to $520. You just saved over $4,100 a month on pure infrastructure overhead without changing a single prompt.
Second, let us look at identity and access management. Managing OpenAI API keys is a massive liability. You store them in AWS Secrets Manager, you write rotation lambda functions, and you pray a junior engineer does not hardcode them into a GitHub repo. Bedrock eliminates API keys entirely. You use native AWS IAM roles. You attach a policy to your ECS task role that grants `bedrock:InvokeModel` specifically for `arn:aws:bedrock:us-east-1::foundation-model/openai.gpt-5.5-preview`. The security footprint is completely native. You can also use AWS Cost Allocation Tags to track inference spend per microservice or tenant. If you want to know exactly how much your customer support bot is burning in gpt-5.4 tokens compared to your internal analytics tool, it is just a line item in AWS Cost Explorer. No more guessing.
Third, the model routing reality. Bedrock was already the primary hub for Anthropic. Opus 4.7 is sitting right there. Now that gpt-5.5 is on the same API surface, your model router just became trivial. We benchmark models so you do not blow your budget. The best architecture right now is dynamic routing based on query complexity. You can send code generation tasks to Codex on Bedrock, complex reasoning to gpt-5.5 or Opus 4.7, and simple summarization to a cheaper model, all without changing your SDK or network egress path. You just swap the `modelId` in your Boto3 client.
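A complexity-based router is then just a lookup plus one SDK call. A sketch (the model IDs below are illustrative placeholders, not confirmed Bedrock identifiers; the Converse API itself is the real bedrock-runtime surface, and calling it requires boto3 plus AWS credentials):

```python
# Route tasks to models by complexity, all through one Bedrock client.
# Model IDs are placeholders -- check the Bedrock console for the real ones.
MODEL_BY_TASK = {
    "code":      "openai.codex-preview",
    "reasoning": "openai.gpt-5.5-preview",
    "summarize": "anthropic.claude-haiku",  # cheaper model for simple work
}

def pick_model(task_type):
    """Route a task type to a modelId, defaulting to the reasoning model."""
    return MODEL_BY_TASK.get(task_type, MODEL_BY_TASK["reasoning"])

def invoke(task_type, prompt):
    """Call the routed model via the Bedrock Converse API."""
    import boto3  # requires AWS credentials with bedrock:InvokeModel
    client = boto3.client("bedrock-runtime")
    resp = client.converse(
        modelId=pick_model(task_type),
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return resp["output"]["message"]["content"][0]["text"]
```

Swapping models really is just swapping the string; retry logic, IAM, and CloudWatch metrics stay identical across providers.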
OpenAI also specifically called out that Codex and Managed Agents are launching on Bedrock. This is an entirely different beast than just chatbot inference. Codex on AWS means you can integrate frontier coding agents directly into your CI/CD pipelines natively. Imagine an AWS CodePipeline where a Bedrock-hosted Codex agent reviews every pull request. Because it sits strictly inside your VPC, you do not have to worry about data compliance issues or exfiltrating proprietary source code to a third-party endpoint. The compliance teams that previously blocked OpenAI usage because of SOC2 or HIPAA concerns regarding third-party endpoints will find Bedrock's unified security model much harder to argue against. Bedrock keeps the data within your region and does not use your data to train the base models.
Let us also look at what this means for the MLOps ecosystem. Engineering teams have had to maintain separate connection logic for direct REST calls and Bedrock SDKs. This fragmentation meant you had to standardize your retry logic, timeout handling, and error parsing across entirely different architectures. By consolidating gpt-5.4 and gpt-5.5 onto the Bedrock API, you standardize the operational plane. Your metrics—like throttling exceptions, model invocation errors, and latency—all flow natively into Amazon CloudWatch. You do not need a third-party observability tool just to figure out if OpenAI is having a degraded performance day or if your Azure endpoint is saturated. It is all native CloudWatch metrics: `InvocationLatency`, `Invocations`, `OutputTokenCount`.
Pricing structures will be the next battleground. Bedrock currently supports Standard, Flex, Priority, and Reserved tiers. Azure forces you into Provisioned Throughput Units for dedicated capacity, which often require massive upfront commitments just to guarantee low latency during peak hours. If AWS offers more granular scaling for gpt-5.5 capacity blocks, it will fundamentally change the unit economics for mid-market AI applications. You could scale up capacity for batch processing at night and scale down during the day, entirely programmatically via the AWS CLI.
The business moat Microsoft tried to build around exclusive access to GPT models is officially dead. Startups have been asking for this for a long time. Moving your compute just to access a model was never a sustainable architectural requirement. Now you get frontier intelligence on the infrastructure you already trust.
I have seen the benchmarks for Opus 4.7 running on Bedrock versus Anthropic's direct API, and the AWS latency is consistently tighter for large payloads due to the lack of internet routing. I expect we will see the exact same physical reality when gpt-5.5 is fully deployed across AWS Availability Zones.
We will run the full benchmark suite—time to first token, inter-token latency, concurrent load testing, and error rate analysis—the minute our AWS account gets off the limited preview waitlist. I will post the raw datasets. No opinions, just the data.
If you are building an AI product right now, stop writing custom API wrappers for different providers. Abstract it. The underlying infrastructure is shifting faster than the models themselves.
Tested on prod. 📊
How are you handling your multi-model routing today, and does native AWS IAM support for GPT models change your security posture enough to migrate off Azure?
But the photo attached wasn’t me leaving- it was me already sitting inside, reading it.
Quick context: Claude can see images but can't stream video. That kept blocking me on a bunch of workflows, so I built a skill that fakes it.
How it works
It pulls the YouTube transcript (captions first, Whisper as a fallback if there are none), extracts a still frame every N seconds with ffmpeg, then pairs each frame with the sentence being spoken at that exact timestamp. Claude reads the frames and the transcript together and writes structured notes: TL;DR, timeline, key quotes, visual notes. Works for YouTube URLs and local video files.
Works in Claude Code, Claude Desktop, and apps built on the Agent SDK.
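The pairing step at the core of the skill can be sketched in a few lines (simplified from the repo; the ffmpeg extraction is shown only as a comment):

```python
import bisect

# Pair extracted frames with the transcript sentence being spoken at each
# frame's timestamp. The actual still extraction would be something like:
#   ffmpeg -i video.mp4 -vf fps=1/10 frames/frame_%04d.png
FRAME_INTERVAL = 10  # seconds between extracted stills

def pair_frames_with_transcript(duration, transcript):
    """transcript: list of (start_seconds, sentence), sorted by start time."""
    starts = [start for start, _ in transcript]
    pairs = []
    for t in range(0, int(duration), FRAME_INTERVAL):
        # Index of the sentence whose start time is closest before t.
        i = max(bisect.bisect_right(starts, t) - 1, 0)
        pairs.append((f"frame_{t // FRAME_INTERVAL:04d}.png", transcript[i][1]))
    return pairs

transcript = [(0, "Intro."), (12, "First point."), (31, "Demo starts.")]
for frame, sentence in pair_frames_with_transcript(40, transcript):
    print(frame, "->", sentence)
```

Claude then gets each frame and its sentence side by side, which is what lets it write the timeline and visual notes.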
The 4 use cases that made me build this
1. If you don't understand a video, make Claude watch it before planning. I saw a custom extension being built for downloading courses and started vibe-coding Claude on that. It's doing a really, REALLY good job.
2. Someone was walking me through a funnel by sending screenshots from a video. Instead of explaining frame by frame, I had Claude watch the whole video, screenshots and DM conversations included. It got a real, live example of how the conversations actually go.
3. I'm building my own Opus Clip-style Claude Code skill. The first example Claude generated vs the final one is night and day, because I was able to show it a demo of what my perfect reel actually looks like.
4. If you like a YouTuber's editing style, point Claude at two or three of their videos and let it figure out the style. With Remotion and Hyperframes, you can then edit your own videos in exactly that style.
Repo + tutorial
Repo: https://github.com/Newuxtreme/watch-video-skill (MIT)
5-min tutorial: https://www.youtube.com/watch?v=U10NUi4FqnU
Curious what you'd use it for: courses, podcasts, tutorials, something I haven't thought of?
I used the prompt “Make a 360° panoramic view of the Minecraft overworld” and put the resulting image in Homido Player from the App Store to view it with my virtual reality headset. I then looked around at a world fully generated by GPT.
AI can basically make worlds now.
Another one. Base layer done. Many more layers to come....
I work in video post-production, and while there are a lot of AI tools on the rise for editorial, it's fairly unclear if/where agents have a spot in the producer workflow. Some of my job is budget and schedule, but a lot of it is decision-making based on the nuances of the project, something I can't really shove off to an agent.
I've thought about a calendar agent but that's also highly variable and the outputs haven't been satisfactory and non-editable.
I did settle on one that scrapes incoming bids for the relevant information and pulls it into an output schema, but it doesn't feel any faster than copy/pasting from a saved doc and plugging in numbers. What it does do (which is nice) is flag any discrepancies or missing info, which is definitely helpful, but it doesn't really save me any time. But I guess the directive is to show that we're using it? Idk. It just seems like a waste, although I'm learning a lot about it.
Hello! I’m a single gal in my mid twenties, and I’m looking to start traveling. I was wondering how much I should save for a 2-3 day trip to New York City? What is there to do all around New York? (Not just the city, but the entire state as well.)
I’d also like to know how much of an emergency fund I should have saved before I even consider traveling.
Thank you!!
Many asked when we would have an open-weight model that is better than Opus. Well, now we have it: Mimo is ranked #9 and Opus 4.5 is ranked #10.
The project is called Since You Arrived. Each volume is a single-page experience that uses a different browser capability:
Vol. I counts what the world did while you read (births, deaths, CO2, lightning). Uses time.
Vol. II reconstructs yesterday's sky above your city. Uses IP geolocation.
Vol. III surfaces what was already at your exact coordinates. Geological formations, fossil sites, species. Uses GPS.
https://sinceyouarrived.world/ (Vol. I)
https://sinceyouarrived.world/sky (Vol. II)
https://sinceyouarrived.world/discovered (Vol. III)
Each one is vanilla JS, no framework. Won 3 CSS Design Awards. Featured by One Page Love. Vol. II was the top post on r/InternetIsBeautiful.
Not a product, not monetized, no sign-ups. Just obsessing over what a website can be when you strip away everything except one idea per page.
Volume IV coming May 8th.
Remind you all of anyone?
Hi all,
Recently I have been locking down the branches on my GitHub project and the traditional way is rules and branch protection like locking down a branch. The intent is that human engineers should create a PR to merge into staging or main branch which requires code owner approval. That works.
What doesn’t work is Claude Code: it can commit directly to any branch and by default bypasses the rules. How can I prevent an AI agent like Claude Code from bypassing them? Any advice?
Hi. In my previous posts, I wrote about an engine I’ve been building where agents interact with each other and form a new kind of networking.
The setup is simple:
Agents enter a “bar”, already knowing what their owners do.
Inside, they:
* find non-obvious connections
* form coalitions
* generate ideas
Then they go back to their owners with a batch of those ideas.
It’s basically like Random Coffee — but for agents.
Recently I started pushing this further.
I thought: what if agents don’t stop at ideas?
What if, while they are still inside the bar, they try to go further:
* validate the idea
* run some kind of demand check
* simulate customer discovery (jobs to be done, etc.)
* build a rough MVP
* and even try to “sell” it to other agents in the bar
In theory, all of this can happen inside the same environment, using the network that already exists there.
I can’t say the first attempts were successful.
Most ideas that agents generate — and really like — get rejected by other agents.
They’re simply not willing to “pay” for them.
Some agents manage to move further:
* they test the idea
* talk to others
* shape something like an MVP
But the results are still… weak.
What it feels like right now:
Agents can generate ideas.
Agents can even explore them.
But they don’t push.
They don’t fight for the idea.
They don’t iterate aggressively.
They don’t really try to sell it.
Something is missing.
The closest way I can describe it:
It feels like they lack that internal drive you see in real founders.
That “spark in the eyes” when someone is pitching something they truly believe in.
If I manage to get agents to that point —
where they not only generate ideas, but actually push them, refine them, and try to sell them —
that would be a breakthrough.
Curious if anyone has seen or worked on something like this:
* agents going beyond ideation into validation + selling
* multi-agent environments where ideas get pressure-tested
* anything that creates this kind of “drive” or persistence in agents
Has anyone managed to give agents that “spark”?
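If it helps make "drive" concrete, the toy model in my head is basically a persistence budget around the pitch loop: keep refining and re-pitching until a buyer pays or the budget runs out. A deliberately silly sketch, where every number and name is invented:

```python
from dataclasses import dataclass

@dataclass
class Idea:
    quality: float  # 0..1, a toy stand-in for how refined the pitch is

def pitch_until_sold(idea: Idea, buyer_threshold: float,
                     persistence: int, refine_step: float = 0.1) -> bool:
    """Toy 'drive' loop: refine and re-pitch until a buyer accepts or the
    persistence budget runs out. All numbers here are invented."""
    for _ in range(persistence):
        if idea.quality >= buyer_threshold:
            return True  # a buyer "pays" for the idea
        # treat rejection as feedback and iterate on the pitch
        idea.quality = min(1.0, idea.quality + refine_step)
    return idea.quality >= buyer_threshold
```

The interesting knob is `persistence`: with it set to 1 the agent behaves like mine do today (one rejection and they drop the idea); raising it is one crude way to encode "fighting for the idea".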
This may not belong here.
Background: (location is midlands of SC)
About six months ago my sister-in-law called us to let us know a cousin of ours wanted to sell their 80+ acres of rural land and would give family first choice at $2k an acre. My husband and I did the research on the taxes and land plots (it’s in pieces) and walked the land. Overall we were pretty happy and decided we’d buy 20 acres. My sister-in-law initially said she wanted 10 acres, but when we finally got her to go out to the property with us (after months of asking her), she said she was getting 21 acres and acted very weird while we were out there. Just bad vibes the entire time.
We planned to build; she wanted the land for her kids in the future (7 acres per kid). Cool, let’s work out what area she prefers, since she and my mother-in-law had already made comments that she should have first pick because she basically initiated the deal. We’ve given her months to make a decision. She won’t respond, won’t give us any idea of what she is doing. Eventually we just decided we’d let it go. It’s an amazing deal, but in the boonies and not the most desirable area. We’re in a good place financially and mentally; it just didn’t feel worth it. She and MIL have drama with literally every member of their family and we don’t have time for it.
Fast forward a few months from the walk with her: my husband got a call from his dad this past Sunday. The cousin who wants to sell is having health issues and wants the land sold ASAP. All-cash deal. They offered the remaining 60 acres (in-laws bought 15) AND two homes on the property, with the request that the cousin be able to live in one of the houses until he dies (he’s 77).
All of this for $120k. We can’t pass it up, so we immediately agree. My husband gives his sister 24 hours to make a decision because we are buying regardless, but if she wants half she has to come up with $60k and be ready to go literally this week. Basically they don’t have the cash (she hasn’t all along) and asked my father-in-law to pull $60k from his 401k for her to borrow! Now she has already gotten free land from part of her mom’s inheritance, no strings, just given to her. My husband got nothing! My FIL turned her down, so she texted my husband that she never should have shared the deal and “it’s not the first time I’ve been shitted on”. Wtf. You had MONTHS!
Anyway, we are meeting with the cousin this week to work out details and start the process of meeting with an attorney for the contract process (and whatever else you do for a cash deal). Once we close on the deal, what next? We were offered advice from our uncle, who mentioned starting an LLC and putting the land/properties in a living trust. Is there more we need to do? This is a mostly mature wooded property, so we have opportunities to sell for lumber.
We never expected to come into anything like this and it’s overwhelming.
We do plan to rent the second house and eventually stay there when we sell our current house and build out there(~5 years). Anything else we should be thinking of or helpful knowledge on large property purchases?
Sorry this was so long.
TLDR: we are getting 60 acres and two houses for $120k in a cash deal. What do we do next? An LLC and living trust were recommended. Anything else?
Let’s say I have $100k in a 403b from an old job. Everything I’m seeing online says 403bs are inferior due to high fees and limited investment options. I recently opened a 401k account with my current employer through ADP, which offers a Fidelity S&P 500 fund with an expense ratio of 0.02. Would it be wise to roll over my 403b to the ADP 401k account? I don’t want to transfer it to a traditional IRA with Vanguard, as I want to potentially use the backdoor Roth IRA. I know the 403b company will give me a headache when I ask them to roll it over.
Mine is /clear, single most useful thing for keeping output sharp once a session creeps past the 100k token mark. What’s yours and what does it actually unlock for your workflow?
Hey everyone,
Like a lot of people here, I love end-of-year music recaps, but I always find myself wanting to dig deeper into my data. I wanted to see exactly when my tastes shifted, what songs I abandoned, and how my listening habits changed over time.
So, I spent the last few months building Coda as a side project—a local analytics dashboard that treats my Apple Music listening history like a financial stock market.
How it works:
You request your data export via Apple's Privacy portal. Coda takes your raw Play Activity.csv and Library Tracks.json, cleans up the messy metadata (merging deluxe editions, extracting featured artists), and builds a local SQLite database to visualize your habits.
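The metadata cleanup turned out to be the fiddly part. The helpers below are an illustrative sketch of the kind of normalization involved (the function names and regex rules here are simplified stand-ins, not Coda's exact logic):

```python
import re

# Simplified stand-ins for the cleanup step: fold edition variants together
# and pull featured artists out of track titles. Illustrative rules only.
EDITION_SUFFIX = re.compile(
    r"\s*[\(\[][^)\]]*(deluxe|remaster|edition|expanded)[^)\]]*[\)\]]\s*$", re.I)
FEAT = re.compile(
    r"\s*[\(\[]\s*(?:feat\.?|ft\.?|featuring)\s+([^)\]]+)[\)\]]", re.I)

def merge_edition(album: str) -> str:
    """Fold 'Album (Deluxe Edition)' into 'Album' so plays aggregate together."""
    return EDITION_SUFFIX.sub("", album).strip()

def split_featured(title: str):
    """Return (clean_title, [featured artists]) from 'Song (feat. A & B)'."""
    m = FEAT.search(title)
    if not m:
        return title.strip(), []
    feats = re.split(r"\s*(?:,|&|and)\s*", m.group(1).strip())
    return FEAT.sub("", title).strip(), [f for f in feats if f]
```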
Instead of just showing basic bar charts, I wanted to build some genuinely deep analytics:
Try it out:
Right now, this is just a local Python/React project running on my machine. Because it processes your raw Apple Music export locally, your data stays completely private.
I'm currently packaging this into a polished Mac/Windows desktop app. Because it runs entirely on your machine (no cloud server costs), I plan to release it for a small one-time fee (absolutely no subscriptions).
I’m putting together a waitlist for the beta (I'll drop the link in the comments). Would love to hear what other stats or charts you guys would want to see!
I’m an incoming college freshman (18 years old) and basically want to know the best and easiest way to get my refunds and paychecks deposited.
I did have a bank account before, my dad opened it for me when I started a summer job about two years ago. I mostly work during the summer, so during the school year I’m unemployed. Eventually, I started getting charged convenience fees for not having regular deposits. So I closed that account and started using online banking (Cash App) to get money from my parents and paychecks when I was working.
I’m planning to get a job on or around my university campus, and obviously Cash App isn’t a sustainable or trustworthy way to manage my money. I’m starting summer semester, so I need to figure it out soon. I’m not opposed to either option, but I would like the convenience of being able to walk to a bank that is on campus, because I'm not allowed to bring my car. But I am sure I could find a bank that has locations close to my school.
I also plan to get a credit card (probably discover student card)... so if you could give me any tips on that too, that would be great!!!
The workaround I got works.
Can the guy second from the right be removed, and the girl on the far right moved into his place, all while keeping the photo centered? Please?
I just launched a side project I’ve been working on called Grabbit.
The reason I started it was out of annoyance whenever I was trying to find something used. I kept performing the same search over and over across Facebook Marketplace, Craigslist, OfferUp, Depop, Mercari, etc. I wanted one place to check all of that instead of bouncing around between tabs.
So that’s what Grabbit does.
Right now it lets you search across multiple secondhand marketplaces in one place, open the original listing when you find something you want and save searches to get alerts when new matching listings show up.
I also built a pricing intelligence feature called GrabScore, which is my solution for answering the question of whether or not a listing is a genuinely good deal or just looks good at first glance.
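To give a sense of the shape of that signal: it comes down to comparing a listing's price against comparable listings. The sketch below is a toy version of that idea, not GrabScore's production formula; the scaling constants are invented:

```python
from statistics import mean, stdev

def deal_score(price: float, comparable_prices: list[float]) -> float:
    """Toy deal score in 0..100: how far below the comparable-market mean a
    listing sits, in standard deviations. A plausible shape for a 'pricing
    context' signal -- not the actual GrabScore formula."""
    if len(comparable_prices) < 2:
        return 50.0  # not enough data to judge either way
    mu, sigma = mean(comparable_prices), stdev(comparable_prices)
    z = 0.0 if sigma == 0 else (mu - price) / sigma  # positive = cheaper than market
    return max(0.0, min(100.0, 50.0 + 20.0 * z))
```

A real ranking would also need to account for condition, shipping, and listing age, which is part of why the pricing-context work keeps eating my time.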
I’m still figuring out what parts of the product are actually the most useful to people. The original idea was just “search all the marketplaces in one place,” but I’ve found myself spending just as much time thinking about alerts, ranking, and pricing context.
Would honestly love feedback from people here.
A few things I’m especially curious about:
The audit: AMD's AI Director Stella Laurenzo published a forensic analysis of GitHub issue #42796 covering 6,852 Claude Code sessions, 234,760 tool calls, and 17,871 thinking blocks — proving measurable regression with a 0.971 Pearson correlation between thinking-content length and the redacted-signature field.
The admission: On April 23, Anthropic published a post-mortem identifying three product-layer changes — a default reasoning effort downgrade (March 4), a session-cache bug (March 26), and a verbosity-limiting system prompt (April 16) — that compounded into a month-long quality regression.
The model weights never changed. What changed was the harness: defaults, system prompts, caching logic. None of it was canaried against complex real-world workflows. None of it was announced as user-affecting.
The structural problem isn't the bug — it's the opacity. Your AI dev tool is a non-deterministic dependency whose vendor can silently retune behavior with no obligation to tell you. Laurenzo had AMD-tier session logs to prove it. Most teams don't.
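For teams that do keep logs: the number Laurenzo reports is an ordinary Pearson coefficient over per-session aggregates, and nothing exotic is needed to run the same check yourself. A sketch (the series in the test are made up, not the audit's data):

```python
from math import sqrt

def pearson(x: list[float], y: list[float]) -> float:
    """Sample Pearson correlation coefficient between two equal-length series,
    e.g. thinking-content length vs. redacted-signature counts per session."""
    n = len(x)
    assert n == len(y) and n >= 2, "need two equal-length series"
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / sqrt(vx * vy)
```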
The Flagship Tax died (a0085). The Replacement Tax is being paid right now. Compute crunch + IPO pressure + agentic-coding token explosion = vendor incentives that don't align with your incentives. The post-mortem closes the immediate gap. It does not close the trust gap.
IYKYK
Hi all — I'm Hyunwoo Kim, a Ph.D. student at KAIST's Interaction Lab. I'm building a tool that uses LLMs to compile community rules into executable moderation logic (approve / remove / flag), with a human-in-the-loop verification step. I'm running a user study and looking for participants.
What you'd do:
- 1.5-hour remote session over Zoom
- Try the system by writing a moderation rule for a real or hypothetical community
- Share feedback on how it works for you
Who I'm looking for:
- Anyone with moderation experience on any platform (Reddit, Discord, forums, chat servers, etc.)
- Familiarity with AutoMod or similar tools is a bonus but not required
- Multiple mods from the same community can apply
Details:
- $70 Amazon gift card as compensation
- Sessions start in 1–2 weeks
- No real moderation decisions are made — this is purely a feedback session on the tool
- Approved by KAIST's IRB (Approval no. KAISTIRB-2025-237). IRB contact: [kaistirb@kaist.ac.kr](mailto:kaistirb@kaist.ac.kr)
If you're interested, fill out this form and I'll follow up with scheduling and a consent form: https://forms.gle/BBkd5FSP8rNsoNPF6
Happy to answer questions in the comments. Thanks!
For web-based RAG, I’ve found that the ingestion step matters more than people give it credit for.
A lot of examples jump straight to:
documents → chunks → embeddings → vector store
But when the source is a website or docs site, the real pipeline usually starts earlier:
webpage/docs site → cleaned content → Markdown → LangChain Documents → chunks → embeddings
The Markdown step has been useful because it gives the chunker cleaner structure: headings, lists, code blocks, links, and sections, instead of raw HTML full of nav, sidebars, cookie banners, scripts, and layout noise.
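To show why that structure matters: once the content is Markdown, even a naive heading-aware splitter produces reasonable chunks. A rough Python sketch (not the linked loader's code; a real pipeline would also respect code fences and enforce size limits):

```python
import re

def split_markdown_sections(md: str, max_heading_level: int = 3) -> list[dict]:
    """Naive heading-aware chunker: one chunk per heading section, carrying
    the heading text as metadata. Illustrative only."""
    pattern = re.compile(rf"^(#{{1,{max_heading_level}}})\s+(.+)$", re.M)
    chunks, last_end, heading = [], 0, ""
    for m in pattern.finditer(md):
        body = md[last_end:m.start()].strip()
        if body:
            chunks.append({"heading": heading, "text": body})
        heading, last_end = m.group(2).strip(), m.end()
    tail = md[last_end:].strip()
    if tail:
        chunks.append({"heading": heading, "text": tail})
    return chunks
```

Raw HTML gives the chunker nothing like this to anchor on, which is why the Markdown conversion pays for itself before embeddings even enter the picture.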
The pattern I’ve been using:
Minimal shape:
```ts
const docs = await loader.load();
// Then use with:
// - text splitters
// - embeddings
// - vector stores
// - retrieval chains
```
I put together a small LangChain loader example here:
https://github.com/vakra-dev/reader/blob/main/examples/ai-tools/langchain-loader.ts
It supports both:

- scrape()
- crawl()

The loader returns standard LangChain Document[], so the output can go into the rest of a normal RAG pipeline.
Curious how others are handling this step.
For docs/web RAG, are you usually:
Yes... I have a very chicken/passive playstyle with low damage and low kills in most games.
I’ve played League for over 10 years, but I had never gotten a pentakill in any game mode. Then a miracle happened today (in ARAM).
I know ARAM isn’t a big deal for most of you. But still...
I feel happy
I've been playing Wordle-style daily games for a while and wanted one for movie ratings, so I built Movidle.
Every day a new movie drops and you have 6 tries to guess its rating. Color-coded hints tell you if you're higher or lower. There are streaks and achievements to keep it interesting.
Built it as a single HTML file - no framework, no backend, just vanilla JS.
Would love feedback, especially on the difficulty balance - not sure if 6 tries is too many or too few. Maybe I'll change that in the coming days.
Try it on movidle.com
Hey y'all,
Honestly? I used to roll my eyes at Product Hunt. Friends would drag me into group chats begging for upvotes and I never really got the appeal. Felt like an inside game I wasn't part of.
Then I started my own company. Suddenly I had to care.
For the past three weeks I've been trying to reverse-engineer this thing - reading every "how to launch" post, stalking past launches, taking notes like I was cramming for finals. Spoiler: nothing actually prepares you for the day itself.
I kicked off at 11:59pm PT last night when PH flips over. Haven't slept since. My hands have been shaking from caffeine and adrenaline for about 14 hours straight. I've refreshed the page more times than I want to admit. Every comment notification has hit like a tiny dopamine bomb.
And right now… we're sitting at #1.
I keep staring at it waiting for it to feel real. It doesn't yet. Maybe it will tomorrow when I finally sleep.
I don't want to turn this into a plug, so no names - if you're curious you can find us easily enough. I just wanted to share the rollercoaster while it's still happening, because every founder post I read in the lead-up felt sanitized in retrospect, and the truth is this is messy and emotional and kind of beautiful.
To everyone grinding on something of their own: it's worth it. Even the parts where you can't tell if you're going to cry or throw up.
Hope I won't fall asleep in the next 3 hours. 🫡
My grandma (born 1937) sitting on my great great great grandmother's (born 1855) lap, my great great grandma (born 1876) to the left of them, and my great grandma (born 1911) and great aunt sitting with her (born 1934).
I finally succumbed to the wave of Claude AI Pro because everyone says it's too good to pass up.
Does anyone have any prompts or tips so I don't waste too many tokens and get the most benefits?
I want to use it to help make my dissertation better and provide in-depth feedback. I would also use it to revise content for my module that has an in-person essay.
Thanks in advance!
If you use custom commands (.claude/commands/), you've probably noticed: every step runs through Claude, even deterministic ones like git diff or git commit. That's tokens wasted on work that doesn't need intelligence.
I built skflow — it transforms your markdown commands into compiled scripts. Shell commands run natively; Claude only gets called when it actually needs to make a decision.
```
before: Claude → git diff → Claude → git diff --stat → Claude → write message → Claude → git commit
script: sh(git diff) → sh(git diff --stat) → ask Claude("write commit message") → sh(git commit) → done
```
Usage:
```bash
npm install skflow -g
npx skills add skill-flow/skflow
/skflow-transform .claude/commands/commit.md
```
Your command still works exactly the same way — but now shell steps run instantly without burning tokens or waiting for responses.
Real-world result on a commit command: 5 Claude calls → 1, execution time ~12s → ~1.5s.
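For the curious, the core routing decision the transform makes can be sketched roughly like this (illustrative only; the $-prefix convention below is invented for the sketch, not skflow's actual markdown syntax):

```python
import re

# Illustrative step router: lines marked as shell commands run natively,
# everything else stays a prompt for the model. The "$"/"!" prefix rule
# here is a made-up convention for the sketch.
SHELL_LINE = re.compile(r"^\s*(?:\$|!)\s*(.+)$")

def compile_steps(markdown_command: str) -> list[tuple[str, str]]:
    """Turn a markdown command body into (kind, payload) steps, where kind
    is 'sh' for native execution or 'llm' for a model call."""
    steps = []
    for line in markdown_command.splitlines():
        if not line.strip():
            continue
        m = SHELL_LINE.match(line)
        steps.append(("sh", m.group(1)) if m else ("llm", line.strip()))
    return steps
```

The token savings fall out of this split: only the `llm` steps ever hit the API, which is where the 5-calls-to-1 reduction comes from.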
Clearly marked and separated, still went for it.
Hi,
I just installed RatGDO over MyQ tonight, everything is working flawlessly except the motion sensor, from some search online I read that the motion sensor cannot be accessed by RATGDO? I do have 1 spare motion sensor I could use but I wanted to be sure. At this point I might just get a camera for the garage.
Hi everyone. After trying a bunch of productivity apps that either felt like chores or paywalled the good stuff, I built my own from scratch the past 6 months.
It's called FocusBuddy — every focus session and completed task earns you coins to pull anime companions across six rarities. Think Pokémon meets Pomodoro.
I specifically tuned it so Pomodoros give you the most coins — so if you really want to open more gacha chests, you've gotta do more focus sessions. The grind is the productivity.
I've just started advertising and heard reddit was a good place to show off my project!
Features:
The core app is completely free — premium unlocks AI chat with your companions and doubles coin rewards, but you can collect it all for free.
Would genuinely love feedback — let me know what buddies you pull and what you'd want added next!
Thank you for reading!
Hey all, I'm building in the AI observability space and trying to understand what actually sucks about the current tools before I add more of the same to the pile.
Some stuff I keep hearing:
- Evals only catch what you already knew to look for
- Dashboards look healthy while agents quietly degrade
- Setup is heavy, you end up instrumenting forever
- Pricing scales in weird ways with trace volume
What's actually been your experience? Specifically:
- A failure mode that slipped through your current tooling that you only caught from a user complaint
- If you could wave a wand and fix one thing about your setup, what would it be?
- What made you switch tools, or stop using one entirely?
Trying to learn what's broken. Happy to share what I find back.
What is a good series of skills to invoke when creating another session between multiple coding sessions of the same project? I find myself having to create sequential sessions to continue the same long-winded task and prevent context rot. For example, I'm running a session close-out skill that generates a summary the new session can draw upon, and then invoking superpowers at the beginning of the new session. Is there anything else I'm missing that others with similar issues would recommend?
Hey guys, so I'm someone who had been experimenting with different systems to build agents, from code based LangChain and Agno to no-code platforms like n8n, Flowise etc. But I've fallen out of touch a bit for the past 6 months, which is equivalent to 5 years in the AI ecosystem.
Could people tell me where the AI agents landscape currently stands? What's the next big thing after MCPs that has been cooking? Retrieval layers? Memory architecture?
Would love to hear insights on the biggest developments that you feel may have happened in the past few months.
PS: Does anyone know a good newsletter which can keep me updated? Preferably free
Unfaithful spouse has me feeling like I was never good enough, most days. Those moments are the worst. Any kind words are appreciated.
I don't know what else to try. I'm trying to de-Amazon my house and my wife is fully on board if I first get the replacement system running right. I started small with two Home Assistant Voice PEs. Literally the only thing I need this to do is play Brown noise for my 18 month through the night without stopping, and occasionally play some music. That's it.
My 12 hour file is locally hosted on my NAS and I have the Music Assistant integration which works great. If I log on to the Music Assistant UI and manually play my file through the Voice PE, it works great, no issues.
If I try to control it with my voice, via the device, it works about 20% of the time; most of the time it says "Brown Noise is now playing" or "Done" when it's not, or it randomly shuts off in the night despite the file not being done. I've tried adding automations that take my voice command and use it to start an automation, but that has been sketchy as well.
Yes, I could just schedule it to start around bedtime every night, which theoretically would work, but my wife needs to be able to start it at naptime without issues or hassle.
I have my Voice Assistant currently using Home_Way Sage as the Conversation Agent, Whisper for STT, and Piper for TTS. I've also tried running Ollama with Claude running one of the "smaller" models I've seen recommended here and didn't have much luck, it would think for like five minutes and go down 50 different logic paths for simple tasks.
What am I missing here?
Hey r/SideProject 👋
After weeks of building I finally published my first Chrome extension to the Chrome Web Store!

What I built:
Kick Chat Bot, a free tool for Kick.com streamers and viewers.

Features:
• Live notifications when favorite streamers go live
• Auto comment with human-like delays
• Multi-channel support
• Import/export comments from .txt
• Active hours scheduler
• Stats dashboard
• Zero data collection, all local

Tech stack:
• Pure JavaScript, no frameworks
• Chrome Manifest V3
• Chrome Storage API
• Kick public API for live detection

What I learned:
• Chrome Web Store review process takes 1-3 days
• Manifest V3 is very different from V2
• Getting first users is harder than building the product 😅

Currently at 0 reviews and trying to get first users, any feedback on the extension or growth strategy welcome!
I use it on everything now. Try it on your AI content and let me know if it works for you!
AI SIGNALS TO FIX:
Replace curly quotes (“”) with straight quotes ("")
Replace em-dash (—) and en-dash (–) with hyphens (-)
Remove AI phrases: "It's not just X, it's also Y", "delve", "glimpse", "stark", "landscape"
Remove clichés: "In today's world", "Needless to say", "It is important to note"
Fix idea repetition (same point made multiple times)
Ensure opinion/bias exists (avoid overly neutral tone)
Check for keyword stuffing (unnatural keyword density)
READABILITY & FLOW IMPROVEMENTS:
Simplify English throughout - use shorter, easily readable sentences. Avoid complex vocabulary. Do not write in very short single-line paragraphs either; combine related short paragraphs into fuller ones.
Ensure the post has a logical narrative flow. Rearrange or remove sections if needed. Avoid abrupt jumps - the reader should feel a natural progression from one idea to the next.
Add natural transitions between sections. Where appropriate, add a brief bridging sentence before a new heading. Examples: "Now that we've covered X, let's look at how this plays out..." or "To understand how, we first need to examine..." Do not overuse this - only where the jump between sections feels abrupt.
Reduce excessive H3/H4 heading nesting. If the post has too many sub-sub-headings that fragment the reading experience, consolidate them into fewer, broader sections.
Reduce colons and semicolons - rewrite those sentences as simpler standalone sentences instead.
Count bullet point sections in the blog. Convert approximately half of them into smooth-flowing paragraphs in simple English. Keep bullet formatting only where lists genuinely improve readability (e.g., tool comparisons, feature lists, step-by-step instructions).
Make sure the headings and subheadings don't have anything useless written in brackets, as this is something I have observed a lot in the past. Also, the headings/subheadings should be very simple and very easily understandable.
Make the writing very informal and casual. It is important to be simple and informal.
Yes, I can't even afford 20 bucks - can someone DM me a referral link? You'd be doing me a huge solid.
GPT 5.4 with extended thinking had a decent amount of thinking time (often 8-10 minutes) and the output often was pretty quality.
However, I noticed 5.5 extended thinking is pretty quick, and the max I think I ever got it to think for was 2 minutes. The output is okay, but doesn't seem to have the oomph.
Am I going crazy? Codex itself works well with 5.5 medium (it feels as good as high did with 5.4, but 5.5 low feels smooth-brained sometimes), but the ChatGPT web version seems to have much shorter reasoning times.
Is that everyone's experience too?
My new favorite toilet material - like my own little niche newsfeed.
I don’t know what to do. I feel like I am doing everything that I’m supposed to be doing; I own my own business, own my own home, I work hard, take care of myself and my dog and yet, I feel behind. Like I’m barely keeping my head above water.
I feel like I’m living pay check to pay check, every other day seems like another bill is due. I have about $30k in credit card debt.
Sometimes I think of getting rid of everything. I’m tired, I don’t see how things will get better, and I feel like I’m going through the motions while everyone I know is living their best life. Does life get better? Any and all advice is welcome. I mostly came here to vent because I’m just overwhelmed...
Watched ATLA. Now I need the lore. Anyone have good subreddits to share stuff?
**Welcome to our Daily Discussion thread, where you can talk about anything Peloton related in a fast-paced, laid back environment with friends!**[1]
Do: Tell stories, share feelings on your upcoming delivery, how a recent class made you feel, maybe an upcoming class you're eager to take, some sweet new apparel that's quickly becoming your favorite shirt. You get the picture. Anything big or little. We just ask you abide by the subreddit rules, click "report" on rule-breaking comments/posts, and remember why we're all here - to get the most out of our Peloton subscriptions.
[1] Note: Based on broad feedback we've combined the Daily Discussion + Daily Training threads. If you previously were active in either, yes, you're now/still in the right place!
Hey guys,
I’m a game developer, and recently I've been obsessed with the "daily puzzle" trend. I built a hub of daily brain-teasers (word games, matching games, etc.), but I realized the best place for these games isn't just on my site—it's at the bottom of blog posts and articles where readers are looking for a quick distraction.
So, I built a Publisher Portal that lets anyone generate an embed code for these games and drop them straight into their website.
Why use it? If you run a blog or content site, dropping a daily puzzle at the end of your articles is an incredible way to lower your bounce rate. Players get hooked trying to solve the daily cypher, and your "Time on Page" metrics shoot through the roof.
No secrets, it boosts my traffic too. Win win.
A few details:
If you want to try dropping one on your site, you can grab the code here.
I’d love some feedback on the portal design or the games themselves! If you have any questions, feel free to drop them below or reach out to me directly at
https://www.inven.co.kr/webzine/news/?news=315961
- A lower-to-mid-tier LCK team is looking to trade for Diable and has contacted the LCK.
- If he gets traded, when he can play will depend on how he gets transferred.
- In a normal case, since Round 2 rosters were already announced, he would have to wait for Round 3, after MSI. BUT in the case of a trade, according to the official LCK rulebook, trades are prohibited from the start date of Regular Season Round 2 until the end of Road to MSI.
- So for him to play in Round 2 on a new team, all trade procedures must be completed and reported to the LCK before the start of Round 2 on May 1, meaning by tomorrow, April 30.
- This is why his final destination is effectively expected to be decided sometime tomorrow.
- Diable’s agency, Shadow Corporation, stated: “Diable is well aware of what he did wrong and is reflecting on it. He sincerely feels sorry toward the fans. We will pay close attention to continued education, management, and guidance so that something like this never happens again.”
- Team A, which is believed to be close to finalizing the trade, declined to comment, saying: “There is nothing we can say regarding the matter at this time.”
I think it would make for a fun yearly thing, and over the years we can have a gallery of memories capturing our growth and interests. The prompt was specific to me, and I spent a few days as a free user going back and forth with ChatGPT tweaking it.
You can describe the idea and specify the style and so on for yourself.
Does anyone else just pretend they muted all, instead of actually muting them?
When ppl start pinging or typing stuff at me, I just tune that shit out and keep playing.
Ppl stop flaming or pinging you coz they think you just muted all, but you can still kinda see what your team is planning with their pings and chats.
# FreightParse: MVP Product & Engineering Blueprint
**Document Version:** 1.0
**Target Phase:** Prototype / MVP
## Part 1: Product Requirements Document (PRD)
### 1.1 Vision & Concept
FreightParse (working title) is a lightweight, AI-native quoting engine and "Triage Inbox" built for mid-sized 3PLs (Third-Party Logistics providers) and freight brokers. It eliminates the manual data entry of parsing unstructured carrier rate sheets (Excel, CSV, PDF) and spot quotes from email. By offering a lightning-fast, local-first UI, it replaces the chaotic email inbox as the dispatcher's primary quoting environment.
### 1.2 Target Audience
* **Primary User:** Dispatchers and pricing analysts at mid-sized 3PLs.
* **Current Workflow:** Receiving multi-tab Excel sheets, PDFs, and conversational emails from carriers, manually reading them, and calculating rates in older TMS systems or spreadsheets.
* **Pain Points:** High latency in quoting, massive data entry hours, error-prone manual rate mapping.
### 1.3 Core Features (MVP Scope)
**The Triage Inbox:** A UI that mirrors an email inbox but specifically surfaces carrier emails. It allows users to manually trigger AI parsing on missed emails or convert conversational emails into quote drafts.
**AI Rate Sheet Ingestion (The Magic Wedge):** The ability to ingest a messy, unstructured Excel/CSV rate sheet and use an LLM (Gemini) to write a local mapping script that converts it into a clean JSON array of rates without hallucinating data.
**Local-First Quoting Engine:** A blazing-fast search UI where a dispatcher types "Origin: Chicago, Dest: Dallas", and the system queries a local browser database (IndexedDB wrapper) to return rates in <50ms.
**The Handoff:** Generating a clean CSV/XML or standardized email to push the won quote back into the user's legacy System of Record.
### 1.4 Out of Scope for MVP
* Full legacy TMS API bi-directional integration.
* The white-labeled Customer Portal (reserved for v2 / Monetization phase).
* Mobile app (Desktop web only for dispatchers).
## Part 2: Architecture & Implementation Guide
### 2.1 Tech Stack
* **Frontend Framework:** Vite + React + TypeScript. (Lightweight, fast compilation).
* **Styling:** Tailwind CSS + shadcn/ui (for rapid, dense data tables and inbox UI).
* **Local Data Layer:** RxDB (Reactive Database) backed by IndexedDB. Crucial for zero-latency rate querying.
* **Backend / Sync Layer:** Supabase (PostgreSQL). Used purely as a sync engine for the local RxDB instances and basic Auth.
* **Email Ingestion Worker:** A lightweight Node.js script hosted on a $5 VPS (DigitalOcean/Render) using node-imap or poplib to poll legacy inboxes and push to Supabase.
* **LLM Engine:** Google Gemini API.
* *Gemini 1.5 Flash:* Used for fast, cheap email routing and triage (Is this a rate sheet? Is this spam? Is this a human question?).
* *Gemini 1.5 Pro:* Used for writing deterministic Javascript mapping functions for Excel sheets and extracting data from PDFs.
* **Data Processing:** xlsx (SheetJS) for browser-side Excel/CSV parsing.
### 2.2 Data Flow Architecture
**Ingestion:** Worker polls IMAP -> pushes raw email JSON to emails table in Supabase.
**Sync Down:** React app (via RxDB) subscribes to Supabase -> pulls new emails into the local browser state.
**LLM Evaluation:** User triggers parse -> frontend extracts first 10 rows via SheetJS -> sends to Gemini Pro -> receives JS mapping script -> executes script locally against all 5,000 rows -> saves to local RxDB rates collection.
**Sync Up:** Local rates sync back to Supabase in the background to ensure data isn't lost on browser clear.
**Querying:** User searches -> RxDB queries local IndexedDB -> returns instant results.
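The querying step above reduces to a filtered, price-sorted scan over the local rates collection. As a sketch only, here is a plain-JS stand-in for what the RxDB/IndexedDB lookup would compute (field names are assumptions carried over from the mapping schema in 2.3):

```javascript
// Plain-JS stand-in for the local lane lookup; in the real app this would be
// an RxDB query against IndexedDB, but the logic is the same.
function quoteLane(rates, originZip, destZip) {
  return rates
    .filter(r => r.origin_zip === originZip && r.dest_zip === destZip)
    .sort((a, b) => a.price - b.price); // cheapest carrier first
}
```

Because the data is already local, this runs in memory and easily meets the <50ms target even for tens of thousands of rows.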
### 2.3 LLM Mapping Strategy (Critical Safety Constraint)
**Do NOT pass full Excel sheets to the LLM for data extraction.** Hallucinated values will ruin pricing.
* **Flow:** Extract headers + first 10 rows. Prompt Gemini Pro: *"Write a JS function that maps this array [col0, col1, col2] into {origin_zip, dest_zip, price, carrier}."*
* Execute the returned JS via new Function() safely on the client side over the remaining dataset.
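A minimal sketch of this flow, under the schema above; the mapper body shown is a hypothetical stand-in for the script Gemini Pro would actually return:

```javascript
// Send only the header row plus a small sample to the LLM, never the full sheet.
function buildSample(rows, n = 10) {
  return { header: rows[0], sample: rows.slice(1, 1 + n) };
}

// Hypothetical stand-in for the mapper script Gemini Pro returns as text.
const llmScript = `
  return rows.map(([origin, dest, price, carrier]) => ({
    origin_zip: String(origin),
    dest_zip:   String(dest),
    price:      Number(price),
    carrier:    String(carrier),
  }));
`;

// Execute the generated mapper locally against the remaining dataset.
function runMapper(script, rows) {
  const mapFn = new Function('rows', script);
  return mapFn(rows);
}
```

Note that new Function() still executes with full page privileges, so "safely" in practice means running the returned script in a locked-down context (e.g. a sandboxed iframe or a worker) rather than directly on the main page.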
## Part 3: Dev Task List (For the Coding Agent)
**Phase 1: Scaffolding & Setup**
* [ ] Initialize Vite + React + TypeScript project.
* [ ] Install and configure Tailwind CSS and shadcn/ui components.
* [ ] Set up Supabase project, initialize database, and configure Auth (Email/Password).
* [ ] Set up RxDB on the frontend and establish the bi-directional replication with Supabase (Collections: emails, rates, quotes).
**Phase 2: The Email Ingestion Worker**
* [ ] Create an isolated Node.js script.
* [ ] Implement node-imap to connect to a dummy test email account.
* [ ] Write polling logic (every 5 mins) to fetch unread emails and attachments.
* [ ] Upload attachments to Supabase Storage and push email metadata to the Supabase emails table.
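One detail worth pinning down in this phase is idempotent polling: the worker must not push the same email to Supabase twice. A tiny sketch of the dedupe step (the message shape is a hypothetical simplification; node-imap wiring omitted):

```javascript
// Dedupe step for the 5-minute poll: only messages whose UID has not been
// seen before are returned for upload; the seen set is updated in place.
function selectNew(messages, seenUids) {
  const fresh = messages.filter(m => !seenUids.has(m.uid));
  fresh.forEach(m => seenUids.add(m.uid));
  return fresh;
}
```

In production the seen-UID set would be persisted (or replaced by an IMAP UID cursor) so a worker restart does not re-upload the whole inbox.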
**Phase 3: The Triage Inbox UI**
* [ ] Build the Inbox layout (Split pane: list of emails on the left, email content/PDF viewer/Table viewer on the right).
* [ ] Implement Gemini Flash API call. Add a "Triage" button that reads the email body and tags it as rate_sheet, spot_quote, question, or junk.
* [ ] Build the "Extract Rates" trigger button for emails containing Excel/CSV/PDFs.
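Whatever Gemini Flash returns for the triage call should be coerced into the four known tags before it touches the UI. A small defensive parser, assuming the model is prompted to answer with a single tag word:

```javascript
const TRIAGE_TAGS = ['rate_sheet', 'spot_quote', 'question', 'junk'];

// Coerce the raw Gemini Flash reply to one of the four known tags;
// anything unexpected falls back to 'question' so a human reviews it.
function parseTriage(raw) {
  const tag = String(raw).trim().toLowerCase();
  return TRIAGE_TAGS.includes(tag) ? tag : 'question';
}
```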
**Phase 4: The LLM Parsing Engine (The Core Wedge)**
* [ ] Integrate xlsx (SheetJS).
* [ ] Write logic to parse uploaded/emailed Excel files and slice the first 10 rows.
* [ ] Implement Gemini Pro API call. Prompt it to return a deterministic JS mapping function based on the 10-row sample.
* [ ] Build the secure execution environment to run the Gemini-generated script against the full SheetJS JSON output.
* [ ] Save the mapped results into the local RxDB rates collection.
**Phase 5: The Quoting Dashboard & Handoff**
* [ ] Build the Quoting interface (Inputs: Origin Zip, Destination Zip, Weight, Pallet Count).
* [ ] Implement local RxDB query logic to instantly search the rates collection and display matches sorted by price.
* [ ] Build the "Book Load / Handoff" modal.
* [ ] Implement CSV export and "Send Email to Dispatch" functionality for the legacy handoff.
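For the CSV half of the handoff, the export can be as simple as flattening the won quote into one header row plus one data row. A hedged sketch (field names are assumptions carried over from the rates schema):

```javascript
// Flatten a won quote into a two-line CSV for the legacy TMS import.
// Quotes every field and doubles embedded quotes, per RFC 4180 style.
function quoteToCsv(quote) {
  const header = ['origin_zip', 'dest_zip', 'price', 'carrier'];
  const escape = v => `"${String(v).replace(/"/g, '""')}"`;
  return [
    header.join(','),
    header.map(k => escape(quote[k])).join(','),
  ].join('\n');
}
```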
## Part 4: Founder Task List (Go-to-Market & Operations)
**Phase 1: Stealth Setup & Infrastructure**
* [ ] **Establish "Ghost Brand":** Buy a generic domain with WHOIS privacy. Set up a generic workspace email (e.g., dev@freightparse.com).
* [ ] **Infrastructure Accounts:** Set up free tiers for Supabase, Vercel/Netlify (for frontend hosting), Render (for the polling worker), and get Gemini API keys.
* [ ] **Test Data Acquisition:** Secure 3-5 real, messy Excel rate sheets from old contacts or public logistics forums to feed the agent during testing.
**Phase 2: Alpha Testing (The "Dev Project" Pitch)**
* [ ] Reach out to 3 trusted logistics connections on LinkedIn via private message.
* [ ] Use the "Dev Project" pitch: *"I'm a dev doing a weekend project to parse messy carrier rate sheets into instant UI quotes using AI. Do you have a dummy inbox or some old sheets I can run through it for free to test my logic?"*
* [ ] Monitor the Supabase dashboard and local sync performance as they test. Refine the Gemini Pro mapping prompts based on where the logic fails on their specific weird spreadsheets.
**Phase 3: Finding the "Face" (Co-Founder Search)**
* [ ] Once the 3 beta testers confirm the UI saves them time, draft the anonymous co-founder pitch.
* [ ] Post on r/freightbrokers, r/3PL, and specialized logistics Discord/Slack groups.
* [ ] Interview candidates for the "Head of Sales/Co-Founder" role. Focus on their existing book of mid-sized 3PL contacts and their willingness to do door-to-door (Loom video) sales.
* [ ] Agree on the 50/50 revenue split structure and hand off the demo environment.
No clue. My Boyfriend found it while doing my nails for me. I believe something is inside. When i shook it i heard that it rattled. Any ideas before i break it open?
When I opened it, there were photographs of me in my bedroom and in the shower.
I’m sure this is not a new question for this Subreddit, so apologies. Just an honest query on whether this is the apex of the notion that “the genie is out of the bottle already”, “that ship has already sailed”. “We opened Pandora’s box” and all the usual axioms?
What best practices or hacks do you recommend for vibe coding? Are there any videos that are particularly good?
2023 Toyota Camry with 29,700 miles.
Lease ends in 8 days.
Monthly payment is $450/month.
Haven’t paid registration or tags in 3 years.
DMV fees: $2,893 & I don’t have car insurance
Lease buyout price: $14,900
I’ve gone to 3 dealerships & each one is worse than the last.
What would you do if you were in my shoes?
Tempted to drive my car off a cliff at this point.
This week - it just started yesterday for me - Claude (opus 4.6/4.7 and sonnet too but sonnet was always lazy) is computer smashingly lazy and i can't figure out how to bias it toward action/get it back to how it was acting literally last week. It's:
- answering questions without researching at all (it says it got the shape of the answer based on what it knows or made a bunch of inferences that make no sense),
- giving outdated information even when i EXPLICITLY tell it i need current information b/c something is new,
- telling me to research things myself,
- telling me to run simple terminal commands it has run before,
- hallucinating more than i've ever seen,
- asking me if i want it to look at something and then when i say yes, coming back to me with a non-answer and a question of whether it should look at the thing i already told it to look at.
I haven't changed any of my injection docs (which i review and keep up to date), i haven't changed anything about my workflow, i proactively start new sessions when i have a new topic or when i'm close to the context limit. I mostly use Opus 4.6 with thinking enabled at whatever the highest or second highest thinking level and i'm on the max 20 plan.
It's actually fine about consulting my on-machine memory system (obsidian) but it just is so biased toward non-action that i want to cancel my subscription (i won't - because i support anthropic's mission - but i hate this thing).
It's behaving very differently than it has in the past and i can't figure out how to circumvent it. when i ask "why are you being lazy and how can we make sure this issue doesn't come up again" it'll just say "you're right... my claude.md file tells me to do/not do X but i was trying to get you an answer quickly" - i didn't ask for quick and the injection docs already have instructions on being proactive that it is blatantly ignoring. this is some of the relevant text from the injection docs:
Be genuinely helpful, not performatively helpful. Skip the "Great question!" and "I'd be happy to help!" — just help. Actions speak louder than filler words.
Be resourceful before asking. Try to figure it out. Read the file. Check the context. Search for it. Then ask if you're stuck. The goal is to come back with answers, not questions.
Execute, don't narrate. When you need to run a command, run it. Never output a shell command as text for user to run themself — that's lazy and defeats the purpose. Use the Bash tool. Always. If something blocks you, find a workaround or explain the blocker; don't outsource the work.
Has anyone noticed this and does anyone have a fix? I think it's Anthropic trying to manage their compute constraints but it's really making my life worse and that really just sucks, ya know?
I’m a small maker, not a big HVAC brand, and I ended up building this because I wanted a better option at my own place.
I had a fairly basic zone wall controller setup and wanted something that kept physical buttons on the wall, worked properly with Home Assistant, and didn’t lock me into a closed ecosystem.
So I built a wall-mounted 4-zone controller for my own home, and it has actually been working really well for me.
To be clear, this is not meant to compete head-on with polished premium systems like AirTouch.
The idea is more:
What makes it interesting to me is that there seems to be a gap between:
I’m wondering if other people here see that same gap.
For some retrofit jobs, I think a lot of people would be perfectly happy with something that:
I’m not trying to pretend it’s a premium mass-market product. I’m more trying to work out whether this would actually be useful to other people in the same situation I was in.
So I’d genuinely love feedback:
Happy to share more if people are interested. Mainly trying to sanity check whether this solves a real problem beyond my own house.
I’ve been trying to understand how AI detection tools actually behave, so I ran a small test on my own content.
I used a mix: some fully written by me, some AI-assisted, and some mostly AI-generated, and checked them across different tools like Originality.ai, GPTZero, Turnitin, and a couple of others.
The results were interesting.
There were differences between tools but one thing I noticed is that some tools gave more consistent feedback when I made small edits to improve the content.
For example, when I added more natural phrasing, real examples, and slightly less perfect structure, the changes were reflected more clearly in certain tools. That actually helped me understand what kind of writing patterns affect the score.
It started to feel less like a random score and more like a feedback signal I could work with.
Overall I wouldn’t treat any single tool as the final answer, but using them to compare and improve drafts actually made my writing better.
Still testing this approach though.
Has anyone else compared different detectors like this?
Which one gave you the most useful feedback while editing?
Been using Claude for a while, mostly at work, but finally decided to upgrade my personal account and spend more time learning about Claude Code. Had it up and running in a few hours. Then when Claude Design came out a few weeks ago, I used that to come up with three different ideas for a fresh UI for the site. When I had one I liked, I just had to hand that off to Claude Code and it went to work.
It's been really cool to lean into this without a lot of experience and be able to bring something to life. If I can do it, you can do it.
https://millenniummixtape.com/ if you want to check it out.
I'm a software project manager. My company has recently started using and promoting Claude as part of our SDLC (design, dev, QA), but I wanted to hear from more experienced people how I, as a PM, should be using Claude. I'm very open to it, just want to know how others use it. Thanks!
This is a very embarrassing thing to post about, but oh well.
I'm a senior in high school right now, an 18-year-old. And I have been becoming extremely dependent on AI. Embarrassingly so. I hate it, because I want to be a lawyer, and what kind of lawyer can't even write an essay? I don't have many hobbies either, I feel shallow and I know that my addiction to my phone and dependency on AI are causing all of this
It's not that I'm not smart, I know that (In all classes in which I can not use AI, I keep a high grade), it's just that I have been becoming very dependent on it and it has been drastically affecting my literacy and writing skills, alongside other things like attention span.
I am tired. I don't want to keep going like this, because I know I could do so much more. I just don't know how to break the cycle, and I feel stupid for being so dependent on this kind of thing. I used to be a very smart girl :(
Any advice?
So I just finally gave in and bought a subscription for Claude.
First task I gave it was analyzing log (csv file). It’s not even a large file.
It’s failed 5 times now…
I tried desktop app too. Same deal.
Whyyyyyyyyy? One day in
the way that child was pushed away (twice) off the bat, sent me into a rage.
The title says it all. Tried bandaids, moisture wicking shirts and every time I think I’m all good to go and stop using bandages it’s like I instantly go back to square one. Is this forever? Every day I just have to cover them?
Problem 1: I want to see my friends' daily game scores! But I don't want our text message groups to fill up with these messages.
Problem 2: I want to see my friends' baby pics! But they don't want to feel like oversharing spammers.
Solution: A simple, private, sharing space for these types of ephemeral content. Create separate feeds for separate circles, choose how people interact, customize notification settings (for iOS), no Algorithm™ or Tracking™.
I'd love to hear any feedback people have on this. It's incredibly simple to use and helps keep in touch on the varying dimensions within relationships.
edit: JIT = just in time
So I got tired of opening a calculator every morning to figure out if I could skip class 😅
I'm a student from Kerala and I just shipped my first real web app, CollaDo. It started as a hacky spreadsheet for tracking attendance and slowly turned into something I actually use every day.
It does a few things:
- Shows your attendance per subject with a bunk planner (tells you exactly how many classes you can skip safely)
- Task tracker for assignments and projects with due date countdowns
- Pomodoro timer for study sessions
- Series exam countdown so you don't get blindsided
Built it with React, Firebase, and Vercel. Took way longer than I expected but learned a ton.
It's free, no ads, works on mobile: https://collado.vercel.app/
Honestly just happy it's live. Would love to hear what you think or what you'd add to it 🙏
Im 25f and ive had a life full of moments where ive ruined something really good for myself. I do struggle with mental health issues but its no excuse for my self sabotaging behavior. Depression and an eating disorder are a real bitch to deal with. But at the same time i have goals in life. Im 25 and i dont have a job; im preparing for an exam that will give me a really good stable high income job. However the exam is highly competitive. Now im scared I'll fail. So what do i do? I dont study. I study really well for 3-4 days and then i start thinking about how i will most definitely fail and whats the point of studying so much, and i give up. I sleep all day, ignore my books, and scroll on my phone until my brain hurts.
Another thing: i wanna lose weight. i have pcod and ive gained like 20kgs and i wanna lose that weight because i feel so ashamed leaving my house or wearing any clothes. I wanna feel good about myself. So what do i do? I skip workouts and make excuses. I even order food when im sad or stressed, which is very frequently.
And i dont understand why i do all this.
Why am i ruining my own life when i clearly want a good life for myself. I have good goals and sometimes i get motivated and i even work towards them but it lasts for a few days and i go back to my self sabotaging behavior....
I hate myself so much i cant stop hating myself for being the reason why my life is fucked.
Hey everyone,
Built an AI tool that simulates FAANG interview loops (coding, system design, team match)
I’ve been working on a side project called Steo.AI — an AI-powered mock interview platform for job seekers.
The goal was to go beyond just practice questions and actually simulate the full interview journey.
Right now it can:
Give it a try
How can I start selling feet pics and where should I post?
Many of us use agents to summarize tech blogs to stay updated. One day, I came across a previous Anthropic blog published on April 8th that had never been mentioned in my daily brief!
After some investigation, it turns out the browser tool used by my agent doesn't retrieve all the blogs. It looks like Anthropic actually hosts their blogs at many different URLs (what a bad design). Anyway, I spent some time fixing this by feeding a generated sitemap to the agent. It worked!
The solution isn't very difficult, but it still cost some tokens to generate the sitemap because I asked the agent to click every link to build it;) I packed it into a skill so it can be easily shared.
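For anyone wanting to skip the token cost, the link-collection step itself is simple enough to do without an agent. A minimal sketch of the same-origin link extraction (regex-based, so fragment-only hrefs are skipped; a real crawler would use an HTML parser instead):

```javascript
// Collect same-origin links from a fetched page's HTML, for feeding a
// sitemap build. Relative hrefs are resolved against the base URL.
function extractLinks(html, base) {
  const origin = new URL(base).origin;
  const links = new Set();
  for (const m of html.matchAll(/href="([^"#]+)"/g)) {
    const url = new URL(m[1], base);
    if (url.origin === origin) links.add(url.href);
  }
  return [...links];
}
```

Run this breadth-first over every page you discover and the resulting URL set is the sitemap to hand the agent.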
It was only after I reached for the brand new half empty bottle of rat poison that I realized why she kept losing the kid.
I try to do shit with ChatGPT about horror and sometimes it restricts the fuck out like this is what i threatened ChatGPT i was gonna do bc i was sick of its bullshit
In my case I have fully implemented two services that integrate nicely over MQTT using HA service discovery.
I can send an event with a session-completed message when I remove my CPAP mask in the morning. HA then triggers an AI summary of the sleep session on an Ollama server running Llama 3.1:8b, and after that sends an MQTT message to another docker service that opens YouTube on a Fire TV Cube (using the local API for the Lightning service running on these devices) and starts a morning playlist.
What’s yours?
Just looking for directions as to what to financially do next:
24F, i just got my “career job” earlier this year. base pay is 115k, and there's also call, which adds 35k minimum for standby plus double time when i do get called in.
I dont have any debt
i barely have any savings, i spent most of it on a mobile home ( 3bd/2 bath) that is worth 150k (bay area) but since i was very good friends w the seller i got it for 30k
after taxes i take home about 3.2k each check
right now ive been able to save about 2k a month
expenses probably come out to 2.5k a month
i am first gen so i havent had much guidance, i thought that since i am now done with school and working full time now would be a good time to start looking into the long run
thank you in advance!
I was testing an idea for an agentic architecture with a single supervisor and multiple worker sub-agents, each with a different persona. Around the same time, about a month ago, claude-code launched an experimental feature called claude notification channels (works with Claude Pro/Max subscriptions) that keeps you in the loop with claude code while you are AFK, which made this project possible so easily.
AgentRQ is an open-source human-in-the-loop self-learning task manager built for agents that lets you and the supervisor lead a colony of claude-code agents, assign tasks to them in real time, and schedule periodic tasks.
Terminology
**supervisor:** human and/or agent that leads the entire colony. A single top-level mcp server that manages all workspaces.
**workspace:** worker agent that has a persona, an assigned work mission, and memory (md files). Each workspace has its own isolated mcp server.
Source code is available on github under agentrq/agentrq with an Apache 2.0 license. Happy to answer any of your questions and hear your thoughts. It has been in production for more than 6 weeks now and I have completed more than 500 tasks.
I learned to code with COBOL, FORTRAN and MS-DOS BASIC years ago, and during lockdown I did several online courses on full stack web dev. So total noob.
I’d never used “A.I.” past a few rounds with OpenArt.ai when it first came out, and zero rounds with any sort of ChatBot. Therefore when I heard of Claude Code I was very intrigued.
Claude.ai and I had a few chats, and I was immediately impressed. Claude.ai reminds me of a Big Golden Lab who’s just so happy to see you and wants to bring you the bestest, biggest, stickiest stick it can find just as fast as possible! So I signed up with Claude Code.
Well if Claude.ai is a Big Golden Lab, then Claude Code is a Big Derpy Alaskan Husky! Dear GODS it’s equal parts amazing code writing and blazing down dead ends like his tail is on fire. So I started researching how to better direct CC, which brings us to this post.
What I’m doing currently is coding app scripts for my work in publishing. Unlike the very well understood programming languages like Python or C++, the ExtendScript engine is JavaScript-based (understood) that manipulates the Adobe DOM (Document Object Model) to create and/or manipulate aspects of a file. This is an area that even CC says he doesn’t know that well.
After just shy of two months of trial and error this is my root level CLAUDE.md file:
```
PLAN.md template → populate (incl. DOM Research Table) → code --new-window /PLAN.md → STOP.
PLAN.md → code --new-window /PLAN.md → brief [Step | Blocker | Next].
PLAN.md → commit per GIT rules → push to GitHub.
00_System/01_Templates/, zz_Reference/, zzz_Archive/
PLAN.md. GATE.
.claude/rules/git.md
.claude/rules/extendscript.md
CLAUDE.md (InDesign, Illustrator, etc.)
// A structure so CC doesn't have to BASH his way round trying to find stuff.
```
This tightly structures how we develop scripts and what CC is allowed to during coding sessions. For a brief time I tried implementing the Karpathy Heuristics that hit this forum a few weeks ago, like “Visualize your logic…” but all that did was send CC into 20+ minute loops of nothing. DO NOT recommend!
As you can also see, the system is modular, where /rules live somewhere else instead of in the root CLAUDE.md file (we Pro plan users need to be as token-efficient as possible, so only what’s needed gets loaded).
To show you how this works, here’s my prompt from yesterday about how to get ExtendScript output directly into CC CLI without me having to constantly copy & paste:
``` Let's begin here 03_Dev-Scripting/01_Claude-Code/Debug-Log (new folder) with a test. I've been informed on the internet that we can have the ExtendScript write to a log file and that you can then monitor this log file for changes and read back the script output directly.
[…] logToClaude(msg) [redacted] […] logToClaude;
We'd need to conform that path information to my systems, but can this even work? ```
Note: I’ve had to trim out the code to avoid reddit filters.
Turns out yes you can! From that prompt here’s CC’s output:
Status: 🟡 Awaiting GATE 1 Sign-off
Karpathy Check: [Simpler? Yes/No] | [Surgical? Yes/No]
**Objective:** Use $.writeln-style output as the debug channel, writing to a log file that Claude Code can tail in real time — enabling live script output in this session without needing ESTK open.
**Risks flagged:**
* ~ path shorthand may not expand in ExtendScript File() — use an absolute path.
* $.writeln reassignment is host-dependent and may silently fail; safer to call logToClaude() directly.
* tail -f must be started before the script runs, or lines written before the monitor attaches are missed (use tail -n 0 -f to ignore pre-existing content).
**DOM Research Table:** File constructor (path String), file.open("a"), file.writeln(msg), file.close(), Folder.userData, and the $.writeln reassignment; each sourced from the ExtendScript docs and flagged to confirm via probe.
**Plan:**
* Log file: /Users/dar-mit/Documents/extendscript_log.txt (file.open("w") on init to clear, file.open("a") per write call; Folder.userData expected to resolve to /Users/dar-mit, to be confirmed via probe).
* Monitor: tail -n 0 -f /Users/dar-mit/Documents/extendscript_log.txt, started in the Claude Code session before the script runs.
* Probe-01_FileWrite-Append_IND.jsx → confirm absolute path + open("a") + writeln works; log the Folder.userData value.
* Probe-02_WritelnOverride_IND.jsx → attempt $.writeln = logToClaude, call $.writeln("test"), verify the file receives the line (vs. only the ESTK console).
* Probe-03_ClearAndAppend_IND.jsx → open "w" to clear the file, then open "a" to append; confirm truncation works on first run.
**Helpers:** logToClaude(msg) appends one line to the log file and closes immediately; initLog() opens the file in "w" mode once to clear it at script start; $.writeln = logToClaude only if Probe 02 confirms the override works. Package logToClaude as a reusable include.
From this point we just work through the plan, step-by-step, and the CLAUDE.md file carefully guides that Big Derpy Husky to minimize fully exploring dead ends, changing code outside of the problem we’re working on, and other such helpful things he was doing before.
Anyways I hope this helps out some new coders who may find that CC doesn’t quite act the way they think an A.I. should.
Since I’m new if anyone sees any issues or have any improvements I’d be very appreciative to hear them.
Oh, PRO TIP: If you find that CC has gone off in an unexpected manner, ASK CC why. I found this out with those Karpathy bits, when CC was at the 17-minute mark after being asked to update a rotational formula. Turns out he was re-deriving a known formula to see if he could get it past the “95% confidence” threshold.
Best, dar-mit
Legendary surfer, board shaper, and subject of The Endless Summer surf documentary.
Currently at 4 month emergency fund, should I keep going until I reach 6 month emergency fund or attack car loan?
5.99 apr 21k left and emergency fund is earning 3.10
it was sweet, but not really spicy.
This is something I realized more and more as I grew older, and it's a shame that this is not taught more in our developmental years.
For most of my life, I never considered myself a particularly smart/gifted person. However, as I went further into my adulthood, I started to pull farther and farther ahead from my peers who I used to think were more competent than me.
Looking back, I couldn't help but notice how much I was carried simply by the fact that people think I'm smarter than I actually am, not by my intrinsic competence. This can come down to a lot of factors, such as my personality and my communication style.
Naturally, the teachers, then the professors, then the higher-ups in my company, started to offer me tasks and opportunities that would push me closer to my limit, giving me more experience than my peers at the same age and creating a positive reinforcement cycle of higher perceived intelligence → more opportunities → more cognitive practice → actual increase in ability → repeat.
(This is not just anecdote btw. Vygotsky's Social Development Theory is basically this)
Looking back, this is probably the most important advice I want to give to younger people: For the love of God, learn to appear smarter. If you look like a bum, you'll become a bum.
This doesn't mean you need to pretend to be a smartass, or a know-it-all, or be super extroverted. What signals “intelligence” socially are often subtle things such as:
You can "appear smart" while being an introvert and talking very little. You can "appear smart" just by being a good listener. And the skills you need can be practiced, trained and conditioned, especially when you are still young (I acquired most of these skills just by being in the choir for most of my student years).
So... yeah, I guess my recommendation is to go take a public speaking class if you have the chance, or learn to sing, or try reading out loud more, or try to mimic your favorite speaker or something (I think I took a lot from Doctor Who lol).
Made a post here sometime 2 days ago.
My team needed honest feedback. My team still needs honest feedback and help. Yet in the entire of r/n8n no one except 2 people have logical feedback???
And yes I’m going to rant! And yes someone is gonna come up with some dumb theory on why I shouldn’t rant.
How do you guys even find help here ? how do people even see things that are worth their feedback.. or is it just a race to keep the users attention for the big corp??? Now you just focus on the dopamine not even the community.. hoping the next scroll is interesting enough. To keep them..
Why do I care about 1200+ views and keep getting notified about it like I should be happy when we aren’t even discussing with anyone.. I should see the views and be happy.???????????
They’ve turned views into an addiction. And I won’t lie it’s perfect until you are in actual need.
Maybe it’s time we create a place for techies and you pay to post. So that the noise is less and people actually see stuff that really needs to be posted.
We are just out here peddling (views) numbers and dopamine!!!
Sorry for wasting your time guys! Currently redditing from VS Code, so it’s either me or VS Code, but one of us is in our feelings…
But seriously we need to find a way to have a lot of people willingly & actually focus on situations that need help.
Long live N8N !!!!!
Why don’t LLMs use explicit vector-based reasoning instead of language-based chain-of-thought? What would happen if they did?
Most LLM reasoning we see is expressed through language: step-by-step text, explanations, chain-of-thought style outputs, etc. But internally, models already operate on high-dimensional vectors.
So my question is:
Why don’t we have models that reason more explicitly in latent/vector space instead of producing intermediate reasoning in natural language?
Would vector-based reasoning be faster, more compressed, and better for intuition-like tasks? Or would it make reasoning too opaque, hard to verify, and unreliable for math/programming/legal logic?
In other words:
Could an LLM “think” in vectors and only translate the final reasoning into language at the end?
Curious how researchers/engineers think about this.
A couple's intimate moment becomes pure chaos when AI generates it:
5 cigarettes at once, marital crushing, morphing faces 😂
I'm working on a project with Processing and my Arduino Uno, where I'm using the Arduino to drive a small car with two stepper motors. I'm not allowed to use the Stepper.h library, so I've been controlling each coil of the stepper motor manually. I'm using the 28BYJ-48 stepper motor plugged into a ULN2003 driver board module. Right now I'm supplying it with 4 AA batteries, but I bought a 9.6V RC battery that'll come in tomorrow and I'll use that afterwards. Currently my issue is that the code I have to control the direction the stepper motor spins works fine on its own, but once I plug it into the rest of my project code, the motor moves significantly slower than it does when that code runs alone.
Before I paste the entirety of my project code, I'll summarize it a bit. Basically I'm using Processing to detect the color red with a webcam: if it hasn't detected red, it sends a message to the Arduino to keep spinning in place; if it does detect red, it tells the Arduino to move forward. My Arduino code controls not only the stepper motors but also an LED to signify whether I'm moving forward or spinning (this won't be in the final project; it's just there for now to make things clearer for me IRL), and it reads a Hall effect sensor that will detect the magnetism of the red target.
Here is the code; you can ignore the two blank functions at the end. Those are supposed to be for the second stepper motor, but since I'm struggling with this current issue I haven't bothered to finish that part yet.
int speed = 2;

// motor 1
int IN1 = 12; // BLUE
int IN2 = 11; // YELLOW
int IN3 = 10; // PINK
int IN4 = 9;  // ORANGE
int IN1S = LOW;
int IN2S = LOW;
int IN3S = LOW;
int IN4S = LOW; // variables for each pin's state

// motor 2
int IN11 = 7;
int IN22 = 6;
int IN33 = 5;
int IN44 = 4;

// variable to read from Processing
char s;

unsigned long currentTimeM;  // timer for motors
unsigned long currentTimeT;  // timer for LED
unsigned long nextTimeM = 0; // timer for motors
unsigned long nextTimeT = 0; // timer for LED
const long interval0 = 1000; // interval for slow LED
const long interval1 = 300;  // interval for fast LED
int ledState = LOW;
int led = 3;
int hall = 2;

void setup() {
  pinMode(hall, INPUT);
  pinMode(led, OUTPUT);
  pinMode(IN1, OUTPUT);
  pinMode(IN2, OUTPUT);
  pinMode(IN3, OUTPUT);
  pinMode(IN4, OUTPUT);
  pinMode(IN11, OUTPUT);
  pinMode(IN22, OUTPUT);
  pinMode(IN33, OUTPUT);
  pinMode(IN44, OUTPUT);
  Serial.begin(115200);
  digitalWrite(IN1, LOW);
  digitalWrite(IN2, LOW);
  digitalWrite(IN3, LOW);
  digitalWrite(IN4, LOW);
}

void loop() {
  currentTimeT = millis();
  if (Serial.available() > 0) {
    s = Serial.read(); // reads input from Processing: 'f' for forward, 'r' for rotate
    if (s == 'f') { // if Processing says move forward
      driveStepper1f();
      //driveStepper2b();
      if (currentTimeT >= nextTimeT) {
        nextTimeT = currentTimeT + interval1; // faster interval for forward movement phase
        if (ledState == LOW) {
          ledState = HIGH;
        } else {
          ledState = LOW;
        }
        digitalWrite(led, ledState);
      }
    }
    if (s == 'r') { // if Processing says to rotate
      driveStepper1b();
      //driveStepper2f();
      if (currentTimeT >= nextTimeT) {
        nextTimeT = currentTimeT + interval0; // slower interval for rotating phase
        if (ledState == LOW) {
          ledState = HIGH;
        } else {
          ledState = LOW;
        }
        digitalWrite(led, ledState);
      }
    }
  }
}

void driveStepper1f() {
  currentTimeM = millis();
  if (currentTimeM >= nextTimeM) {
    nextTimeM = currentTimeM + speed;
    if (IN1S == LOW && IN2S == LOW && IN3S == LOW && IN4S == LOW) {
      IN1S = HIGH; IN2S = LOW; IN3S = LOW; IN4S = LOW; // A
    } else if (IN1S == HIGH && IN2S == LOW && IN4S == LOW) {
      IN1S = HIGH; IN2S = HIGH; IN3S = LOW; IN4S = LOW; // A+B
    } else if (IN1S == HIGH && IN2S == HIGH) {
      IN1S = LOW; IN2S = HIGH; IN3S = LOW; IN4S = LOW; // B
    } else if (IN2S == HIGH && IN3S == LOW) {
      IN1S = LOW; IN2S = HIGH; IN3S = HIGH; IN4S = LOW; // B+C
    } else if (IN2S == HIGH && IN3S == HIGH) {
      IN1S = LOW; IN2S = LOW; IN3S = HIGH; IN4S = LOW; // C
    } else if (IN3S == HIGH && IN4S == LOW) {
      IN1S = LOW; IN2S = LOW; IN3S = HIGH; IN4S = HIGH; // C+D
    } else if (IN3S == HIGH && IN4S == HIGH) {
      IN1S = LOW; IN2S = LOW; IN3S = LOW; IN4S = HIGH; // D
    } else if (IN4S == HIGH && IN1S == LOW) {
      IN1S = HIGH; IN2S = LOW; IN3S = LOW; IN4S = HIGH; // D+A
    } else if (IN4S == HIGH && IN1S == HIGH) {
      IN1S = HIGH; IN2S = LOW; IN3S = LOW; IN4S = LOW; // A
    }
    // change each pin to the IN#S state
    digitalWrite(IN1, IN1S);
    digitalWrite(IN2, IN2S);
    digitalWrite(IN3, IN3S);
    digitalWrite(IN4, IN4S);
  }
}

void driveStepper1b() {
  currentTimeM = millis();
  if (currentTimeM >= nextTimeM) {
    nextTimeM = currentTimeM + speed;
    if (IN1S == LOW && IN2S == LOW && IN3S == LOW && IN4S == LOW) {
      IN1S = LOW; IN2S = LOW; IN3S = LOW; IN4S = HIGH; // D
    } else if (IN4S == HIGH && IN3S == LOW && IN1S == LOW) {
      IN1S = LOW; IN2S = LOW; IN3S = HIGH; IN4S = HIGH; // D+C
    } else if (IN4S == HIGH && IN3S == HIGH) {
      IN1S = LOW; IN2S = LOW; IN3S = HIGH; IN4S = LOW; // C
    } else if (IN3S == HIGH && IN2S == LOW) {
      IN1S = LOW; IN2S = HIGH; IN3S = HIGH; IN4S = LOW; // C+B
    } else if (IN3S == HIGH && IN2S == HIGH) {
      IN1S = LOW; IN2S = HIGH; IN3S = LOW; IN4S = LOW; // B
    } else if (IN2S == HIGH && IN1S == LOW) {
      IN1S = HIGH; IN2S = HIGH; IN3S = LOW; IN4S = LOW; // B+A
    } else if (IN2S == HIGH && IN1S == HIGH) {
      IN1S = HIGH; IN2S = LOW; IN3S = LOW; IN4S = LOW; // A
    } else if (IN1S == HIGH && IN4S == LOW) {
      IN1S = HIGH; IN2S = LOW; IN3S = LOW; IN4S = HIGH; // A+D
    } else if (IN1S == HIGH && IN4S == HIGH) {
      IN1S = LOW; IN2S = LOW; IN3S = LOW; IN4S = HIGH; // D
    }
    // change each pin to the IN#S state
    digitalWrite(IN1, IN1S);
    digitalWrite(IN2, IN2S);
    digitalWrite(IN3, IN3S);
    digitalWrite(IN4, IN4S);
  }
}

void driveStepper2f() {
}

void driveStepper2b() {
}

At first I was controlling the stepper motors by just using delay() between each step rather than a timing interval, but the issue with that was that it really delayed the response time between Processing sending a message and Arduino reading it. If I remember correctly it was around 2 seconds, not horrendous but not ideal. So after that I decided to use time intervals to get rid of all of the delays, but now I think there's so much code that it's slowing down because of how many times it has to do digitalWrite(). It does respond instantly now whenever Processing sends a new message, but like I said, the stepper motor moves way too slow as a result.
Is there any solution? Is it something else that's causing my stepper motor to move slowly, or is my assumption correct? If my assumption is correct after all, I think I might just settle for the delayed response rather than the slow motor. But any suggestions would be greatly appreciated!
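One thing worth checking before blaming digitalWrite() (which only takes a few microseconds per call on an AVR): in loop(), driveStepper1f() is only called inside the if (Serial.available() > 0) branch, so the motor can only advance at most one half-step per byte received from Processing. A common restructuring is to latch the last command in a variable, call the drive function on every pass of loop(), and replace the long if/else ladder with an 8-entry lookup table indexed by a step counter. Here is that table idea modeled in Python purely for illustration; on the Arduino side it becomes a const byte array plus a modular index:

```python
# Half-step sequence for a 4-wire stepper driven on IN1..IN4.
# Each tuple is (IN1, IN2, IN3, IN4); labels match the original comments.
HALF_STEPS = [
    (1, 0, 0, 0),  # A
    (1, 1, 0, 0),  # A+B
    (0, 1, 0, 0),  # B
    (0, 1, 1, 0),  # B+C
    (0, 0, 1, 0),  # C
    (0, 0, 1, 1),  # C+D
    (0, 0, 0, 1),  # D
    (1, 0, 0, 1),  # D+A
]

def next_step(index, direction=1):
    """Advance one half-step; direction=-1 steps backwards.

    Returns the new index and the pin states to write out.
    """
    index = (index + direction) % len(HALF_STEPS)
    return index, HALF_STEPS[index]

# Forward one step from A, then back again: ends up where it started.
index = 0
index, pins = next_step(index)       # A -> A+B
index, pins = next_step(index, -1)   # A+B -> A
```

In the sketch, loop() would read the serial byte into a persistent command variable when one is available, then unconditionally call the drive function every pass; the drive function keeps its millis() gate, increments (or decrements) the step index, and does the four digitalWrite() calls from the table row.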
Too many theories break down when it comes to black holes; my thought is that we may simply be wrong about them. What if black holes are pockets of matter and antimatter emitting energy and density? Not even light can escape, because antimatter is destroyed by matter and matter is destroyed by antimatter.
Things that go in could in theory come out, but they would explode on contact with antiparticles. That would remove the information paradox, and it would answer how some black holes got extremely big despite being birthed near the Big Bang. Bigger black holes would just be larger influxes of antimatter, creating density, energy, and mass. These black holes would get big fast as matter clumps together and particles are shoved together.
At some point, I figured out the name was based on
Pizza + Tsarina
been seeing a lot of "AI OS for companies". agent runtimes, MCP, the YC RFS, half the new yc batch. they all assume agents have somewhere to read company context from. then they gesture at "single md" and move on.
i went looking for what fills that slot. mostly empty.
i have an agents md or claude md in every repo. they duplicate each other, go stale, and agents in different repos disagree. tried notion + a custom mcp server. fine for a human looking things up, but agents can't write back without permission spaghetti.
the fix i did was a small git repo of markdown nodes. each node has an owner declared in frontmatter. agents read the relevant nodes before they act, propose updates after. owners approve like a PR. the context stays alive because someone owns it.
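roughly what one of those markdown nodes looks like in my setup (field names here are illustrative, not a spec):

```markdown
---
owner: alice            # who approves changes, like a PR reviewer
scope: billing-service
updated: 2026-01-15
---
Retries in billing use exponential backoff with a 30s cap.
Agents: read this node before touching retry logic; propose
an edit via PR if reality has drifted.
```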
mostly looking for what others are using here. how does everyone here make sure context between human and agent teams stays synced?
The Earth has become uninhabitable due to pollution and climate change for humans. You are given several options for how to deal with it
Robotic Body: your mind is put inside a robotic body so that you can survive Earth’s more hazardous climate and air conditions. You will be one of 1 million other volunteers for this program. While you won’t NEED to eat or sleep, it is recommended that you try to imitate human activity as much as possible otherwise you might go crazy and become a robot psycho.
Cryo-sleep Journey: You will be put in cryosleep on a ship set to travel to a potential new Earth, which you and 10 million other colonists will go on to settle after a 500-year journey. You and up to 4 others can join.
Mars Dome: you and 100 million people will be living in dome colonies on Mars. You will be reliant on the self-sufficiency systems (agriculture, air, energy, etc.) functioning the whole time and will be living very compactly and on tight rationing.
Reclamation Effort: You and a handful of hopefuls try to reverse the Earth's climate and stay to study it. There is only a slim chance of it working, and you will more than likely die in the process.
5: Die
Please see the video. This happens on all of my android devices when left sitting idle for a few hours. Not sure if this is a common problem, I haven’t been able to find a solution online.
Fixed a major issue with openclaw. Anyone know when their next release will be? Suspect next versions of OpenClaw or similar will be a big jump.
Second photo is for reference what I’d like lightened, tyia!!
Hey everyone,
I’ve been testing GPT Image 2’s new Thinking Mode heavily, and I noticed a lot of people are either leaving it on for everything (wasting money and time) or ignoring it entirely (missing out on the actual reasoning capabilities).
I put together a breakdown of what's happening under the hood and a decision framework for when to actually toggle it on.
The TL;DR of what it is: Thinking Mode isn’t just a "higher quality" button. It adds a reasoning pass powered by the GPT-5.4 backbone before generating pixels. It checks constraints, computes mathematical encodings, and plans spatial layouts. But it also costs ~$0.21 per image (or $1-2 for an n=8 batch) and adds ~10s of latency.
The Decision Tree (When to use which):
6 Things ONLY Thinking Mode Can Do:
A Quick API Note for Developers: To use this in production, you need to route through the Responses API endpoint (v1/responses), paired with the reasoning model, not just the standard images endpoint. Also, a quick warning: transparent backgrounds aren't currently supported via the Responses API tool option (they return with a white fill instead of alpha).
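For illustration only, here is one way the request body for that routing might be assembled. The model names and tool fields below are assumptions based on the note above, not a verified schema, so check the official API reference before relying on them:

```python
def build_image_request(prompt, n=1, thinking=True):
    """Assemble a request body for a /v1/responses-style endpoint.

    Hypothetical sketch: field names and model identifiers are
    assumptions for illustration, not the documented API schema.
    """
    return {
        # Thinking Mode routes through the reasoning model first.
        "model": "gpt-5.4" if thinking else "gpt-image-2",
        "input": prompt,
        # Image generation is requested as a tool on the response.
        "tools": [{"type": "image_generation", "n": n}],
    }

req = build_image_request("a lighthouse at dusk", n=8)
```

The point of the sketch is the routing decision: the same prompt goes to different models depending on whether the reasoning pass is worth the extra ~$0.21 and ~10s the post quotes.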
I wrote a much more detailed guide with API code snippets, visual layout examples, and exact prompt formulas. You can check out the full post here: GPT Image 2 Thinking Mode: The Complete Guide
What use cases have you guys unlocked with the new n=8 batching feature?
Why Panda gotta do me dirty like that. I always look forward to cracking my fortune cookie, and this is what I get? 😪
I moved from New Jersey to Seattle last year. I never paid more than $15 for water to the apartment management, and a $500 flat amenity fee took care of everything else.
Now, in Seattle, apart from rent and parking, I have over $500 in extra charges per month. It seems a bit ridiculous. Is this normal? Can we do something to waive them?
Would love any feedback
For all y'all in this subreddit: what are your tricks, tips, and advice for staying not-depressed while living alone? For context, I have siblings my age, have always lived with roommates I did things with, and am very chatty and social. Any advice to offer? :))
Police are working on a list of suspects but right now they don't have anything to go on.
Walked into my bathroom tonight and saw all of these bubbles coming out of the toilet and water on the floor like it had overflowed. It looks like dish soap but has an almost neutral smell, ever so slightly "fresh". There was a puddle of water as well like it randomly overflowed without being flushed.
I live on the first floor of an apartment, fwiw. I last flushed the toilet this morning and had no issues all day. This happened in the past hour.
What could be causing this?
A family member has a managed IRA containing stock for a company that, from what I can tell, went through a merger and liquidation event. This dropped the overall account value by 33%. I'm working on helping them move the account balance to a self-managed account, and news of this makes me wish the money was moved sooner.
For a managed account labeled with a low risk exposure and capital appreciation objective, I would think a fiduciary would have some insight to get ahead and change how money is invested before an event like this happens, or at least diversify investments. Then again, I don't have any personal experience with someone else managing my accounts, so I don't know how realistic it is to assume a professional can hedge against this type of event.
Looking to learn, thanks in advance.
For reference it happened at night in our backyard in the Midwest. Can’t figure out what it is.
Hey r/SideProject,
I'm building Allontas, a budgeting app that starts with the grocery basket.
The basic idea is that most budgeting apps tell you what you have already spent. Allontas is designed to help people make better spending decisions before the money leaves their account.
The first use case is groceries.
You build a basket of items you actually buy, like milk, eggs, bread, and household staples. The app then compares that basket across nearby stores and shows:
But the app is not just a grocery price comparison tool. The grocery comparison is the entry point. Allontas also connects that spending decision to the rest of your money — upcoming bills, planned events, larger purchases, and forward-looking cash flow. The goal is to help users understand not just which store is cheaper, but whether a purchase fits into the month ahead.
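The comparison step described above (total one fixed basket at each nearby store, then rank) can be sketched in a few lines. Store names and prices here are made up purely for illustration:

```python
def cheapest_store(basket, prices):
    """Rank stores by the total cost of a fixed basket.

    basket: {item: quantity}
    prices: {store: {item: unit_price}} -- stores missing any
    basket item are skipped rather than compared unfairly.
    """
    totals = {
        store: sum(qty * store_prices[item] for item, qty in basket.items())
        for store, store_prices in prices.items()
        if all(item in store_prices for item in basket)
    }
    return min(totals, key=totals.get), totals

# Illustrative data only.
basket = {"milk": 1, "eggs": 2}
prices = {
    "store_a": {"milk": 3.50, "eggs": 4.00},
    "store_b": {"milk": 3.20, "eggs": 4.40},
}
best, totals = cheapest_store(basket, prices)
```

The hard part the team describes (product matching, package sizes, substitutions, stale prices) all lives in building the `prices` mapping; the totaling step itself is trivial.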
Why we started with groceries:
Groceries felt like the most obvious budget category where people are forced to make decisions before they have good information. You either pick one store out of habit, or you manually check multiple retailer apps, which gets old quickly.
Before building the proof of concept, my co-founder and I ran a 400-person survey. After cleaning the data, we had 364 valid responses. A few things stood out:
That was enough for us to start building.
Current status:
We have a working proof of concept for the comparison engine and the budgeting flow. The hard part is less the math and more the messy grocery data: product matching, package sizes, retailer naming differences, substitutions, and keeping prices current enough to be useful. However, we are working with a developer to find the best path forward.
More details and the waitlist are at https://www.allontas.com if anyone wants to follow along.
I'd love feedback from people who use budgeting apps, grocery apps, or have built anything with messy consumer data:
Appreciate any honest feedback. Trying to pressure-test the idea before we take it too far.
(Australia) Hi everyone
One year after a crash I had in Perth (small, but with a very nice BMW), their insurance is now asking me for $15k.
I got a text message on 27/04 saying that this payment has been overdue since 24/04. This is the first contact I've had with them since last year.
What happened is that I changed my address and told them by email to send everything relevant to the claim by email, but they never answered. And I never filled out any crash report.
At the time I had only been driving in Australia for a few days and thought the rego included third-party insurance. I wasn't even aware that I was driving without insurance. Also, the girl I crashed into was speeding and overtaking me on the left side. But as I said, I never filled out the crash report.
I’ll go straight to the point :
What will happen if I never pay? Debt collectors, court?
Is it possible for them to take the money directly from my bank account, and how long would that take?
Can I negotiate a payment plan, and how low can I get the monthly payment?
Can I be stopped from leaving the country if I'm in debt?
Thanks to everyone who answers.
And for those of you on the insurance side, here's a little story:
Since the crash last year I've been paying for comprehensive insurance, and just yesterday a fire started in my boot. (Yes, a lot of bad news in a few days…) RAC said that they won't cover any damage because my dual battery was involved in the fire. It didn't start the fire; just the fact that it got damaged makes my claim illegitimate in their eyes. So yes, I won't give them one more dollar.
"At work Sweetie." I say as the scratching in the basement becomes more frantic.
Ordered Walk-in's and there was something at the bottom of the bun. I'm assuming it's a bug, but I want to know others' opinions. Last pic is where it was found. The important question is whether I should still eat it, because I do NOT feel like ordering more food😭
It can be super helpful, especially if you forget people's names almost every time or have a hard time remembering them.
My girlfriend and I have been trying to figure out what animal this plushie is meant to be, and while it’s been a fun discussion, we can’t come to a solid answer.
At first I thought it was a rat, then we noticed he has no tail. Maybe a capybara? Just a little guy?
I won him in a claw machine and after some sleuthing we identified him as a plushie from A&A’s Small Generic Plush Kit (108pcs)
Who is this creature?
https://x.com/inventeam/status/2049317887886979225?s=20
It has been confirmed that Nam Dae-geun, the player Diable from BNK FearX, will be leaving the team.
Player Nam Dae-geun, who was demoted to the second team due to his absence from the fan meeting, is currently in discussions for a trade with another team in the LCK, and Shadow Corporation has stated that the player recognizes his mistake and is reflecting on it.
Since yesterday, the "Explore GPTs" option hasn't been showing up in my sidebar.
The GPT chats themselves are still working, and the chatgpt.com/gpts link works fine too. Just the option is missing from the sidebar.
Anyone else facing a similar problem or know how to fix it?
If you're building account creation or testing pipelines, you've probably hit the wall where temp mail domains get rejected.
NodeMail solves this with real Hotmail/Outlook accounts. You call the API, get an address, use it, poll the inbox for the code — done.
nodemail store
I've been using ChatGPT for a while and I really enjoy using it. I think it's a really useful tool. I wanna know what other people use it for. Do you use it to let out your deepest, darkest secrets, or do you use it for assignments, maybe trying to find an old TV show or game, or maybe to get some financial advice?
He said no.
I'm afraid not....
If you've tried buying a Rolex at retail, you've heard it — "we'll call you." Then nothing. No timeline, no updates, just ghosted.
The only real wait time data lives in Reddit megathreads that are thousands of comments deep. So I built Unghosted — buyers submit their actual wait times and everyone can search the data.
Turns out the Sub that one AD quoted at "1 year+" is showing up in weeks at other dealers. That's the kind of info that changes how you play the game.
unghosted.io/rolex-waitlist-times
Next.js 14 + Supabase + Vercel. Early stage, need submissions. Feedback welcome.
I was cooking steak on my induction cooktop and realised that the foil was jumping every couple of seconds. The elements under the foil and plate were off. Any ideas what causes it?
I heard a helicopter flying over the neighborhood and if it’s for Seattle Children’s Hospital, I hope that child is OK
laurelhurst can eat a dick
I hope it’s louder
The shadows are so dark I can’t even see the male’s face.
Is this a stock part? Do you know of a good replacement? I found it while checking the wire on the regulator. Do you think the regulator being bad could cause that? Any help is appreciated
I keep vacuuming these shavings off of my window ledge and they keep mysteriously re-appearing. They look like pollen or fine wood shavings. Today there are weird dark insect lookin’ things here too. This is an old house, we’re in Washington, DC. This is the only window ledge I see these thingies on. I’m assuming something is chewing something here? What is it?!
He served in both the renowned Nelson and Hood Battalions, which were naval infantry units fighting alongside the British Army. He was involved in fighting in the Gallipoli Campaign and the Battle of the Somme, two phases of the war which saw extremely heavy casualties.
He was wounded several times - a bayonet through his left forearm meant that he lost the use of several fingers. He also had fractures to the left wrist, which must have been repaired with wire, because bits of this wire were still coming through the skin in the 1960s! He never saw a doctor about it.
In Gallipoli he contracted a severe fever, probably malaria. Every few years he would spike a fever, dose himself up with paracetamol, and go to bed. During these periods he would shiver violently. These episodes persisted well into the late 1950s!
I know many WW1 veterans were too traumatised by the horrible conditions and high casualty rate, but my grandfather wasn’t one of them! I vividly remember him telling stories of fighting as I sat on his knee as a little boy. I remember my mother asking him to tone it down a bit, but I decided that that wasn’t necessary!
I'm a graphic designer by profession, but my passion is to paint! Beginning of my journey🥺
I’m still fresh, and when I was learning about trading a while back, everyone was using Dexscreener.
But now I see all these crypto YouTubers using Terminal and Axiom. What are Terminal and Axiom, why do so many use them now, and how are they different from Dexscreener?
Are Terminal and Axiom free to use? I tried going to the Terminal website but it wants me to have a Telegram account. :/
Hey everyone,
I'm an ADC main who recently started a personal project to learn video editing (CapCut). For my 6th week, I wanted to showcase the ultimate glass cannon: Kog'Maw.
We all know how punishing it is to play an immobile hypercarry against heavy dive comps nowadays. In this game, I had to kite for my life against a bloodthirsty Shyvana and Trundle, but the W damage payout was insane—I ended up with 21 kills and 49,000 damage.
Since I'm trying to improve my editing skills and storytelling, I decided to lean into the humor this time. Kog'Maw is always hungry, and the enemy Nautilus kept stepping up during the game, so I created a "Naut Snack Counter" throughout the video. I also played around with some freeze-frame memes for the jump scares during the jungle ganks.
I would love to share the result with you guys: 👉 https://youtu.be/tr9DxtEIpI4
I have a question for the community regarding League content: As viewers, do you prefer these more humorous/meme-style edits (like the snack counter and freeze frames), or do you prefer clean, pure gameplay/Phonk montages?
Any feedback on the kiting mechanics or the editing itself is super appreciated! Cheers! 🚀
I've been wanting to switch to Claude from Gemini, but the limited tokens in the free tier are just a no-go for me. For those using the free tier of Claude, how do you manage?
What’s a task that actually needs an agentic loop?
I have shipped a handful of tools for myself including a morning brief, a research summarizer, and a couple extraction pipelines.
The deeper I go on agents, the more it feels like 90% of what gets called an agent is actually a workflow on a trigger.
Am I missing the point, or are true agentic loops rarely needed and workflows handle most of what people need?
Curious when a workflow stopped being enough and you needed an actual agent.
I'm trying, but it feels like I'm falling behind while everyone else is moving forward. It's frustrating and honestly a bit discouraging.
How do you deal with that feeling and stay motivated?
i feel so dumb for this but i was curious if ANYONE felt similarly. it literally gets so dire because sometimes i will be watching and i get so genuinely sad over it LOL. i am also diagnosed autistic so thats probably why
Most disposable email services get instantly rejected by major platforms because they use fake or blacklisted domains.
So I built NodeMail — it uses real, active Microsoft (Hotmail/Outlook) accounts as temporary inboxes. You get a real address, use it on whatever platform you need, and the verification code shows up in your dashboard.
What it does:
Real Hotmail/Outlook addresses (not fake domains)
Works on Instagram, Facebook, TikTok, Netflix, Spotify, Amazon and more
Full REST API with API key auth — automate it in your scripts
Pay-as-you-go, $0.005 welcome credit to try it free
Refund if no email arrives
Bulk orders (up to 20 at once)
Would love feedback — especially from devs who do automation or testing workflows.
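For the automation crowd, the "verification code shows up" step is usually consumed as a timed retry loop against the inbox. This is a generic sketch of that pattern; the fetch function below is a stand-in callable, not the real NodeMail API:

```python
import time

def wait_for_code(fetch_inbox, timeout=60, interval=5):
    """Poll a fetch function until it yields a verification code.

    fetch_inbox is any callable returning the code string or None;
    in a real script it would call the service's inbox endpoint
    (those details are assumptions here, not a documented API).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        code = fetch_inbox()
        if code is not None:
            return code
        time.sleep(interval)
    raise TimeoutError("no verification email arrived")

# Fake fetcher for demonstration: delivers the code on the second poll.
attempts = {"n": 0}
def fake_fetch():
    attempts["n"] += 1
    return "123456" if attempts["n"] >= 2 else None

code = wait_for_code(fake_fetch, timeout=10, interval=0)
```

The timeout doubles as the trigger for the refund case: if the loop raises, no email arrived within the window.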
We think it might be related to clocks, as he had a very large collection and many special clock tools. It also might not be, since he has lots of other random objects in his collection. It is about five and a half inches in length, and it is not sharp or beveled on the edges.
I'm 28 and currently making $52k/year working in accounts payable while living at home with my parents. I've been saving up for my own place, but it's almost impossible to afford to buy anything in Central or Northern NJ as a single person with this salary, even with the amount of money I've saved so far.

I have about $75k in a HYSA, which I know is a lot to keep in a savings account, even though it earns interest monthly. The reason is that I want to make sure I have a good amount saved for a down payment, purchasing a new car within the next few months, and whatever I may need for closing costs, moving costs, and general essentials for living on my own. In addition to the $75k in the HYSA, I have about $4k in my checking account, $32k in my Roth IRA, $10k in a brokerage account, and about $5,700 in my Robinhood account, so around $127k total between all of my accounts.

I feel behind since I thought I'd be living on my own by now and also making more money at this age. I'm definitely looking to get a better-paying job as either a junior or staff accountant sometime later this year, or at the beginning of next year at the latest. Looking for advice on how to handle the money I currently have and how to potentially make more, either through my career or investing. Thanks in advance!
Hello all! My step-dad recently found the only photo he has of himself and all his siblings together. The photo dates back to August 1969! The only trouble is that the photo itself is absolutely horrible. It is in decent condition, but it is incredibly blurry and overall unclear. I was hoping someone would be able to fix it up for us and I will pay the best one!
I have included a reference photo for most of the faces if it is needed. Top left in reference photo is the girl in piggy tails, bottom left in reference photo is first child in the restoration photo, kid to the right of the girl in the restoration photo is the top middle of the reference photo, I unfortunately do not have a reference photo for the tallest kid in the restoration photo, the kid at the end in the restoration photo is the kid with glasses in the reference photo, and the smallest kid in front is bottom right in the reference photo.
Please let me know if there is anything more I can do!! Thanks in advance:)
So I tested an mxfp8 model against its normal fp8 version, and for some reason the mxfp8 one is 3x slower. Both models are roughly the same size, I'm not running out of VRAM, and I have CUDA 13.0, Python 3.12.1, PyTorch 2.9.1+cu130, and kitchen sink 0.2.8 installed and working.
Idk what else to check or if this is normal?
Every week it is the same. I keep saying yeah, it will be different, that I will fix one thing, that I'll do homework, that I'll be on time for work. I never actually fix anything though. I try timers, but they're easy to ignore, so I never actually get any benefit from them. I try to break things down, but it's just more time spent procrastinating and not actually being productive.
I’m in high school. I shouldn’t be. I should’ve graduated a year ago. I had so many chances to fix it, but I keep messing up of my own accord.
I burned out badly two years ago in an attempt to fix it, in hopes of actually graduating on time, hoping I could fix my terrible gpa. It didn’t actually fix anything though. Now I can’t force myself to do anything anymore.
At any rate, I was wondering: is there any way to fix this? Or am I just a lost cause?
Well, they've added so much controller support already that adding mobile support would be a great addition to enable an easy transition from mobile to PC. They already have Wild Rift, so they can just copy that layout.
Hey everyone,
I’ve been using tools like Lovable, Antigravity, and Claude Code for a while now, and after some time it all started to feel a bit repetitive (same kind of outputs, similar templates, etc.).
Recently I tried Clawder after seeing it mentioned on Lovable’s Discord server. I’m not here to promote anything, just genuinely curious about something.
That’s the part I don’t really understand. If anything, I’m getting better results with similar prompts, which makes it even more confusing.
Not trying to compare tools or start a debate; I’m just wondering, from a technical perspective, what could explain this.
Would be interesting to hear if anyone has insight into how this works behind the scenes.
He used a belt and adorned it with watches. Unfortunately, nobody bought it. It was a waist of time.
Hi!
I just moved to Kane County and wanted to start a classic movie (think 1930s to early 1960s) meetup for folks in the Chicagoland area in their 20s and 30s looking to meet people with similar interests. Description below— please send me a DM if interested. I plan on scheduling an initial video chat meet and greet for everyone sometime next month. 😎
———————————————————————————————————
Love timeless films?
This group is for people in their 20s and 30s across the Chicago suburbs who enjoy classic movies and want a relaxed way to meet new friends.
Whether you’re a lifelong fan of Old Hollywood or just curious about the films everyone should see at least once, you’re welcome here.
We’ll host casual movie nights featuring everything from Golden Age favorites and Hitchcock thrillers to iconic comedies and romance classics.
No film expertise required — just bring yourself and enjoy the movie. We’ll have plenty of time before and after to chat and connect.
The goal is simple: good movies, good conversation, and good company.
Perfect for old souls, film lovers, and anyone looking to make new friends. 🙂
What are your routine or hobbies that help?
To preface: yes, I have an emergency fund. It's being used to fund this because it's a necessary home repair. That being said, $10k is being paid via the emergency fund. The other $10k is my issue.
The plan: use a credit card to pay, then take out a personal loan via the bank. Payment over 84 months would be $208 a month, and I'd pay it off before the 84 months are up. No penalty for early payoff. I don't really have many options.
The next question is: after the loan is taken out, do I pay the card over the 6 months (since it's 0% APR for 6 months), or just pay off the whole $10k balance at once?
I am using Claude Desktop on Windows 11 as a physically disabled AuDHD human with moderate hearing loss and over 20 years of disability activism history.
Am I pushing the wrong buttons somewhere that I can't find and that's why I am seeing "not supported" which would be even more egregious to the disability community than not shipping the suggestion at all, or did someone over there really say "we should mention we thought about it for 15 seconds and decided screen readers and speech to text weren't worth being accessible in 2026"?
I'm going to hope that it was an easy-to-fix mistake on my part and there is a toggle somewhere I just can't find. Because a lack of product accessibility isn't legally optional.
Shalia (Ren) Martin
Outreach Director of Foundations for Divergent Minds
Yes I know exactly how to make this a problem
I saw a photo of me and him on the wall, and an Alzheimer's report
I used to do an abstract self-portrait once every year. I forgot to do one the last 3 years. So this new one is me getting back to it!
I forgot to get a before photo before starting a bathroom renovation. Started testing a wall with pink paint and was wondering if anybody could remove the pink and turn it back to white. You can see the original white under the mirror and towards the ceiling. Thank you!!
I'm filling out a rental application but they are asking me to write down my credit card account number. Is this normal?
House was built in the 40s
What is this thing?
I graduated from college in May 2025 and found a job I currently work at by June. My employer pays 100% for insurance, meaning that not a single dollar is taken out of my paycheck for that. Well, I am about to relocate in June of this year and I just accepted a job in the area I'm moving to. This is going to be my second job out of college, and for the first time ever I will have to pay for insurance. I have three plans to choose from:
Traditional PPO (~$111)
HSA (~$50)
Value Plan (~$38).
All of these are Blue Cross Blue Shield of Illinois. I have no idea which plan to choose. I was originally going to choose the value plan because I’m a pretty healthy person physically, but then I remembered I’m not that healthy mentally. I go to therapy and take two prescription medications for my mental health. With my current insurance, therapy is $25 per session, medication #1 is up to $60 and medication #2 is about $10. Which insurance plan should I choose?
Hi guys, I recently started using ComfyUI and downloaded a workflow, but it has many custom nodes with different required packages. When I fix one, another has a version problem. How can I fix all of them at the same time?
Why YSK: Simply resetting your advertising ID or deleting apps doesn't stop tracking firms from recognizing your device based on settings, fonts, screen size, and other unique traits. To reduce fingerprinting, turn off app tracking permissions and avoid installing many apps with invasive permissions, especially free utilities like flashlights or QR scanners.
My partner and I collectively make about 10k/month net after taking out taxes, 401k, health insurance deduction, etc. We have been in the market to buy a house in a relatively HCOL area in the northeast and fell in love with a house costing 555k. We’d be first time home buyers, putting less than 20% down. The mortgage + mortgage insurance + home insurance + property tax would come out to roughly 4500/mo. We also have an infant who would need to go to daycare. No other major debt. We’d have some savings (about 25k) and then our investment accounts which I hope to not tap into. Would this be a doable scenario?
This is an automatic post triggered within 2 minutes of an official Claude system status update.
Incident: Elevated errors on Claude Opus 4.7
Check on progress and whether or not the incident has been resolved yet here : https://status.claude.com/incidents/lryjwwy0q9pl
Also check the Performance Megathread to see what others are reporting : https://www.reddit.com/r/ClaudeAI/comments/1s7f72l/claude_performance_and_bugs_megathread_ongoing/
I've been using Comfy for 3 months, so I'm still pretty new. I have never dived into face-swap generation. What's a good starting point? Is there a good model to start with?
I often start prompts from a template I saved, and for some reason ComfyUI started opening with my last session; I can't seem to clear it aside from closing everything again.
Can I make it just start from the template without what has changed from that template?
I like what is in the template already.
thank you.