This is a first. I've never lost access to any of my past sessions because I unsubscribed, in any of the LLM apps.
I actually wanted to try out Codex previously, but had a similar experience with my credits. They gave extra credits, equivalent to my monthly subscription price, with some time limit, because Claude had so many issues that month. And as soon as the plan ended, I lost access to the credits. Even after resubscribing, I still don't have access to those credits.
I have sympathy for the engineers, especially the ones putting themselves out there on X. But they only sort issues out when someone with a large following runs into one.
Having worked at a billing company, I can see how complex contracts sound good to the growth/sales folks but are horrible for the engineers actually implementing them. Their complex rate limiting, which is now the norm, and identifying other harnesses to count against extra usage are probably not easy to implement without very rough edge cases. But what's problematic is that all the "bugs" land exactly where the user gets screwed.
I just wanted to post this here, after tagging them multiple times on X, to alert other users.
It could be worth a quick $20 subscription just to grab your stuff, then cancelling. Trying to get support from either Claude or OpenAI seems pretty hopeless. Hopefully this post will get them to see you.
If you export your data [0], all your Claude Design chats are in a design_chats directory along with the code, even if your account currently has no access to Claude Design. It is .json, but converting that into usable code is easily done, either manually or by asking any fairly modern LLM via OpenCode. I just did it myself; it works.

I will say I'd still prefer if they allowed API use of Claude Design. It has some niceties in how follow-up questions are implemented that make it worth it for very narrow UX experimentation, but I can't justify a whole sub at the moment: for the first time I started experiencing regressions, up to Opus being unusable via Claude Code on the Max subscription, and the new pretrain in GPT-5.5 is very strong for very specific coding use cases. In fairness, though, its compaction and task adherence can be inferior to GPT-5.4, which did both better than any other model ever, so using both for their specific use cases is my go-to.
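For the conversion step, something like the sketch below works (the export schema isn't documented anywhere I know of, so the "chat_messages" and "text" field names here are guesses; adjust to whatever the JSON in design_chats actually contains):

    # Hypothetical sketch: pull fenced code blocks out of exported chat
    # JSON and write each one to its own file. Field names are assumptions.
    import json
    import re
    from pathlib import Path

    EXPORT_DIR = Path("design_chats")   # from the data export
    OUT_DIR = Path("recovered_code")
    OUT_DIR.mkdir(exist_ok=True)

    # Matches ```lang ... ``` blocks inside a message's text.
    FENCE = re.compile(r"```(\w*)\n(.*?)```", re.DOTALL)

    for chat_file in EXPORT_DIR.glob("*.json"):
        data = json.loads(chat_file.read_text())
        for i, msg in enumerate(data.get("chat_messages", [])):
            for j, (lang, code) in enumerate(FENCE.findall(msg.get("text", ""))):
                out = OUT_DIR / f"{chat_file.stem}_{i}_{j}.{lang or 'txt'}"
                out.write_text(code)
                print(f"wrote {out}")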
Not feeling like commenting on every statement regarding SaaS and expectations, but I will say that some are mistaken/not considering the law and your rights by just telling you it is your fault and (at least) implying the data is lost. It can't be; think about it. Any temporary subscription cancellation, payment processing issue, bug on Anthropic's part, etc. would mean permanent data loss. That'd be less than ideal, not least because Anthropic has in the past had trouble processing payments from verifiably covered accounts.
Users in consumer-friendly jurisdictions have the right to export and access their data, including data not exposed via any frontend or API, if it's associated with their account. It doesn't matter whether they pay or not. Of course, manual backups are always preferable; a provider could still have a data loss. But as long as they have the data, at least in my neck of the woods, they have to give it to you. As it should be.
To end: I generally try not to comment on or downvote others' comments outside of actual spam and bad faith, but if more than one comment has already been helpful enough to tell OP that they should have exported/backed up, do we really need it repeated?
[0] https://claude.ai/settings/data-privacy-controls
Yeah... anyway, it will be my coding agent reading these, and if needed, it can show me what they look like.
In an ideal world, I know all these things should be in place, but I wasn't sure they had the bandwidth to implement all of them before releasing these things into the wild. But I will use it to download my sessions.
As a dev, building the product is the fun part; implementing entitlements, payment gateways, rate limiting, usage calculation, billing, GDPR stuff, account creation, deletion, and export are the boring parts. So I wasn't sure they would have implemented this part.
To add something else to the discussion, however: I'd encourage people to skip Claude Design for other reasons, namely the inherent limitations of LLMs for visual design. LLMs are blind, and reasoning about spatial relationships is tremendously hard across layers of nested HTML/CSS.
If you're early on, I'd recommend starting with diffusion first. GPT-Image-2 is phenomenal at UI design, and especially if you're just starting out, it will let you align on a direction more rapidly than an LLM can. The difficulty will be converting from image to HTML, but you'll be able to explore different directions more cheaply and faster than you could with Claude Design.
I will note a bias disclaimer here - I quit Figma to work on my own diffusion-based UI design tool. Not promoting that here, but wanted to at least share my findings in this space.
Also, GPT-Image-2 is not a diffusion model; it is transformer-based, like other LLMs.
Try doing 100% vibe-coding with an agent, loosely specifying what kind of application you want, and observe how the resulting UI and UX are a complete mess, unless you specify exactly how they should work in practice.
If they actually had spatial understanding, together with being able to visually experience images, they'd probably be able to build proper UI/UX from the get-go. But since they can only describe what those things are, you end up with the messes even the current SOTAs produce.
Yes, the visual intelligence is limited, but they do actually have vision capabilities.
Images are tokenized and fed to the exact same model; they can "visually inspect" images, e.g. "find the 2 differences between two images" and "Where's Waldo"-style things.
So your mental model that they see descriptions is inaccurate.
Exactly, and here is where the fidelity of an image gets lost. They don't "see" visually; they get a representation of the image via tokens. That's why I said they don't see the image but basically "see an explanation of the image". I don't mean a caption, but in the end they act on and work with tokens, not pixels or actual images, internally.
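To make "tokenized" concrete, here's a minimal sketch of ViT-style patch tokenization. What Claude/Grok actually do internally is proprietary, so this only illustrates the general pixels-to-tokens step being discussed:

    # Minimal ViT-style sketch: the transformer attends to patch
    # embeddings ("image tokens"), not to raw pixels.
    import numpy as np

    image = np.random.rand(224, 224, 3)   # stand-in for an input image
    patch = 16                            # 16x16-pixel patches

    # Cut the image into non-overlapping patches and flatten each one.
    h, w, c = image.shape
    patches = image.reshape(h // patch, patch, w // patch, patch, c)
    patches = patches.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * c)

    # Project each flattened patch into the model's embedding space
    # (the projection is learned in a real model, random here).
    W = np.random.rand(patch * patch * c, 768)
    tokens = patches @ W
    print(tokens.shape)   # (196, 768): 196 image tokens

Anything finer than the patch resolution can get blurred away at exactly this step.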
An example from Grok and Claude, with a very simple test case: I made a white image with 7 red dots and asked Claude and Grok to count them. The filename is "8-red-dots.png", but the image actually has only 7 dots.
Because they don't actually receive the image itself but "tokenized images", as you say, they don't seem to be able to count the red dots. ChatGPT correctly identified that there are only 7 dots, but seemingly only because it ended up using Python to actually count the pixels.
Original image + what the various LLMs responded: https://imgur.com/a/vh1tU6Y
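For reference, a rough sketch of the kind of check ChatGPT presumably ran (the code it actually generated isn't shown, so this is just my assumption of the approach): count connected clusters of red-ish pixels with Pillow/NumPy/SciPy.

    # Count red dots by pixel color, not by "looking" at the image.
    from PIL import Image
    import numpy as np
    from scipy import ndimage

    img = np.array(Image.open("8-red-dots.png").convert("RGB")).astype(int)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]

    # "Red" = strongly red, weakly green/blue; thresholds are arbitrary.
    red_mask = (r > 150) & (g < 100) & (b < 100)

    # Label connected components so each dot counts once,
    # regardless of how many pixels it covers.
    _, num_dots = ndimage.label(red_mask)
    print(f"found {num_dots} red dots")   # 7 for this image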
Again, this is a very simple (and dumb) test, and I won't claim it's science, but once you start trying to use these vision models for precise and exact UI and UX work, you'll notice over and over how bad their fidelity and spatial awareness actually are when it comes to images.
I've tried doing the same for design work, really outlining exactly how the UI and UX need to look and work, but for some reason it struggles a whole bunch with it, regardless of how clear I am. Maybe I'm just bad at explaining and describing what UI and UX I'm actually after, though, I suppose.
But then, I would not spend more than five minutes on this decision, so I'm probably the wrong audience for this ;)
The UI and UX of the product were amazing, and it took some time to get used to actually delivering pixel-perfect designs across three different browsers, but fun times regardless :) It probably takes a certain kind of individual to enjoy that sort of experience, though.
Where are you getting this from btw? AFAIK, OpenAI hasn't openly talked about what exactly is powering the Images 2.0 stuff, unless I missed something? I think they've said it's not a diffusion model, but I'm not sure they've said what they're doing instead, have they?
Ask an LLM how much time has passed. Watch it hallucinate wildly.
Has anyone noticed that Opus has trouble building ASCII diagrams? It often leaves out spaces, so lines end up misaligned.
As semiquaver said, modern LLMs are multimodal; they can reason in image-space and audio-space as well as in text-space. It is not a translate-then-operate kind of situation. Also, Claude Design is not a raw LLM, nor an instruction-tuned LLM; it is an agent harness around an LLM that allows it to do certain things.
I wish it was more integrated into PowerPoint but it's still the best slide generator I've used.
But I will give GPT-Image-2 a try. Actually, a few months back I remember doing this kind of UX/UI research in the ChatGPT app itself, just asking it to generate what a certain app might look like, etc.
Please let me know what your UI design tool is. I want to try it out.
Yeah, I'm starting to be worried about Anthropic's security controls for customer information.
To say they'd have a firehose of sensitive info from customers would be a massive understatement. Hackers gaining access to that, especially for a non-trivial duration, would be a disaster.
Claude Design, in my experience, is very, very solid.
No kidding - you can't even delete a design system, draft or otherwise. "Research Preview" is accurate. It can do some things, though every system I've tried building resorts to the "hero text with key word in a different color" trope no matter how I vary the prompts, and there's a lot missing. And when you ask Claude Design how to delete a design system, it gives you an absolutely inaccurate, hallucinated answer; when you say "fine, here's the project ID, do it for me", you get "Sorry, can't, only you can".
Anthropic lazily calls everything a preview and then pushes it hard on everyone. That feels dishonest.
Similar to Claude Code, they need to revolutionise customer support. Maybe, starting from a ticket, if the agent decides it's a real and legitimate bug, it will go on and fix it.
FTFY
Shiny thing syndrome at its finest.
It's really hard not to, especially if you enjoy building.
It's funny because sometimes it will remember stuff that is lost and not be able to reference stuff that is clearly visible.
One area where I find ChatGPT superior (and this is just my own experience) is not losing things and also respecting project boundaries. Claude projects just seem to be a way to lose things faster; the model seems to be entirely unaware of projects as a concept.
Anthropic may be a bunch of skids but it sounds like they did the right thing here. Pretty much all SaaS applications, especially in B2B, are required by compliance to remove customer data within X amount of time at the end of the contractual relationship.
The only example I can think of is the TV services: Netflix will erase your watched-show list if you unsubscribe. But they are very purposefully doing it out of spite: they want to push you toward not unsubscribing at all (so they penalize it, even at the cost of discouraging you from coming back... because they know "subscription hopping" is a thing and expect you'll come back anyway).
It's 100% a dick move when the TV services do it, but at least it (kind of) makes business sense for them to do it. For Claude it's just alienating their customers needlessly.
> are required by compliance to remove customer data within X amount of time at the end of the contractual relationship.
That's a very bullshit justification; we're not talking about the 'delete account' button - especially since Claude has a free tier.
Google Workspace seems to halt access immediately[1] and purge data within 60d[2]. For comparison, Atlassian leaves you access for 15d, and purges data at 60d[3]. 365 gives you 90d[4] before purging.
This is a pretty regular thing across the industry.
[1] https://knowledge.workspace.google.com/admin/billing/cancel-...
[2] https://support.google.com/a/thread/345697828/recovering-dat...
[3] https://support.atlassian.com/security-and-access-policies/d...
[4] https://learn.microsoft.com/en-us/compliance/assurance/assur...
I want to know if I should subscribe again to get this data or shouldn't bother.
You can talk about all these rules, but that was my data from when I was subscribed to their product. I'm not asking for access to generate more, just my past sessions.
How long would you expect them to keep your data for? Do you really expect them to pay storage costs for your data indefinitely just because you paid them $20 once upon a time?
And on the inverse side, would you really want your data to be compromised when they inevitably get breached just because you had a sub there once?
These are the reasons data retention policies exist.
It's not entirely unprecedented; I've seen these tactics in the Google ecosystem. Google Music: unsubscribing killed (kills?) access to see your playlists, which of course you only learn once it's done. Give them a credit card again and you can see and export them again. Magic!
I resubscribed for 1 month, exported everything, unsubscribed, and swore never to trust Google Music again. Idk why they implement patterns like that: sure, you extorted $10 in cash out of me, but it makes the brand toxic. There is no way that decision has a net positive future value. Hell, it even got them a pissed-off HN post years later.
I've been on product launches many times, so I can drive the design side appropriately and keep things focused. It has been a wonderful addition to my workflow.
As usual with any agent-driven tool - GIGO. If the human driving has no product experience and is blindly accepting designs, well, that's... a choice.
It's an extreme example of slop code: while LLMs normally produce code ranging from somewhat-okay to utter garbage, the web code Claude makes is awful. On the other hand, you get a single file (even if it is full of 20+ embedded SVGs, JavaScript snippets, and other such things).
I actually find Claude models to have superior visual reasoning among multimodal LLMs; I'm not talking about image-generation models. So I just share the picture to let it understand the layout, go from there, and iterate until I like the final look of it.