Petra

joined 2 months ago
[–] Petra@lemmy.world 2 points 3 weeks ago (1 children)

I don't really have much to add since most of your suggestions are reasonable and valid.

One thing I must say is that in my experience, 90% to 95% of the user base actually uses ACC for NSFW and sexual stuff with their characters lmao, not narrative-driven RP with lore-rich stories and long dramas. At least that's what I see on most of Discord and Reddit. The last one that wasn't was a Reddit user who had images and nice lore about a black ooze. I even wish more people were interested in this sort of use for AI chats, as I'd be interested in sharing and reading stories, but it seems to be a rarity.

Still, without accurate data, we can only guess what people actually use it most for.

The image upgrade was much better than I was expecting, so at least we can be hopeful the new model will be just as good and provide good uses for both ACC and specific use cases like AI RPG (which also might need improvements to match ACC's features).

[–] Petra@lemmy.world 2 points 3 weeks ago (3 children)

Some of these suggestions are related to the interface itself (ACC), not the model (AI) that runs on the server. Modding ACC's features is possible since the code is open for anyone to edit. The rest will only improve significantly when the model is upgraded.

In my view, ACC is more of a generic platform for chatting with bots in several separate threads, not one focused solely on being an RPG adventure (like AI RPG, for example). Something like this might be a different goal from what ACC intends to be. An upgrade to AI RPG with these features would sound more appropriate, I believe.

That is why AI Dungeon is called that: it is focused on a specific type of AI roleplay and usage. ACC is a general chatting platform, and more advanced users have tools available to modify it for their use case, similar to SillyTavern, for example.


Most of the AI problems you mentioned have to do with its low context window (6k tokens). It's not able to ingest that much information, so unless the context is properly curated with what is happening, the AI will miss things and details. Many other models on the market have immense context windows, from 64k or 128k up to 1 million tokens (or so Google claims). Until the text model is upgraded, the best you can do is curate the information available to it (past messages, summaries, character descriptions, and General Writing Instructions are the most important).
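As a rough illustration of that curation math (the ~4-characters-per-token heuristic and the part names are my own assumptions, not ACC's actual internals):

```javascript
// Rough sketch: checking whether curated context fits a 6k-token window.
// The ~4 characters-per-token heuristic and the part names are illustrative
// assumptions, not ACC's actual internals.
const estimateTokens = (text) => Math.ceil(text.length / 4);

function fitsContextWindow(parts, limit = 6000) {
  // parts: e.g. { summary, characterDescription, recentMessages, instructions }
  const used = Object.values(parts).reduce(
    (sum, text) => sum + estimateTokens(text),
    0
  );
  return { used, fits: used <= limit };
}
```

If the total goes over the limit, something gets cut or ignored, which is exactly when the AI starts "forgetting" details.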

Now, about the AI issues, let's start with chars not leaving the scene. I faced this problem a lot more with narrator characters or when using a single character. One thing to keep in mind is that the description of the thread's main AI is always sent to the AI with every message. So, if you have an actual character as the main AI, their description is always sent, even after they leave the scene. This can cause the AI to keep referring to them, especially if their description is long.

I started using a narrator character (more of a "hub" character just to host common lorebooks and other things) and never posted as them, only as my main chars. Its description was always sent to the AI and actually improved the narrative, though on very rare occasions the AI would still refer to "The Narrator".

Using multiple chars made this a lot more consistent, and the AI was able to read previous messages to better decide what their status was. I had a long 2v2 battle between chars, and once two of them died, they stayed dead; no issues there. Make sure to always check your summaries and memories to avoid wrong information being fed to the AI.

I eventually found out that having multiple chars in the same thread causes all of their descriptions to be sent (for the chars that posted in the last 20 messages), so the AI receives more information about them, instead of it only being gleaned from previous messages or occasionally retrieved from the lorebook (which has a low limit on how many entries can be retrieved).
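A minimal sketch of that selection behavior, with made-up data shapes (this is not ACC's actual code, just the idea):

```javascript
// Sketch of the behavior described above: only characters that posted
// within the last 20 messages have their descriptions sent to the AI.
// The message/character shapes here are assumptions for illustration.
function activeCharacterDescriptions(messages, characters, windowSize = 20) {
  const recentAuthors = new Set(
    messages.slice(-windowSize).map((m) => m.author)
  );
  return characters
    .filter((c) => recentAuthors.has(c.name))
    .map((c) => c.description);
}
```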

Perchance uses an old text model; curating the information, understanding its limitations, and knowing what it's actually receiving all help you deal with it better and improve your experience. Will you have to micromanage? Sure, but the AI is not magical; it has limitations that become apparent after a while unless you work around them. There are several tricks available, like using character injection or hidden system messages.

Eventually, I decided to code a script to adjust which char descriptions were sent, so that I could conserve tokens and also better indicate that a character had left the immediate area. This solved the problem of chars being mistakenly referred to as if they were still around. It is available on my fork that VioneT posted, in the options bar at the bottom right.

[–] Petra@lemmy.world 0 points 1 month ago (1 children)

That rentry looks unofficial and old (not updated for the new version). On the image generator frontend, Styles are simply extra words added to your prompt.
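In other words, something like this (the style names and keyword lists below are made-up examples, not the generator's actual data):

```javascript
// Minimal sketch of what a "Style" does on the image generator frontend:
// it simply appends extra keywords to your prompt. The style names and
// keyword lists are made-up examples.
const styles = {
  "Painted Anime": "anime, painterly, soft shading",
  "Photorealism": "photorealistic, 35mm photo, natural lighting",
};

function applyStyle(prompt, styleName) {
  const extra = styles[styleName];
  return extra ? `${prompt}, ${extra}` : prompt;
}
```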

While there is a link to a post where the dev confirmed that the old setup used more than one model (until the base model was updated), I don't think there is any information on whether the new model (Flux) behaves this way. There is also no detail about the cost, whether those previous models could be merged into a single one, or whether the Flux model is more expensive than the others combined.

[–] Petra@lemmy.world 5 points 1 month ago* (last edited 1 month ago)

> I think dev made up his mind on having Flux.1 be permanent for all of Perchance AI image generators.

> The people who adore the new model have won over the Developer of Perchance.

Pretty ridiculous and disingenuous thing to say. The update had been planned and mentioned in this forum for a while. This was not a "test" to see which "side" liked it better, with the dev only making a final decision afterwards.

The dev had likely made up their mind way, way before the model was implemented; otherwise, they wouldn't have implemented a new model, obviously.

[–] Petra@lemmy.world 1 points 1 month ago (1 children)

Took me a bit to reply to this. Anyway, if you're not willing to show examples of what you're trying to achieve, there's nothing to see here. You are just being abstract, and that doesn't help prove to anybody that what you want is not achievable on this model.

I have already shown you examples of how to use seeds to achieve consistency, and yet we still don't know anything about what you're trying to do. There's not much constructive criticism here if you're not providing examples of what you tried.

[–] Petra@lemmy.world 2 points 1 month ago (4 children)

Keeping two models hosted at once would very likely involve additional costs. While it might be possible, it seems unlikely for that reason.

[–] Petra@lemmy.world 3 points 1 month ago (3 children)

> With the old model, when I created characters using the same prompt across multiple generations, I got images that looked like the same character every time — same face, same style, same feeling, with only small variations. That’s what I loved. That consistency mattered. I could trust it. It made character creation easy, fun, and powerful for storytelling.

> Now with the new model, I use the exact same prompts, same settings, and even the same seed structure — and yet the results look completely different. The style shifts, the faces change, and it feels like I’m getting a new person each time. Even the framing is inconsistent — for example, the old model would show the full torso, while the new one sometimes crops too close, like it’s focusing only on the top half.

Please demonstrate this. What prompts and seeds are you using? What results were you expecting? What results did you get? I posted examples previously.

> I’m not saying throw out the new model. I’m saying: give us the option to choose. Let those of us who found value in the old system keep using what worked for us.

I answered this before. To make this request more likely to be considered, you need to show that what you got before, or what you want, isn't reasonably achievable with the new model.

> Please don’t dismiss this as just a prompting issue. It’s a model behavior issue. And I really hope the devs take this feedback seriously.

For this to be taken as a model behavior issue, you need to provide information: what prompts and seeds are you using, and what results are you getting? You are only talking in abstract terms. Please provide some actual examples.

[–] Petra@lemmy.world 3 points 1 month ago (6 children)

> The old model was consistent.

> If I described a character — like a guy in a blue jumper, red jeans, and purple hair — the old model actually gave me that. It might sound ridiculous, but at least I could trust it to follow the prompt.

Prompt Result

> When I used things like double brackets ((like this)), the model respected my input.

Well, that was SD syntax, while the new model is Flux. It requires different prompting and doesn't accept the same syntax, from what people have tested. Some have had success reinforcing desired aspects with more adjectives, or even by repeating specific parts of the prompt.

> Now with the new model, I try to recreate those characters I used to love and they just don’t look right anymore. The prompts don’t land. The consistency is gone. The faces change, the outfits get altered, and it often feels like the model is doing its own thing no matter what I ask.

As I explained in another thread, you can use the seed system to preserve some details of the image while changing others: https://lemmy.world/post/30084425/17214873

With a seed, notice that the pose and general details remain. One of them had glasses on, while others were clean-shaven. But the prompt wasn't very descriptive about the face.

Seed1

If I keep the same seed, but change a detail in the prompt, it preserves a lot of what was there before:

a guy in a blue jumper, red jeans, and purple hair, he is wearing dark sunglasses (seed:::1067698885)

Seed2

Even then, the result will try to match what you describe. You can be as detailed as you want with the face. In that thread I showed that you can still get similar faces if you describe them.
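If you script your prompts, reusing a seed while editing details can be sketched like this (the helper is mine; only the `(seed:::N)` suffix convention comes from the generator):

```javascript
// Sketch of reusing a seed while editing the prompt, following the
// "(seed:::N)" suffix convention shown above. The helper itself is an
// illustration, not a generator API.
function withSeed(prompt, seed) {
  // Drop any existing seed suffix, then append the requested one.
  const base = prompt.replace(/\s*\(seed:::\d+\)\s*$/, "");
  return `${base} (seed:::${seed})`;
}
```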

> Let us choose. Bring back the old model or give us the option to toggle between the old and the new. Keep both. Don’t just replace something people loved.

Keeping two models hosted at once would very likely involve additional costs. While it might be possible, it seems unlikely for that reason.

> I’ve seen a lot of people online saying the same thing. People who make comics, visual novels, storyboards, or just love creating characters — we lost something when the old model was removed. The new one might look nice, but it doesn’t offer the same creative control.

On the Discord server, I've seen people create all of these. A lot of it is a matter of prompting. People on the Discord are very helpful and quite active in experimenting with styles, seeds, and prompts, and I've had a lot of help getting good results there.

With the new model, everyone started on the same footing. We don't know the new best practices for prompting yet, but people are experimenting, and many have managed to recreate images they made before.

[–] Petra@lemmy.world 2 points 1 month ago (2 children)

Because hosting an AI model is not free. Hosting both models at the same time would likely require additional cost.

[–] Petra@lemmy.world 4 points 1 month ago

"Stretched solution"? What do you even mean? It is a feature. The old model had a seed system as well; was that a "stretched solution" there too?

And by the way, I was still able to generate similar faces even without using a seed:

NoSeed Prompt

[–] Petra@lemmy.world 2 points 1 month ago (2 children)

Me neither. What are you on about?

I gave you proof that the seed system preserves details of the image. It is not tied to the style of the image.

Seeds1 Seeds2

[–] Petra@lemmy.world 3 points 1 month ago (4 children)
  1. There is consistency if you use the same seed to generate images. I frequently save seeds so I can create several variations of the same picture. In this example, I have the same woman being generated, sometimes with different glasses or minor variations:

Seeds

  2. I haven't had this issue; I had it far more with the old model than this one, especially messed-up faces when the subject was slightly distant.
  3. Some people have posted celebrities on Reddit, like Taylor Swift. But yes, far more people found many celebrities that didn't work.
  4. On this one I agree. It seems at least better today; I'm not sure if anything was changed at all.