> because honestly Nvidia's current generation 24GB is the sweet spot price to performance

How is the halo product of a range the "sweet spot"? I think nVidia are extremely exposed on this front. The RX 7900 XTX is also 24GB and under half the price (in the UK at least: £800 vs £1,700 for the 4090). It's difficult to get a performance comparison on compute tasks, but from what I can find it's around 70-80% of the 4090. Even a 3090, if you can find one, is £1,500.

The software isn't as stable on AMD hardware, but it does work. I'm running an RX 7600 (8GB) myself and happily doing SDXL. The main problem is that exhausting VRAM causes instability: have headroom to spare and everything is handled fine, but if it's marginal, expect instability (see the VRAM sketch at the end of this comment). The AMD engineers are actively making the experience better, and it may not be long before it's a practical alternative. If and when that happens, nVidia will need to slash their prices to sell anything in this sphere, which I can't really see them doing.

I pay for both MJ and DALL-E (though OpenAI mostly gets my money for GPT) and don't find them to produce significantly better images than popular checkpoints on CivitAI. What I do find is that they are significantly easier to work with. (Actually, my experience with hundreds of DALL-E generations is that it's quite poor in quality. I'm in several IRC channels where it's the image generator of choice for some IRC bots, and I'm never particularly impressed with the visual quality.)

For MJ in particular, knowing that they at least used to use Stable Diffusion under the hood, it would not surprise me if the majority of the secret sauce is actually a middle layer that processes the prompt and converts it into one that works better with SD. Prompting SD to get output at the MJ quality level takes significantly more tokens, lots of refinement, heavy tweaking of negative prompting, etc. (a toy sketch of what such a layer might look like is below).
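To make the middle-layer hypothesis concrete, here's a toy sketch of the kind of prompt preprocessing I mean. Everything in it (the booster tags, the stock negative prompt, the `rewrite` helper) is my own illustration of the general technique, not anything known about MJ's actual pipeline:

```python
# Toy illustration of a prompt-rewriting middle layer: take a terse user
# prompt and expand it into the (prompt, negative_prompt) pair that SD
# checkpoints tend to respond to. All values here are illustrative guesses.

# Quality boosters of the sort people append by hand when prompting SD.
BOOSTERS = "highly detailed, sharp focus, dramatic lighting, 8k"

# A stock negative prompt of the kind SD users spend ages tweaking.
NEGATIVE = ("lowres, blurry, jpeg artifacts, watermark, text, "
            "bad anatomy, extra fingers, deformed hands")

def rewrite(user_prompt: str) -> tuple[str, str]:
    """Expand a terse prompt into an (expanded prompt, negative prompt) pair."""
    return f"{user_prompt}, {BOOSTERS}", NEGATIVE

prompt, negative = rewrite("a cat in a spacesuit")
# These would then be fed to the pipeline, e.g.:
#   pipe(prompt=prompt, negative_prompt=negative)
```

A real version would presumably be far smarter (an LLM rewriting the prompt, per-style templates, etc.), but even this crude layer closes a surprising amount of the gap between a casual prompt and a tuned one.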
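On the VRAM point above: the diffusers library exposes a few knobs that trade speed for a smaller peak footprint, which is what makes SDXL workable on an 8GB card. A minimal sketch, assuming the stock SDXL base checkpoint; the same flags apply on ROCm builds of PyTorch, though stability there is the caveat I mentioned:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load weights in half precision to halve the model's own memory use.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
)

# Move submodules to the GPU only while they run (requires `accelerate`);
# much lower peak VRAM at the cost of some speed.
pipe.enable_model_cpu_offload()

# Decode latents tile-by-tile so the VAE pass doesn't spike VRAM at the end.
pipe.enable_vae_tiling()

image = pipe(
    prompt="a lighthouse at dusk, oil painting",
    negative_prompt="lowres, blurry, watermark",
).images[0]
image.save("out.png")
```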