Originally posted by R. Belmont:
Yes, but he *should* send his changes back to Pete so they're in the public source archive.
Oh, I only need feedback if somebody finds a real emulation error and has a fix for it.
Of course, if a porter has cleanly added his changes to the existing code base, using #ifdefs or whatever, his changes are welcome, and I will gladly add them to the SourceForge project.
But if a porter has just hacked up the sources to run on his hardware, I prefer that he release that source himself, kthanx
Originally posted by The Author:
PeteB: Why can't you filter at shader level?
Well, you obviously cannot simply activate the usual texture filtering with your shader (or only if you are in for some psychedelic color effects).
And to calculate the interpolated fragment color yourself in your shader... you'd need the weighted average of the surrounding texels, okay, but which ones? How do you calculate the needed texel coords? And how do you get enough dependent texture reads on ATI cards? nanana... forget it.
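The core problem can be shown with a tiny sketch (illustrative Python, not plugin code; the three-entry palette is a made-up example): hardware filtering interpolates whatever the texture stores, and a palettized texture stores *indices*, so filtering blends index numbers instead of colors.

```python
# Why you can't just turn on bilinear filtering for palettized textures:
# averaging *indices* selects an unrelated palette entry (the "psychedelic"
# effect); the correct result averages the *resolved colors*.

palette = {
    0: (0, 0, 0),      # black
    1: (255, 0, 0),    # red
    2: (0, 0, 255),    # blue
}

def lerp(a, b, t):
    """Linear interpolation between two RGB colors."""
    return tuple(round(x + (y - x) * t) for x, y in zip(a, b))

# Two neighboring texels store indices 0 (black) and 2 (blue).
left_index, right_index = 0, 2

# Wrong: filter the indices, then do the palette lookup.
# (0 + 2) / 2 = 1  ->  red, a color neither texel contains.
wrong = palette[(left_index + right_index) // 2]

# Right: do the palette lookup first, then filter the colors.
right_way = lerp(palette[left_index], palette[right_index], 0.5)

print(wrong)      # (255, 0, 0)
print(right_way)  # (0, 0, 128)
```

Doing the "right" branch in a fragment shader is exactly the hard part the post describes: for each fragment you need the coords of all neighboring texels plus a dependent palette lookup per neighbor, which older ATI hardware limits severely.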
Originally posted by DynaChicken:
easy, easy... it's not about easy. It's all about fast emulation...
A good RGBA texture cache is not easy to code, and it can even run faster than real palettized textures (or a palette shader), since it doesn't cause gfx card stalls by uploading texture data (the palette colors) all the time.
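A minimal sketch of the caching idea (illustrative Python, not plugin code; the function and key names are assumptions, not from any real plugin): key the cache on the (texel data, palette contents) pair, convert the indexed texture to RGBA once on a miss, and serve every later draw from the cache, so no palette data is uploaded per frame.

```python
# Minimal RGBA texture cache for palettized textures: the indexed texture
# is converted to RGBA once per (texels, palette) combination and reused.

conversions = 0  # counts how often we actually convert (i.e. would upload)
cache = {}

def get_rgba_texture(indices, palette):
    """Return the RGBA image for an indexed texture, converting on a miss."""
    global conversions
    key = (indices, palette)        # both tuples, so the pair is hashable
    if key not in cache:
        conversions += 1
        cache[key] = tuple(palette[i] for i in indices)  # index -> RGBA
    return cache[key]

palette = ((0, 0, 0, 255), (255, 0, 0, 255), (0, 0, 255, 255))
texture = (0, 1, 1, 2)

a = get_rgba_texture(texture, palette)   # miss: converts once
b = get_rgba_texture(texture, palette)   # hit: no new conversion
print(conversions)  # 1
```

A real cache of course also needs eviction, partial-VRAM-update invalidation, and a cheaper key than hashing full texel data, which is where the "not easy to code" part comes in.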
Of course, there is this one special case you have mentioned:
Originally posted by DynaChicken:
A smart texture cache still will have trouble with games that color cycle the palette during special effects.
True, true. If lots of palette changes hit many of the textures in use, a texture cache is borged... ah well, nothing is perfect
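The worst case is easy to see in a sketch (illustrative Python; names are made up): with a cache keyed on (texels, palette), a game that rotates its palette every frame produces a fresh key every frame, so one texture balloons into one cache entry (and one conversion/upload) per frame.

```python
# Color-cycling worst case: every palette change yields a new cache key,
# so each frame misses the cache and forces a fresh conversion.

cycle_cache = {}

def lookup(indices, palette):
    key = (indices, palette)
    if key not in cycle_cache:
        cycle_cache[key] = tuple(palette[i] for i in indices)
    return cycle_cache[key]

base = [(10, 0, 0, 255), (20, 0, 0, 255), (30, 0, 0, 255)]
texture = (0, 1, 2)

for frame in range(3):
    # Rotate the palette entries each frame (classic color cycling).
    cycled = tuple(base[(i + frame) % len(base)] for i in range(len(base)))
    lookup(texture, cycled)

print(len(cycle_cache))  # 3 entries for a single texture: the cache gains nothing
```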
Nevertheless, it's just as arekkusu has stated: if somebody wants to code a hw-accelerated PSX GPU plugin that runs on as many cards as possible (and still at a good speed), he cannot use palettized textures or palette shaders; he has to develop a good texture caching algorithm, imho.