EDITED: From the looks of it, it isn't compressed in any way that's too difficult to understand. I just have a feeling there is more data elsewhere, because it doesn't look like enough to me. I wouldn't be surprised if it's something else altogether. Just FYI, I'm pretty sure MIMe is what Sony calls one of its animation formats. (I think it's in one of the manuals I have, along with TMD.)
I was just panicking; the data isn't compressed beyond removing a reserved ushort from the end of the vertex data. I haven't seen MIMe, but it probably won't be the same one. I've looked through a few of the developer manuals and haven't found any data that's remotely similar to MO or MIM.
What do you mean by "vertex data"? I assume you mean morph data, because I've seen monster models ripped from KF2, and I doubt anyone who did it did more than use a program that searches for TMD-style magic numbers.
Nope, I mean vertex data, as in the vertex data struct in the PlayStation File Format PDF. In MIM it's the whole thing, but in MO it looks compressed because it doesn't copy entire sections of data, just the vertices that were changed in that frame. That's it now: no more data to uncover bar a few bytes, then there's nothing left in the file.
People have managed to rip the TMDs before because .MO files have a TMD inside them, which I believe is used as a bind pose: you either just replace vertices in it to create the frames, or mix it with the vertex data to transform it. I don't know which it is yet, but it's not hard to narrow down.
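To make the idea concrete, here is a minimal sketch of per-frame vertex replacement as described above. The struct names and frame layout are my guesses from the description, not a confirmed MO spec: each frame is assumed to store only the changed vertices, each tagged with its index into the bind-pose TMD.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical layout: a MO frame stores only the vertices that changed,
 * each tagged with its index into the bind pose. */
typedef struct { int16_t x, y, z; } Vertex;           /* TMD-style 16-bit vertex */
typedef struct { uint32_t index; Vertex v; } VertexDelta;

/* Build one animation frame by copying the TMD bind pose and then
 * overwriting just the vertices that changed in this frame. */
void apply_frame(const Vertex *bind_pose, size_t vertex_count,
                 const VertexDelta *deltas, size_t delta_count,
                 Vertex *out)
{
    memcpy(out, bind_pose, vertex_count * sizeof(Vertex));
    for (size_t i = 0; i < delta_count; ++i)
        out[deltas[i].index] = deltas[i].v;
}
```

The same loop works for the "mix with the vertex data" reading too; you'd just add the delta onto the bind-pose vertex instead of assigning it.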
I will write up a proper tool to load the data tomorrow, then I need to find a format that's good enough to keep the data in. I'm hoping MM3D will do.
Funny, for the dagger I have pages 28 (two kinds), 23, and 25. I don't know if these are 0- or 1-based page numbers. I guess I will have to look into it.
That's odd. I can only see that happening if you've ripped the dagger attached to the arms that are supposed to be inside a MO file. Those page IDs are definitely wrong, though. Are you sure you're masking the TSB right? I've got mine as "(TSB & 0x1F)". It could be that you've got the data the wrong way round. The Sony documentation is very confusing to work with, and it enforces really bad habits with its suggestions on loading the data.
I've got my pages from 0 - 31, since I'd have to do addition otherwise. It feels more correct that texture pages would be an index as well.
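For reference, the page masking from the post above, written out; the VRAM coordinates are based on my understanding that the frame buffer is arranged as two rows of sixteen pages, each 64 halfwords wide and 256 lines tall — treat that layout as an assumption to check against the manual:

```c
#include <stdint.h>

/* Decode a TSB halfword: "(TSB & 0x1F)" gives the 0-based texture page
 * index (0-31), which can then be converted to a VRAM origin. Layout
 * assumption: two rows of sixteen 64x256 pages. */
typedef struct { int page, vram_x, vram_y; } PageInfo;

PageInfo decode_tsb(uint16_t tsb)
{
    PageInfo info;
    info.page   = tsb & 0x1F;            /* 0-based page index, 0-31 */
    info.vram_x = (info.page % 16) * 64; /* in 16-bit VRAM units */
    info.vram_y = (info.page / 16) * 256;
    return info;
}
```

Keeping the index 0-based, as you say, means the VRAM origin falls out of a multiply with no extra addition.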
I have what looks like a Sony manual, that I can give you copy of. I should probably go over the UV section sometime...
I've got the entire set, you can get a CD somewhere online called 'TECHREF', it has a bunch of documents tools used by PlayStation software developers back in the day.
As for atlases... yes, I suppose if you pack wrapping coordinates into every vertex you can simulate wrapping, but at what cost? And it doesn't change mipmapping. I wonder if your projects don't take advantage of mipmapping/anisotropic filtering, which is basically what makes graphics appear uniformly solid today. I'm sure Direct3D 10 or so has advanced atlas support, which just boils down to loading groups of textures into memory simultaneously. However, it's certainly possible to generate custom mipmap levels that do not bleed together and stop short of the final levels where the individual images become single pixels.
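One simple way to "stop short of the final levels" is just to cap the mip chain before an atlas tile collapses toward a single pixel, since that's the point where neighbouring tiles inevitably average together. A sketch; the exact cut-off is my choice, not a fixed rule:

```c
/* Count how many mip levels a square atlas tile of the given size can
 * have before it would shrink to a single pixel (where adjacent tiles
 * unavoidably bleed into each other). */
int safe_mip_levels(int tile_size)
{
    int levels = 0;
    while (tile_size > 1) {
        tile_size >>= 1;
        ++levels;
    }
    return levels;
}
```

So a 256x256 tile gets 8 usable levels below the base before you'd have to stop.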
With graphics today you are probably fill-rate (pixel shader) bound, and optimizing texture loads/stores is probably not going to improve that. I think it's foolish to push the hardware to its limits; you want to work in a space where you are using about 25% tops. Dancing on the edge of a cliff is an idiot's game, and it certainly isn't any form of artistic merit.
Fill rate certainly is a big factor these days. I tried to design a method of order-independent transparency, because I couldn't be bothered to depth-sort, using two depth buffers and sending the pixel data for solid and transparent primitives to two colour buffers. I'm still using it in my KF2 map preview, and it does look pretty nice and removes z-fighting, but it's very slow.
Modern GPUs are better at using larger textures than smaller ones, so I'd say the atlas actually reduces strain on the hardware: it only has to update when a new texture is added, and there's no difference from loading a texture, the data just gets slapped onto a big image instead of its own. That said, I do like to push the boundaries, be it an idiot's game or not. I think there's merit to it, and a game such as King's Field wouldn't have been possible if people weren't willing to push a little. Smart use of VRAM by FromSoft would have made a lot of it possible; I doubt they'd have got very far if they'd kept swapping texture pages for every draw call. Instead, they had one massive atlas known as VRAM.
I don't actually pack wrapping coordinates into the model's UVs, because that would flood the pipeline with a lot of useless vertex attributes. I won't even put tangents/binormals in; I generate them in the shader. My vertex formats only get a position, normal, and UV.
When I say I do this work in the vertex shader I mean with uniforms:
uniform float vParam[128]; // max of 32 textures per atlas, 4 floats per texture
uniform float tInd = 0.0f;
..
float fXPos = vParam[(4 * tInd) + 0];
float fYPos = vParam[(4 * tInd) + 1];
float fTexW = vParam[(4 * tInd) + 2];
float fTexH = vParam[(4 * tInd) + 3];
OUT.vTexcoord = float2(fXPos + (IN.vTexture.x / fTexW), fYPos + (IN.vTexture.y / fTexH));
This actually saves a lot of time in the pixel shader, because I don't have a ton of tex2D calls or more than one sampler being set.
I can use mipmapping and filtering, because the atlas is still a texture at heart.
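The same remap can be done CPU-side for tools, mirroring the vParam layout from the shader above. Note the assumption here: I've kept the divide exactly as the shader has it, which implies fTexW/fTexH are stored as divisors (atlas size over texture size), so dividing a 0..1 input UV by them yields the texture's normalised extent within the atlas.

```c
typedef struct { float x, y; } Vec2;

/* Mirror of the vertex-shader remap: vparam holds four floats per
 * texture (x offset, y offset, width divisor, height divisor), like the
 * uniform float vParam[128] block. Divisors are assumed to be
 * atlas_size / texture_size. */
Vec2 atlas_uv(const float *vparam, int tex_index, Vec2 in_uv)
{
    const float *p = vparam + 4 * tex_index;
    Vec2 out;
    out.x = p[0] + in_uv.x / p[2];
    out.y = p[1] + in_uv.y / p[3];
    return out;
}
```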
Assuming the pages are just what's addressable by polygons, if the texture is spread across adjacent pages then the UVs can sample outside the first page. But you've sown some doubts in me about how the UVs work with regard to the color mode. I think I've only just divided 255 by the width of the texture, since that works with MDL, but with TMD the size of the texture is not knowable, so I wonder if the meaning of 1 U/V unit changes in 4-bit mode, etc. I've programmed a TMD loader but left the UVs with their raw values, and until now I've never worked with TMD+texture, so I can't really say.
TMDs work with the texture pages, which are always 256x256, so there is absolutely no way you need to think of anything other than dividing by 255. Personally since I'm using the texture pages as the textures, I've just divided the UV coordinates by 255 to get a nice value between 0.0f and 1.0f. I don't get distortion because the textures themselves are 255x255, so the precision of a byte is perfect for them.
I think you can calculate the size of the texture by doing something similar to what I do with my atlas system:
Vertices in the TMD are always arranged as:
1, 2
3, 4
UVs too, obviously. And a texture page is always 256x256.
Scale the UVs down to between 0 and 1, then:
fx = uv2.x - uv1.x
fy = uv3.y - uv1.y
texW = floor(256 * fx);
texH = floor(256 * fy);
But I'm pretty tired right now, so my mind might be failing me. I think this will actually tell you the size of the texture region at the current UV coordinates, but I'm sure it could be made to give you the full texture size. It does mean you could generate UVs against the regular texture data instead of the texture pages, though.