Interest is very much still here. My team just had some success with the game the other day, so I've had to write up some new APIs, which required getting to grips with C# (it didn't take as long as I thought, though I don't like not being able to manage my own memory).
Unfortunately, my knowledge of reverse engineering does not appear to be enough to figure out how the rest of the MO format works.
Something else to look out for is something SOM uses called a CP, or control-points, file. It takes triangles the artists put in the models, converts them into points (at their centers), and then stores extra data tracking those points through every animation frame. This is just so the points don't have to be taken from the mesh data or calculated on the fly. They are sorted into categories based on their coloring.
I heard about that, and honestly thought the system was an excellent idea for expanding the capabilities of vertex morph animation. I actually integrated a very similar system into one of my engines, but rather than keeping the triangles, it looks for any triangles using certain 'point' materials, calculates the center of the triangle for a position, then uses the direction from that position to the apex to derive a direction, and then some fun cosine stuff to form a rotation. I liked this system because it allowed me to do skeletal techniques, like attach a head to a body, then use code to modify its rotation so it could face the way I was looking.
MM3D? Do you use it? It's interesting that you mention it, because I decided I would use that (defunct) project as the beginning of modeling software geared for SOM developers, something with vertex morphing and palettes built in, and which will use COLLADA natively. I have some blog posts about its development on my website. It's called Daedalus, named after the DAE file format. I will start by rewriting the entire thing, but I find that it's better to develop software from a starting point than to do it from scratch. It helps to get your brain's gears going, and I think it can be more attractive to say that some software had origins elsewhere... it's less egotistical.
MM3D is what I consider to be an excellent middle format for game formats, as it supports both skeletal and vertex morph animation, internal textures, and a few other things of importance. It's also the only software I've found that correctly imports animations from Milkshape3D, which is my primary modeling tool. Despite Blender and 3DS Max/Maya allegedly being much better, I think they're overcomplicated for game development.
It is a little mysterious. Something weird about SOM's models is that they are broken into small pieces, or clusters of triangles. This is actually not good on today's hardware, because it increases overhead by way of many unnecessary draw calls. I don't know if that is strictly CPU overhead or GPU overhead as well; if CPU, it's probably negligible. Anyway, it's clearly a vertex caching scheme for SOM, which was important in the early days and still is for high-polygon models.
From my time developing engines, it's neither, exactly: with too many draw calls the GPU simply stops doing anything, because the CPU is busy sending data. I like to perform real-time vertex batching to avoid it, but unfortunately this doesn't work for animated models. I think this didn't matter too much on older consoles, since the GPU generally worked directly with the models themselves, and shared memory helped too. I think this is most apparent in TMD models, with their packet-like system; emulating that exactly on modern hardware would require a separate batch/stream for each triangle type. It's very inefficient in modern terms, because we need a vertex format that supports all the features of every primitive, and then we simply don't use the features we don't need. That means a bunch of wasted data floating around in memory doing nothing, which still has to be sent to the GPU. I think partitioned models were done to avoid stretching and intersection of triangles during motion, before people knew how to make a triangle collapse on itself as skin would.
This is what I gathered yesterday. I can say that the white section, which I'd not noticed, looks a lot like MDL deltas. You can see 00, 01, FF, FE: byte-sized deltas no doubt, small ones in this case, ranging from -2 to 1... you know this. Whatever they are, these are byte-sized cues, either to apply deltas forward/backward at a certain rate, or actual deltas, although because they are so small, it seems unlikely their units are millimeters. Such micro-adjustments would not be visible to the naked eye at PlayStation sample rates.
Ah, so you're saying that this information is specifying specific vertices in the model, and a motion of sorts? So for example, 'Vertex 11 moves in this direction'? Would the same information exist for scale or rotation? From what I know, rotation doesn't work too well in vertex morph, so I doubt it. Obviously, when the format is finally figured out, I will 'bake' this information into a more standard vertex morph format, with true frame-by-frame animation as in MM3D.
P.S. I forked your GitHub a while back... I often fork things, but usually they are mature projects. It can be insurance against the project evaporating into thin air, or a way to make small changes so projects fit into build environments. At any rate... I was pretty unimpressed with GitHub just now to find that not only are forks frozen at the time of forking, but there are also no conveniences for keeping them up to date. So suddenly GitHub is looking a lot less enticing.
I did wonder where that branch was. I do not plan on removing the project from GitHub, even if I were to stop working on it. I love open source, and I feel like there is joy to be had in being the answer to someone's furious searching for a solution on Google. It's nice to think that someday, someone may also want to modify King's Field and will find that a lot of work has already been done for them. And I didn't want to release binaries only, because it's good to learn, certainly on a topic like reverse engineering game formats, which no one will teach you.
P.P.S. I'm especially interested in any findings to do with the game's vertical elements, i.e. the sections with clearly layered floor plans. For example, if there is a 64x64 map (wherein tiles are functionally a palette), then are there up to three 64x64 maps? Or if there is just one, are the tiles fused? Or are there multiple maps in memory at a time, which would easily explain the connecting corridors? I've actually come across code in SOM that looks like multiple maps in memory... which it doesn't appear to use. Although I'm at a loss to come up with convincing alternative theories, I'm very much resigned to the reality being, ultimately, less glamorous.
I haven't done much research into the files. I can't tell too much, but I'll add the T Extracting tonight, along with map tile viewing, and you can have a look through the tilesets for yourself. I don't think they are fused, from the bit of research I did, but I also don't think there are multiple maps. Honestly I'm not sure how it works, since it feels as if the maps are a lot bigger than 64x64. There are only 9 files for each area of the game: 9 music tracks, 9 level file sets (tile, database, event/unknown), and 9 big 'RTIM' files, which are the texture data for these. If you think about something like the big mine, you'd think these would be a lot larger than 64x64. I might have simply misread the length of a tile entry, and it could be that there are 8000, which would be 128x128 (or around that number). I'll again write a tool for this, since that's the only way more information will be uncovered. I can already read the tilesets, and I know the tile data, so it shouldn't be too much work.