Author Topic: Modding Expeditions (KF)  (Read 45991 times)

Offline Holy_Diver

  • Holy Diver
  • Archmage of Light
  • *****
  • Posts: 2280
  • This account won't read/reply to Private Messages
Re: Modding Expeditions (KF)
« Reply #100 on: August 28, 2018, 10:05:45 pm »
Code: [Select]
TPNs:
Dagger (File_0 in ITEM.T): TPage # 10
Short Sword (File_1 in ITEM.T): TPage # 10
Knight Sword (File_2 in ITEM.T): TPage # 11
All of the other items are either 10 or 11 as the TPage, and every single texture is stored in memory at once.

Funny, for the Dagger I get pages 28 (two kinds), 23, and 25. I don't know whether these page numbers are 0- or 1-based. I guess I will have to look into it.

Quote
I'm pretty sure now that this is vertex data, but encrypted somehow. I've attached a .MO file from KF2, and a .MIM file from KF1, and a file spec I wrote for both of them. They are nearly identical, aside from the fact that .MIM files actually make sense. MIM files are, I think, chunks of vertex data, as their vertex count matches the size of the data blocks... From prior knowledge of the MO header and the simple-to-read MIM data, it was easy to analyse and reverse, but as it was an hour job, I haven't figured out every value yet, and the frames still elude me.

What do you mean by "vertex data"? I assume you mean morph, because I've seen monster models ripped from KF2, and I doubt anyone who did it did more than use a program that searches for TMD-style magic numbers.

Quote
If MO is compressed in some way, I don't think we're going to be able to get the data without reversing the KF2 exes, because I have absolutely no idea how to go about decompressing data... On the other hand, if the MO file isn't compressed, the MIM file will give me great insight into how to load a MO file, because it seems like a dumbed-down version of MO.

EDITED: From the looks of it, it isn't compressed in any way that's too difficult to understand. I just have a feeling there is more data elsewhere, because it doesn't look like enough to me. I wouldn't be surprised if it's something else altogether. Just FYI, I'm pretty sure Mime is what Sony calls one of its animation formats. (I think it's in one of the manuals I have, with TMD.)
« Last Edit: August 28, 2018, 10:13:40 pm by Holy_Diver »

Offline Holy_Diver
Re: Modding Expeditions (KF)
« Reply #101 on: August 28, 2018, 10:50:02 pm »
Groovy, I got my TPNs straightened out. I have no clue where I got the layout I had used from. It's possible I copied it from MDL assuming it would be identical. The manual wasn't very helpful. Is the PlayStation big-endian? I know the PS3 was. I got the same values as yours using a wiki online. ALSO, my code was passing the CBA field instead of the TSB. Might have been a clerical error, since the code is not using structures to describe the layout.

Offline TheStolenBattenberg

  • Capricorn Crusher
  • **
  • Posts: 192
Re: Modding Expeditions (KF)
« Reply #102 on: August 29, 2018, 12:18:26 am »
Quote
EDITED: From the looks of it, it isn't compressed in any way that's too difficult to understand. I just have a feeling there is more data elsewhere, because it doesn't look like enough to me. I wouldn't be surprised if it's something else altogether. Just FYI, I'm pretty sure Mime is what Sony calls one of its animation formats. (I think it's in one of the manuals I have, with TMD.)
I was just panicking; the data isn't compressed beyond a reserved ushort being dropped from the end of the vertex data. I haven't seen Mime, but it probably won't be the same one. I've looked through a few of the developer manuals and haven't found any data that's remotely similar to MO or MIM.

Quote
What do you mean by "vertex data"? I assume you mean morph, because I've seen monster models ripped from KF2, and I doubt anyone that did it did more than use a program that searches for TMD style magic numbers.


Nope, I mean vertex data, as in the vertex data struct in the PlayStation File Format PDF. In MIM it's the whole thing, but in MO it looks compressed because it doesn't copy entire sections of data, just the vertices that were changed in that frame. That's it now; bar a few bytes, there's no more data left to uncover in the file.

People have managed to rip the TMDs before because .MO files have a TMD inside them, which I believe is used as a bind pose: you either just replace its vertices to create the frames, or mix it with the vertex data to transform it. I don't know which it is yet, but it's not hard to narrow down.
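To make the "replace vertices per frame" idea concrete, here's a little Python sketch of how that scheme could work. The structure (index/replacement pairs) is my guess at the behaviour described, not a confirmed .MO spec:

```python
# Sketch of the frame scheme described above: start from the bind
# pose (the TMD embedded in the .MO) and, per frame, overwrite only
# the vertices that frame stores. Layout is hypothetical.
def apply_frame(bind_pose, frame_changes):
    verts = list(bind_pose)          # copy so the bind pose survives
    for index, xyz in frame_changes: # (vertex index, replacement xyz)
        verts[index] = xyz
    return verts

bind = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
frame = apply_frame(bind, [(2, (0, 2, 0))])  # only vertex 2 changes
```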

I will write up a proper tool to load the data tomorrow, then I need to find a format that's good enough to keep the data in. I'm hoping MM3D will do.

Quote
Funny, I have for Dagger pages 28 (two kinds), 23 & 25. I don't know if these are 0 or 1 based page numbers. I guess I will have to look into it.
That's odd. I can only see that happening if you've ripped the dagger attached to arms that's supposed to be inside a MO file. Those page IDs are definitely wrong though. Are you sure you're masking the TSB right? I've got mine as "(TSB & 0x1F)". It could be that you've got the data the wrong way round. The Sony documentation is very confusing to work with, and they enforce really bad habits with their suggestions on loading the data.

I've got my pages from 0-31, since I'd have to do addition otherwise. It feels more correct that texture pages would be an index as well.
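For what it's worth, the full TSB unpacking can be written out like this (Python sketch; the field names are mine and the layout is as I read the Sony file-format docs, so double-check against your copy of the manual):

```python
def decode_tsb(tsb):
    """Split a TMD TSB halfword into its bit fields.

    Layout per the PlayStation file-format docs (field names mine):
      bits 0-4: texture page number (0-31)
      bits 5-6: ABR, semi-transparency rate
      bits 7-8: TPF, colour mode (0 = 4-bit CLUT, 1 = 8-bit, 2 = 15-bit)
    """
    return {
        "tpage": tsb & 0x1F,
        "abr":   (tsb >> 5) & 0x3,
        "tpf":   (tsb >> 7) & 0x3,
    }

print(decode_tsb(0x0A))  # tpage 10, abr 0, tpf 0
```

Grabbing CBA instead of TSB, or masking the wrong bits, gives exactly the kind of garbage page numbers described above.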

Quote
I have what looks like a Sony manual that I can give you a copy of. I should probably go over the UV section sometime...
I've got the entire set; you can get a CD somewhere online called 'TECHREF'. It has a bunch of documents and tools used by PlayStation software developers back in the day.

Quote
As for atlases... yes I suppose if you pack wrapping coordinates in every vertex you can simulate wrapping, but at what cost? And it doesn't change mipmapping. I wonder if your projects don't take advantage of mipmapping/anisotropic filtering, which is basically what makes graphics appear uniformly solid today. I'm sure Direct3D 10 or so does have advanced atlas support, which just boils down to loading groups of textures into memory simultaneously. However it's certainly possible to develop custom mipmap levels, that do not bleed together, and stop short of final levels where the individual images become single pixels.

With graphics today you are probably fill (pixel shader) bound, and optimizing texture load/stores is probably not going to improve that. I think it's foolish to push the hardware to its limits. You want to work in a space where you are using about 25% tops. Dancing on the edge of a cliff is an idiot's game, and is certainly not of any form of artistic merit.

Fill rate certainly is a big factor these days. I tried to design a method of order-independent transparency, because I couldn't be bothered to depth sort, by using two depth buffers and sending the pixel data for solid and transparent primitives to two colour buffers. I'm still using it in my KF2 map preview, and it does look pretty nice and removes z-fighting, but it's very slow.

Modern GPUs are better at using larger textures than smaller ones, so I'd say the atlas actually reduces strain on the hardware: it only has to update when a new texture is added, and there's no difference from loading a texture; the data just gets slapped onto a big image instead of its own. That said, I do like to push the boundaries, be it an idiot's game or not. I think there's merit to it, and a game such as King's Field wouldn't have been possible if people weren't willing to push a little. Smart use of the VRAM by FromSoft would've made a lot of it possible; I doubt they'd have got very far if they kept swapping texture pages for every draw call. Instead they had a massive atlas known as VRAM.

I don't actually pack wrapping coordinates into the model's UVs, because that would flood the pipeline with a lot of useless vertex attributes. I won't even put tangents/binormals in; I generate them in the shader. My vertex formats only get a position, normal, and UV.

When I say I do this work in the vertex shader I mean with uniforms:
Code: ("From forwardtile.hlsl") [Select]
uniform float vParam[128]; //Max of 32 textures per atlas
uniform float tInd = 0.0f;
..
..
float fXPos = vParam[(4 * tInd) + 0];
float fYPos = vParam[(4 * tInd) + 1];
float fTexW = vParam[(4 * tInd) + 2];
float fTexH = vParam[(4 * tInd) + 3];

OUT.vTexcoord = float2(fXPos + (IN.vTexture.x / fTexW), fYPos + (IN.vTexture.y / fTexH));

This actually saves a lot of time in the pixel shader, because I don't have a ton of tex2D calls or more than one sampler being set.

I can use mipmapping and filtering, because the atlas is still a texture at its heart.
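For reference, the same vParam lookup can be mirrored on the CPU to sanity-check the packing. This Python sketch follows the convention implied by forwardtile.hlsl above (four floats per sub-texture: x offset, y offset, width divisor, height divisor); the helper name and sample values are mine:

```python
# CPU-side mirror of the vParam atlas lookup from the shader above.
def atlas_uv(vparam, t_ind, u, v):
    x, y, w, h = vparam[4 * t_ind : 4 * t_ind + 4]
    return (x + u / w, y + v / h)

# e.g. sub-texture 1 sits at (0.5, 0.0) and halves incoming UVs
vparam = [0.0, 0.0, 1.0, 1.0,   # sub-texture 0
          0.5, 0.0, 2.0, 2.0]   # sub-texture 1
print(atlas_uv(vparam, 1, 1.0, 1.0))  # (1.0, 0.5)
```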

Quote
Assuming the pages are just what's addressable by polygons, if the texture is spread across adjacent pages then the UVs can sample outside the first page. But you've sown some doubts into me about how the UVs work with regard to the color mode. I think I've only just divided 255 by the width of the texture, since that works with MDL. But with TMD the size of the texture is not knowable. So I wonder if the meaning of 1 U/V unit changes in 4-bit mode, etc. I've programmed a TMD loader, but left UVs with their raw values, and until now I've never worked with TMD+texture, so I can't really say. I have what looks like a Sony manual that I can give you a copy of. I should probably go over the UV section sometime...

TMDs work with the texture pages, which are always 256x256, so there is absolutely no way you need to think of anything other than dividing by 255. Personally, since I'm using the texture pages as the textures, I've just divided the UV coordinates by 255 to get a nice value between 0.0f and 1.0f. I don't get distortion, because the textures themselves are 255x255, so the precision of a byte is perfect for them.

I think you can calculate the size of the texture by doing something similar to what I do with my atlas system:
Code: [Select]
Vertices in a TMD quad are always arranged as:
  1, 2
  3, 4
(UVs too, obviously.) And a texture page is always 256x256.

Scale the UVs down to between 0 and 1, then:
  fx = uv2.x - uv1.x
  fy = uv3.y - uv1.y

  texW = floor(256 * fx)
  texH = floor(256 * fy)

But I'm pretty tired right now, so my mind might be failing me. I think this will actually tell you the texture size at the current UV coordinates, but it could be made to give you the full texture size I'm sure. This does mean you could generate UVs for the regular texture data instead of the texture pages though.
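Here's the same estimate as runnable Python, working straight from the raw byte UVs. It's a sketch of the idea in the post, not a verified TMD routine; the function name and corner convention (uv1 top-left, uv2 top-right, uv3 bottom-left) are mine:

```python
import math

# Given three corner UVs of a textured TMD quad (raw bytes, 0-255),
# estimate the pixel size of the sampled region on a 256x256 page.
def quad_texture_size(uv1, uv2, uv3):
    fx = (uv2[0] - uv1[0]) / 255.0
    fy = (uv3[1] - uv1[1]) / 255.0
    return (math.floor(256 * fx), math.floor(256 * fy))

print(quad_texture_size((0, 0), (63, 0), (0, 31)))  # (63, 31)
```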

Ashes to ashes, Dust to dust...
Honor to glory; And iron to rust.
Hate to bloodshed, From rise to fall.
If I never have to die; Am I alive at all?

Offline Holy_Diver
Re: Modding Expeditions (KF)
« Reply #103 on: August 29, 2018, 08:42:14 am »
Quote
Smart use of the VRAM by FromSoft would've made a lot of it possible; I doubt they'd have got very far if they kept swapping texture pages for every draw call. Instead they had a massive atlas known as VRAM.

That was just how things were then. The PlayStation has no filtering, which is why it can get away with that. Typically a character or object has their texture all on one sheet. And it works alright, because when the mipmapping breaks down it's still believable. But if you have two different things in one texture, they will bleed, and your white ball will start to turn purple. You just can't put things together like that. That's why atlasing is not written about much. It used to be very popular before mipmapping.
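The bleed is easy to show numerically. A toy single-channel example (box-filter mip generation, values arbitrary): put a white tile next to a purple-ish tile in one atlas, and by the last mip level they have averaged together.

```python
# A 4x4 "atlas": left half white (1.0), right half darker (0.5).
# Standard box-filter downsampling, as in mip generation.
def box_downsample(img):
    n = len(img) // 2
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
             for x in range(n)] for y in range(n)]

atlas = [[1.0, 1.0, 0.5, 0.5] for _ in range(4)]
mip1 = box_downsample(atlas)  # 2x2: tiles still separate
mip2 = box_downsample(mip1)   # 1x1: the two tiles have bled together
print(mip2[0][0])             # 0.75, neither tile's colour
```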

Quote
uniform float vParam[128];

Typical shaders will not use that much memory for nonessential things. That could be lighting parameters, or anything. And I still don't believe that wrapping can work after the interpolation stage. Random access into an array is, or was, very costly, if allowed at all. I try to avoid anything that wouldn't fly in 2010, since I'm not sure where integrated graphics are now, and I know that by then we had, in abundance, all of the necessary ingredients to make artwork far beyond what anyone's ever accomplished with this medium. (It also happens to be around when Moore's Law really bottomed out, so things have only marginally improved for affordable electronics ever since.)

EDITED: I meant to address/or ask about this...

Quote
I tried to design a method of order independent transparency because I couldn't be bothered to depth sort, by using two depth buffers and sending the pixel data for solid and transparent primitives to two colour buffers. I'm still using it in my KF2 map preview, and it does look pretty nice and removes z-fighting, but it's very slow.

What is normally done is to render the solid elements first, and then put the transparent ones on top. The PlayStation requires sorting because its transparency effects don't use alpha/inverse-alpha blending. Depth sorting never truly works, but order-independent blending operations don't require it.
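The conventional two-pass ordering, sketched in Python (the mesh representation and helper name are illustrative only): opaque geometry first with depth writes on, then transparent geometry on top, depth writes off, roughly back-to-front.

```python
def draw_order(meshes, camera_z):
    """Return the conventional submission order: opaque meshes first
    (any order, depth write on), then transparent meshes sorted
    back-to-front (depth write off). Meshes are illustrative
    (name, is_transparent, z) tuples."""
    opaque = [m for m in meshes if not m[1]]
    transparent = sorted((m for m in meshes if m[1]),
                         key=lambda m: -abs(m[2] - camera_z))
    return opaque + transparent

scene = [("glass", True, 5.0), ("wall", False, 10.0), ("fog", True, 9.0)]
print([m[0] for m in draw_order(scene, 0.0)])  # ['wall', 'fog', 'glass']
```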

Strikeout: ignore this.
« Last Edit: September 03, 2018, 04:33:27 am by Holy_Diver »

Offline Holy_Diver
Re: Modding Expeditions (KF)
« Reply #104 on: September 02, 2018, 07:29:11 am »
FYI I started an emulation discussion here (https://github.com/libretro/beetle-psx-libretro/issues/419) about the fog effect issues. Please join in to second/champion the issue. I'm mainly very curious WTF gives. And I brought up the menu panels issues too.

P.S. Somewhere along the way I found this (https://phoboslab.org/log/2015/04/reverse-engineering-wipeout-psx) with WebGL demo (https://phoboslab.org/wipeout/) ...this is my second favorite 3-D PlayStation game... or more specifically the sequel (just like with KF) so I hope they are doing well. I'd love to see it in VR.

Offline Holy_Diver
Re: Modding Expeditions (KF)
« Reply #105 on: September 02, 2018, 10:39:05 am »
Off-topic: On the subject of atlases, here (https://docs.microsoft.com/en-us/windows/desktop/direct3d9/using-uvatlas) is an atlas API that could probably be integrated into SOM because it already links to this library (D3DX.) I came across it today while closing tabs.

I am (slowly) moving it toward building a cache of the proprietary formats involved. I don't think one of the SomEx.dll tools would provide this function if so. But the reason I'm interested in its approach is not for performance or texture reasons. Its value, the way I see it, is that it looks like it should be able to convert a texture (or group of textures) into a mapping that could not be done by hand, but that produces minimal stretching and pixels of a consistent area. This, I feel, is critical to whether or not a game looks good/professional. Even games like Dark Souls suffer from very poor texture mapping. I'm not sure why, but they could have used a process like this to alleviate that problem. It's one of the things I feel is critical to get right, and it would probably be better to do it automatically. It would take a load off authors and ensure a baseline of quality across SOM's media library.

(EDITED: I assume at a minimum the API generates images with custom mipmaps. Or if it doesn't, surely its output can be used to do that. Custom mipmaps would have to not consider border pixels in downsampling, and fill in the background with a blend of the surrounding foreground. Then it would be pretty good to use. I may work on a technique for agglomerating level textures at the same time.)
« Last Edit: September 02, 2018, 10:43:24 am by Holy_Diver »

Offline Holy_Diver
Re: Modding Expeditions (KF)
« Reply #106 on: September 03, 2018, 04:46:10 am »
Quote
I tried to design a method of order independent transparency because I couldn't be bothered to depth sort, by using two depth buffers and sending the pixel data for solid and transparent primitives to two colour buffers. I'm still using it in my KF2 map preview, and it does look pretty nice and removes z-fighting, but it's very slow.

I later rethought this. Do you mean this (https://en.wikipedia.org/wiki/Order-independent_transparency) ?

That is pretty ridiculous in my book. I replied to this mistakenly. The 100%+100% mode the PS uses is order-independent, as are all 100%-background operations. The subtractive one is also, although not necessarily when mixed.

It's modern alpha/inverse-alpha blending that is not actually order-independent. I just take it for granted, because it basically always looks fine to me. Sometimes a complex object with overlapping pieces looks bad, including when SOM fades things as they "spawn." But I don't know a fix for that, other than rendering them opaque and blending them over as a 2D operation. Is that what you meant?
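The order dependence is easy to check numerically. A toy single-channel example (blend functions simplified to scalars, values arbitrary): clamped additive blending commutes, the "over" operator does not.

```python
# PS1-style 100%+100% blending: additive, clamped to 1.0.
def additive(dst, src):
    return min(1.0, dst + src)

# Modern alpha/inverse-alpha ("over") blending.
def over(dst, src, alpha):
    return src * alpha + dst * (1.0 - alpha)

a, b, bg = 0.25, 0.5, 0.375
# Additive: same result in either draw order.
print(additive(additive(bg, a), b) == additive(additive(bg, b), a))  # True
# Over: the result depends on which primitive is drawn first.
print(over(over(bg, a, 0.5), b, 0.5) == over(over(bg, b, 0.5), a, 0.5))  # False
```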

I rather like the double-blending effect. It makes the ice cave very interesting, like a shimmering wind-chime ornament. It's like a cave of glass. I feel that games should embrace these unique aesthetic qualities instead of bending over backward, relinquishing much of their computational power to paltry expectations of what things ought to look like, which are ultimately just boring to look at.
« Last Edit: September 03, 2018, 04:49:40 am by Holy_Diver »

Offline Holy_Diver
Re: Modding Expeditions (KF)
« Reply #107 on: September 03, 2018, 04:58:04 am »
EDITED: Apparently the Dreamcast had some truly exotic hardware (https://msdn.microsoft.com/en-us/library/ms834190.aspx?f=255&MSPPError=-2147217396) "Taking Advantage of the Power of the Dreamcast 3-D Chip!"

It kind of makes me wonder if this was possible, why has its model never competed with the dominant 3D card architectures of today :eek:

EDITED: https://en.wikipedia.org/wiki/List_of_PowerVR_products CLX2 model.
SOURCE: https://dreamcastify.unreliable.network/index.php/transparency-issues/

EDITED: Oh, this (https://en.wikipedia.org/wiki/PowerVR) says:
Quote
The PowerVR chipset uses a method of 3D rendering known as tile-based deferred rendering (often abbreviated as TBDR) which is tile-based rendering combined with PowerVR's proprietary method of Hidden Surface Removal (HSR) and Hierarchical Scheduling Technology (HST).

So there is a name for this chip design alternative. I assumed that only a small subset of their product line worked like this. How fascinating!
« Last Edit: September 03, 2018, 05:21:32 am by Holy_Diver »

Offline Holy_Diver
Re: Modding Expeditions (KF)
« Reply #108 on: September 03, 2018, 07:07:50 am »
 :tinfoil: You know. That there is a Direct3D document about the Dreamcast, makes me wonder if SOM was developed from Dreamcast code (what is that game called... Eternal Ring?)

P.S. One thing that is weird about SOM (which I intend to correct by next year) is it uses Direct3D lighting for the non-level geometry. Except that its level-geometry lighting doesn't match Direct3D's. I doubt that the PowerVR implemented lighting in its hardware, but it's possible that it did (it's described as raycasting) and did so in a cruder form than the standard D3D lighting model.
« Last Edit: September 03, 2018, 07:11:26 am by Holy_Diver »

Offline TheStolenBattenberg
Re: Modding Expeditions (KF)
« Reply #109 on: September 03, 2018, 06:55:11 pm »
Quote
You know. That there is a Direct3D document about the Dreamcast, makes me wonder if SOM was developed from Dreamcast code (what is that game called... Eternal Ring?)
I don't think Eternal Ring came out for the Dreamcast, I'm pretty sure it was a PS2 exclusive.

Quote
I later rethought this. Do you mean this (https://en.wikipedia.org/wiki/Order-independent_transparency) ?

That is pretty ridiculous in my book. I replied to this mistakenly. The 100%+100% mode the PS uses is order-independent, as are all 100%-background operations. The subtractive one is also, although not necessarily when mixed.
Precisely. I think the technique I use is something similar to 'depth peeling'. Its only issue is memory; keeping four render targets the size of the screen isn't great. It's probably safe to assume that a fragment's depth won't have changed much between two pixels, so it might be possible to cut the size of the depth targets and combine them into two (two F16 values should be fine for depth). I think it would be possible to make it faster if I could find a way to choose the destination target.

It would be better to split polygons based on transparency, or sort them based on camera distance, but I've found the results are less pretty. The system I speak of does make it a little easier to implement certain effects though. Water is particularly good, as intersections with solid geometry will be very smooth, and you can use the depth buffer to apply a dynamic edge to the water in screen space, making it look a lot better.