The client aborting at this phase of the handshake without further retries and without a timeout or error would generally indicate that the payload size in the server's response was too small to contain a challenge, or too big to have been encrypted with a standard account public key. Both of these would suggest the account you used to connect has an unusual, unexpected key length.
This is a huge accomplishment! Using this patch on Windows 10 felt like playing a much newer game. Everything is smoother, more consistent, higher performance, and prettier! Never has Tribes 2 looked so good and played so well.
The only mempatches I'll need to port would be lerping the camera to the observed player look vector for twitch.tv/t2vault online match recording. I'm sure -developer will let me figure it out.
The Windows experience has been great. Linux and demo playback have a couple of rough edges, maybe.
My Linux environment is across two machines, as below.
Wine 10.15
Debian 12.12
Nvidia 1070, Nvidia 1070
i7-7700k (t2vault), AMD FX-8320
32bit + 64bit opengl libraries
wine x64 prefixdir with Win10 target
My feedback is...
Demos don't seem to have the same player interpolation as in previous versions, resulting in a sensation of 'low framerate' and unrealistic player positions. The following clips demonstrate a player leaving a TR2 base at high speed. I assume their hardware can't keep up. In the QOL patch, the player gets 'stuck' in the base then warps to their actual position. In the previous patch their movement seems more realistic. Ping is 999 in QOL, but varies in the previous patch.
Crash at the end of clip 1a. Sadly, no CRASHLOG.TXT yet - the process locks up or exits. Occasionally memory-related errors are printed (free() invalid pointer, I forget the malloc() error), but I understand that's not much context. Infrequently, the crash doesn't trigger. It appears to stop crashing when hybrid terrain is enabled – enableHybridTerrain(1); fixes this, as seen in this clip of the first training mission: https://www.twitch.tv/t2vault/clip/DeterminedAbnegateDogeKappaWealth-2R7htOqEcKq1OfVq
Will update with any additional details.
On Windows 10 with a 7950X3D and a 3060, the only oddity I noticed was enableHybridTerrain(1); sometimes showing the sky through the terrain. Screenshots below. I see this less often on the Linux machines. It appears to be related to interiors, if 4b is anything to go by.
The recent QOL updates are HUGE and run great on my Linux/Wine box. Thank you, Krash.
Since it's way more complex than the old RC2a patches, would it be possible to host the project on GitHub, like OldUnreal did with their UT99 community patches, rather than relying on the constantly growing megathread on this forum?
@Krash I have two questions for you.
1) Gamma Correction
The patch makes gamma correction work, which is a huge plus. I'm wondering: is it possible to apply just that change piecemeal onto an old T2 install (without the rest of the patch)? I looked at the patch files, but I can't see anything obvious that suddenly makes gamma correction work.
2) Interpolation and Player Models
The new patch has negatively impacted my weapon damage accuracy. I would be very curious to hear your thoughts as to what you think is happening behind the scenes (if you know). Here is my experience:
a) Prior to the patch, I used a given interpolate script setting, and I felt that I had satisfactory damage accuracy (given my connection and location). If I shot at the player model's current position, then I had a reasonably good chance of hitting them. Player model movement was relatively smooth for the most part. Only when players moved very quickly did I notice their movement wasn't 100% smooth, but I could easily handle it.
b) After the patch, that same interpolate setting causes players to "jitter" at a substantially worse rate. With relatively slow movement, players are twitching back and forth, which is not easy on the eyes.
c) Since installing the patch, I have trialed the following:
c.1) Remove my IP script altogether. Result: Player models are now smooth, but my accuracy is awful. When I hit a player moving at moderate speeds, the location where the ammo hits and the location of the player model is not predictably close.
c.2) Find a new IP script setting that doesn't cause players to jitter. After trialing a large number of combinations, there is only one setting that doesn't cause players to jump around, which is: 1 50 0 0. The second number can be moved a little (25-75); all other numbers cannot be touched. The problem is that my accuracy is still awful with these settings. It's basically the same as removing the IP script.
c.3) Use my old IP settings. Using my old IP settings, I feel my accuracy is close to what it was before the patch. However, it's not as good, because players jump around so much. Plus, it's just not an enjoyable experience.
All of the above typed out, do you have any thoughts? I know that others have also complained about not being able to find a reasonable IP setting that prevents players jumping around. I assume they're just living with the experience. I don't know about how accurate they are now.
This demo isn't an issue with interpolation, exactly; rather, that interior has some janky collision geometry, and unfortunately a small piece of updated physics code, while much faster to run, is a little less forgiving of the box hitting tiny ledges. Lacking more frequent movement input packets from the client (and likely due to a spike early on), the simulation diverged, and the warp happens at the first opportunity: when the player receives a server packet with a control object transform update. It's a tiny deviation, but it should be adjustable to more closely match the original later.
The default hardware tile blender enabled for standard terrain isn't configured for memory-constrained scenarios. It'll vary to some degree depending on the format the driver selects internally, but due to the increased caching limits and increased resolutions for the final composited textures for all tiles, it can reach somewhere between 1.5GB and 3GB of GPU memory for the terrain tiles alone if the game needs full-quality tiles everywhere. This will be fine-tuned in future to use the options menu slider to limit the maximum tile resolution and caching, but for now it'd have to be disabled with $pref::Terrain::softwareBlender = 1; or possibly limited a bit with $pref::Terrain::textureCacheSize = 220; – the former needs to be in ClientPrefs before the game launches, while the latter needs to be set after the renderer initializes (i.e. it won't work in ClientPrefs).
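A minimal sketch of where those two settings go (the pref names and values are the ones given above; the placement notes are the point):

```
// ClientPrefs.cs – must be set before the game launches:
$pref::Terrain::softwareBlender = 1;    // fall back to the software tile blender entirely

// In-game console – only takes effect after the renderer initializes,
// so it won't work from ClientPrefs:
$pref::Terrain::textureCacheSize = 220; // alternatively, cap the tile texture cache instead
```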
If the GPU runs out of memory, it'll eventually cause the game to crash in some unhandled way because it expects certain responses and handles to be valid. There's unfortunately not really a graceful way to avoid this aside from just reducing the memory load.
The new hybrid terrain renderer uses very little memory, the tradeoff being that it has somewhat heavier GPU processing and samples the memory it does need (the textures for materials, lightmaps, etc) far more frequently. It's not freeing any existing blended tiles when you switch to it mid-mission, but it's light enough that it'd fit in almost anywhere below the memory limit.
This is a vendor-specific driver shader compilation issue with the vertex culling trick used for punching holes in the terrain mesh... the trouble with working with OpenGL is that these techniques will work completely fine with two out of three vendors, and the third will treat them differently. It's tricky to track why the driver would compile it differently without any nvidia hardware, but there are a few things that can be tried for the next build.
No, you'd have to use something third-party to override it for the old version. The engine originally used Windows GDI's SetDeviceGammaRamp which, as a general rule, is not advisable to use on modern systems because it applies globally and may conflict with display drivers, ICC profiles, HDR and night light color management settings, multi-monitor handling, hot-plugging, and so on... It'll still work untouched on certain systems under certain conditions, but not for long. The gamma correction applied now occurs in a post-process shader when compositing the output frame.
The patch does not make any changes to those settings. You receive more information much sooner from patched servers, and you have a significantly reduced trigger delay when both client and server are patched: within milliseconds of you hitting the mouse button, a packet is on its way to the server, and you will see the result on screen in a maximum of roughly 32ms plus your ping (plus the time to draw to your screen). Depending on the server, this may be two thirds or half the time it would take to see the shot unpatched. Because there's no time synchronization in packets, though, the server doesn't know when you hit the trigger; it just processes that trigger at the next tick, and your client has no idea exactly how stale the response from the server is when it arrives with new info on a player in front of you – and, because certain floating point values may be packed for transit, it may be imprecise data. This is why I would never promote those scripts nor recommend changing the default values to anyone who doesn't understand exactly what's being changed: the model is designed to smooth the client simulation into reasonably approximating what the server tells it, not to exactly reproduce it. Any continuous jitter or twitching, "jumping around" rather than interpolating directly to the new position as designed, indicates that you're deviating further from the ground truth of the server. Increasing the max latency ticks, for example, which essentially fast-forwards a player object with its last received movement input, is almost always going to require adjustment after every incoming update, with some warp tick cushioning, or it's going to very visibly rewind every time a player changes velocity.
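As a back-of-envelope illustration of that best case (the numbers are the example values from the paragraph above, not engine constants):

```
// Rough upper bound on time from click to on-screen result, patched client+server:
%ping = 50;       // your round-trip time, in ms
%tickWait = 32;   // roughly the maximum wait for the next server tick, in ms
%total = %ping + %tickWait;
echo("Shot feedback within ~" @ %total @ "ms, plus the time to draw the frame");
```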
Solid color emoji fonts might be supportable to a degree, but since all the text is built around the original systems, it only supports a single-byte character encoding. Modern standard color emoji fonts would require a more comprehensive replacement of the whole system to support multi-byte encodings, and a text rendering library like FreeType or DirectWrite that has support for those glyphs. Of course, there's technically the possibility of something like the <bitmap:smile> tags used in GuiMLTextCtrl to just draw bitmap emojis, if you had something to parse out and manage it clientside, I guess.
Thank you for responding to those questions. Specifically regarding the topic of IP: pre-patch, I would say that using an IP script was necessary for me. My aim without IP is significantly worse than my aim with good IP settings.
Given your response, and given the changes you made with the new patch, would you say that there is *no* scenario where a person should be using an IP script with your new patch? Or, is there some scenario where using IP makes sense? For example, a person with a slower internet connection or a person who lives a significant distance from the server.
Measuring the power usage of twitch.tv/t2vault with this patch is interesting. Testing wasn't stringent (random demos, nvenc encoding 1080p @ 60fps) yet there is a clear improvement. Measurements were taken after a few hours for each scenario.
Modified RC2a (framerate limited with long sleeps)
94.32 kWh / 30 days
QOL low latency (the highest power usage scenario)
113.18 kWh / 30 days, +20% compared to MRC2a
QOL 120fps
84.61 kWh / 30 days, -10.3% (-9.71 kWh) compared to MRC2a
QOL 90fps
82.6 kWh / 30 days, -12.4% (-11.72 kWh) compared to MRC2a
The low latency setting can reduce GPU churn in pure OpenGL mode, but it's largely dependent on display sync estimates that aren't always available, and it might need to be hidden in the future unless some vendor-specific features can be tied in to stabilize it. Its original implementation was for the DXGI interop extension, where it does have precise hardware timing information, and on supported devices it can effectively provide a much more reasonable power-saving experience for clients with vsync enabled – all the benefits of only rendering as needed, without additional perceived input latency (and without locking up the CPU, preventing i/o and other actions from processing).
By all means, adjust to whatever you feel improves your gameplay, but don't set the values blindly, they're not meaningless. The best case you're going to get with the nonsense settings people have shared in the past is some very rough alignment with stale server state, at worst far overshooting at every opportunity and having to be corrected every tick.
Your client's version of the simulation and the server's simulation are running separately, in parallel, and anything your client knows about activity on the server is an echo delayed by at least half your ping. Previous versions of the game had severe timing issues (compounded by modern operating system changes), so you were processing limited data much slower and needed to compensate more for arbitrary delays on both ends; e.g. unpatched servers running on modern Windows will completely skip roughly every third tick and advance by two the next time.
Network updates for players have relatively complex special handling that'll advance them differently from simple objects like projectiles, but due to how sparse updates are, the lack of precise synchronized timing information, and latency limitations inherent in networking of any kind, a fast-moving controlled object is only ever going to be "close enough" to the remote position at any given moment. If you have a low ping, say sub-70, you have info at most one full tick behind a patched server (where the update rate is generally very stable rather than arbitrarily delayed), so if you're telling it to fast-forward a player before processing local simulation updates for the same tick, you're often going to overshoot. If you're instead trying to force player objects to move directly to the last position update sent by the server, you're going to undershoot.
The higher your latency, the bigger the difference from the server will be, and fast-forward latency ticks can help in extreme conditions, but with a low ping the most you'll usually want to slightly tweak are the number of warp ticks to smooth the offsets.
Consider, for example: you have a 100ms ping and someone midair moving at 200+ metres per second. If you don't get a packet from the server registering the hit for, say, 50ms, that person will have already moved 10+ metres from the explosion by the time you see it. Would you rather see where the player was when it hit, or would you rather see (more or less) where the player is now? The latter can always be adjusted closer to the truth, but the former is always going to be wrong: the server has moved on since it sent you that explosion. Unfortunately, it's ultimately never quite going to line up a hit position perfectly in those high-velocity scenarios without sync timings and projectiles set to use dynamic client-side detection to pre-empt the hit – and even then, those wouldn't necessarily be correct if the player stops on a dime.
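The arithmetic behind that example, for anyone who wants to plug in their own numbers (values copied from the scenario above):

```
// Distance a fast mover covers while the server's hit packet is in flight:
%speed = 200;     // player velocity, metres per second
%delay = 0.050;   // 50ms from the server registering the hit to you seeing it
%drift = %speed * %delay;
echo("Player has moved ~" @ %drift @ " metres since the explosion"); // ~10 metres
```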
One of the first things I'd do if supporting existing demo playback and legacy connections wasn't necessary would be to change the protocol to sync timing to better line up the physics between both simulations, also giving servers the option to allow adjusting the offset of shots for exactly when the trigger was sent... and if I were doing a complete rebuild of the engine I'd allow a higher server tick rate. As it is now though, those are a bit impractical.
Very interesting, thank you for the explanations. Same again for the low latency information.
Deleted all the additional noise - nothing conclusive yet. Haven't gotten to the bottom of my i7-7700k machine's issues. Fiddling around with winedbg --gdb to try and find out why it's so crash-y without the hybrid terrain renderer. Happy to run any test shaders if it would help.
winedbg has borne fruit. A single CRASHLOG.TXT file was acquired too! Disabling audio by selecting Miles and removing *.asi and *.m3d files results in no crash. The crash occurs approximately 32 seconds into demo 18919, as demonstrated in https://www.twitch.tv/reverse_engineeer/clip/CredulousObliquePartridgeSSSsss-IApRM-N3rS7A1LDz
OS configuration has been ruled out. OBS was open in crashreports2 but not streaming. Hardware and software involved: i7-7700k + nvidia 1070 8GB + Debian 12.12 + wine 10.15 + 64GB system memory (60GB free).
Linux kernel option iommu=pt was tested (as per the filenames containing _iommu_ and _iommuoff_). The only side effect was the crash occurring sooner than the linked clip on occasion.
Crashes at 0x40ae83 would indicate that the resource manager was unable to open a file stream to read, preceding any processing of the file or use in the audio driver integrations. There's not really any new code run at this stage; it's simply failing while preparing to read. This crash should only happen if a file in the resource manager's hash table were removed outside of the game during gameplay, if there's no read access to the file, or if it encounters some issue reading it from a vl2. In theory you could hit it if a filename reference were somehow added to the hash table without the file actually existing, but that wouldn't be the case on a known file in a vl2 like "voice/Male5/vqk.help.wav".
Removing the .asi/.m3d files would of course stop it from trying to read audio files up to this point at all, lacking a provider.
The file marked "miles_audio_crash_RC2a" is a crash attempting to read a missing Sky DML – the patch had a fix thrown in for this since I ran into it while cycling through demos on maps I didn't have once.
The references to crashing when swapping providers/drivers multiple times aren't too surprising; there's a ton of teardown and potential conflict in each subsystem, especially going between Miles and OpenAL, where a few hundred pointers need to be rewritten and library functions need to be loaded before any initialization occurs, and a lot of trust is placed in the game having fully stopped previous playback. Being multithreaded and waiting on async callbacks, with some possible variation in response from system-level drivers, there might need to be an additional shutdown wait added before it starts back up.
Was thinking the other day: for fog sniping there's always been an offset between the IFF and the player position, but not with the flag carrier. I don't know if I've tried sniping just IFFs with the patch. Any idea what the root of the offset would be?
Since none of these files appear to be missing or being deleted, I don't know where to go from here. Removing the audio provider to prevent the crash appears noteworthy, especially when the crash location appears benign.
It was necessary to locate all terrains, interiors and skies when attempting to run every demo through RC2a Tribes2 for the first version of the demo archival project. While a few interiors couldn't be found, dawn.dml used by Final.mis is present inside a set of maps named stalker.vl2.
18990 was apparently recorded on a QOL preview patch. It crashes at around 00:13s when played back on the machine having problems with QOL. It lasts a lot longer on the same machine using RC2a.
You can try it with setEchoFileLoads(1); on to see if the filename it's trying to open a FileStream for is as expected, or memPatch("40AE4B", "8B5D0885DB74799090909090"); for the sake of curiosity just to see if what's sent to the readWAV call is somehow empty. For the alxGetWaveLen calls specifically, it should actually be checking that the file exists and creating an instance well before reaching the point where it's crashing with a null stream, which is why it's a very odd place to crash. It'd be easy to add some safety checks in that function of course, but it's not likely the source of the issue... I'd suspect there's either stack corruption or it's failing to allocate buffers, but being unable to reproduce it, and given there aren't any direct changes likely to cause this, I'd have to spend more time looking over the reports.
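For reference, the two diagnostics suggested above could be run together from the console before starting playback; the memPatch bytes are exactly the ones given in this post, and should be treated as a curiosity-only experiment:

```
// Echo every filename the resource manager opens a FileStream for:
setEchoFileLoads(1);

// Experimental probe: check whether the value handed to the readWAV call is empty
// (byte string copied verbatim from the suggestion above):
memPatch("40AE4B", "8B5D0885DB74799090909090");
```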
That particular demo's interesting though, and I wouldn't expect it to fully play back anywhere at the moment; whether due to a spike on the user's system (maybe during alt-tabbing or moving the window, maybe briefly freezing) or networking issues, the recorded blocks are getting backlogged at multiple points. The local simulation and ghost record-keeping end up falling apart due to sequencing issues, and the playback ultimately crashes.
I've written some changes tonight to the backlog detection to, instead of bypassing it entirely during demo playback, advance the demo playback read to the next packet received from the server whenever this happens, which allows the demo to play to the end – though for the moment it's possible this speeds up the timeline by a few seconds in extreme cases such as this demo.
For simplicity's sake the hybrid terrain square size scaling and vertical offset uniforms were submitted to the shader only upon certain triggers marking it as dirty, and it seems like there might've been an edge case that was missed. It might need to be changed to actively check whether these need to be updated before render.
Comments
Thank you so much Krash. I will make a new nickname hehe
maybe cyanide -mHc- :-D
enjoy your day my friend
Something I think about sometimes, not related to the patch, but since you have extra fonts anyway: what if Tribes 2 had some concept of emojis?
👍️
Thank you for sharing your hard work with the community!
If you'd like any help, message me, I am more than happy to lend a hand. I imagine you don't need it though!
I've been getting this glitch since the latest patch update.
Deleted all the additional noise - nothing conclusive yet. Haven't gotten to the bottom of my i7-7700k machine's issues. Fiddling around with
winedbg --gdb
to try and find out why it's so crashy without the hybrid terrain renderer. Happy to run any test shaders if it would help.
Krash, not needing a fix for this crash I just discovered; if time allows in the future I will
try to replicate it & post a crash log & message here. It is unimportant to me now & I
can work around it by editing the map in a different sequence.
I managed to get my first crash when making a new nav graph on a map that had Sparky's
teleporters in it. I can work around it for now by just adding Sparky's teleporters after the
Nav & LOS have been built.
winedbg has borne fruit. A single
CRASHLOG.TXT
file was acquired too! Disabling audio by selecting Miles and removing *.asi and *.m3d files results in no crash. Occurs approximately 32 seconds into demo 18919, as demonstrated in https://www.twitch.tv/reverse_engineeer/clip/CredulousObliquePartridgeSSSsss-IApRM-N3rS7A1LDzOS. Configuration has been ruled out. OBS was open in crashreports2 but not streaming. Hardware and software involved: i7-7700k + Nvidia 1070 8GB + Debian 12.12 + Wine 10.15 + 64GB system memory (60GB free).
Linux kernel option
iommu=pt
was tested (as per the filenames containing _iommu_ and _iommuoff_). The only side effect was the crash occurring sooner than in the linked clip on occasion.

Crashes at 0x40ae83 would indicate that the resource manager was unable to open a file stream to read, preceding any processing of the file or use in the audio driver integrations. There's not really any new code run at this stage; it's simply failing in preparing to read. This crash should only happen if a file in the resource manager's hash table were removed outside of the game during gameplay, if there's no read access to the file, or if it encounters some issue reading it from a vl2. In theory you could hit it if a filename reference were somehow added to the hash table without the file actually existing, but that wouldn't be the case for a known file in a vl2 like "voice/Male5/vqk.help.wav".
Removing the .asi/.m3d files would of course stop it from trying to read audio files up to this point at all, lacking a provider.
The file marked "miles_audio_crash_RC2a" is a crash attempting to read a missing sky DML – the patch had a fix thrown in for this since I ran into it once while cycling through demos on maps I didn't have.
The references to crashing while swapping providers/drivers multiple times aren't too surprising; there's a ton of teardown and potential for conflicts in each subsystem, especially going between Miles and OpenAL, where a few hundred pointers need to be rewritten and library functions need to be loaded before any initialization occurs, and a lot of trust is placed in the game having fully stopped previous playback. Being multithreaded and waiting on async callbacks, with some possible variation in response from system-level drivers, there might need to be an additional shutdown wait added before it starts back up.
Was thinking the other day: for fog sniping there's always been an offset between the IFF and player position, but not so with the flag carrier. I don't know if I've tried sniping just IFFs with the patch. Any idea what the root of the offset would be?
Since none of these files appear to be missing or being deleted, I don't know where to go from here. Removing the audio provider to prevent the crash appears noteworthy, especially when the crash location appears benign.
It was necessary to locate all terrains, interiors and skies when attempting to run every demo through RC2a Tribes2 for the first version of the demo archival project. While a few interiors couldn't be found,
dawn.dml
used by Final.mis is present inside a set of maps named stalker.vl2.
18990 was apparently recorded on a QOL preview patch. It crashes at around 00:13s when played back on the machine having problems with QOL. It lasts a lot longer on the same machine using RC2a.
Here's an excerpt from a recent stream on the QOL preview patch with hybrid terrain enabled. The terrain seems misaligned but interiors are not because the player collides with them normally: https://www.twitch.tv/t2vault/clip/QuaintFitMinkResidentSleeper-NGiFE4DZedCZr42r
I look forward to future preview releases. t2vault will return to modified RC2a now.
You can try it with
setEchoFileLoads(1);
on to see if the filename it's trying to open a FileStream for is as expected, or
memPatch("40AE4B", "8B5D0885DB74799090909090");
for the sake of curiosity just to see if what's sent to the readWAV call is somehow empty. For the alxGetWaveLen calls specifically, it should actually be checking that the file exists and creating an instance well before reaching the point where it's crashing with a null stream, which is why it's a very odd place to crash. It'd be easy to add some safety checks in that function, of course, but it's not likely the source of the issue... I'd suspect there's either stack corruption or it's failing to allocate buffers, but being unable to reproduce it, and given there aren't any direct changes likely to cause this, I'd have to spend more time looking over the reports.

That particular demo's interesting though, and I wouldn't expect it to fully play back anywhere at the moment; whether due to a spike on the user's system (maybe during alt-tabbing or moving the window, maybe briefly freezing) or networking issues, the recorded blocks are getting backlogged at multiple points. The local simulation and ghost record-keeping end up falling apart due to sequencing issues, and the playback ultimately crashes.
I've written some changes tonight to the backlog detection: instead of bypassing it entirely during demo playback, it now advances the demo playback read to the next packet received from the server whenever this happens, which allows the demo to play to the end, though for the moment it's possible this speeds up the timeline by a few seconds in extreme cases such as this demo.
For simplicity's sake, the hybrid terrain square-size scaling and vertical-offset uniforms were submitted to the shader only upon certain triggers marking them as dirty, and it seems like there might've been an edge case that was missed. It might need to be changed to actively check whether these need to be updated before each render.
The global offsets of markers would be in the navHud rendering, e.g. navHud.playerEyeZOffset for the vertical offset of IFFs over a player.