It's a good question. This is one of those rare situations where making the change actually is pretty easy. We have to handle mapping device input to in-game actions no matter what we do, so whether that mapping comes from a set of internally built presets or from a saved map the player generated, the result is the same thing: an action map. Collating a dynamic set of mappings is a little harder than using a prebuilt one, but not by much.
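To illustrate the point, here's a minimal sketch (all names and input codes here are hypothetical, not from any real engine) showing that a shipped preset and a player-saved remap are just two ways of building the same structure, and the game loop resolves inputs through one identical path either way:

```cpp
#include <cassert>
#include <unordered_map>

// Hypothetical action identifiers; real projects often use enums or hashed string IDs.
enum class Action { Jump, Fire, Crouch };

// A binding map resolves a raw device input code to a game action. Whether it
// was built from a shipped preset or deserialized from the player's saved
// remap file, the runtime lookup is identical.
using BindingMap = std::unordered_map<int, Action>;

BindingMap makeDefaultPreset() {
    // e.g. code 0 = south face button, 1 = right trigger, 2 = stick click
    return {{0, Action::Jump}, {1, Action::Fire}, {2, Action::Crouch}};
}

BindingMap loadPlayerRemap() {
    // Stand-in for loading the player's saved bindings; here the player
    // swapped Jump and Crouch.
    return {{0, Action::Crouch}, {1, Action::Fire}, {2, Action::Jump}};
}

// The gameplay code only ever sees this one resolution path.
Action resolve(const BindingMap& bindings, int inputCode) {
    return bindings.at(inputCode);
}
```

The point of the sketch is that nothing downstream of `resolve` cares where the map came from, which is why supporting remapping doesn't really add gameplay-side work.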

This sort of thing is usually dependent on available UI/UX dev time, since you need to build these screens on a per-platform basis - something we already do for many other UI screens. The main hurdle I can think of is that these screens tend to require special cert-mandated images for button inputs (we call them glyphs - think the triangle and square buttons on PlayStation or the A/B/X/Y buttons on Xbox), which makes them a bit more complex on the testing side. Even so, they aren't that heavy in terms of validation time needed - this feature should not be super expensive to implement.
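As a rough sketch of what the glyph side looks like (the platform names, asset paths, and function here are all hypothetical - the real glyph art and its usage rules come from each platform holder's cert requirements), the rebind screen typically just looks up which first-party glyph to draw for a given logical input on the current platform:

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Hypothetical platform list; consoles each ship their own cert-mandated glyph art.
enum class Platform { PlayStation, Xbox, Switch };

// Returns the glyph asset to draw for the "confirm" input on each platform.
// The same logical input renders as a different first-party glyph per platform,
// which is why these screens need per-platform testing.
std::string confirmGlyph(Platform p) {
    static const std::unordered_map<Platform, std::string> glyphs = {
        {Platform::PlayStation, "glyphs/ps_cross.png"},
        {Platform::Xbox,        "glyphs/xb_a.png"},
        {Platform::Switch,      "glyphs/sw_b.png"},
    };
    return glyphs.at(p);
}
```

The per-platform validation cost comes from checking that every screen shows the right glyph for the right platform, not from any complexity in the lookup itself.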

I'm honestly not sure why this doesn't happen more often - like I said, it's pretty easy to do, it just requires some additional UI/UX dev time. That small amount of dev time might be a deal breaker for some studios, but this is the sort of thing that genuinely confuses me.
[Join us on Discord] and/or [Support us on Patreon]
Got a burning question you want answered?
- Short questions: Ask a Game Dev on Twitter
- Short questions: Ask a Game Dev on BlueSky
- Long questions: Ask a Game Dev on Tumblr
- Frequent Questions: The FAQ