This article recounts the experience of a small independent game development studio that has spent the past few years building games in the Rust programming language. The studio consists of two developers who are passionate about game development and who also have substantial experience in other areas such as web and desktop application development. They are dedicated to making games with their self-developed Rust-based game engine, Comfy Engine.
This article is a summary of the thoughts and challenges the developers faced while developing games with the Rust programming language. The developers have thousands of hours of experience using Rust for game development and have successfully released several games. The purpose of the article is not to boast about achievements but to challenge common misconceptions, such as “If you think Rust is not user-friendly, it’s because you lack experience.” This sharing of experiences is not based on scientific evaluation or strict comparative research. Instead, it comes from a small team with the ambition of developing games using Rust and earning a sustainable income from it.
The team's goal is clear: to develop and release a game within three to twelve months and earn money from doing so. This is the premise of the entire article, which asks whether Rust can support commercially self-sustaining game development; it is not about the joy of learning and exploring Rust. Learning and exploring are valuable goals in their own right, but the discussion here focuses on whether Rust is a viable tool for making games for a living.
The team has developed games with Rust, Godot, Unity, and Unreal Engine, including games that have shipped on Steam. They have also built their own 2D game engine from scratch and made games on top of a simple renderer. Over years of practice they have frequently used libraries like Bevy and Macroquad, including on nontrivial projects. The author also has a full-time job doing Rust backend development. In other words, this is not a beginner's perspective: the author has written over one hundred thousand lines of Rust code over the past three-plus years.
The author's intention in writing this article, then, is to refute certain misleading claims that trip up many beginners, and he hopes that after reading it readers will understand why the team has decided to give up Rust as its game development tool. He states explicitly that although he is abandoning Rust, he is not abandoning game development as a career.
If you are committed to mastering Rust, appreciate its technical strengths, and enjoy a challenge, then dedicating yourself to learning it is worthwhile. As an experienced developer I will be explicit about which scenarios it suits, unlike some Rust enthusiasts who promote it regardless of context. Unfortunately, the Rust community seems to have poured all of its enthusiasm into the technology side while overlooking the "game" part of game development. At one Rust game development meetup I attended, for example, the suggestion that someone actually show a game was, almost comically, met with silence.
People learning Rust often find that the supposedly unique problems they run into are in fact common hurdles every developer hits, and that to work efficiently they have to change how they think and adapt to these quirks. The issues usually sit at a fundamental level, such as distinguishing `&str` from `String`, or `.iter()` from `.into_iter()`: things we instinctively treat as interchangeable are strictly separated in Rust. I believe that, even though some of these painfully rigid distinctions are necessary, with enough experience developers can anticipate the problems and stay productive. Personally, I am very fond of using Rust for utilities and command-line tools, where it is often even more efficient for me than Python.
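As a trivial illustration of the kind of distinction meant here (a standalone sketch, not code from the project):

```rust
fn shout(name: &str) -> String {
    // `&str` is a borrowed view of string data; `String` is owned.
    format!("{}!", name)
}

fn main() {
    let owned: String = String::from("rust");
    println!("{}", shout(&owned)); // `&String` coerces to `&str`

    let nums = vec![1, 2, 3];
    for n in nums.iter() {
        // `.iter()` borrows `nums`; it remains usable afterwards.
        println!("{n}");
    }
    for n in nums.into_iter() {
        // `.into_iter()` consumes `nums`; it is moved and gone after this loop.
        println!("{n}");
    }
    // println!("{:?}", nums); // would not compile: `nums` was moved above
}
```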
However, in the Rust community, when people raise basic issues encountered while using Rust, they often receive curt responses like “you’re just not familiar enough.” This is not unique to Rust; similar situations occur when using ECS or Bevy, and even when making GUIs, whether opting for reactive solutions or immediate-mode frameworks.
Throughout long years of learning and practice I have run into similar issues across different programming languages; as experience accumulated, my efficiency improved and I gradually learned to predict and avoid them. Yet even after roughly three years inside Rust's ecosystem and engines, and over one hundred thousand lines of game-related code, many of these problems persist. So if you do not want to refactor your code endlessly, do not treat programming as a puzzle for its own sake, and simply want to get work done smoothly, you may want to reconsider choosing Rust.
The process of writing code often comes with constant thinking and iteration, and Rust’s borrow checker frequently forces code refactoring at critical moments, preventing developers from easily and happily translating the inspirations in their mind into code. Although some proponents of Rust argue that this mechanism helps in writing better quality code, I have my doubts. An efficient coding process should allow developers to freely experiment and fix, rather than constantly being stuck on borrow checks.
In other programming languages, once the code is written, it can be set aside, without being forced to think about the various complex connections at every step. For example, when developing a game, the initial goal might be to create a basic character controller and then gradually establish the other elements of the game, instead of requiring the controller to be perfect from the start. However, in Rust, complexity frequently escalates, and the compiler’s constraints mean that even one-off code might need to be repeatedly refactored.
Rust's much-praised refactoring story largely exists to cope with the refactoring the language itself forces on you. The design does make refactoring safe and largely free of runtime-error worries, but that convenience is built on a foundation of frequent mandatory refactors: adding a new feature can trigger one, and the experience is not always pleasant.
Experienced Rust developers might say that complaints about Rust usually stem from a lack of experience. It is true that using Rust proficiently avoids many problems, but the complexity and dynamism of game development mean that Rust may simply not be the best tool for this field. A game is one large, continuously changing state machine, and its requirements often change in fundamental ways; the heavily static approach Rust champions does not always accommodate that kind of evolution.
While many may consider borrow checkers and code refactoring as good ways to improve code quality, in the field of game development, it’s often more important to be able to quickly test gameplay and adjust design direction. Compared to so-called “high-quality code,” for developers, a game that can be played earlier is more significant, as it helps to verify the design’s effectiveness sooner.
The principles Rust insists on often place developers in a dilemma: either disrupt the current flow and invest a lot of time in refactoring, or reluctantly let code quality slip. That choice is a real psychological burden. For independent game development in particular, long-term maintainability is rarely the most pressing issue; rapid iteration and solving the problem in front of you are. In other languages we can often solve the immediate problem without compromising the code much at all.
In Rust, we constantly have to make design decisions among options such as how many parameters a function takes, whether to reach for complex types like `Lazy<AtomicRefCell<T>>`, or whether to introduce a layer of indirection (such as function pointers). These decisions affect not just the code's design but, frequently, the development experience itself, and sometimes the only reasonable way out is to redesign part of the code.
Rust leans on indirection to solve problems at a fundamental level, and this approach can indeed be effective. Take the Bevy event system: it is routinely the answer for anything complex that needs to coordinate several pieces of data. Bevy is strongly inclined to route functionality through events, perhaps too strongly, rather than simply doing the thing inline within a single system. Using indirection is a common pattern: extract part of the logic from the main flow, handle it separately, and feed the result back in later, or store commands in a command buffer to be executed at an appropriate time.
This pattern can even pay off in design terms: using `World::reserve_entity` to reserve entity IDs up front, combined with a command buffer, resolves the problems around mutating the hecs `World` while iterating over it. Sometimes the pattern works very well and untangles a whole class of tricky issues; Thunderdome's `get2_mut`, for example, looks odd at first glance, but once understood it solves a lot of awkward cases.
I am reluctant to dwell on Rust's steep learning curve, since every language has its quirks. But even with considerable Rust experience, fundamental issues keep coming up, and sometimes they cannot be solved by a well-designed library function alone; instead they require "deferred handling" through command buffers or event queues before a workable solution emerges.
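For concreteness, here is a minimal sketch of that deferred-command pattern on top of hecs (the `Mob`/`Particle` types and the `Command` enum are made up for illustration; Comfy and Bevy have their own equivalents):

```rust
use hecs::{Entity, World};

struct Mob { hp: i32 }
struct Particle { pos: (f32, f32) }

enum Command {
    SpawnParticle { pos: (f32, f32) },
    Despawn(Entity),
}

fn update_mobs(world: &mut World) {
    let mut commands: Vec<Command> = Vec::new();

    // The query borrows `world`, so we cannot spawn or despawn in here;
    // we only record what we want to happen.
    for (entity, mob) in world.query_mut::<&mut Mob>() {
        mob.hp -= 1;
        if mob.hp <= 0 {
            commands.push(Command::SpawnParticle { pos: (0.0, 0.0) });
            commands.push(Command::Despawn(entity));
        }
    }

    // The borrow has ended, so the buffered commands can now be applied.
    for cmd in commands {
        match cmd {
            Command::SpawnParticle { pos } => {
                world.spawn((Particle { pos },));
            }
            Command::Despawn(entity) => {
                let _ = world.despawn(entity);
            }
        }
    }
}
```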
In game development, a specific problem often spans multiple interrelated events and precise timing while juggling a lot of state. Once data has to move across an event boundary, the logic for an entity gets split into two pieces: even if the business logic is one coherent thing, the code has to treat it as two separate parts.
Experienced members of the Rust community firmly believe that Rust's design philosophy guarantees clarity and separation of concerns, and that this makes code tidier; in their eyes the language's designers possessed remarkable wisdom. So when some common functionality turns out to be hard to implement in Rust, they see it not as a flaw but as intentional, nudging developers toward a more correct implementation.
This philosophy sometimes turns functionality that is trivial in other languages, say three lines of C#, into perhaps thirty lines of Rust, likely split into several parts. Consider this scenario: "while iterating over the current query, I want to check a component on the entity I just hit and do something, such as spawning particles or playing a sound." Devoted fans in the Rust community will probably tell you, "that is clearly an event (Event), so you should not inline this functionality in your code."
Imagine how convoluted the feature becomes if we follow such rules. In Unity, for comparison, the relevant code might simply be:
```csharp
// Illustrative Unity snippet; the "..." arguments are elided as in the original.
if (Physics.Raycast(..., out RaycastHit hit, ...)) {
    if (hit.collider.TryGetComponent(out Mob mob)) {
        // Spawn a hit effect halfway between the mob and the hit point.
        Instantiate(HitPrefab, (mob.transform.position + hit.point) / 2, Quaternion.identity);
    }
}
```
Although this is just a simple example, similar needs emerge in abundance. Especially when introducing a new mechanism or testing a new feature, what we hope for is to be able to code directly, without considering issues like maintainability just yet, merely wishing the code to work correctly where needed. In some situations, we don’t need an event abstraction like MobHitEvent because we might also need to check the raycast and other related functionalities at the same time. We won’t check whether a Transform component is present on a game entity because usually, every game entity (entity) obviously has a transform.
However, in Rust you often cannot do this. You cannot simply reach for a `.transform` field, and if you are careless and your archetypes overlap inside a query, a double borrow will crash the program at runtime. Even checking whether an audio resource exists is constrained: we could write `.unwrap().unwrap()`, but wait, did you even pass the world object in? Surely you are not assuming a global world; should the query not be injected as one of the system's parameters, with everything prepared in advance? And does that `.Choose` call assume a global random number generator, and what about threads?
Many advocates of Rust might point out, “But this view is not conducive to future expansion,” “It could bring the risk of crashes,” “You cannot assume that there is a global world because…” “Have you considered the case of multiplayer games?” and “Would you dare use this code quality?” and so on. Although they may have good intentions, these are issues that are often considered by Rust community members when focusing on code quality and maintainability.
It is worth mentioning one thing about this back-and-forth: while others were busy picking holes, I had already finished the feature and moved on. Much of this code is throwaway; what matters is not agonizing over the technically correct random number generator, the legitimacy of a single-threaded assumption, or overlapping archetypes in nested queries, but whether the functionality being implemented makes the game better for the player. For me, a simple engine and a simple language are enough, as long as they let me concentrate on the game logic itself.
Solving Type-System Problems with ECS
Rust's type system and borrow checker naturally push developers toward one particular answer to the question of "how does one thing refer to another thing": the Entity Component System (ECS). ECS is genuinely powerful, but there is also a lot of confusion around the term, because people define it differently and the community sometimes files non-ECS ideas under it. So it is worth being precise about what we actually mean.
First, consider the obvious approaches to this "references between things" problem; each is difficult to use well, for its own reasons:
- Modeling "pointer-y" data with actual pointers: if character A needs to follow character B and B is deleted and deallocated, A's pointer dangles.
- Combining `Rc<RefCell<T>>` with weak pointers: theoretically feasible, but rarely used in game development for performance reasons; the damage to memory locality and the extra overhead cause a noticeable slowdown.
- Plain indices into an entity array: this avoids dangling pointers, but if we hold an index to an element that gets deleted and its slot reused, the index still "works" while silently pointing at entirely different data.
Facing these problems, a wonderful solution emerges: generational arenas, which I personally highly recommend. It is a small, lightweight data structure that reliably does what we want while keeping the codebase sane, something that is surprisingly rare in the Rust ecosystem. A generational arena is essentially an array, except the identifier is not just an index but an `(index, generation)` pair, and each slot stores a `(generation, value)` pair. The idea is simple: whenever the content at a given index is deleted, we bump that slot's generation. From then on, every lookup checks that the generation in the supplied index matches the slot's current generation; if an entry has been deleted, its slot carries a higher generation, so the stale index behaves as if the entry no longer exists.
The approach also takes care of several small problems for us, such as maintaining a free list of empty slots so new elements can be added cheaply at any time (details users never need to notice). Most importantly, this strategy lets a language like Rust sidestep the borrow checker's limitations: developers effectively manage memory by hand without touching a single pointer, while every operation remains completely safe, which is exactly the kind of thing Rust values. Libraries such as thunderdome implement this data structure in a way that fits the language's design philosophy very well.
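To make the description concrete, here is a minimal sketch of a generational arena (the idea only; thunderdome's real API is richer and the names here are made up):

```rust
#[derive(Clone, Copy, PartialEq, Eq)]
struct Index {
    slot: usize,
    generation: u32,
}

struct Slot<T> {
    generation: u32,
    value: Option<T>, // `None` means the slot is free
}

struct Arena<T> {
    slots: Vec<Slot<T>>,
    free: Vec<usize>, // free list of reusable slots
}

impl<T> Arena<T> {
    fn new() -> Self {
        Arena { slots: Vec::new(), free: Vec::new() }
    }

    fn insert(&mut self, value: T) -> Index {
        if let Some(slot) = self.free.pop() {
            // Reuse a freed slot; its generation was bumped when it was removed.
            self.slots[slot].value = Some(value);
            Index { slot, generation: self.slots[slot].generation }
        } else {
            self.slots.push(Slot { generation: 0, value: Some(value) });
            Index { slot: self.slots.len() - 1, generation: 0 }
        }
    }

    fn remove(&mut self, index: Index) -> Option<T> {
        let slot = self.slots.get_mut(index.slot)?;
        if slot.generation != index.generation {
            return None; // stale handle
        }
        slot.generation += 1; // invalidates every existing handle to this slot
        self.free.push(index.slot);
        slot.value.take()
    }

    fn get(&self, index: Index) -> Option<&T> {
        let slot = self.slots.get(index.slot)?;
        (slot.generation == index.generation)
            .then(|| slot.value.as_ref())
            .flatten()
    }
}
```

A handle to a despawned mob now simply returns `None` instead of silently pointing at whatever reused its slot.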
In fact, many of the advantages people attribute to ECS are really advantages of generational arenas. When someone praises "the excellent memory locality ECS provides" while running a `Query<(Mob, Transform, Health, Weapon)>`, what they have is, in essence, an `Arena<Mob>` over a struct like this:
```rust
struct Mob {
    typ: MobType,
    transform: Transform,
    health: Health,
    weapon: Weapon,
}
```
This definition does not capture everything ECS can do, but my point is that, in Rust, the benefit of "avoiding `Rc<RefCell<T>>`" is not exclusive to ECS; a generational arena may be a simpler way to get it.
ECS can be understood from several angles. It can serve as a framework for dynamic composition, letting developers combine different components and handle their coexistence, querying, and modification without binding them into a single type. A classic example is using marker components to tag entities at runtime (in Rust there is currently no better way). Say we want to query all the Mobs in the game, but some of them have morphed into a different kind of entity. We can simply call `world.insert(entity, MorphedMob)` and then run queries over `(Mob, MorphedMob)`, `(Mob, Not<MorphedMob>)`, or even `(Mob, Option<MorphedMob>)` as needed, or just check in code whether the component is present. The exact mechanics differ between ECS frameworks, but the core idea is to "tag" or "distinguish" entities in some form.
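A small sketch of this marker-component pattern, written against hecs (component names are invented; the filter syntax differs between ECS crates, so an `Option<&T>` query is used here):

```rust
use hecs::{Entity, World};

struct Mob { hp: i32 }
struct MorphedMob; // zero-sized marker component, added at runtime

fn morph(world: &mut World, entity: Entity) {
    // "Tag" the entity by inserting the marker.
    let _ = world.insert_one(entity, MorphedMob);
}

fn update_mobs(world: &mut World) {
    // Query all mobs and branch on whether the marker is present.
    for (_entity, (mob, morphed)) in world.query_mut::<(&mut Mob, Option<&MorphedMob>)>() {
        if morphed.is_some() {
            mob.hp += 1; // say, morphed mobs regenerate
        }
    }
}
```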
Similar scenarios are not limited to Mob but may also occur with Transform, Health, Weapon, or other components.
In an entity system, an entity may not initially include a weapon component. However, once the entity picks up a weapon, we need to add a weapon component to it. This design allows us to handle all entities equipped with weapons in a separate system.
When discussing dynamic composition I should mention the "entity component" approach used by Unity which, although not a pure system-based ECS, leans heavily on components for composition and, performance considerations aside, works much like composing in a pure ECS in everyday use. I would also commend Godot's node system, where child nodes are commonly used as components. That is not ECS either, but it is very much dynamic composition, because nodes can be inserted or removed at runtime to change an entity's behavior.
It is also worth noting the widely repeated best practice of decomposing components into the finest-grained parts for maximum reusability. I have been in this debate several times, with people trying to persuade me to split properties like position and health out of the object so the code does not turn into spaghetti. After many attempts, I am convinced that outside of scenarios demanding extreme performance, such fine-grained decomposition is unnecessary for most projects. I have also tried the "fat component" approach and found it better suited to games that build bespoke logic around a multitude of events. A generic mechanism might model health adequately in a simple simulation, but in a real game the health of players and the health of enemies represent different logic, and it makes sense for non-player entities, say monsters and walls, to be handled differently as well. Lumping all of these under one "health" concept makes the code vague, with the health system filling up with if statements distinguishing players from walls.
There is also the memory-layout side of ECS: storing components as "structs of arrays" means that, for example, all health components sit compactly in memory and can be iterated over quickly. We could choose to structure our entity storage that way ourselves, something like:
```rust
struct Mobs {
    types: Arena<MobType>,
    transforms: Arena<Transform>,
    healths: Arena<Health>,
    weapons: Arena<Weapon>,
}
```
In such a structure, values at the same index belong to the same entity. Doing this bookkeeping by hand is cumbersome, although, depending on our background and the language we are used to, we may sometimes still want to manage it ourselves.
Modern ECS frameworks let us get this simply by declaring a tuple of types in a query; the underlying storage automatically lays out the right data together. This is the performance-oriented use of ECS, motivated not by a need for composition but by the pursuit of memory locality.
That use of ECS is genuinely valuable in some domains, yet for most indie games that actually ship, this level of complexity is unnecessary; what stands between a simple prototype and a finished game is almost never this kind of optimization, and the design patterns involved have little to do with the player's experience.
In the Rust ecosystem, many developers choose an ECS framework to avoid fighting the borrow checker, and this is often their main motive. ECS is very popular, is one of the community's recommended practices, and genuinely solves a range of problems: for example, there are no lifetimes to manage at all, because a simple `struct Entity(u32, u32)` identifier is trivially `Copy`.
The reason I discuss this part separately is because I’ve noticed that many developers using ECS are not actually leveraging it for component composition or performance optimization, but rather looking for solutions to the problem of “where should objects be placed”. Although this approach is not inherently faulty, it often leads to online disputes, with people trying to convince others that their practices are wrong and that ECS should be used in a specific way. However, people often fail to understand the real reasons behind others’ use of ECS.
ECS can be seen as a dynamically assembled set of generational arenas whose main purpose is to provide exactly this basic capability. To call something like `storage.get_mut::<A>()` and `storage.get_mut::<B>()` on the same storage at the same time (for two different component types), we either have to reimplement some form of interior mutability ourselves or simply adopt an ECS.
Rust has a certain character: when you work with its design principles, development can be fun and efficient; when you try to do things Rust is not naturally good at, you may find yourself reimplementing something like `RefCell`.
Generational arenas are a nice concept, but one of their main drawbacks is that we have to define a variable and a type for each arena we use. If every query only ever touched one component type, ECS would undoubtedly be a fine answer; but wouldn't it be more convenient to have per-type arenas on demand, without a full-blown archetypal ECS? There are ways to achieve this today, but at this point I have no desire to spend serious effort reinventing yet more wheels in the Rust ecosystem.
Now I can bid farewell to Rust with a light heart. Some credit Bevy's success with popularizing the very concept of ECS, which may be an overstatement; still, it is undeniable that, given its popularity and breadth, Bevy occupies a unique place in the ECS landscape. For many engines and frameworks, ECS is just another option, a library developers may choose to use. In Bevy, ECS is the core, and it usually forms the foundation of the entire game.
It is worth acknowledging, even though I am personally not entirely satisfied, that Bevy has put real effort into making its ECS APIs and systems friendlier. Ask anyone who has used something like specs and you will appreciate how much Bevy has done to make ECS accessible; it has come a long way in recent years. But this is precisely where my dissatisfaction with the Rust ecosystem, and with Bevy in particular, begins: ECS is a tool, a highly specialized one, aimed at specific problems and carrying its own costs.
Let us expand the topic slightly and talk about Unity. Despite any changes in its licensing, high-level decisions, or business models, it must be acknowledged that Unity has played a significant role in driving the success of indie games. According to SteamDB statistics, there are nearly 44,000 games made with Unity on the Steam platform, with the second-ranked Unreal Engine having only about 12,000, and other engines lagging far behind. Those who have been following Unity for a long time may have heard of Unity DOTS—essentially Unity’s implementation of ECS (Entity Component System) and other data-oriented technologies. As a long-time user of Unity, I am tremendously excited about the launch of Unity DOTS, and this excitement stems from the fact that DOTS can coexist with the traditional game object methods.
Although this may entail a lot of complexity, fundamentally, users expect such updates to come. We can implement specific functionalities using DOTS within the same game project while maintaining the standard game objects and scene trees, seamlessly combining these two approaches. I don’t think any Unity developer who understands the value of DOTS would consider it a worthless add-on. Likewise, I wouldn’t imagine anyone viewing DOTS as the entire future of Unity to the extent of thinking that we should completely abandon the game object system and demand all Unity developers to switch to using DOTS. Even without considering the issues of maintainability and compatibility, such an idea is illogical. Because there are still many workflows that naturally fit with the game object system.
I believe developers who have used Godot, especially those who have used the gdnative interface (such as through godot-rust), will have a similar view.
Although the node tree is not always the optimal data structure, it is undeniably convenient in a great many situations. Speaking of Bevy, I find that many people do not realize how far the idea of handing almost everything to the ECS has been generalized. To give a clear example, I consider Bevy's UI system a notable mistake, and a long-standing one, especially in light of the promise that "we plan to start developing the editor this year." One only needs to browse Bevy's UI examples to see how little they accomplish for how much code; without even tracing through it, the amount required for something like a button that changes color on hover and click makes the source of the trouble obvious. Having attempted real UI work in Bevy, I have to admit the difficulty is far greater than expected, mainly because doing anything UI-related through ECS is significantly inconvenient and painful. As a result, the closest thing to an editor on Bevy still relies on a third-party crate such as egui.
Moreover, the problem is not just the UI; entrusting so many tasks to ECS really goes against user-friendly operations. In Rust, ECS has been transformed from a conventional tool in other programming languages to an almost ritualistic theory. The use of a tool should be for its simplicity and practicality, but nowadays it has become a mandatory choice. Programming language communities often have their unique preferences, and having used various languages, I have observed these interesting tendencies. The obsession with ECS in Rust seems quite similar to the attitude of the Haskell community towards Haskell. While this comparison may be oversimplified, my personal feeling is that the Haskell community is more mature, with people having a friendlier and more open attitude towards different methods and simply viewing Haskell as an interesting tool for solving appropriate problems.
On the other hand, the way Rust expresses itself often seems self-admiring, like a teenager in a rebellious phase. They express their viewpoints uncompromisingly and are rarely willing to engage in more detailed, in-depth discussions. Programming is like an artful craft requiring programmers to make choices among many trade-offs and to optimize continuously to achieve timely results. The pursuit of perfectionism and the obsession with “the right way” in the Rust ecosystem often give me the feeling that it attracts a lot of newcomers to programming, who take a certain philosophy as truth once they hear about it. Although I know this is not universally applicable, I believe the blind obsession with ECS is likely a byproduct of this phenomenon.
As for a universal fix that would prevent all of the situations above, one proposal is to make everything fully generic: if the components and systems are designed well enough and generally enough, these specific problems should simply disappear, right? The trouble is that generic systems do not create interesting gameplay, and I have yet to hear a strong counterargument to that objection.
In the Rust game development community, it’s common to see many developers share their projects and advice, closely related to the games they are developing. But sometimes, systems are overly clever and universal, actually failing to focus on the game-making itself. Programming becomes an imitation of game logic, and simple actions like “character movement” are mistaken to represent the entirety of gameplay. Developers are passionate about procedurally generated worlds, planets, space, dungeons, and focus on voxels, rendering, world size, and performance. They pursue a universalized interaction and optimized rendering performance. Building a good type system and framework for the game, creating an engine for subsequent works, considering multiplayer needs, using a lot of GPU particles, aiming for visual effects, and writing well-structured, clean, and clear ECS and code are all their concerns.
However, while technical exploration is a good learning path, my main goal is not just to improve programming skills or learn Rust. My real goal is to develop commercial indie games, sell them to as many players as possible within a reasonable time, ensure players are willing to pay for them, and receive recommendations on platforms like Steam.
Please do not misunderstand my intention; developing games for me is not about pursuing profits at all costs. Instead, I approach it from the perspective of a dedicated game developer, discussing how Rust, beyond its technical aspects, affects the focus on gameplay and player experience. Passion for technology is indeed important, but we should also reflect on our true objectives, especially to honestly confront our initial intentions.
Sometimes, I feel that some projects have deviated, turning into a showcase of technology, forgetting the essence of the game. From the standpoint of a serious game developer, this is not normal.
Next, I'll talk about the principles of game design. I believe the following are key to creating great games, but they conflict with the generic ECS approach:
- manually designed levels, emphasizing how design guides player behavior;
- carefully choreographed interactions, relying not just on particle effects but on events synchronized across all game systems;
- repeatedly testing the game to filter out the fun content;
- getting the game into the hands of players as quickly as possible, because the longer the release takes, the more players' enthusiasm wanes;
- providing a unique and memorable gaming experience.
I understand that some readers might think that I lean towards games full of artistic flair rather than works with a strong engineering style like “Factorio.” However, what I want to clarify is that, regardless of style, a good game should have depth, engage players, and bring a pleasant experience.
I have a special affection for games with rigorous structure and outstanding code aesthetics, as I am also a programmer. I often find that people misunderstand player interaction in game design, considering it a part of artistic creation, when in fact it is at the heart of game development. Game development is not just about building physical models, designing renderers, developing game engines, or creating scene trees; it’s not just about reactive UIs with data binding.
Take "The Binding of Isaac" as an example of how careful interaction design varies the game experience. This seemingly simple game has hundreds of upgrades that change the gameplay in far more complex ways than a flat "+15% damage." Players can stick bombs to enemies, turn bullets into lasers, or ensure that the first enemy defeated in each level does not reappear in subsequent levels.
Some may believe that a generic system can be used to design such games, but this is one of the common mistakes people make in the game development process. Game development is not a closed process; we cannot simply build a generic system in isolation and expect it to perfectly meet the needs of an outstanding game. Instead, we should start with a simple prototype with limited interactions, allow players to experience it, and then determine whether the core gameplay is attractive before adjusting and adding new designs based on feedback.
Some interactive elements require players to fully understand them only after hours of play and trying out different strategies. The approach of the Rust language, however, is entirely different, with any new upgrade possibly necessitating a refactor of all existing systems. Many might consider this to be a good thing, as it implies an improvement in code quality and the addition of more features. Yet, this view overlooks the fact that Rust’s characteristics may force developers to waste a lot of time on redundant issues.
Conversely, other more flexible languages, such as C++, C#, Java, or JavaScript, allow developers to add new features in a more direct manner and quickly get the game operational, evaluating the practical fun of the new mechanics. This approach can achieve rapid iterative development. While Rust developers are still busy refactoring, developers using other languages may have added many new features and gained better insights into the direction of their work through playtesting.
Game design expert Jonas Tyroller has explained this well in his video tutorials, which are worth studying for every game developer. If you do not know why your game is not engaging (mine isn't either), the answers are likely to be found there. Outstanding games are not forced into existence in isolated labs; they evolve through rounds of player feedback.
For game creators, mastering the game they’ve designed is a crucial part of the process. They should understand every detail of the design concept and, before releasing the final product, it is necessary to experience and accept failure multiple times. In short, an excellent game is a masterpiece crafted through a series of non-linear processes and numerous trials of imperfect ideas, eventually refining and polishing what works best.
However, as for the Rust game development ecosystem, it’s widely recognized within the community that it is still in a developmental phase. By 2024, we can see that the community is facing and accepting this reality. The external view of this ecosystem, however, is quite different, largely thanks to the excellent marketing strategies of projects like Bevy. Not long ago, when Brackeys released their video on returning to Godot for game development, I immediately watched it and held high hopes for the mentioned open-source game engine. At around the 5:20 mark, the video showed a chart of game engine market share, to my surprise listing three Rust game engines: Bevy, Arete, and Ambient.
I feel it necessary to emphasize here that Rust has become more than just a programming language; it has become a symbol, a cultural phenomenon, akin to Internet memes, serving as a tool for people to express stances and humor, which shouldn’t be the case. In the Rust ecosystem, promotional strategies often focus on projects that dare to make bold promises, with slick websites/readme files, flashy gif animations, and the ability to showcase abstract values. Actual usability seems to be less of a concern.
Despite this, there are many developers who quietly dedicate themselves to actual creation. They don’t make unattainable promises but seek feasible solutions instead. However, since they are not “flashy” enough, they often go unnoticed, and even when mentioned, are disparaged as “second-class projects.” Macroquad is a good example. It is a practical 2D game library, compatible with all platforms, with easy-to-use APIs, fast compilation speeds, hardly any dependencies, and astonishingly, it was built by just one person. It also has an accompanying library, miniquad, responsible for providing graphics abstraction on platforms including Windows, Linux, MacOS, Android, iOS, and WASM.
Yet Macroquad has committed what the Rust ecosystem regards as a grave "sin": it uses global state, which may lead to unsoundness. Even that "may" is generous; unless you deliberately reach for the rawest-level APIs (such as raw OpenGL), it is, for all practical intents and purposes, completely safe. I have been using Macroquad for nearly two years without a single issue. Still, outstanding as it is, whenever Macroquad comes up it is often met with sneers and sarcasm, simply because it does not conform to Rust's proclaimed standard of 100% safety and correctness.
Delving into the current 3D game engine development landscape, we see the emergence of Fyrox, a feature-rich 3D game engine. It comes equipped with a comprehensive 3D scene editor, a flexible animation system, and various tools necessary for game creation. Impressively, this project was completed independently by a single developer, who also successfully developed a 3D game using Fyrox.
Although I have not personally used Fyrox, I admit I have been fooled before by exquisite websites, high GitHub star counts, and flashy but insubstantial marketing slogans. Fyrox has recently begun to get some attention on Reddit, yet regrettably, despite its genuinely capable editor, it rarely shows up in the usual promotional videos, while Bevy seems omnipresent, frequently dominating the media spotlight.
Another example is godot-rust, the Rust bindings for the Godot engine. Its unforgivable "sin" is that it is not a pure-Rust solution but a binding to a "C++ engine." From the perspective of parts of the Rust community this is almost philosophically contradictory: Rust is seen as pristine, correct, and safe, while C++ is seen as flawed, outdated, clumsy, dangerous, and complex.
For this reason, the Rust game development community avoids tools like SDL, because we have winit; we don’t use OpenGL, we opt for wgpu; we say no to Box2D or PhysX, thanks to Rapier; for game audio, we have kira rather than old technologies; we don’t use ImGUI, because we have egui. Most importantly, the Rust community refuses to use existing game engines written in C++ to honor the high standards of the Rust language.
For developers eager to create games with Rust, my first recommendation is the use of Godot and godot-rust, especially when developing 3D games. These not only provide the necessary functionality, but they are mature engines capable of helping developers truly deliver finished products. While our team experienced profound pain when developing the game BITGUN with Godot 3 and godot-rust, we later realized that the discomfort did not stem from the bindings themselves. It came from our decision to mix GDScript with Rust using various dynamic methods.
This was our first and largest Rust project, and also one of the reasons why we chose Rust as the development language. However, we later recognized that every attempt to create a game with Rust is not just about the game; it is a complex exercise in resolving issues with Rust’s language limitations, ecosystem shortcomings, or design decisions, all of which are always accompanied by challenges inherent to Rust’s unique features.
Although integrating GDScript with Rust is not technically easy, Godot still provides a path to temporarily set aside problems and move forward. This is not given enough emphasis within the community of developers who choose to write fully in code, especially those who are keen on the Rust ecosystem. The Rust language, with its distinctive complex designs, has limited my creativity.
As for the Ambient project, since it’s relatively new and I haven’t personally used it, I won’t comment too much on it. Although it appeared in Brackeys’s promotional videos, I haven’t heard of others using it. On the other hand, Arete released version 0.1 a few months ago, but due to its vague claims and closed-source code, it has received some negative feedback within the Rust community. Nevertheless, I still often hear mention of this project, possibly due to the founding team’s bold publicity efforts.
Regarding Bevy, I believe there are good reasons for its status as a mainstream Rust game engine. It stands out in the field due to the scale of its project and the number of developers involved. They have built a vast technical community, and even though I don’t fully agree with their promises and some decisions made by their leadership, I have to admit that Bevy is indeed very popular.
Next, we need to discuss the somewhat misleading and confusing state in the Rust community. Those unfamiliar with Rust might take the marketing content and promotional articles of these engines at face value, and I myself have been deceived multiple times by seemingly compelling statements, only to find that they are all bluster and poor in functionality.
Another detail worth paying attention to is that Rapier itself is not a game engine. It is a highly-regarded physics engine, hoping to be a pure Rust alternative to solutions like Box2D, PhysX, and so on. Because Rapier is written in pure Rust, it has various advantages such as WASM support, extremely fast speed, multicore parallel processing capabilities, and high security.
My view on Rapier is mainly based on its application in a 2D environment. Its core functions do work effectively, but there are fundamental issues with many of the advanced APIs. For instance, the convex decomposition feature can crash when handling simpler data, and deleting multibody joints could also cause the program to crash. This latter issue is particularly interesting—it makes me wonder, has no one tried to delete joints before me? This should not be a rare or extreme usage. Overall, I found Rapier’s simulation results to be highly unstable, which eventually pushed me to develop my own 2D physics engine which, in some ways like preventing object overlap, even performed better than Rapier. While this does not mean I am heralding my own physics library—as it has not been extensively tested—if newcomers to Rust are looking for a physics engine, the community is likely to recommend Rapier. It is generally perceived as an excellent and popular library, especially considering it has a nice website and wide community recognition.
In the world of technology and programming languages, especially in ecosystems like Rust’s, a phenomenon can often be observed: many projects dictate or force developers to program according to their design philosophies in an almost compulsive manner, leading sometimes to developers feeling misled and developing the misconception that they are using it wrong. Certain practices within the Rust community seem to signal that developers should not conceive or implement certain functionalities a certain way, reminiscent of the discomfort encountered when dealing with side-effects in Haskell—as if it’s saying, “You shouldn’t do that.”
When it comes to global state, many developers tense up because it is considered a major taboo in programming. The Rust community seems to hold this banner high, taking an adversarial stance against global variables, but in reality, such extreme sentiment is not applicable to all projects. In the field of game development, the aversion to global state is often overemphasized, yet not everyone really understands the inherent issues, on which the Rust community also seems to have gone off course.
In the gaming industry, especially when it comes to the development of the games themselves, there are established truths—usually there’s only one audio system, one input system, one physics world, one deltaTime, one renderer, one resource loader in a game. While sometimes the avoidance of global state can be beneficial, especially when building things like physics engine-driven multiplayer online games, for most 2D platformers, vertical shooters, voxel-based exploration games, and others, the use of global state is appropriate and practical.
After extensive trial and exploration, including persistent use of parameter injection to stay "pure," I believe that completely avoiding global state is a painful design. In game development, a call like `play_sound("beep")` is the convenience of global state in a nutshell; when finer control is needed, something like `play_sound_ex(id: &str, params: PlaySoundParams)` matters, and both revolve around a globally accessible audio system. Projects like Macroquad that embrace this are a minority within the Rust ecosystem, and I mention the Bevy ecosystem not to single it out but simply because of how present it is in the community. Most of the features I use constantly in Comfy rely on global state: the sound-playing function `play_sound("beep")`, or `texture_id("player")`, which creates a `TextureHandle` referencing an asset, are both typical examples.
In the least ideal case, when there is no asset server at hand, the asset's path can serve as the identifier; paths are unique by nature, so such identifiers are naturally unique.
When it comes to drawing, we call functions such as `draw_sprite(texture, position, ...)` or `draw_circle(position, radius, color)`. Since most mature engines batch their drawing anyway, these calls do not need to hit the GPU immediately; they only need to enqueue a draw command somewhere. For someone like me, who wants to be able to draw a circle from wherever the idea strikes, the ideal place for that queue is a global one.
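A minimal sketch of that idea (the names and the `once_cell` dependency are assumptions for illustration, not Comfy's or Macroquad's actual API): immediate-style calls push into a global buffer, and the engine drains it once per frame.

```rust
use once_cell::sync::Lazy;
use std::sync::Mutex;

struct DrawCircle {
    x: f32,
    y: f32,
    radius: f32,
}

// Global queue of draw commands, filled from anywhere in gameplay code.
static DRAW_QUEUE: Lazy<Mutex<Vec<DrawCircle>>> = Lazy::new(|| Mutex::new(Vec::new()));

// Callable from any function, with no context object threaded through.
fn draw_circle(x: f32, y: f32, radius: f32) {
    DRAW_QUEUE.lock().unwrap().push(DrawCircle { x, y, radius });
}

// Called once per frame by the renderer; the batched commands are drained here.
fn drain_draw_queue() -> Vec<DrawCircle> {
    std::mem::take(&mut *DRAW_QUEUE.lock().unwrap())
}
```

A `play_sound`-style API can be built the same way, with the audio backend draining its own global queue.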
A Rust developer reading this (not necessarily a game developer) may wonder: but then how can all the systems run in parallel? Bevy is the flagship attempt to address exactly that in a general way, which raises the obvious question of what running every system in parallel actually buys. To someone new to game development it sounds great, promising good performance by spreading work across a thread pool, much like an asynchronous backend.
Unfortunately, in practice this may be one of Bevy's more significant design mistakes. Rust developers have gradually realized, though few say it publicly, that the parallel system model does not by itself keep the order of operations consistent from frame to frame; to maintain a deterministic order, the constraints have to be spelled out explicitly.
That sounds reasonable, but in real development, especially when attempting a larger game in Bevy, developers end up annotating a great many ordering dependencies, because game logic usually has to happen in a specific order to avoid frame-order glitches or bugs caused by a random execution sequence. Attempts to communicate this to the community, however, often meet strong opposition.
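For illustration, in recent Bevy versions (0.11 and later; the scheduling API changes between releases) ordering has to be stated explicitly, for example by chaining systems. The system names below are invented:

```rust
use bevy::prelude::*;

fn read_input() { /* ... */ }
fn move_player() { /* ... */ }
fn follow_camera() { /* ... */ }

fn main() {
    App::new()
        // Without `.chain()` (or explicit `.before()`/`.after()` constraints)
        // these systems may run in parallel in any order each frame.
        .add_systems(Update, (read_input, move_player, follow_camera).chain())
        .run();
}
```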
Technically, Bevy’s design is impeccable, but when applied to game development, it always brings some issues. Although the parallelized part may make the game run faster, there isn’t much left that can be parallelized after strictly ordering systems. From a practical standpoint, this is like trying to parallelize a purely data-driven system for just a slight performance gain. Considering the effort and the payoff, such an action is not worth it.
Looking back at many years of game development experience, the parallel code I wrote in Unity using Burst/Jobs far exceeds my achievements in Rust. Whether in Bevy or in custom code, technical issues continuously consumed my time, leaving me with little opportunity to think about how to make games more fun. Indeed, I often find myself fighting with programming languages, or making design decisions to circumvent certain language features, all in an effort to ensure that the peculiarities of Rust do not significantly impair the development experience.
When global state is genuinely needed, Rust offers a few options:

- `static mut`: unsafe, so every access has to be wrapped in an `unsafe` block, and misuse can lead to undefined behavior (UB).
- `static X: AtomicBool` (or `AtomicUsize`, or another supported atomic type): a viable and relatively easy-to-use solution, but usually only for simple types.
- `static X: Lazy<AtomicRefCell<T>>`: a lazily initialized `AtomicRefCell`, which works for most types but is cumbersome to define and use, and is prone to runtime crashes from double borrows (a sketch follows below).
```rust
for (entity, mob) in world.query::<&mut Mob>().iter() {
    if let Some(hit) = physics.overlap_query(mob.position, 2.0) {
        // Second borrow of the same Mob storage while the query above still
        // holds it mutably: this panics at runtime.
        println!("hit a mob: {:?}", world.get::<&mut Mob>(hit.entity));
    }
}
```
The issue with this code is that it touches the same object from two places. Here is another simplified example: suppose we need to iterate over two sets of objects at the same time:
```rust
for mob1 in world.query::<&mut Mob>() {
    for mob2 in world.query::<&Mob>() {
        // The outer query already borrows the Mob storage mutably, so the
        // inner shared borrow of the same storage is rejected.
        // ...
    }
}
```
According to Rust’s rules, we cannot set two mutable references for the same object, so any behavior that could lead to such a situation is not allowed.
Runtime crashes happen precisely when queries do overlap. Some ECS frameworks try to soften this: Bevy, for instance, can at least accept mutable queries whose filters make them provably disjoint, such as `Query<(Mob, Player)>` alongside `Query<(Mob, Not<Player>)>`. But that only helps when the queries really are disjoint; the genuinely overlapping case from the first example remains a problem.
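In current Bevy syntax that disjointness is expressed with `With`/`Without` filters; a sketch (component names invented, details Bevy-version-dependent):

```rust
use bevy::prelude::*;

#[derive(Component)]
struct Mob {
    hp: i32,
}

#[derive(Component)]
struct Player;

// Both queries access `Mob` mutably, but the filters prove they can never
// touch the same entity, so Bevy accepts the system.
fn mob_system(
    mut player_mobs: Query<&mut Mob, With<Player>>,
    mut other_mobs: Query<&mut Mob, Without<Player>>,
) {
    for mut mob in player_mobs.iter_mut() {
        mob.hp += 1;
    }
    for mut mob in other_mobs.iter_mut() {
        mob.hp -= 1;
    }
}
```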
As I mentioned when discussing global state, once something becomes global its limitations stand out: it is easy for some other part of the code to inadvertently trigger a borrow conflict on a global `RefCell`. Rust developers usually see this as a good thing, a guard against potential bugs, but personally I do not find the restriction very helpful, since I have not run into the corresponding bugs in languages that lack it.
Next, the threading question. I have noticed a misconception among Rust game developers, who often conflate game development with backend services and assume everything must be ready to run concurrently for the system to be sound. In game code this tends to mean wrapping things in `Mutex` or `AtomicRefCell` to "avoid the unsynchronized access you might get in C++," even though that only exists to satisfy the compiler's thread-safety requirements, and `thread::spawn` may never appear anywhere in the codebase.
I want to specifically call out the unexpected crashes caused by dynamic borrow checking. While writing this article, I hit a game crash caused by overlapping `World::query_mut` queries. We have been using hecs for nearly two years, so this was not a beginner's mistake of naively nesting two queries: one part of the offending code sat in a top-level system responsible for certain operations, while a separate piece of code deep down used ECS for some basic functionality. After a fair amount of refactoring, the two inevitably overlapped.
The common advice from the community is "that only happens because your code is structured badly," so one should rethink and adjust the design. There is some truth to that, since part of the crash can indeed be attributed to less-than-ideal structure in sections of the codebase, but the point is that other languages do not force this refactoring on you. In non-Rust ECS solutions, such as flecs, overlapping archetypes are simply not an issue.
The problems are not confined to ECS. With `RefCell` we ran into the same kind of overlap: two `.borrow_mut()` calls crossing paths and crashing. What is hard to accept is that these crashes are not purely a matter of "poor code quality." The usual advice is "borrow as little as possible," but fundamentally that still puts the burden on developers to structure their code exactly right.
In game development, not all of our time and energy can go into structuring the code perfectly. Sometimes we want to use data from a `RefCell` inside a loop and let the borrow live for the whole loop, and there is nothing wrong with that. But as soon as the loop grows slightly more complex, calling into other systems that need the same cell, especially behind conditional logic, the problems arrive immediately.
Some will suggest handling the conditional work indirectly, through events, but in reality the game logic ends up scattered across the codebase, and it is never just a few lines of code. In an ideal world every refactor would be fully tested, every branch reviewed, and the code would flow linearly and clearly from top to bottom, but that world is nearly impossible to reach.
The truth is that even without `RefCell`, we have to plan function signatures carefully to make sure they receive the right context objects, or only the exact arguments they need. For an independent game developer this kind of design work is often impractical: a significant refactor for a feature that may be cut in a few days is simply wasted time. That is when the borrowing issues around `RefCell` become particularly thorny; the alternative is to reorganize data into different context structs, change function arguments, or add indirection to keep things apart.
Due to a range of unique constraints that Rust brings to programming, developers often encounter problems that wouldn’t arise in other programming languages. Passing context objects is one such issue. In most other languages, introducing global state or singletons isn’t a big deal, but Rust complicates simple issues.
The first solution might be to “only keep references to what’s necessary,” but experienced Rust developers know that this is nearly impossible to achieve. The borrow checker requires us to track the lifetime of each reference, and because lifetimes become generic parameters, they contaminate nearly every place that uses the type, making even simple experimentation difficult.
Here, I want to point out a problem that many inexperienced Rust developers may overlook. On the surface, “just using lifetimes” seems harmless:
```rust
struct Thing<'a> {
    x: &'a i32,
}
```
But the problem is that if we want to write a function like `fn foo(t: &Thing)`, it does not stay that simple: because `Thing` is generic over a lifetime, the signature has to become `fn foo<'a>(t: &Thing<'a>)`, or at the very least `fn foo(t: &Thing<'_>)`, and from there things only get worse.
If we then try to store a `Thing` inside another struct:
```rust
struct Potato<'a> {
    size: f32,
    thing: Thing<'a>,
}
```
then even if `Potato` itself does not really care about `Thing`'s borrow, Rust takes the lifetime seriously and forces us to keep threading it through everywhere `Potato` appears.
Reality is usually messier than these toy examples. Things get particularly awkward in Rust when a borrowed field later needs to go away. Suppose we have a struct that holds a reference, and during some refactoring we change it so the reference is no longer needed (a sketch of this kind of change follows below). The modified struct is not even allowed as written, because it leaves behind an unused lifetime parameter. This sort of churn exists in other languages too, but Rust's lifetimes routinely require developers to sink real time into resolving and debugging it.
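A minimal sketch of the kind of change being described (the type and field names are invented for illustration):

```rust
// Before: the struct borrows something, so it carries a lifetime parameter.
struct Inventory<'a> {
    owner_name: &'a str,
    capacity: usize,
}

// After the refactor the borrowed field is gone, but simply deleting it
// does not compile:
//
// struct Inventory<'a> {
//     capacity: usize,
// }
//
// error[E0392]: parameter `'a` is never used
//
// The lifetime has to be removed here and at every site that names
// `Inventory<'a>`, which is where the cascading refactor begins.
struct InventoryWithoutBorrow {
    capacity: usize,
}
```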
Deleting a lifetime in Rust means thoroughly removing it from every part of the code that mentions it, which often cascades into extensive refactoring. In my experience, a change that looks like a trivial tweak to a single lifetime can end up requiring modifications in as many as ten different places.
Furthermore, when you cannot simply maintain a reference to an entity, Rust offers an alternative: using Rc or Arc to share ownership. While effective, this method is often strongly opposed by some. After some time and experience, I found that best practices often involve quietly using them without drawing too much attention.
However, this method of sharing ownership is not always the ideal solution, especially in scenarios where performance is critical. Sometimes, we must adhere to the strategy of only obtaining references. In Rust’s game development, a top technique is “passing down references from the top”. Indeed, this method is highly effective, similar to the way props are passed in React. But this also means that we need to pass all necessary content down to every function.
While this sounds simple, it is not always easy to execute. Many insist that well-designed code never runs into trouble here, but whether code can always be designed "correctly," or whether that is just an inflated standard, is debatable. Fortunately, there is another option: create a context struct that bundles all the needed references and pass that around, as shown below:

```rust
struct Context<'a> {
    player: &'a mut Player,
    camera: &'a mut Camera,
    // ...
}
```
With this approach, each function in the game only needs to receive a single `c: &mut Context` argument. It looks like a perfect solution, but it has limits of its own. For example, if we want to run the player system while separately holding on to the camera, `player_system` still needs the whole `&mut Context`, and the two uses start to collide.
In programming, it’s common to encounter situations where you need to operate on multiple fields of a specific object within one function or scope. For example, in an update loop, you might write your code like this:
```rust
let cam = &mut c.camera; // borrows `*c` through the `camera` field
player_system(c);        // needs all of `*c` mutably again
cam.update();
```
However, this fails to compile with an error saying that `c` cannot be borrowed because it is already borrowed: the borrow of `c.camera` held by `cam` is still alive when `player_system(c)` asks for the whole context again.
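Putting the pieces together, a self-contained version of that failure looks roughly like this (the `Player`/`Camera` types are stand-ins; the offending line is commented out together with the error it produces):

```rust
struct Player { x: f32 }
struct Camera { target: f32 }

struct Context<'a> {
    player: &'a mut Player,
    camera: &'a mut Camera,
}

fn player_system(c: &mut Context<'_>) {
    c.player.x += 1.0;
}

fn update(c: &mut Context<'_>) {
    let cam = &mut c.camera; // mutable borrow of `*c` through one field
    // player_system(c);
    // ^ error[E0499]: cannot borrow `*c` as mutable more than once at a time
    cam.target = 0.0;
}
```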