Posted by  kovarex

Hello,
long time no see 🙂

We obviously have a lot to talk about when it comes to the game changes we recently made, or plan to make, but we don’t want to share any of it yet.

Still, there is currently a topic very relevant to us that we can share without revealing any specific changes to the game. Today’s post will be quite technical and related to programming, so if you just came for the game news, you can safely skip this one.

Uncle Bob

Now that there are only developers here, I can share my recent discovery of Uncle Bob and his really nice explanation of some of the fundamental principles related to programming, project management, and more. If you have 8.5 free hours on your hands, I propose you watch it, as there will be some references later on to what he mentions.

My general impression was that we keep the quality of our code quite high, and that we have a reasonably good work methodology. But in many places, we were actually victims of selective blindness. It is interesting how some pieces of code were just good from the start and stayed pretty good throughout all the years, even when they were expanded a lot… while some of the code just deteriorated heavily.

And the answer is explained with the metaphor of the wax foundation.

What is a wax foundation, and how is it related to programming, you might ask? My grandfather was a very enthusiastic bee-keeper. My childhood was spent in our garden, where you had to be careful where you stepped and where you sat down, and you could never leave anything sweet just lying around, because you would soon find a big pile of bees on top of it. I had to help him and learn about the bees from time to time, which I honestly hated, because I knew that I would never have any bees of my own. But he was right about one thing: everything you learn will be useful to you in one way or another.

One of the jobs you do around bees is that after the honey is taken away from them, you put a wax foundation into the hive, which looks like this:

Its primary function is that the bees build their comb evenly and quite fast, as it is just natural to follow the optimised structure that is already there. And this is exactly what happens with code that has a good and expandable design from the start.

On the other hand, there is code that either had a lazy original design, or was never expected to grow so much in complexity, and each change was just a small addition to the mess. Eventually we got used to the idea that this part of the code is just hell, and that making small changes is annoying. This implies that we just don’t like this part of the code, and we want to spend as little time as possible working with it. And the result is that the problem slowly spirals out of control.

When I put on the Uncle Bob glasses and started looking around, I quickly identified several problematic places like this. It is no coincidence that these places were eating away a disproportionately large amount of dev time, not only because making changes there is hard, but because they are full of regression bugs and generally are a never-ending source of problems.

This is the beautiful thing about having a company that isn’t on the stock market. Imagine you have a company that goes slower and slower every quarter, and then you confront the shareholders with the statement that the way to solve it is to do absolutely no new features for a quarter or two, refactor the code, learn new methodologies, etc. I doubt that the shareholders would allow that. Luckily, we don’t have any shareholders, and we understand the vital importance of this investment in the long run. Not only in the project, but also in our skill and knowledge, so we do better next time.

This is the timeline of the lines of code in Factorio

It would look pretty reasonable if the same number of people had been working on it from start to finish, but that is not the case. It was just me at the very start, and now there are 9 programmers. It could be explained by the game getting bigger and growing a lot of interconnected mechanics, which are harder to maintain. Or it could also mean that the density of the code improved a lot. But neither of these is enough to explain why having more programmers doesn’t result in faster development.

This indicates that the problems Uncle Bob describes are relevant to us, and that the solution is to improve the way we develop rather than just scaling the number of people. Once we have a nice clean foundation, hiring new programmers and getting them up to speed with the code will be much faster.

Let me now explain a few typical examples of the problems we had, and how we proceeded to fix them:

Fig. 1 – The GUI interaction

We wrote a lot about the GUI (for example FFF-216) and how we iteratively raised the bar of what we find acceptable, from both the user and programmer perspective. The common takeaway from the FFF and from the coding was that we always underestimated how complicated GUI logic/styles/layouting etc. can become. This implies that improving the way the GUI is written has large potential gains.

We are happy with the way the GUI objects are structured and laid out since the 0.17 update. But codewise, it still feels much more bloated than it should be. The main problem was the number of places you needed to touch to add an interactive element. Let me show you an example, a simple button used to reset presets in the map generator window.

In the class header:

class MapGeneratorGui
{
  ...

we had a button object definition:

...
IconButton resetPresetButton;
...

In the constructor of MapGeneratorGui, we needed to construct the button with parameters:

...
, resetPresetButton(&global->utilitySprites->reset, // normal
                    &global->utilitySprites->reset, // hovered
                    &global->utilitySprites->resetWhite, // disabled
                    global->style->toolButtonRed())
...

We needed to register as a listener of that button:

...
this->resetPresetButton.addActionListener(this);
...

Then, we needed to override the method of the ActionListener in our MapGeneratorGui class, so we could listen to the click actions:

...
void onMouseClick(const agui::MouseEvent& event) override;
...

And finally, we could implement the method, where we if/else through the elements we care about, to do the actual logic:

void MapGeneratorGui::onMouseClick(const agui::MouseEvent& event)
{
  if (event.getSourceWidget() == &this->resetPresetButton)
    this->onResetSettings();
  else if (event.getSourceWidget() == &this->randomizeSeedButton)
    this->randomizeSeed();
  ...
}

This is way too much boilerplate for one button with one simple action. We had over 500 places in the code where we registered actionListeners, so imagine the amount of bloat.

We had noticed that when we use lambdas for callbacks and similar things in the GUI, it tends to be much more pleasant to use. So what if we made it the primary way to do the GUI?

We decided to completely rewrite the way it works, so instead of adding listeners and filtering in the event catching functions, we can just specify:

this->resetPresetButton.onClick(this, [this]{ this->onResetSettings(); });

Which is a big improvement already, as adding and maintaining new logic only requires you to look at one place instead of several, and it is generally more readable and less error-prone.
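For illustration, the mechanism behind such an `onClick` can be sketched with `std::function` (this is a hypothetical stand-in, not the actual agui code; the real API also takes an owner pointer so callbacks can be unregistered when the owning widget is destroyed, which is omitted here):

```cpp
#include <cassert>
#include <functional>
#include <utility>
#include <vector>

// Hypothetical sketch of a lambda-based button, not the actual agui API.
class Button
{
public:
  // Register a callback to run whenever this button is clicked.
  void onClick(std::function<void()> callback)
  {
    this->callbacks.push_back(std::move(callback));
  }

  // Called by the event layer when a click lands on this button.
  void click()
  {
    for (const std::function<void()>& callback : this->callbacks)
      callback();
  }

private:
  std::vector<std::function<void()>> callbacks;
};
```

With something like this, the if/else chain in onMouseClick disappears: the event layer just calls click() on the widget under the cursor, and each widget already carries the logic it needs.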

And since we don’t need to hold the object pointer for comparisons, we can completely remove its definition from the class, and make it anonymous in many places in this fashion:

*this << agui::iconButton(&global->utilitySprites->reset,
                          global->style->toolButtonRed(),
                          this, [this]{ this->resetPreset(); })

Rewriting all the GUI internals (again) was a big task, but in the end it felt well worth it, as now I can’t imagine how we could stand doing it the old way. It also resulted in several thousand lines of code being removed.

The only way to go fast is to go well!

Fig. 2 – The manual building

There are several main goals to pursue when you try to make code cleaner. Removing code duplication is the first and biggest priority. Code that isn’t structured well, has functions that are too long, or has weird names is reasonably easy to fix, but if you have 5 versions of the same pile of code with slight changes here and there, it is the worst beast. It is just a question of time until bugfixes/changes are only applied to some of the variants, and it becomes less and less obvious whether the differences between the variants are intended or circumstantial.

The manual building logic is a monster, because of all the things it supports already:

Then, all of this logic needs to be multiplied by 2 (when you are lazy and copy-paste), as you can have normal building and ghost building.

And then, you multiply this whole code abomination by 2 again. Why? Because we also need to do all this logic in the latency hiding mode. That sounds bad already, but it isn’t even all of it: since this logic was continually patched and touched by different people throughout history, the core of the code was a crazy long method, with the code looking like the horizon mentioned by Uncle Bob.

Now imagine that you need to change something about this code, especially when you take into consideration that it naturally had many corner cases wrong, or fixed only in some variants. This is a great example of how lazy long-term design leads to poor productivity.

Long story short, this was approached like a hobby side project of mine that took weeks to finish, but in the end, all the duplications were merged, and the code is well structured and fully tested. Managing the code now requires a small fraction of the time compared to the previous state, because the reader is not required to read a huge pile of code just to get the big picture and be able to change anything.

This reminds me of a quote from Lou after a similar kind of refactoring: “Once we are done with this, it will be actually a pleasure to add stuff to this code.” Isn’t it beautiful? The code is not only more efficient and less buggy, it is also more fun to work with, and working on something enjoyable tends to go faster regardless of other aspects.

The only way to go fast is to go well!

Fig. 3 – GUI tests

No, we obviously didn’t get to this point without automated tests, and we have mentioned them several times already (FFF-29, FFF-288, and more). We try to continuously raise the bar of which code areas are covered by tests, and this led us to cover yet another area: the GUI. This aligns with the ever-repeating underestimation of the amount of engineering care the GUI needs. Having it not tested at all was part of this underestimation. How many times did it happen that we made a release, and it just crashed on something stupidly simple in a GUI, just because we didn’t have a test that would click the buttons? And in the end, it proved to not be hard at all to automate the GUI tests.

We just have a mode in which the testing environment is created with a GUI (even when tests are run without graphics). We declared some helper methods that allow a very simple definition of where we want the cursor to move, or what we want to click, like this:

TEST(ClearQuickbarFilter)
{
  TestScenarioGui scenario;
  scenario.player()->getQuickBar()[0].setFilter(ItemFilter("iron-plate"));
  CHECK_EQUAL(scenario.player()->getQuickBar()[0].getFilter(), ItemFilter("iron-plate"));
  scenario.click(scenario.gameView->getQuickBarGui()->getSlot(ItemStackLocation(QuickBar::mainInventoryIndex, 0)),
                 global->controlSettings->toggleFilter);
  CHECK_EQUAL(scenario.player()->getQuickBar()[0].getFilter(), ItemFilter());
}

The clicking method then calls the low-level input events, so all layers of event processing and GUI logic are tested. This is an example of an end-to-end test, which is a controversial topic, because some “schools” of test methodology say that everything should be tested separately. In this case, we should theoretically only test that clicking the button creates an InputAction to be processed, and then have an independent test of the InputAction working properly. I like this approach in some cases, but most of the time I really like that I can penetrate all layers of the logic with only a few lines of code. (more in the Test dependencies part)

The only way to go fast is to go well!

Fig. 4 – TDD – Test-driven development

I have to admit that I didn’t know what TDD really was until recently. I thought that it was some nonsense, because it sounds really impractical and unrealistic to first write all the tests for some feature (without the ability to try them or even compile them), and then try to implement something that satisfies them.

But that is not TDD, and it had to be shown to me in a “for dummies” way for me to realize how wrong I was.

So after the “aha” moment of realizing what TDD really is, I became an instant fan. I’m now putting a lot of effort into following the TDD methodology as much as possible, and into forcing it on others in the team as well. It feels slower to write tests even for simple pieces of logic that are just bound to be right, but the tests have proved me wrong several times already, and prevented annoying low-level debugging sessions.
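As a toy illustration of the red-green rhythm (not code from Factorio; `formatPlaytime` is a made-up example), the tests are written first, fail, and then just enough logic is written to satisfy them:

```cpp
#include <cassert>
#include <string>

// Step 2 (green): the minimal implementation, written only after the
// tests below existed and failed.
std::string formatPlaytime(int seconds)
{
  int hours = seconds / 3600;
  int minutes = (seconds % 3600) / 60;
  return std::to_string(hours) + ":" + (minutes < 10 ? "0" : "") + std::to_string(minutes);
}

// Step 1 (red): these tests came first, and initially didn't even compile,
// because formatPlaytime didn't exist yet.
void testFormatPlaytime()
{
  assert(formatPlaytime(0) == "0:00");
  assert(formatPlaytime(59) == "0:00");
  assert(formatPlaytime(60) == "0:01");
  assert(formatPlaytime(3660) == "1:01");
}
```

The point isn’t the function, it’s the order: the tests pinned down the rounding and zero-padding behaviour before a single line of the implementation was written.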

The only way to go fast is to go well!

Fig. 5 – Test dependencies

This is a continuation of the test dependency topic from GUI tests.

If tests should be truly independent, then a test of C should use mocks of A and B, so that the test of C doesn’t depend on the systems A + B working correctly. The consensus seems to be that this leads to more independent design, etc.

This might be applicable in a lot of cases, but I believe that trying to use this approach everywhere is close to impossible, that it would lead to a lot of clutter, and I’m not the only one having a problem with this.

For example, let’s say that we have a test of electric poles connecting properly on the map. I can hardly test that when I don’t know whether searching for entities on the map works properly.

My conclusion is that having dependencies like this is fine, as long as the dependencies are also tested. The problem comes when you break something and a lot of tests suddenly start to fail. When you make small changes, the yes/no indication of tests is enough, but that isn’t always an option, especially when you refactor some internal structure, in which case you kind of expect to break a lot of stuff, and you need a way to fix it step by step once the code compiles again.

If the tests don’t have any special structure, the situation where 100 tests all fail at the same time is very unfortunate: all you are left with is to pick some test semi-randomly and start debugging it. And it is really misleading when some complicated test case fails in the middle, and you spend a long time debugging it, only to realise that the failure is caused by some very simple low-level bug.

The goal is pretty simple: I want to be given the simplest failing case of my change.

For this, I implemented a simple test dependency system. The tests are executed and listed in such a way that when you get to debug and check a test, you know that all of its dependencies are already working correctly. I tried to find out whether others use test dependencies as well, and how they do it, and surprisingly I didn’t find anything.
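A minimal sketch of what such a system can look like (hypothetical code, not our actual test framework): tests declare their dependencies by name, the runner walks them in dependency order, and a test whose dependency failed is skipped rather than reported, so the failure list starts with the most low-level breakage:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <vector>

struct Test
{
  std::string name;
  std::vector<std::string> dependencies; // names of tests that must pass first
  std::function<bool()> run;             // returns true on success
};

// Runs tests so that a test only executes when all its dependencies passed.
// Assumes the input list is already topologically ordered (dependencies
// listed before dependants). Returns the names of tests that actually
// failed, not the ones that were merely skipped.
std::vector<std::string> runWithDependencies(const std::vector<Test>& tests)
{
  std::map<std::string, bool> passed;
  std::vector<std::string> failures;
  for (const Test& test : tests)
  {
    bool ready = true;
    for (const std::string& dependency : test.dependencies)
      if (!passed[dependency])
        ready = false; // dependency failed or was skipped -> skip this test
    bool result = ready && test.run();
    passed[test.name] = result;
    if (ready && !result)
      failures.push_back(test.name);
  }
  return failures;
}
```

With this structure, a broken low-level entity search test fails alone, instead of dragging a hundred pole-connection tests down with it.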

This is an example of a test dependency graph related to electric poles:

I built and used this structure when refactoring away the duplication of the ghost/real pole connection logic, and it certainly sped up the process of making it work properly. I’m confident that this is the way for us to structure tests for the foreseeable future. Not only does it make test results more useful, it also forces us to split test suites into smaller, more specialised units, which certainly helps as well.

The only way to go fast is to go well!

Fig. 6 – Test coverage

When Boskid joined the team as the QA guy, one of his main roles was making sure that any discovered bug is first covered by a test before it actually gets fixed, and generally improving our test coverage. This made us much more confident in releases, and we had fewer regression bugs, which directly translates into long-term efficiency. I strongly believe that this clearly supports what Uncle Bob is saying: working with tests feels slower, but it is actually faster.

Test coverage is an indicator of which parts of the code are executed when the application runs (which in this context usually means running the tests). I had never used a tool to measure test coverage before, but since it was one of the topics Uncle Bob talked about, I tried it for the first time. I found a tool that works only on Windows but requires the least amount of setup, OpenCppCoverage, which provides an HTML output like this:

It is immediately visible that both of the conditional commands are not triggered in tests. This basically means that either the code just isn’t tested, so it should be covered, or it is dead code, so it should be removed. I’m quite confident (again) that using this can help us a lot to write clean, high-quality code.

The only way to go fast is to go well!

Conclusion

The only way to go fast is to go well!

If you are moved by this, if your reaction when reading it is “I wish my boss had these priorities”, consider applying for a job at Wube!


On the 37th week during which an issue of Alt-F4 is being released, we present: Issue #37! What a surprise! In it, long-time contributor pocarski is back with yet more very approachable explanations of how you can spice up and optimize your base with but a few combinators!

Combinators 2: Augmented Logistics pocarski

Several weeks ago, I wrote an article about using combinators to improve specific builds. This time we’ll take a look at ways to apply the circuit network more generally, to make your whole factory more efficient. We will look at the pitfalls of conventional design, we will come up with ways to solve them, and we will implement those solutions using the circuit network. Such improvements can be done to both bots and trains, and the circuitry is so simple it almost doesn’t require decider combinators at all. Let’s dive right in!

Written by pocarski, T-A-R, edited by stringweasel, Nanogamer7, Conor_, Therenas, Firerazer

After a quick one-week break, Alt-F4 is back with issue #31. In it, pocarski returns to talk about yet more ways to build computer logic in Factorio, featuring combinators this time, which turn out to be simpler to use than you’d think! Afterwards, Big Community Games announce another exciting event of theirs, this time with Industrial Revolution as the central focus.

Combinators and why you shouldn’t fear them pocarski

There are many technologies in the research tree that aren’t necessary to finish the game, and are therefore often sidelined. Some of those are perfectly understandable, for example military tech on peaceful mode. Others are sometimes not even considered, even though they can provide exceptional improvement. One such technology is the circuit network, which I will explore in this article.

There are four main components of the circuit network: wires, constant combinators, decider combinators and arithmetic combinators.

The 3 combinator types connected with wires

Constant combinators continuously output whatever you set them to (and also don’t need power); decider combinators output some signal when a certain logical condition is met; arithmetic combinators perform mathematical operations. Wires act like a sort of “signal cloud”, where all signals being output into a wire can be read by everything connected to it. Red and green wires have identical functionality, but can both be connected to the same device without interfering with each other.

Basic elements

Let’s look at three very simple single-combinator modules which are widely used. These modules are: the pulser circuit, the RS latch and the counter. We’ll start with the pulser, which looks like this:

Pulser circuit using Arithmetic combinator

The pulser is the easiest to understand. The input is immediately passed to the output through the red wire, and the inverted input is added onto the same red wire after the standard one tick of combinator delay. Both values being on the same wire cancel each other out, meaning the output is exactly equal to the input, but only lasts for a single game tick. Here, use of the “each” signal makes sure that the circuit can take any signal as input. If you wish to make it signal-specific, you can replace the “each” in input and output with the desired signal. This circuit has a truly colossal amount of uses, especially if used in combination with a counter.
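The cancellation is easy to verify with a quick simulation (written here in C++ purely for illustration, treating a signal as one number per tick): the output is the current input plus the negated input from one tick earlier, so only changes survive:

```cpp
#include <cassert>
#include <vector>

// Simulates the pulser's output wire: raw input plus the inverted input
// delayed by the standard one tick of combinator processing.
std::vector<int> simulatePulser(const std::vector<int>& input)
{
  std::vector<int> output;
  int previous = 0; // the inverted branch has seen nothing yet
  for (int value : input)
  {
    output.push_back(value - previous); // input(t) + (-input(t - 1))
    previous = value;
  }
  return output;
}
```

An input that switches on to 5 and stays there produces a single 5 on the tick of the change, and zeros afterwards. (Switching the input off produces a one-tick negative pulse the same way, which is worth remembering when wiring the output to something that reacts to any non-zero signal.)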

RS Latch made with a Decider combinator

Next is the RS latch. Its inputs are either 1 “S” signal or 1 “R” signal, standing for Set and Reset. When it receives an “S” signal, the condition of the combinator becomes true. It is looped to itself, so the 1 “S” that it outputs will be added to the input, and keep the condition true even after the original “S” input turns off. Similarly, when it receives an “R” input, the condition becomes false, turning off the “S” output and breaking the cycle. This circuit is best used for systems where you want some kind of hysteresis, where one state triggers the “S” input, and another state triggers the “R” input.
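The same behaviour in simulation form (illustrative C++; this models one common single-decider construction, with the condition “S > R” and an output of 1 “S” fed back to the input):

```cpp
#include <cassert>

// Models a single-decider RS latch: condition "S > R", output 1 "S",
// with the output wired back onto the combinator's own input.
class RSLatch
{
public:
  // Advances one tick; returns the latch output (1 = set, 0 = reset).
  int tick(int setInput, int resetInput)
  {
    // The decider sees the external inputs plus its own fed-back output.
    int s = setInput + this->output;
    int r = resetInput;
    this->output = (s > r) ? 1 : 0;
    return this->output;
  }

private:
  int output = 0; // the fed-back "S" signal
};
```

A single-tick S pulse sets the latch, the feedback loop holds it on its own, and a single-tick R pulse breaks the cycle.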

Counter circuit with Decider combinator

Finally, the counter. Structurally it is identical to the RS latch, but this time the output is set to “input count of everything”. This means that while the decider’s condition is followed, it will keep giving its own outputs to itself, thus remembering them. For every tick it receives a signal, it will increment the value of that signal in its memory by the amount received. As soon as the condition is broken, the memory is cleared, since the decider no longer allows signals to pass. Similarly to the pulser, if you wish to make it remember only one signal, replace the “everything” in the output with the desired signal. This circuit, just like the pulser, has an immense number of uses, but the most popular one is to keep track of item amounts.
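The counter’s memory loop can be simulated the same way (illustrative C++; `enabled` stands in for the decider’s condition, and a single signal is tracked for simplicity):

```cpp
#include <cassert>

// Models the counter: a decider with output "input count", looped back
// onto its own input, so it accumulates while its condition holds.
class Counter
{
public:
  // One tick: adds the external input to memory while enabled; when the
  // condition breaks, nothing passes through and the memory clears.
  int tick(bool enabled, int input)
  {
    this->memory = enabled ? this->memory + input : 0;
    return this->memory;
  }

private:
  int memory = 0;
};
```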

Basic examples

Now, let’s explore some cases where each of these modules might come in handy.

Say you have a nuclear reactor blueprint where the extraction of a used fuel cell triggers the insertion of a fresh one. Such a design would have to be manually started since reactors are built empty. What you ideally want to add is a circuit which, once all fuel cell chests have items in them, triggers the fueling inserters exactly once. This is where the pulser comes in. Have a combinator in each chest checking if there are enough items in it, and then wire all of those together into a single combinator that checks if all chests are ready. This decider then outputs a “used fuel cell” signal into a pulser, which is wired to every fueling inserter of the reactor. This causes all fueling inserters to trigger exactly once the moment there is fuel available to all of them, starting the reactor automatically. By extension, this also makes the reactor automatically restart if it ever runs out of fuel.

Reactor fueling circuit setup

Next, a classic example: backup power. Imagine you have an array of accumulators and you want to activate your boilers if the stored energy gets too low. You could just wire a switch directly to an accumulator and tell it to activate if accumulators are below, say, 20% charge, but that would just cause it to rapidly switch on and off, keeping the accumulators at exactly 20% all the time. Instead, you should use an RS latch. Have a combinator output “S” when accumulator charge is below 20%, and another one output “R” when charge is above 70%. Hook them both to the latch, and wire the output of the latch to a switch set to activate if S > 0. The switch will activate as soon as charge drops below 20%, and keep the backup running until charge rises above 70%.

Backup power circuit setup

Finally, a process that many fear to set up: uranium enrichment. We need to look at 3 inserters: input, output, and recycling. That last one isn’t a single inserter, but we only care about the first link of the inserter chain. The input inserter doesn’t need any control logic, it simply grabs 3 items of U-238 and loads them whenever they’re needed. The output inserter must be disabled while recycling happens, to not take out any of the catalyst items. The recycling inserter must take out exactly 40 U-235, as well as 2 U-238.

The recycling inserter is receiving a constant signal of U-238, which makes it blacklist it. It begins to take out U-235, and increments the counter by the grabbed amount every time it does. The inserter is also receiving a constant signal of -39 U-235, which doesn’t affect the filter. Eventually, the inserter will be reading 40 U-235 from the green wire, and -39 U-235 from the red wire. It now sees a positive total amount of U-235, and since U-235 is earlier in the signal list, it takes priority over the U-238 signal. The inserter now blacklists U-235, which means it switches to taking out the 2 items of U-238.

This does two things: it clears the counter and triggers the output inserter, which now has no choice but to take out the remaining U-235. The 2 recycled U-238 items will be inserted at the start of the next cycle. U-238 recycling doesn’t need any extra logic, because the input inserter is limited to a maximum of 3 items, leaving the other 2 spots for the recycled uranium.

Kovarex enrichment circuit setup

Conclusion

Each of the given examples can be improved and made more specific to the user’s needs. Sometimes it can be done with basic math and logic, other times you’d need to add a couple more basic modules. For example, you could add a second counter to the enrichment circuit to prevent the centrifuge from overfilling and stalling if there’s some U-235 in the input stream.

Every single milestone in circuit networks was worked towards step by step, by splitting the whole into parts, and then splitting the parts even further. After all, that’s how modern computers were developed – make a logic gate out of transistors, then make a memory latch and an adder out of logic gates, then make RAM and an ALU out of memory latches and adders, then make a computer out of those. If you can manage to sometimes think “hey, I’ve solved this before”, then you can achieve anything with circuits.

Full steam ahead! T-A-R

Big Community Games is happy to announce another Factorio MMO event. A very ore-rich piece of Nauvis has been scouted, bringing us a great opportunity to launch a rocket together this very Saturday! The theme of this party will be Steampunk. Deadlock989’s Industrial Revolution 2 will bring all the steam and smouldering fuel we love, and possibly even a bit more.


IR assemblers as featured in FFF #311

Compared to vanilla, our toolbox gets expanded with all kinds of technologies. New materials and processes will make crafting the rocket a bit more complex in a very enjoyable way. The event page has the full modset and further details. The server will go live in the regular multiplayer lobby on Saturday at 18:00 UTC/GMT.

Visit our Discord for chat- and voice channels. Engineers are already gathering and compiling plans. BCG also would love to welcome people who would like to participate in organizing similar events in the future.

Get your exoskeletons greased up, and enjoy the event!

Contributing

As always, we’re looking for people that want to contribute to Alt-F4, be it by submitting an article or by helping with translation. If you have something interesting in mind that you want to share with the community in a polished way, this is the place to do it. If you’re not too sure about it we’ll gladly help by discussing content ideas and structure questions. If that sounds like something that’s up your alley, join the Discord to get started!

Rail grid

Written by Ph.X, edited by stringweasel, Nanogamer7, Conor_, Therenas, Firerazer

This fine week in March, first-time contributor Ph.X talks about their very compartmentalized system for laying out a base using isolated modules and connecting them through a Logistic Train Network. Taking inspiration from software development and the lessons learned there, Ph.X uses the concepts of Modular Programming to their advantage.

Also, in other news: if you don’t browse reddit or the forums or even our Discord regularly, we now offer the ever-popular option of an email list that you can subscribe to! Just enter your email here and you’ll be notified every Friday on release of the newest issue. We will of course only ever use this for Alt-F4 posts, and not spam you with irrelevant crap.

Recipe-Oriented Factorio Life Ph.X

Factorio has a complex network of production lines (i.e. spaghetti) that make the game fun and challenging. It’s a complex engineering problem with similar challenges to software engineering, so I think it is worthwhile to use some real-life experience to improve the game experience.

What is ‘ROFL’

People with programming experience should have heard of Modular Programming, which is the theory that Recipe-Oriented Factorio Life (ROFL) aims to mimic. Modular programming is a software design technique that emphasizes separating the functionality of a program into independent, interchangeable modules, such that each contains everything necessary to execute a single aspect of the desired functionality. In ROFL, we divide the whole factory into independent, interchangeable subfactory modules, such that each contains everything necessary to process only one recipe of the desired factory.