BcacheFS is a B-tree filesystem, somewhat like BTRFS or ZFS, but with a bunch of extra features not seen in these filesystems, such as storage-tiering.
If you're not aware what makes BTRFS or ZFS unique, it's their B-tree filesystem layout. A B-tree filesystem is a type of file system that uses a B-tree data structure to organize and manage data on storage devices like hard drives or solid-state drives (SSDs). The B-tree is a self-balancing, hierarchical data structure that allows for efficient insertion, deletion, and retrieval of data. In the context of file systems, B-trees are used to index and manage the allocation of data blocks and metadata. B-tree filesystems are known for their scalability and ability to handle large amounts of data efficiently. They are designed to provide features like snapshotting, data-integrity checks, and support for advanced storage-management operations.
A Copy-On-Write (COW) filesystem is a type of filesystem that implements a data storage strategy where data is not overwritten directly when it is modified but is instead copied to a new location. This strategy ensures data integrity and allows for efficient snapshots and versioning.
Here's how it works, and how it connects to B-tree filesystems:
The connection to B-tree filesystems lies in how they manage the metadata and pointers to data blocks. B-trees are well-suited for COW filesystems because they provide an efficient way to manage and update these pointers. When a change is made to a file or directory in a COW filesystem, the B-tree is updated to reflect the new location of the modified data, ensuring that the filesystem remains consistent and that snapshots can be efficiently created.
I'm glad you asked! One of the primary things that comes to mind is storage tiering, but it also implements native encryption and (tiered) compression, and it can work with devices of various sizes and performance levels, making it much more flexible than RAID arrays or even BTRFS/ZFS.
BcacheFS is a filesystem that incorporates storage tiering as one of its features. Storage tiering is a technique used to manage data across different storage devices based on their performance characteristics and usage patterns. Here's an explanation of BcacheFS's storage tiering mechanism:
BcacheFS's storage tiering mechanism allows you to manage data efficiently across different storage devices by caching copies of data on faster devices and providing options for moving data between storage tiers based on your desired caching strategy, whether it's for read acceleration or optimizing write operations. This flexibility is great to have when you need more knobs for tuning performance and efficiency. ZFS has some caching methods, like having a separate Intent Log and an L2 Adaptive Replacement Cache, but BcacheFS's method allows you to make more tiers, and also to assign e.g. compression to the background tiers, giving you both the advantage of more efficient storage and more performant caching.
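If you want to poke at those knobs on a live filesystem, the manual notes that filesystem options are also exposed at runtime through sysfs. A minimal sketch, assuming the sysfs layout the manual describes (paths and writable options may differ between versions):

```bash
# Filesystem-wide options live under /sys/fs/bcachefs/<uuid>/options/
ls /sys/fs/bcachefs/*/options/

# e.g. send new writes to the ssd tier and let rebalance move data to hdd later
echo ssd | sudo tee /sys/fs/bcachefs/*/options/foreground_target
echo hdd | sudo tee /sys/fs/bcachefs/*/options/background_target
```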
Besides storage tiering, native encryption, and tiered compression, BcacheFS offers several unique features and advantages that make it stand out from other filesystems like BTRFS and ZFS:
Like BTRFS/ZFS and RAID5/6, BcacheFS supports erasure coding; however, it implements it a little differently than the aforementioned ones, avoiding the "write hole" entirely. It currently has a slight performance penalty, because the allocator tweaking that would make bucket reuse possible in these scenarios is still missing, but it seems to be functional. Here's the manual's take on it:
2.2.2 Erasure coding
bcachefs also supports Reed-Solomon erasure coding - the same algorithm used by most RAID5/6 implementations. When enabled with the ec option, the desired redundancy is taken from the data replicas option - erasure coding of metadata is not supported.

Erasure coding works significantly differently from both conventional RAID implementations and other filesystems with similar features. In conventional RAID, the "write hole" is a significant problem - doing a small write within a stripe requires the P and Q (recovery) blocks to be updated as well, and since those writes cannot be done atomically there is a window where the P and Q blocks are inconsistent - meaning that if the system crashes and recovers with a drive missing, reconstruct reads for unrelated data within that stripe will be corrupted.

ZFS avoids this by fragmenting individual writes so that every write becomes a new stripe - this works, but the fragmentation has a negative effect on performance: metadata becomes bigger, and both read and write requests are excessively fragmented. Btrfs's erasure coding implementation is more conventional, and still subject to the write hole problem.

bcachefs's erasure coding takes advantage of our copy on write nature - since updating stripes in place is a problem, we simply don't do that. And since excessively small stripes is a problem for fragmentation, we don't erasure code individual extents, we erasure code entire buckets - taking advantage of bucket based allocation and copying garbage collection.

When erasure coding is enabled, writes are initially replicated, but one of the replicas is allocated from a bucket that is queued up to be part of a new stripe. When we finish filling up the new stripe, we write out the P and Q buckets and then drop the extra replicas for all the data within that stripe - the effect is similar to full data journalling, and it means that after erasure coding is done the layout of our data on disk is ideal.

Since disks have write caches that are only flushed when we issue a cache flush command - which we only do on journal commit - if we can tweak the allocator so that the buckets used for the extra replicas are reused (and then overwritten again) immediately, this full data journalling should have negligible overhead - this optimization is not implemented yet, however.
Of course it also brings a lot of the good stuff that the aforementioned filesystems have, such as copy-on-write snapshots, full data and metadata checksumming, and multi-device support with replication.
Of course that's a lot of nice marketing for the thing, but if you're like me you'd want to immediately dive into how to use it and set it up at home. I've done exactly that with my secondary server that was previously running a ZFS array; it's now a fresh Fedora Rawhide install (I'll explain why after this) that runs BcacheFS, which seems to be handling the workload flawlessly thus far.
First things first: installing it! At the time of writing, it's a little complicated to get, because you need the right kernel to be able to run it, or you need to compile your own, or you need to use the userspace FUSE-based tools (probably slow; I haven't tested them myself).
The good news is that from kernel 6.7 it will be in the mainline kernel, so if you're in the future and already have Linux kernel 6.7 installed, you should be able to just install bcachefs-tools (NOT bcache-tools!) and get it to run.
Personally I found that the easiest and most stable way to get the latest 6.7 kernel is by installing Fedora Rawhide, the rolling development version of Fedora that automatically grabs the bleeding-edge versions of software from everywhere, including the 6.7 kernel.
Note that bcache and bcachefs are two entirely separate products that should not be confused; you may see them both showing up in packages sometimes, so make sure you select the bcachefs options and not bcache.
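On Fedora Rawhide that boils down to something like the following (a sketch; package and module availability depend on your kernel build):

```bash
# Check you're on a 6.7+ kernel
uname -r

# Install the userspace tools (again: bcachefs-tools, NOT bcache-tools)
sudo dnf install bcachefs-tools

# bcachefs should show up here once the filesystem module is available/loaded
grep bcachefs /proc/filesystems
```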
Straight from the manual:
To format a new bcachefs filesystem use the subcommand bcachefs format, or mkfs.bcachefs. All persistent filesystem-wide options can be specified at format time. For an example of a multi-device filesystem with compression, encryption, replication and writeback caching:
```bash
bcachefs format --compression=lz4 \
    --encrypted \
    --replicas=2 \
    --label=ssd.ssd1 /dev/sda \
    --label=ssd.ssd2 /dev/sdb \
    --label=hdd.hdd1 /dev/sdc \
    --label=hdd.hdd2 /dev/sdd \
    --label=hdd.hdd3 /dev/sde \
    --label=hdd.hdd4 /dev/sdf \
    --foreground_target=ssd \
    --promote_target=ssd \
    --background_target=hdd
```
The above will give you a filesystem with encryption, 2 replicas of every file, and an SSD and HDD storage tier, the SSD one being used for caching purposes.
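Mounting a multi-device filesystem like this looks a little unusual: per the manual, you pass all member devices in a single device string, joined with colons (device names taken from the example above):

```bash
mount -t bcachefs /dev/sda:/dev/sdb:/dev/sdc:/dev/sdd:/dev/sde:/dev/sdf /mnt
```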
If you want something more specific, say for example you only want your background tier to be compressed, you can use one of the other options it gives you: `--background_compression=zstd` will make sure the background tier is compressed with zstd compression.
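For example, bolting that onto a trimmed-down version of the earlier format command might look like this (same hypothetical device layout as above):

```bash
bcachefs format --background_compression=zstd \
    --replicas=2 \
    --label=ssd.ssd1 /dev/sda \
    --label=hdd.hdd1 /dev/sdc \
    --foreground_target=ssd \
    --promote_target=ssd \
    --background_target=hdd
```

The foreground (SSD) tier stays uncompressed for speed, while everything the rebalance thread moves down to the HDD tier gets stored zstd-compressed.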
The manual has the full list of available options; substitute spaces in the option names with _ to be able to use them on the command line (e.g. background compression becomes `background_compression`).
Probably my main source of information for all of this was the official user manual, which has all the information you need to understand what's going on, down to the nitty-gritty, while still being fairly to the point for the basics of operation.
Probably one of the most logical questions to ask first is "why even bother?"
And while I agree that a magical one-size-fits-all, fixes-all solution does not exist, there are definitely benefits to using these tools in your daily work and even spare-time projects.
Do keep in mind that I'm listing the examples below without their caveats, as those are for the next chapter.
Here are a few examples:
There are, of course, some obvious problems with these tools, especially in these early days of a technology that is arguably still in its infancy.
Here are some of the obvious problems:
The problems listed above are not the only problems, however, and while you might've guessed that a lot of the above were the case, here are some examples of things that a lot of people accidentally mess up, even the developers of these tools!
The list of problems shown above may look daunting, and may make you wonder if AI is "even worth it", but the truth is that the majority of these problems do not just apply to AI/ML; they also apply to humans and the interactions with others around you.
As long as you keep in mind to anonymize the data you input, or make sure you use tools that explicitly state that they do not log or use the input, it's quite fine to use these tools in basically any context, even with the most confidential or personal information, just as much as you'd otherwise use computers and software tools to achieve your goals.
In addition to the things mentioned in the possibilities section, here are a few real-world examples of what you can do:
I am of course not able to list every tool, API, software project or method out there, but if you have suggestions, feel free to let me know at https://mastodon.derg.nz/@anthropy, and I will try to add them to this page.
Non-exhaustive list of selfhosted code completion tools:
Any suggestions are welcome!
Motor oil is the lifeblood of any vehicle's engine, providing necessary lubrication to prevent friction and wear among moving parts. One critical aspect of motor oil is its viscosity, which essentially refers to its resistance to flow. The viscosity of motor oil is denoted by a unique system established by the Society of Automotive Engineers (SAE), which is crucial for ensuring that the oil flows smoothly under a variety of temperature conditions. In this post, we'll delve into the nitty-gritty of motor oil viscosity, exploring the SAE standards and how they impact your engine's performance.
Motor oil viscosity ratings, such as 0W30 or 5W30, are broken down into two parts:

- The number before the "W" (for winter) indicates the oil's cold-temperature behavior: the lower it is, the lower the temperature at which the oil still flows well enough to crank and protect the engine.
- The number after the "W" indicates the oil's viscosity at operating temperature, measured at 100°C: the higher it is, the thicker the oil remains when hot.
The SAE standards define viscosity ratings through a series of standardized tests:

- The winter ("W") grade is derived from low-temperature tests, such as cold-cranking and low-temperature pumping measurements.
- The high-temperature grade is derived from the oil's kinematic viscosity measured at 100°C, expressed in centistokes (cSt).
A glance at the table below helps visualize the correlation between minimum operating temperatures and high-temperature viscosity ratings across different oil types.
| Min. operating temp. | 5.6 - 9.29 cSt | 9.3 - 12.49 cSt | 12.5 - 16.29 cSt | 16.3 - 21.8 cSt | 21.9 - 26.09 cSt |
|---|---|---|---|---|---|
| -35°C | 0W20 | 0W30 | 0W40 | 0W50 | 0W60 |
| -30°C | 5W20 | 5W30 | 5W40 | 5W50 | 5W60 |
| -25°C | 10W20 | 10W30 | 10W40 | 10W50 | 10W60 |
| -20°C | 15W20 | 15W30 | 15W40 | 15W50 | 15W60 |
| -15°C | 20W20 | 20W30 | 20W40 | 20W50 | 20W60 |
In this table, the left-hand column shows the approximate minimum operating temperature associated with each winter (W) rating, while the header row shows the kinematic viscosity range at 100°C that each high-temperature rating corresponds to.
The SAE (Society of Automotive Engineers) viscosity ratings on motor oil containers are often associated with the oil's kinematic viscosity measured in centistokes (cSt) at specific temperatures. Kinematic viscosity is a measure of a fluid's resistance to flow under the force of gravity. It's crucial to note that the SAE ratings like 20, 30, 40, etc., correspond to a range of viscosity values, not a single absolute value.
For instance, SAE 30 oil corresponds to a kinematic viscosity range of 9.3 to 12.5 cSt at 100°C. Similarly, SAE 40 corresponds to a kinematic viscosity range of 12.5 to 16.3 cSt at 100°C.
This range-based classification helps accommodate slight variations in oil formulations while still ensuring that the oil provides a consistent level of protection and performance under specified temperature conditions. The SAE viscosity grading system provides a standardized framework for comparing oils, ensuring that motor oils provide reliable, predictable performance across a wide range of operating conditions.
Understanding the SAE viscosity ratings and how they correlate with your engine's performance under varying temperature conditions is critical for ensuring its longevity and efficient operation. By choosing the right motor oil, you not only safeguard your engine's performance but also enhance its fuel efficiency and reduce emissions. So, the next time you are up for an oil change, a closer look at the SAE ratings can guide you towards making an informed decision.
Introduction: Handling multiple branches and managing changes in Git can initially seem daunting, especially when working with remote repositories like GitLab. However, with a systematic approach, managing your code becomes a breeze. In this post, we'll walk through a scenario and provide a step-by-step guide to effectively manage and merge your local changes.
Scenario: Let's consider a situation where you've made several changes in your local Git repository. You then check out a specific branch, update some other file, and commit that specific file. Online on GitLab, you merge that branch back into the main branch. Now, you want to check the other files that you changed into the main branch on your local Git repository. How would you go about doing this? Here's a breakdown of the steps to follow:
1. Stash your uncommitted local changes so nothing gets lost:

```bash
git stash
```

2. Switch to the main branch and pull the latest changes (including the merge you completed on GitLab):

```bash
git checkout main
git pull origin main
```

3. Re-apply your stashed changes on top of the updated main branch:

```bash
git stash pop
```

4. Create a new branch for the remaining changes:

```bash
git checkout -b new-branch-name
```

5. Stage and commit the files you want to bring into main:

```bash
git add file1 file2 ...
git commit -m "Your commit message"
```

6. Push the new branch to the remote repository:

```bash
git push origin new-branch-name
```

7. On GitLab, create a merge request to merge new-branch-name into main. Review the changes, and once you are satisfied, complete the merge.

8. Finally, check out your local main branch and pull the latest changes from the remote repository to ensure your local main branch is up-to-date:

```bash
git checkout main
git pull origin main
```
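As a quick sanity check afterwards, you can eyeball the recent history to confirm the merge from GitLab landed on your local main:

```bash
# Show the last few commits, including the merge commit
git log --oneline --graph -n 5
```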
Conclusion: By following this sequence of steps, you can manage and merge your changes effectively in Git and GitLab. Keeping a clear history and organization is crucial for managing your repository efficiently, both for you and your team. Remember, practice makes perfect, so don't hesitate to experiment and learn from each experience to become proficient in Git.
Original prompt:
Could you expand this into an article? feel free to augment it with your own conclusions and ideas extrapolating from this to make it longer, I think it's a complex thing that people need a lot of context on to properly understand:
People are weird and I often see mentioned that you can't understand them, but I don't necessarily think that's true, it's just that, it's really hard to see that when everyone has their own idea of the world, how they 'collectively' see it, because they.. don't. there's no collective sense of 'this is how everything needs to go', we actually debate that and then put that on lists somewhere, but how every individual sees it is basically random with a bias towards their direct environment.
I'm not sure if it fully conveys my message, but I guess it's worth posting. It's also more a thought than something I hinge hard beliefs on, so take all of the below with a healthy dose of salt or salt substitutes:
Understanding Human Complexity: The Individual and the Collective
Human beings are, without a doubt, one of the most intricate and multifaceted species on Earth. Their behaviors, beliefs, and perceptions are shaped by a myriad of factors, from genetics to environment, from culture to personal experiences. It's often said that understanding humans is an impossible task, but is it really? Or is it just that the complexity of human nature makes it challenging to grasp the full spectrum of human thought and behavior?
At first glance, it might seem that there's a collective understanding of the world. After all, societies have norms, values, and shared beliefs that guide behavior. Laws are enacted based on societal consensus, and cultural practices emerge from shared histories and experiences. However, upon closer examination, it becomes evident that this "collective understanding" is more of a negotiated consensus than a unanimous agreement.
Every individual has a unique perspective, shaped by their upbringing, experiences, and personal beliefs. While there might be overarching themes and commonalities, the nuances of individual perspectives can vary widely. This means that even within a seemingly homogenous group, there can be a vast array of opinions and beliefs.
One of the most significant influencers of individual perspective is the environment. The family we're born into, the culture we're immersed in, the experiences we have: all of these play a pivotal role in shaping our worldview. A person raised in a bustling urban environment might have a vastly different perspective on life than someone brought up in a remote village. These environmental factors create biases, which in turn influence our perceptions of and reactions to the world around us.
While it might seem that individual perspectives are random, they're more accurately described as being influenced by a unique combination of factors. It's like a personal recipe: a dash of upbringing, a sprinkle of personal experiences, a pinch of cultural background, and so on. While the exact mix might differ from person to person, the ingredients are often similar. It's the proportions and interactions of these ingredients that create the rich tapestry of human perspectives.
One of the reasons why it's challenging to understand humans is the inherent difficulty in communication. Language, while a powerful tool, is often inadequate in capturing the nuances of human emotion and thought. Misunderstandings arise not just from the words we use but from the myriad of non-verbal cues and cultural contexts that accompany them. This is why debates and discussions are so crucial: they allow for the clarification of ideas and the bridging of understanding gaps.
While it might be tempting to label humans as "weird" or "incomprehensible," the reality is that the complexity of human nature is what makes us fascinating. The diversity of perspectives and beliefs is not a hindrance but a strength, allowing for innovation, creativity, and growth. By recognizing and celebrating these differences, we can move closer to a world where understanding is not just possible but celebrated.
I haven't used ChatGPT to detail out blogs yet, but it seems like a nice way to turn information snippets into something more useful, because I can't always be bothered to write a full blog post for something I can explain in two lines. So I think I'm gonna start doing that more.
Without further ado:
When using jwilder's nginx-proxy for containerized web applications like Mastodon, you might find that the streaming endpoint doesn't work out of the box. This is because the real-time streaming functionality of Mastodon requires WebSockets, and some additional configuration is needed to route these requests correctly.
Here's a step-by-step guide to get it working:
Identify the Streaming Service Endpoint: Mastodon separates its services, and one of them is the streaming service, typically running on port 4000. Ensure you have this service set up and running in a container.
Custom Configuration for nginx-proxy: jwilder's nginx-proxy allows for custom configuration snippets via the `vhost.d` directory. These configurations can be used to modify the behavior of Nginx for specific virtual hosts.
Create a file named after your virtual host, with the suffix `_location`. For a Mastodon instance on mastodon.derg.nz, the file should be named `mastodon.derg.nz_location`. Place it in the `vhost.d` directory of your nginx-proxy setup, with contents like:

```nginx
location /api/v1/streaming {
    proxy_pass http://streaming:4000; # "streaming" should match the name of your Mastodon streaming service container
    proxy_set_header Host $host;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Proxy "";
    proxy_http_version 1.1;
}
```
Ensure the `proxy_pass` directive points to the name of your Mastodon streaming service container and the correct port (commonly 4000).
Restart nginx-proxy: After adding the custom configuration, restart the nginx-proxy container to apply the changes.
This setup ensures that the WebSockets requests for Mastodonâs real-time features are correctly proxied to the streaming service, ensuring a seamless user experience on your instance!
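To verify the routing actually works, Mastodon's streaming service exposes a health endpoint that you can hit through the proxy (substitute your own domain for the example instance):

```bash
# Should print "OK" if streaming requests reach the streaming container
curl -s https://mastodon.derg.nz/api/v1/streaming/health
```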
Mastodon at least provides you with a useful little tool to manage and reset things in case of breakage: `tootctl`.
One thing that is very much worth attempting, in case you ran your migrations but there are still problems, is clearing the cache.
To clear the cache, simply run: `tootctl cache clear`
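If you run the docker-compose setup, tootctl lives inside the web container, so something along these lines should do it (assuming your service is named web, as in the stock compose file):

```bash
docker compose exec web tootctl cache clear
```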
This is probably going to be your main pain point, as PostgreSQL can be a pain between upgrades, and so can Rails and Mastodon itself.
When I started writing this post, the reason was that I just spent a good few hours trying to fix Mastodon after it had half-completed a bunch of migrations.
The main giveaway for this would be if you start your server and it appears to run, but occasionally there are failed executions of Puma with PostgreSQL-related errors that say something like "ActionView::Template::Error" … "Did you mean? {SOME TABLES OR USERS OR ENTITIES}"
To check all migrations have run successfully, from your Mastodon installation, run `RAILS_ENV=production bundle exec rake db:migrate:status`. If you run docker-compose, you probably want to start a shell in the container first with something like `docker compose run -i web bash` (Mastodon does not have to run for this).
If you see any status marked as "down" here, now would be a good time to run those migrations. NOTE: Mastodon MUST be stopped before doing this, or you will corrupt your database.
If you see any status marked as `*** NO FILE ***`, then something went wrong; you might have to update your local git repo (which is updated separately from the container itself), and then run the migrations again. If it persists, see "Failed migrations".
After you stopped Mastodon:

1. Run `docker compose run -i web bash` again if you use docker-compose; this should also automatically bring up the PostgreSQL database and Redis.
2. Run `RAILS_ENV=production bundle exec rake db:migrate` and do NOT interrupt this process; it should finish automatically.

If you still see `*** NO FILE ***` or something else is wack and you suspect that the migration didn't go over successfully, you might have to remove the migrations from the migrations table and run them again. I'm going to use pgAdmin to edit the database, also since showing this here may be useful for other reasons in the future.
Add a pgAdmin service to your docker-compose.yml, for example:

```yaml
pgadmin:
  image: dpage/pgadmin4
  environment:
    PGADMIN_DEFAULT_EMAIL: "admin@admin.com" # Set your desired email and password
    PGADMIN_DEFAULT_PASSWORD: "adminpassword"
  volumes:
    - ./pgadmin_data:/var/lib/pgadmin
  ports:
    - "127.0.0.1:8080:80" # This would make pgAdmin accessible at http://localhost:8080 on your machine
  networks:
    - internal_network
    - external_network
  depends_on:
    - db
```
Once pgAdmin is up and connected to the db container, remove the suspect migration records:

```sql
DELETE FROM schema_migrations WHERE version::bigint > [last_known_good_migration];
```

where `last_known_good_migration` is the last migration that you know worked out, or where you want to start (don't go too far back, because it's an intensive process). Then run the migrations again as described above. If you need to mark a migration as applied by hand, you can add its version back yourself:

```sql
INSERT INTO schema_migrations (version) VALUES ('MIGRATION_THAT_YOU_JUST_RAN');
```
Note that Howdy uses general face-detection algorithms; some regular webcams also work, although an IR webcam yields much faster and more stable results.
The central component in making this work is Howdy, a Python application that works more or less like Windows Hello does, and provides a module to use with PAM.
You can find more information about this project here: https://github.com/boltgolt/howdy
The GitHub readme itself explains a good part of the installation and setup, although whether everything will work out of the box or require a bit more tweaking depends a little on your distro. I run Fedora myself, so I'll write down a few notes regarding that below, as nothing will be done for you beyond the installation of the package files, and you have to set up the config manually.
If perchance not everything works out of the box, the GitHub page provides a good Common Issues page that explains how to fix a good deal of problems.
They also provide a bash script to automatically configure a good deal of the application if you're on Fedora, but for me not everything was in place after that: PAM didn't work with KDE, for example (explained in the next section), and you still have to set the device manually. Run `sudo howdy config` to open the config in the default editor, and search for "device". `device_path` should be set to something like `/dev/video2`, but it may differ per laptop. Note that `/dev/video0` is probably your main webcam, which will also work, but not well, and can be spoofed much more easily. If you don't see any video devices you may be missing some drivers, which is unfortunately out of scope for this blog.
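To figure out which /dev/videoX is the IR sensor, listing the devices helps; the IR camera usually shows up as a second node under the same camera name (v4l2-ctl comes from the v4l-utils package):

```bash
# List all video devices and their /dev/videoX nodes
v4l2-ctl --list-devices

# Then set device_path accordingly in the Howdy config
sudo howdy config
```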
To actually make use of Howdy when authenticating, you must alter the PAM (Pluggable Authentication Modules) configuration to check with Howdy and either deem it sufficient to automatically log you in (see security note at the bottom), or use it in addition to your password or fingerprint reader.
PAM stores its configs in /etc/pam.d/. The file you have to edit depends on what you want to do. If you want to:

- unlock your KDE lock screen and log in through SDDM: edit `/etc/pam.d/kde` and `/etc/pam.d/sddm`
- log in through GDM (GNOME): edit `/etc/pam.d/gdm-password`
- use Howdy for sudo: edit `/etc/pam.d/sudo`

If your DM or auth-requiring application isn't mentioned, try looking in the directory; the naming should be fairly logical, and the idea is the same for all window managers I know of. Also note the last two won't be needed if you ran the bash script they provided above.
You need to add the following line:

```
auth sufficient pam_python.so /lib64/security/howdy/pam.py
```

BEFORE the other auth lines; they are evaluated in order, and override each other as such. Also note that if you want to use Howdy in addition to your password, you need to replace `sufficient` with `[success=ok default=bad]` (this took me some time to figure out, PAM is complex, holy sh*t).
…though I'm not sure if I recommend that until you've got Howdy running 100% smoothly and it never fails to detect you. You can also alter only certain PAM files to do this, such as the screen locker's, since you can circumvent that with a different terminal (such as ctrl+alt+f3) and fix your config. Be careful with sudo though, obviously.
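For reference, here is a minimal sketch of what the top of /etc/pam.d/sudo could look like when requiring both face and password (the include line below assumes a Fedora-style system-auth stack; adjust for your distro):

```
# Howdy must succeed, but is not sufficient on its own...
auth       [success=ok default=bad]     pam_python.so /lib64/security/howdy/pam.py
# ...the regular password prompt still follows.
auth       include                      system-auth
```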
Howdy simply uses facial-recognition algorithms to recognize your face. By default it uses HOG; if you configure it to use CNN it's slightly more accurate, but it requires GPU acceleration to run smoothly (it worked in 2-3 seconds on my EliteBook so I'll keep using it, though HOG takes < 1 second).
If you have an infrared camera, it definitely yields images that are harder to fake, but at the end of the day an attacker who's crafty enough may still be able to get it done. Beware of this.
The GitHub page for Howdy explicitly says not to use it as your single authentication method, although depending on your environment and personal security needs it may be secure enough to unlock your screen; stuff like sudo may be more sensitive to abuse.
It can also definitely provide some extra security by using it as a second factor in your authentication chain, combined with e.g. a password, fingerprint reader, smartcard, or external U2F dongle (PAM is quite flexible).
Getting your hands on a Renault Twingo is fairly simple, especially in Europe; they're one of the most commonly sold cheap cars out there. Their light weight, small engine, and compact size make them the perfect city car, but could they be more than that?
Also known as the Renault TCe (Turbo Control efficiency) engine. A savage marriage between a tiny 1.2L 4-cylinder short-stroke engine and a beefy intercooled turbo that happily delivers 1.5+ bar to the engine if you tickle its settings a little.
By default, the D4F engine only gives you a meager 75HP, tuned for efficiency and long engine life, but the car's light weight still means you can get the Twingo from 0 to 100km/h in a respectable 10 seconds. You can, if you really want, add a turbo to this, as it does have the pressure sensors required to deal with it, but note that the D4FT has more oil, stronger piston rods, and makes it much easier to actually tune the engine, as the turbo and intercooler are already installed.
By default, the D4FT gives you 100HP, which isn't that much of an increase, but it does definitely make a difference, with the car easily making it to 100km/h in about 7-8 seconds. Especially the torque is a nice difference that you can feel there. I don't think the turbo spools much beyond 0.75 bar on the stock tune.
However, if you unlock the D4FT's turbo settings and make the turbo's valve close so it spools to its maximum, something magical happens. 1-1.5 bar is well within reach of this turbo, the main issue actually being the intercooler not keeping up: intake air can reach 60°C+, meaning the valve will open slightly again to allow the turbo to cool down. But even despite that, it is an insane kick, the torque almost doubles, and you can easily get 130-150HP with this mod alone.
There is another easy mod you can do after that, which is replacing the injectors with bigger ones. In most engines this would require drilling out the injector holes, but not with the D4FT!
The Renault Clio 172 2.0L RS injectors fit flawlessly into the sockets without needing to change much at all; the threading is the same, they're just slightly bulkier.
I bought them from here: https://www.winparts.nl/motordelen-toebehoren/injectoren-verstuivers/c1187/injector-iwp042-magneti-marelli/p563503.html (Dutch website)
The exact type is: Magneti Marelli IWP042
This will greatly increase the available engine power, as more burned gasoline equals more power! Do note that to make full use of this you need a retune on a professional dyno. I might be able to share my own profile, but there's no guarantee that it'll work as well for yours as it did for mine (pending update).
After you've done this, the next step will be to replace the intercooler with something that can actually keep up. I haven't done this myself; I might first try removing my left front fog light to force extra air into it.