Messaging is a mature technology that's famous for reliability, resiliency, and throughput. Message-based systems can scale easily, withstand outages without losing data, and self-recover once an outage is resolved.
@node-ts/bus is a message bus library that makes it easier to build message-based systems in node. It takes care of configuring an underlying transport (e.g. RabbitMQ), routing messages to handlers, propagating message attributes, and so on. By abstracting away the technical complexities of working with message systems, more of your codebase remains dedicated to the concerns of your application.
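To make the routing idea concrete, here is a minimal in-memory sketch of the pattern a message bus library provides: messages are dispatched by type to whichever handlers have registered for them. All names here (`MiniBus`, `registerHandler`, `publish`) are hypothetical illustrations, not the actual @node-ts/bus API.

```typescript
// A handler receives one message of a given type
type Handler<T> = (message: T) => void

class MiniBus {
  // Maps a message type name to the handlers registered for it
  private handlers = new Map<string, Handler<any>[]>()

  registerHandler<T>(type: string, handler: Handler<T>): void {
    const list = this.handlers.get(type) ?? []
    list.push(handler)
    this.handlers.set(type, list)
  }

  // Route a message to every handler registered for its type
  publish<T>(type: string, message: T): void {
    for (const handler of this.handlers.get(type) ?? []) {
      handler(message)
    }
  }
}

// Usage: route an "order-placed" event to a handler
const bus = new MiniBus()
const seen: string[] = []
bus.registerHandler<{ orderId: string }>('order-placed', msg => {
  seen.push(msg.orderId)
})
bus.publish('order-placed', { orderId: 'abc-123' })
```

A real library layers the transport (e.g. RabbitMQ queues), serialization, and attribute propagation on top of this same routing core, which is exactly the plumbing you no longer have to write yourself.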
Message-based systems can weather environmental instability without losing data. Environmental instability can be caused by network partitioning, system restarts, data corruption, bugs, security misconfigurations and so on, all of which can leave part or all of your system unavailable.
During such an outage, if your application is still processing user input, it may be throwing errors, corrupting data, and discarding requests as a result. Once the system is back online, it can take significant time and effort to restore the data to an uncorrupted state, if that is possible at all.
Message systems don't suffer from these issues. When part of the system is down, attempts to process a message will fail. Processing is automatically retried a number of times, after which the message is routed to a dead letter queue. When the system comes back online, teams can replay messages from the dead letter queue to bring the system back into a valid state.
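The retry-then-dead-letter flow can be sketched as follows. This is an illustration of the concept only; the names (`processWithRetry`, `deadLetterQueue`) are hypothetical, and @node-ts/bus manages retries and dead-lettering for you rather than exposing a function like this.

```typescript
interface Message { id: string; body: string }

// Messages that exhaust their retries land here instead of being lost
const deadLetterQueue: Message[] = []

function processWithRetry(
  message: Message,
  handler: (m: Message) => void,
  maxAttempts = 3,
): boolean {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      handler(message)
      return true // processed successfully
    } catch {
      // Handler failed; fall through and retry
    }
  }
  // All attempts exhausted: route to the dead letter queue so the
  // message can be replayed later, rather than discarding it
  deadLetterQueue.push(message)
  return false
}

// A handler that always fails, simulating a downstream outage
const failing = (_: Message) => { throw new Error('database unavailable') }
processWithRetry({ id: '1', body: 'create-order' }, failing)

// Once the outage is resolved, replay the dead letter queue
const replayed: string[] = []
while (deadLetterQueue.length > 0) {
  replayed.push(deadLetterQueue.shift()!.id)
}
```

The key property is that a failed message is parked, not dropped: the work survives the outage and is reprocessed once the system is healthy again.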
@node-ts/bus helps improve scalability by ensuring services don't get overloaded. Since operations are pull-based, your application only fetches the next piece of work when it is ready to do so; under load, the number of messages in the queue simply grows. This contrasts with HTTP/REST-based services, which may start dropping requests when they become overloaded.
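The difference between pull-based and push-based consumption can be shown in a few lines. In this hedged sketch (all names hypothetical), a worker with limited capacity pulls messages one at a time; work beyond its capacity stays queued instead of being dropped.

```typescript
// A backlog of queued messages, more than the worker can handle right now
const queue: number[] = [1, 2, 3, 4, 5]
const processed: number[] = []

// Pull at most `capacity` messages; the worker fetches work only when
// it is ready for it, so excess messages wait in the queue
function drain(capacity: number): void {
  while (processed.length < capacity && queue.length > 0) {
    const next = queue.shift()! // fetch the next piece of work
    processed.push(next)
  }
}

drain(3)
// 3 messages processed; the remaining 2 wait in the queue, none are lost
```

An overloaded HTTP service has no such buffer: requests arriving faster than they can be handled are rejected or time out, whereas here the backlog is visible and recoverable.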
This can also simplify autoscaling rules, since the number of instances of your services can increase based on metrics such as the number of messages waiting in the queue or the age of the oldest message. These metrics represent load more directly than CPU, memory, or response time, which are only indirect indicators of how much work is outstanding.
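A queue-depth scaling rule of the kind described above might look like the following sketch. The function name and the `messagesPerInstance` tuning parameter are assumptions for illustration; in practice this logic lives in your orchestrator's autoscaler, fed by queue metrics.

```typescript
// Compute how many service instances to run given the current queue depth.
// `messagesPerInstance` is an assumed throughput target per instance.
function desiredInstances(
  queueDepth: number,
  messagesPerInstance: number,
  minInstances = 1,
  maxInstances = 10,
): number {
  const needed = Math.ceil(queueDepth / messagesPerInstance)
  // Clamp to the allowed range so we never scale to zero or runaway
  return Math.min(maxInstances, Math.max(minInstances, needed))
}

// 250 queued messages, each instance handles ~100 at a time → 3 instances
const instances = desiredInstances(250, 100)
```

Because queue depth measures outstanding work directly, this rule reacts to real backlog rather than to side effects like CPU pressure.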