
Affordable & high-quality system administration & development services.


Our Statement

Growing a business is a complex process: designing a good idea, building your dream team and distributing your resources efficiently. Behind the curtains, though, time-consuming maintenance work can prevent you from focusing on the right things.

Introducing EntryRise

EntryRise is a team of capable individuals with diverse IT backgrounds. We can help break down the walls holding your business back and pave the way with modern solutions to your most difficult problems.



Problem Solving Expertise

Over the years, we have formed a strong methodology for solving difficult problems. It gives us a better understanding of the customer base our clients serve and allows us to tailor our solutions to your business.

With experience in several IT niches, we build a complete view of the issue and can judge which solutions work best on a case-by-case basis.

System Administration

We can help you install and configure programs, automate work, load-balance your services, manage domains, create Docker images, optimize your applications and more!

Development

With extensive Java experience, strong programming principles and easy-to-read code, we ensure your end product will be secure, bug-free and performant.

Business

Our business experience means we understand what you face. With numerous projects ranging from game servers to Java development and system maintenance services, we know how to make educated decisions that benefit you and your clients.


Empower your Work

ERTools is a complete toolset that helps you stay connected with your business, understand market trends and increase staff efficiency. Because it is fully managed, any question or issue is one call away from being resolved.

Analytics

Know your clients better with EntryRise Analytics. From understanding market trends to responding quickly to changes in clients' behaviour, this is a must for every modern business.

Uptime Watcher

Know about disruptions on your services in real time, how they impact your business, and how quickly we can solve them for you.

Kanban Boards

Increase productivity and team cooperation with Kanban boards, a centralized place where your team can brainstorm, implement and debug quickly and efficiently.

Application Hosting

Host your web services, Discord bots, mail services, APIs and more using our fully managed hosting service.

Backup Storage

Keep your data safe with EntryRise Backup Storage. We provide free storage with SFTP access; see the pricing plans for storage amounts.

ERPanel

Increase security and manage all your services through ERPanel, our managed dashboard solution. Add team members, run routines and more with an easy-to-use interface.


Case Studies

We are proud of our innovations and how they have shaped our clients' businesses. Over the years, we have dealt with several complex, multi-faceted issues that needed to be understood and approached from several angles. We are happy to share the thought process behind some of them below.



Problem description:
In Minecraft, hoppers are a tile entity used to collect items that fall on top of them. Players have found them extremely useful for automating their AFK farms, and since they are quick and easy to use, they are a tool in every Minecraft redstoner's pocket. In production environments, though, we notice that most players have inefficient hopper designs that cause performance issues down the line. Until now, the only effective ways to reduce lag were to remove hoppers, make them hard to obtain in-game or limit the number available to each player. While effective, most of these methods anger the player base and drastically limit what players can build, making the game less enjoyable and in turn causing more issues down the line. The server administrator is stuck between two bad options: keep the server as it is and put up with the lag, or take action and anger the player base.

Analysing the Problem:
Our first step was to analyse why hoppers are used, and what players like and dislike about them. We learned that players use large hopper carpets (16x16) to catch items from mob spawners, which scatter drops randomly within a radius. Because of that, a single hopper is usually not enough, and the vanilla alternative of using water streams to direct the drops is often too tedious for players to consider.

Finding a solution:
Knowing that players do not really care about how many hoppers they use, we decided the best course of action was to implement "chunk hoppers". When an item is about to spawn in a chunk (a 16x16 area), the chunk is scanned and, if possible, the item is added to a compatible ChunkHopper. To convince people to switch on their own (and reduce anger caused by forced restrictions), we implemented a filter-and-sell feature which encourages players to switch and even lets them monetize mob farms. Combined with chunk limits, this solution reduced lag from 40% (20% hoppers, 20% item entities) to less than 0.3%. To further optimize the system, we ran profiler tests and found that checking for hoppers in a chunk takes over 2ms. Caching hoppers with a lifespan of 2 seconds reduced the load further, to under 0.03%, and since players rarely interact with their chunk hoppers, the cache has almost no visible impact in-game.
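
For illustration, a minimal sketch of such a time-bounded hopper cache could look like the following. The class and method names (ChunkCoord, ChunkHopper, scanChunkForHopper) are placeholders invented for this example, not LagAssist's actual API.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of a time-bounded chunk-hopper lookup cache.
// ChunkCoord, ChunkHopper and scanChunkForHopper() are hypothetical names
// used only for illustration; they are not LagAssist's actual API.
public class ChunkHopperCache {

    private static final long LIFESPAN_MS = 2_000; // cache entries live for 2 seconds

    private record CachedEntry(Optional<ChunkHopper> hopper, long cachedAt) {}

    private final Map<ChunkCoord, CachedEntry> cache = new ConcurrentHashMap<>();

    /** Called whenever an item is about to spawn in a chunk. */
    public Optional<ChunkHopper> hopperFor(ChunkCoord chunk) {
        long now = System.currentTimeMillis();
        CachedEntry entry = cache.get(chunk);
        if (entry == null || now - entry.cachedAt() > LIFESPAN_MS) {
            // The expensive chunk scan (~2 ms in profiling) only runs when the cache is cold.
            entry = new CachedEntry(scanChunkForHopper(chunk), now);
            cache.put(chunk, entry);
        }
        return entry.hopper();
    }

    // Placeholder for the real scan that looks for a compatible ChunkHopper in the chunk.
    private Optional<ChunkHopper> scanChunkForHopper(ChunkCoord chunk) {
        return Optional.empty();
    }

    // Hypothetical value types standing in for the plugin's own classes.
    public record ChunkCoord(String world, int x, int z) {}
    public static class ChunkHopper {}
}
```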

Results:
The system is now part of our public plugin LagAssist, which has been purchased over 1000 times and is used on enterprise networks with over 1000 active players. We have measured a 1000x decrease in time spent on hopper calculations, while also keeping players happy and providing an additional monetization scheme in the process.


Problem description:
The Gamster technical team had difficulties with unexplained player disconnects. The issue occurred every few hours, when around 30% of the player base would suddenly drop.

Analysing the Problem:
Our first step was to find the root cause of the issue, working through game server logs, then game proxy logs, then system logs. Since the issue affected all subservers, and there were no logs indicating any fault on the game servers themselves, we quickly deduced that the problem was not in the application layer. We found that it was caused by the network driver resetting every few hours.

Finding a solution:
Once we pinpointed why the issue was occurring, the solution was quick to find. The OVH kernel network driver was faulty and causing constant resets. We disabled several of its optimization features and solved the situation with no visible performance decrease. The fix has since proved useful in several situations where the same issue popped up, and for stability and risk-reduction purposes, Gamster now disables these features each time a new game server is rented and installed.

Results:
Gamster was able to sustain over 3000 concurrent players across more than 15 minigames without any networking issues. We have also been able to withstand temporary spikes in usage caused by denial-of-service attacks.


Problem description:
Minecraft botting has always been a major issue for both cracked and premium Minecraft servers. The low cost of having tens of thousands of bots join per second makes it easy for low-skilled attackers to cripple even large, well-designed networks. Until now, solutions (implemented entirely in the application layer) have relied on proxy checking (which is expensive in both time and money) or on blocking connections altogether. Even with these drastic measures in place, application-layer mitigation is too slow and does not prevent denial-of-service attacks from happening.

Analysing the Problem:
While bots are intended to be as close to real players as possible, they often lack a full implementation of the Minecraft protocol and only use the essential packets needed to connect to the server. By analysing valid versus illegal traffic, we found that bots can be detected by checking for an array of packets that normal players send in a specific order. Bots either do not send these packets at all or send them in the wrong order, which gives us a seamless yet effective way to distinguish real players from bots.
While application-layer solutions cannot prevent downtime by themselves, they can be combined with iptables (using connlimit) and ipsets to build a veritable defense against hundreds of thousands of connections per second.
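
As a rough illustration of the packet-ordering idea, here is a minimal sketch; the packet names and the expected order are simplified placeholders, not the exact checks used in production.

```java
import java.util.List;

// Minimal sketch of detecting bots by the order of early login packets.
// The packet names and the expected order below are simplified placeholders;
// the production check uses a larger set of packets and stricter rules.
public class JoinSequenceCheck {

    // Order a vanilla client is expected to follow during login (illustrative only).
    private static final List<String> EXPECTED = List.of(
            "HANDSHAKE", "LOGIN_START", "ENCRYPTION_RESPONSE", "CLIENT_SETTINGS");

    private int nextIndex = 0;
    private boolean suspicious = false;

    /** Feed each incoming packet type as the connection progresses. */
    public void onPacket(String packetType) {
        if (nextIndex < EXPECTED.size() && EXPECTED.get(nextIndex).equals(packetType)) {
            nextIndex++;                       // packet arrived in the expected position
        } else if (EXPECTED.contains(packetType)) {
            suspicious = true;                 // known packet, but out of order
        }
        // Unknown packets are ignored here; a real check may treat them differently.
    }

    /** A connection that breaks the order or never completes the sequence is flagged. */
    public boolean looksLikeBot() {
        return suspicious || nextIndex < EXPECTED.size();
    }
}
```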

Finding a solution:
Our approach to this issue is to store verified player IPs in an ipset and only allow X (by default 30) connections per second from unverified players. During an attack, at most 30 bots can join per second, which is slow enough for our layer 7 firewall to gradually block the proxies and mitigate the attack. To make this system work on top of a load-balanced network, we use an API which automatically propagates the cache globally every 10 minutes.

Results:
Since attacks no longer cause downtime for active players, attackers are often discouraged and stop their attacks entirely within a few minutes. When an attacker persists, the tool is usually able to mitigate 90% of the attack within 10 minutes (~18,000 proxies per attack) and prevents the attacker from launching further attacks on the server. This approach was shared in a Spigot post a few years back, and around 90% of anti-bot products now on the market use some form of it. Gamster still faces denial-of-service attempts every day, but now without any downtime.


Problem description:
With Gamster pulling in tens of thousands of unique players each day, crashes are bound to happen. Because of the large strain our lobby servers face at peak time, and because one server crash can lead to thousands of database requests per second on the lobby, we needed an alternative solution to keep players online.

Analysing the Problem:
Since lobbies are expensive and struggle under high load, simply opening more of them is not a valid solution. Additionally, letting players get kicked reduces the chance of a player selecting Gamster in their server list (fewer players online) and drives kicked players to other servers.

Finding a solution:
Since the Mojang server software is unable to handle a large number of connections, we decided to start working on a "limbo" Minecraft server which speaks the same protocol as the Mojang server and keeps players online while barely using any resources. By building on Netty, we solve the C10k problem and improve networking speeds. Additionally, our use of caching to reduce the amount of computation done on the server wastes less processing power and increases the number of players the limbo can handle. To handle transferring players back to the lobbies, we implemented a queue which keeps connections per second at a safe level without being excessively restrictive.
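
For illustration, a minimal sketch of the Netty bootstrap behind such a limbo server might look like this; the handler pipeline is a stub, and the port and class names are assumptions made for the example.

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

// Minimal sketch of a Netty bootstrap in the spirit of a "limbo" server.
// The real limbo implements the Minecraft protocol in its channel handlers;
// here the pipeline is left as a stub for illustration.
public class LimboServer {

    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup boss = new NioEventLoopGroup(1);   // accepts connections
        EventLoopGroup workers = new NioEventLoopGroup(); // handles I/O for all players
        try {
            ServerBootstrap bootstrap = new ServerBootstrap()
                    .group(boss, workers)
                    .channel(NioServerSocketChannel.class)
                    .childOption(ChannelOption.TCP_NODELAY, true)
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            // Real implementation: packet framing + protocol codecs
                            // + a keep-alive loop that holds players in the limbo world.
                            ch.pipeline().addLast(new ChannelInboundHandlerAdapter());
                        }
                    });
            ChannelFuture future = bootstrap.bind(25565).sync(); // example port
            future.channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            workers.shutdownGracefully();
        }
    }
}
```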

Results:
In our tests, the limbo implementation has sustained over 1000 concurrent real players and over 10,000 bots while using under 4% of a CPU core and under 100MB of RAM. The intuitive queue implementation means players have no issue using the limbo server.


Problem description:
Knowing the limits of your infrastructure helps you plan upgrades and prepare for the worst. Because of that, being able to benchmark your system regardless of OS or hosting provider is essential to your server's growth.

Analysing the Problem:
While benchmarking clients on dedicated servers poses no issue, our analytics show that the large majority of our clients use hosting providers that jail the user inside the Minecraft process, unable to run external commands or reliably get hardware information. In many cases, hosting providers have been found to give false information about their hardware or to oversell it to the point that all clients on the machine suffer performance issues.

Finding a solution:
While there is no way for us to fully prevent hosting providers from engaging in less-than-ideal practices, we can provide tools that reveal the provider's infrastructure safely, without breaching their terms of service and risking suspension. To support certain plugins, hosting providers allow Java to read files under /proc/ or execute certain processes. LagAssist uses this to automatically discover the provider's CPU information on macOS, Windows and Linux without implementing native code. It also uses test FTP servers to benchmark network speeds and uses the gathered information to estimate the player counts the node or server will be able to sustain. Additional tools, such as measuring the real system load, help identify hosting providers that oversell their resources or provide sub-par service.
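
As a rough illustration of the /proc approach on Linux, a pure-Java probe could look like the sketch below; this is not LagAssist's actual code, and the macOS and Windows fallbacks are omitted.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Minimal sketch of reading the CPU model from /proc/cpuinfo in pure Java
// (Linux only; the macOS and Windows fallbacks are omitted here).
public class CpuInfoProbe {

    public static String cpuModel() {
        Path cpuinfo = Path.of("/proc/cpuinfo");
        if (!Files.isReadable(cpuinfo)) {
            return "unknown (no /proc access)"; // jailed or non-Linux environment
        }
        try {
            List<String> lines = Files.readAllLines(cpuinfo);
            for (String line : lines) {
                if (line.startsWith("model name")) {
                    return line.substring(line.indexOf(':') + 1).trim();
                }
            }
        } catch (IOException e) {
            // Some panels restrict file access even when the path exists.
        }
        return "unknown";
    }

    public static void main(String[] args) {
        System.out.println("CPU: " + cpuModel());
    }
}
```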

Results:
The LagAssist benchmark is now used by over 1000 servers worldwide and has helped many server owners understand their machine's potential and plan for the future.


Problem description:
With Google being designed for consumers looking for meaningful search results, IoT software, routers, personal sites, device control panels and more are omitted from its results. This reduces the ability of security researchers to understand the extent of a vulnerability and the ability of hosting providers to understand their clients' use cases and adapt to their needs.

Analysing the Problem:
While scanning each site one by one is not impossible, it takes far too long to provide results in a reasonable time. This lack of feasibility means that large scans are often out of the question.

Finding a solution:
With most sites being static and rarely changing, a viable way to scan large numbers of IPs is to cache all pre-scanned IPs cyclically, constantly refreshing old results with the newest available. This opens the door to running regex locally over the cached website database and significantly reduces scan time.
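
For illustration, a minimal sketch of the local regex pass over such a cache is shown below; the on-disk layout (one cached page per file under a cache directory) and the example patterns are assumptions made for this sketch, not the tool's actual storage format.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.regex.Pattern;
import java.util.stream.Stream;

// Minimal sketch of running a regex over a local cache of previously fetched pages.
// The layout (one file per IP under a cache directory) is assumed for illustration.
public class CachedSiteScanner {

    public static void scan(Path cacheDir, Pattern pattern) throws IOException {
        try (Stream<Path> files = Files.walk(cacheDir)) {
            files.filter(Files::isRegularFile)
                 .forEach(file -> {
                     try {
                         String body = Files.readString(file);
                         if (pattern.matcher(body).find()) {
                             // The file name stands in for the IP the page was fetched from.
                             System.out.println("match: " + file.getFileName());
                         }
                     } catch (IOException ignored) {
                         // Unreadable or binary entries are skipped.
                     }
                 });
        }
    }

    public static void main(String[] args) throws IOException {
        // Example: look for a router admin-panel banner across the cached pages.
        scan(Path.of("site-cache"), Pattern.compile("RouterOS|GoAhead-Webs"));
    }
}
```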

Results:
Our tool is able to run a regex over 2 million websites in less than 10 minutes, with SSD speed being the bottleneck. The tool has been used to analyse the extent of several vulnerabilities found in IoT devices.


Problem description:
PGXPO is an Arabic Minecraft server focused on giving players a competitive space for PvP. The server reaches about 200 players at peak time, with more and more players joining each day. Because of the way their server was built, they were facing numerous performance issues that hindered their growth.

Analysing the Problem:
We began by collecting profiler data on all of their under-performing servers and analysing the causes. We found that a major cause of lag was synchronous database access, along with un-optimized SQL statements for updating player information. On top of that, the decision to use TCPShield was further increasing ping without any tangible benefit for players.

Finding a solution:
Once the causes were found, the solution was to reduce database calls to a minimum: we used local caching to improve performance, used update statements instead of selects followed by inserts where possible, and moved most of the workload onto the SQL server by rewriting the statements used. On top of that, we moved to a custom-made alternative to TCPShield to prevent DDoS attacks without having to pay extra.
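
As an illustration of collapsing a select-then-insert/update into a single statement, a minimal JDBC sketch (MySQL syntax) follows; the table and column names are hypothetical and not taken from PGXPO's schema.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Minimal sketch of replacing a select followed by an insert/update with one upsert
// (MySQL "INSERT ... ON DUPLICATE KEY UPDATE"). Table and column names are hypothetical.
public class PlayerStatsDao {

    private static final String UPSERT =
            "INSERT INTO player_stats (uuid, kills, deaths) VALUES (?, ?, ?) " +
            "ON DUPLICATE KEY UPDATE kills = kills + VALUES(kills), " +
            "deaths = deaths + VALUES(deaths)";

    /** One round trip instead of a select followed by an insert or update. */
    public void addKillsAndDeaths(Connection conn, String uuid, int kills, int deaths)
            throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(UPSERT)) {
            ps.setString(1, uuid);
            ps.setInt(2, kills);
            ps.setInt(3, deaths);
            ps.executeUpdate();
        }
    }
}
```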

Results:
We managed to significantly improve performance across the whole network, and saved money in the process. At the time of writing, we are still working on steadily improving PGXPO for the benefit of the players and the network.


Problem description:
The OVH Strasbourg fire affected numerous servers: many were completely lost to the fire, and others were made unavailable when OVH cut power to prevent the fire from spreading. One of our previous clients, Provanas, was among them.

Analysing the Problem:
Provanas uses a hybrid OVH-myLoc infrastructure designed by EntryRise to reduce costs while keeping ping low and stability high. The OVH frontend is used for mitigation, while the backend hosts the games on better and cheaper hardware provided by myLoc. With their frontend hosted in SBG2, the fire left the server unavailable and most account entries lost.

Finding a solution:
The client was quick to contact EntryRise to mitigate the damage that ensued from the fire. We quickly moved their mitigation infrastructure onto EntryRise Shield to handle DDoS attacks while OVH worked on provisioning replacement servers in a different location. We managed to salvage most of the frontend and notified the players less than 2 hours after being informed of the problem. We had the server back up less than one day after the incident occurred, with registration temporarily disabled to protect newly registered accounts that were not present in the manual backups taken 6 months earlier.
We further negotiated with OVH to receive 6 months of replacement hosting for Provanas for the damage incurred, reducing hosting expenses by 500EUR.

Results:
With the server up and running less than 1 day after we were contacted and the incident was identified, and with OVH granting 6 months of mitigation after negotiation, Provanas was quickly able to get past the incident and continue growing.

Meet our team

We believe that the people behind a brand are what matters most. No matter the size of the company or the domain you work in, the people that make up the team are what make it rise to the top. We hope this section helps you get to know our team better. To read a team member's resume, please click on their image.

Deleanu Stefan

Director

  • Attention to detail
  • Fast thinker
  • Passionate

Elliot Nichols

Lead Client Manager

  • Dedicated
  • Knowledgeable
  • Quick to reply

For all sizes

We have 3 pricing plans to cover the 3 stages of business: starting out, entering quick growth, and reaching your peak. This allows us to better fit your budget while gradually building up the work we do for you, from critical to commodity.

Pricing Table

  • Incident resolution
  • Infrastructure Design
  • Performance Profiling
  • Security Profiling
  • System Administration
  • Java Development
  • Business Consultation
  • Staff Training
  • Technical Advice

  • Monthly Server Maintenance
  • System Installation Service
  • Up to 3 managed machines
  • Optimization Advice
  • Performance Profiling
  • Technical Advice
  • 100GB Backup Space

  • Monthly Server Maintenance
  • System Installation Service
  • Up to 10 managed machines
  • Optimization Advice
  • Performance Profiling
  • Business Advice
  • 2000GB Backup Space
  • Java Development Work
  • Customized Work