The TradeLlama Promise.
When I was a young QA engineer working for MySQL on their Enterprise Monitoring application, I often got into heated discussions with the team architect. The argument was always the same: I would find a bug that, in my opinion, was important enough to hold the release back, but he would disagree and ship anyway. This went on until I left the team to take a role as a software engineer.
Later in my career, as a VP of Engineering, I found myself on the other side of the argument. Now I had to decide where to draw the line between a critical bug and one that could wait for the next release. I could hear my younger self wanting to stop everything to fix the bug we had just found, but I also knew more about the business side: there were customers waiting for the features we had completed.
Over time, I became more comfortable with these calls. I also made sure to tell the team why specific bug fixes were being deferred to a later release, which is part of our Llama open-communication culture (check out our post on this here: https://www.tradellama.com/posts/engineering-and-open-communication/ ).
How is TradeLlama applying this?
I recently had to get a POC (proof of concept) engine ready for a llama demo. This is an exciting time at TradeLlama because we get to apply lessons learned to a blank canvas. At the same time, we needed to focus on the task at hand and strike a balance between hitting demo day and having a strong framework to build upon.
The TradeLlama team likes to be pragmatic; in this case, that meant good test coverage on the core engine without going overboard on unit tests. Covering the core logic with plenty of tests ended up saving us a lot of time: we could refine the trading engine without introducing regressions. Still, we know there are some less critical areas of the code that we'll be adding tests for over the coming weeks.
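To make the idea concrete, here is a minimal sketch of the kind of focused core-logic test we mean. The names (`match_order`, the simplified order book) are illustrative assumptions for this post, not TradeLlama's actual engine:

```python
def match_order(book, side, price, qty):
    """Fill a limit order against the best opposing prices; return filled qty."""
    opposing = book["asks"] if side == "buy" else book["bids"]
    # Buys walk asks from cheapest up; sells walk bids from highest down.
    levels = sorted(opposing, key=lambda l: l["price"], reverse=(side == "sell"))
    filled = 0
    for level in levels:
        crosses = level["price"] <= price if side == "buy" else level["price"] >= price
        if not crosses or filled >= qty:
            break
        take = min(qty - filled, level["qty"])
        level["qty"] -= take
        filled += take
    return filled


def test_buy_fills_at_or_below_limit():
    book = {"asks": [{"price": 101, "qty": 5}, {"price": 103, "qty": 5}],
            "bids": []}
    # A buy limit at 102 should fill only against the 101 ask, not the 103 one.
    assert match_order(book, "buy", 102, 10) == 5
```

A handful of tests like this around the matching logic flag regressions immediately, while peripheral code can pick up coverage later.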
After some back and forth with the llama product team, we got enough code ready to process the trade data for our demo, and rumor has it that it went well. 😀
TradeLlama Engineering is client-aware.
Over time, cloud-native became synonymous with multi-tenant for our team. One thing that shapes how we deliver components of the Llama platform is the tension between the efficiencies of cloud multi-tenancy and the reality that our clients are all unique: unique in how they use us, what they expect from us, and what they ask of us.
Our ability to handle delivery, and to weigh the risk and reward of pushing a change, depends in part on where that work is being performed. Our core platform consumes trade data and runs all of our proprietary ML logic, and our delivery frameworks interact with this platform via API; in many cases, these are unique endpoints per client. This gives us scale and efficiency as well as customization. On the customization side, those per-client endpoints act as sandboxes, giving us more flexibility in delivery cycles.
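As a rough illustration of what per-client endpoints can look like, here is a hypothetical routing sketch. The names (`CLIENT_ROUTES`, `route_request`) and the client IDs are made up for this example, not TradeLlama's actual API:

```python
# Each client resolves to its own API prefix and engine build, so a change
# can be rolled out to one client's sandbox without touching the others.
CLIENT_ROUTES = {
    "acme":   {"base": "/api/v2/acme",   "engine": "stable"},
    "globex": {"base": "/api/v2/globex", "engine": "canary"},
}

def route_request(client_id, path):
    """Resolve a client-specific endpoint and the engine build serving it."""
    cfg = CLIENT_ROUTES.get(client_id)
    if cfg is None:
        raise KeyError(f"unknown client: {client_id}")
    return f"{cfg['base']}{path}", cfg["engine"]
```

With this shape, promoting a new engine build for one client is a one-line config change, while every other client keeps hitting the stable build.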
Being explicit about where work is happening also shapes how we balance delivery. The llama team requires our engineers to be, in effect, very client-aware in terms of usage and expectations. We don't abstract that away from engineering; we expect our engineers to think that way.
Because engineering is the value proposition, not just the processes we use to get results (which can and should change to reflect the dynamic nature of our client relationships).
The TradeLlama Promise.
One of our goals is to make sure we help our clients. We do that by delivering features and analyses on their data. We aim for the best quality possible, and we know how to set the right priorities to get there.