Lunch was quick today. When working from home I find it sensible to try and get out of the house once a day, but a brisk walk down to the local shops in the ever-present Mancunian rain was enough for me today.
I’ll take this time to recap my morning, in case you missed it: I had a series of routine work calls covering different aspects of the technology side of Traydstream and its products. There is a lot to handle as head of technology at a FinTech – keeping everything above water is just where it starts.
I dry off and sit down for my next call – a regular catch-up with Microsoft, with whom we are a key partner as one of their top 100 global fintech accounts. The Traydstream platform is delivered as a SaaS offering built on Microsoft Azure cloud services, so coming from a world of proprietary infrastructure, a cloud-centred solution has been a revelation to me – both in the ease of adoption and in the level of support and interest we have had from Microsoft. I guess at this point I should say, in true disclaimer fashion, that ‘other cloud hosting providers are available’.
It’s now mid-afternoon, and having spent some time on security & compliance onboarding reviews for a couple of new client banks, I can now get to grips with a more technical conversation with our Machine Learning team.
As a quick aside, we use MongoDB to store our data, and that too is cloud-hosted as a service in MongoDB Atlas. We have found that using Azure and Atlas cloud hosting really helps us give our clients confidence in the safety of their data, as it is all encrypted at rest and in transit for us.
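For the curious, encryption in transit to Atlas is largely a matter of the connection string: the `mongodb+srv://` scheme that Atlas hands out implies TLS by default. Here is a minimal sketch of a sanity check along those lines – the helper name and the cluster URI are invented for illustration, not our actual configuration:

```python
from urllib.parse import urlsplit, parse_qs

def tls_enforced(uri: str) -> bool:
    """Return True if a MongoDB connection string enforces TLS.

    The mongodb+srv:// scheme (used by Atlas) implies TLS by default;
    for plain mongodb:// we look for an explicit tls=true / ssl=true option.
    """
    parts = urlsplit(uri)
    opts = parse_qs(parts.query)
    if parts.scheme == "mongodb+srv":
        # TLS is on unless explicitly disabled in the options.
        return opts.get("tls", opts.get("ssl", ["true"]))[0].lower() != "false"
    return opts.get("tls", opts.get("ssl", ["false"]))[0].lower() == "true"

# Hypothetical Atlas-style URI -- not a real cluster.
uri = "mongodb+srv://app-user:secret@example-cluster.mongodb.net/trades?retryWrites=true"
print(tls_enforced(uri))  # mongodb+srv implies TLS -> True
```

Encryption at rest, by contrast, is handled entirely server-side by the providers, so there is nothing for application code to check.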
Back to Machine Learning.
One of the more interesting areas of the Traydstream application is our proprietary ‘Trade Finance Discrepancies Machine Learning & Rules Engine’, which embeds our trade finance documentation experience coupled with obfuscated learnings from our client userbase. Just in case you are worrying, the process of learning from client experience doesn’t extract any client data. The backend is trained on the rules input by our rule experts (which are in line with industry-standard regulation); the OCR then interprets the input documentation and stores the patterns, such that it will recognize those patterns when it encounters them again.
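To give a flavour of the rules side of this (a heavily simplified illustration, not our production engine – every rule and field name below is invented), you can think of it as a set of named predicates applied to the structured fields the OCR extracts:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """A discrepancy check: a name plus a predicate over extracted fields."""
    name: str
    check: Callable[[dict], bool]  # True means the document passes

# Invented example rules, loosely in the spirit of trade-finance checks.
RULES = [
    Rule("amount_within_credit", lambda d: d["invoice_amount"] <= d["credit_amount"]),
    Rule("shipment_before_expiry", lambda d: d["shipment_date"] <= d["expiry_date"]),
    Rule("beneficiary_matches", lambda d: d["invoice_beneficiary"] == d["lc_beneficiary"]),
]

def find_discrepancies(doc_fields: dict) -> list[str]:
    """Run every rule; return the names of the rules that failed."""
    return [r.name for r in RULES if not r.check(doc_fields)]

# Hypothetical OCR output for one document set.
extracted = {
    "invoice_amount": 105_000, "credit_amount": 100_000,
    "shipment_date": "2021-03-01", "expiry_date": "2021-04-01",
    "invoice_beneficiary": "Acme Exports Ltd", "lc_beneficiary": "Acme Exports Ltd",
}
print(find_discrepancies(extracted))  # -> ['amount_within_credit']
```

The machine learning layer sits on top of checks like these, recognising document patterns it has seen before rather than re-deriving everything from the rules each time.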
Today’s deep dive looks at how we tune our infrastructure to scale as document-processing load grows. Some of our bigger banks use the system in volume, so the team are using JMeter to simulate users uploading documents and running them through our Microsoft Vision-based OCR engine and proprietary machine learning engine, with the objective of maintaining linear performance characteristics as volume increases.
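The real tests run in JMeter against the live pipeline, but the property being checked can be sketched in a few lines (the workload below is a stand-in for upload-plus-OCR, not our actual engine): drive the system at increasing concurrency and watch whether throughput grows roughly in proportion.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def process_document(doc_id: int) -> int:
    """Stand-in for upload + OCR; simulates ~10 ms of I/O-bound work."""
    time.sleep(0.01)
    return doc_id

def throughput(n_users: int, docs_per_user: int = 20) -> float:
    """Documents processed per second at a given concurrency level."""
    jobs = range(n_users * docs_per_user)
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_users) as pool:
        list(pool.map(process_document, jobs))
    return len(jobs) / (time.perf_counter() - start)

# Linear scaling means doubling the simulated users roughly doubles throughput.
for users in (1, 2, 4):
    print(f"{users} users: {throughput(users):.0f} docs/s")
```

If the curve flattens as users double, that points at a bottleneck somewhere in the pipeline – which is exactly what the JMeter runs are there to surface.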
Following this meeting there’s another context switch: we are currently midway through a regular review of our security posture, and our officers want to walk through the current status. Thankfully – and I know it’s not luck but the hard work of our team – there are no significant observations so far, so at least I will sleep soundly tonight until my iPhone starts its dreaded beeping tomorrow morning.
As I lie in bed, I recount what I’ve done for the day – all in all, we have made strides in ensuring that our services run with efficiency and ease. The days are always slightly different, with a slew of challenges and opportunities to wade through. I am grateful that I have a reliable network of colleagues, that we are all in this together, and that we make little wins every day. I drift off satisfied, hoping for better Mancunian weather tomorrow.