Docker Compose

While I was working for Trading Technologies, we always planned on using Linux Containers to provide secure hosting of user-written TT SDK algos. Docker was supposed to ensure proper isolation and resource management, as well as provide a convenient means of deploying client strategies. The infrastructure was in place, but I left before we had a chance to realize our plans.

Recently I decided to give Docker another go, this time as a way of managing my various cloud projects. I was happy with DigitalOcean’s offering, so I began migrating my websites and jobs to their cloud. Everything, of course, is containerized and managed together. I am using Ubuntu 14.04 with Docker Compose.

I must say I am pleased with the results. The migration has been rather painless, and I like how simple the config ended up. I am running a container with a shared MariaDB (MySQL) database, a reverse-proxy Nginx container to manage routing, several WordPress blog containers, a few C# ASP.NET sites (Mono, not CoreCLR), some Python sites (Flask), and a bunch of Python/C++ apps. A sample docker-compose.yml file is shown below.
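My actual file is longer, but a trimmed-down sketch of it looks roughly like this (service names, hostnames, and the password are placeholders; this uses the original v1 Compose format, which was current on Ubuntu 14.04):

```yaml
# Shared database
db:
  image: mariadb
  environment:
    - MYSQL_ROOT_PASSWORD=changeme
  volumes:
    - /srv/mysql:/var/lib/mysql

# Reverse proxy with Let's Encrypt support
proxy:
  image: dmp1ce/nginx-proxy-letsencrypt
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro

# One of the WordPress blogs
blog:
  image: wordpress
  links:
    - db:mysql
  environment:
    - VIRTUAL_HOST=blog.example.com
    - LETSENCRYPT_HOST=blog.example.com
    - LETSENCRYPT_EMAIL=admin@example.com
```

With this in place, `docker-compose up -d` brings the whole stack up, and the proxy routes incoming traffic based on each container's VIRTUAL_HOST.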

As you can see, I am using the dmp1ce/nginx-proxy-letsencrypt image as a web proxy. Just like with jwilder/nginx-proxy, it’s dead simple to configure routes. All a container needs to do is specify a VIRTUAL_HOST=something.com variable, and the web traffic will be forwarded to its exposed port. The dmp1ce/nginx-proxy-letsencrypt image also has LetsEncrypt.org support built right in. Specify LETSENCRYPT_HOST and LETSENCRYPT_EMAIL and voilà: free SSL for your site 🙂

Hacktoberfest

I finally got my Hacktoberfest gear from DigitalOcean. It was a fun opportunity to contribute to open-source projects while scoring a sweet t-shirt.
I fixed some annoying bugs in the weld build system that I’ve been using for my personal C++ projects. Hope you guys enjoy.

Hacktoberfest T-Shirt from DigitalOcean

Naturally this “hacktoberfest” made me take a closer look at DigitalOcean and what they have to offer. I’ve gotta say: I’m impressed! Seldom does one find such a fast and cost-effective cloud provider. I’ve been using AWS and CloudAtCost, but now I am seriously thinking of migrating at least some of my projects to DigitalOcean. Well done, sirs, well done!

8 years at Trading Technologies

I just noticed it has been a whole 8 years since I started working for Trading Technologies. I was hired fresh out of college and began my journey on May 15, 2006 (well, I did some consulting on the side before coming to ChiTown).

Company logo when I started

 

New TT logo


During my career at TT, I pretty much worked on everything the company has to offer.

I spent countless hours massaging our flagship X_TRADER product. I was a key player in bringing .NET into the program and integrating it with native C++ and MFC. This really made adding new windows easier (have you ever had to deal with Win32/MFC?!) and allowed for tight X_STUDY integration.

X_TRADER’s .NET toolbar. Click for larger image

In order to achieve maximum performance, I completely rewrote the Time & Sales window, both the stand-alone version and the one integrated into MD Trader. While working on T&S I learned a lot about grids and data virtualization.

Blazing fast Time & Sales window with the default (butt-ugly) X_TRADER color scheme

Every trader who copies and pastes links to and from Microsoft Excel goes through my Link Manager. That was a very fun project which definitely provided a ton of value for our customers. There are thousands of traders who design their strategies in Excel spreadsheets and want their numbers shown on grids, charts, ladders, and algos. And it works both ways: traders can copy cells from X_TRADER and insert them into Excel. The data flows flawlessly and everything just magically works.

X_STUDY OLE linking

Look ma! I has Excel linkage! (Click for larger image)

For fun I also added a preview to the Color settings window, which finally made it usable.
I wrote so many things for X_TRADER that I actually lost track, not to mention countless prototypes to fool around with new features. Definitely good times 🙂

 

I am the architect and author of TT API – our high performance trading API for Windows. It was a great few years designing and implementing all the different features. I certainly learned a lot. TT API lets you trade any exchange that TT supports, including Autospreader SE and Synthetic SE engines. You can really go to town. Just check out TT API samples on GitHub.

It’s also worth mentioning that internally our Algo SE server and ADL (Algo Design Lab) are both powered by my TT API.

 

Two years ago I was selected to start on TT’s future platform. The codename for it used to be “Nextrader”, but due to trademark conflicts a new name was chosen. I coined the new name and designed many low-level communication and security details (the EdgeServer-to-client path, authorization, protocols, etc.) which are now the foundation of the new system. I also led and directed the client-side team. The TT platform is written from the ground up using modern technologies. It’s optimized for speed. Trust me, you will feel it 🙂

In addition to the web-based interface, the TT platform will ship with an Android mobile app. That’s another one of my babies. I designed the flow and general layout of the main screens for both phone and tablet form factors. Our in-house designer Kevin made them look awesome. I’m sure you will love it! Working on mobile is challenging, as it forces you to think from a different perspective and face a whole new set of problems. Limited screen real estate, battery life, disconnect scenarios, and butt dialing (or shall I say: butt trading) are all issues you have to deal with. I had a blast 🙂

“Nextrader” for Android prototype. Side menu. Click for larger image. The name has since been changed to “TT Mobile”.

 

TT Mobile for Android. MD Trader on a phone. Click for larger picture.

 

I started writing the iOS version of TT Mobile with my team, but I didn’t get too far (though far enough to master Objective-C) before I was needed on the new Algo project. Currently I am working with Andrew Gottemoller on our next-generation trading API, which we internally call TT SDK. The plan is to allow our customers to hand-craft their algos and run them in our co-lo facilities for minimum latency and maximum speed.

 

TT SDK is lean. It is fast. Linux and plain C. It is powerful, yet feels delightfully simple. In addition to C we will eventually provide wrappers for higher-level languages. I, naturally, already have C++14 and Mono C# versions going. Stay tuned!

 

As you can see, I’ve been having fun. Trading Technologies is a great company, but its most important asset is definitely its people. Everyone is smart and easy-going. I made many good friends at TT and I’m happy to see them every day.
Let’s see what the future will bring 🙂

SafeBuffer and UnmanagedMemoryStream

At work I have a situation where some binary data is allocated in native code; it’s pretty much a raw char*. I would then like to access that same information from the managed side. But how? Of course I could just copy the data into a byte[], but that’s wasteful.

I did some googling around and found the obvious solution: UnmanagedMemoryStream. It can take raw pointers, or a SafeBuffer. The latter is basically a smart wrapper around a memory handle. Take a look at the code I came up with. I hope somebody will find it useful. It’s still a work in progress and could use some love, so any comments are welcome.
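The gist of the approach is a tiny SafeBuffer subclass wrapping the native pointer, which is then handed to UnmanagedMemoryStream. A minimal sketch of that idea (illustrative class and variable names, not the original snippet from this post) looks like:

```csharp
using System;
using System.IO;
using System.Runtime.InteropServices;

// Illustrative SafeBuffer over memory owned by the native side.
sealed class NativeMemoryBuffer : SafeBuffer
{
    public NativeMemoryBuffer(IntPtr ptr, ulong byteCount)
        : base(false)                 // false: we do not own the handle
    {
        SetHandle(ptr);
        Initialize(byteCount);        // tells SafeBuffer how many bytes are valid
    }

    // The native code frees the memory, so there is nothing to release here.
    protected override bool ReleaseHandle()
    {
        return true;
    }
}

static class Program
{
    static void Main()
    {
        // Stand-in for a char* coming from native code.
        IntPtr native = Marshal.AllocHGlobal(4);
        Marshal.Copy(new byte[] { 10, 20, 30, 40 }, 0, native, 4);

        using (var buffer = new NativeMemoryBuffer(native, 4))
        using (var stream = new UnmanagedMemoryStream(buffer, 0, 4, FileAccess.Read))
        {
            // Reads straight out of the native buffer; no byte[] copy.
            Console.WriteLine(stream.ReadByte());
        }

        Marshal.FreeHGlobal(native);
    }
}
```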

I needed to pass the std::unique_ptr as an r-value; otherwise the linker complains (it emits thunks for a non-existent copy constructor). I still need to clean this code up to handle custom deleters, but it serves my needs for now.

Edit: There is a bug in this code. Can you spot it? 😉

Async Patterns with C# – Handling Requests

This is a continuation of my earlier blog post regarding asynchronous operations in .NET. We are now going to discuss the various ways of issuing and receiving results from asynchronous requests.

There are three main ways of issuing an asynchronous request and later receiving the result. They are described in the sections below.

Callback methods

The pattern of specifying a callback method goes way, way back, and it is still widely used today. The idea is simple:

  • Call a method that starts an asynchronous operation
    • Pass in a delegate to the function you want called when the operation completes
    • Optionally specify user data
  • The specified callback method gets called asynchronously. The arguments passed to it usually contain:
    • The result of the async operation
    • An error indication (an Exception object or error code)
    • The user data

Depending on the needs, the callback can be a simple delegate or an interface.

Delegates

The approach of passing delegates is convenient, because one could easily take advantage of closures/lambdas. That way objects in scope become available to the “callback” method.

The main disadvantage is that it is difficult to “cancel” the asynchronous operation. There is also no “identity” to the operation (unless we consider the pair <delegate, user data>).

Although there is no technical reason for it, in practice only one delegate gets passed as a callback.
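A bare-bones sketch of the delegate flavor (all names here are invented for illustration; ThreadPool stands in for real async work):

```csharp
using System;
using System.Threading;

static class PriceService
{
    // Starts the async operation; the callback receives the result,
    // an error indication, and the user data.
    public static void BeginGetPrice(string symbol, object userData,
        Action<decimal, Exception, object> callback)
    {
        ThreadPool.QueueUserWorkItem(_ =>
        {
            try
            {
                decimal price = 42.5m;                 // pretend we fetched it
                callback(price, null, userData);
            }
            catch (Exception ex)
            {
                callback(0m, ex, userData);
            }
        });
    }
}
```

At the call site, a lambda closing over local state makes those objects available inside the callback: `PriceService.BeginGetPrice("MSFT", null, (price, error, _) => Console.WriteLine(price));`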

Interfaces

When viewed simply as “collections of methods,” interfaces could be considered a special case of “callbacks” or, as Java folks like to call them, listeners. In Java it is convenient to pass interfaces to receive callbacks, because Java allows for the creation of inline anonymous classes. This is unfortunately not the case in C#, which is why the “request object” is a more interesting pattern for the .NET crowd. More on that in the next section.

Since interfaces represent concrete objects, they do have identities. This could allow for “cancelling” async requests by passing the same observer instance that was used when making the request.

Request objects

This is a very common pattern that most coders are familiar with. The premise is simple:

  • Create an object representing the asynchronous operation
  • Optionally set properties on the request object, for instance to store some user data
  • Hook up the event handler (or handlers)
  • Call a method starting the asynchronous action
  • Receive the event with the results and extract the information from the event arguments

The main advantage of using a request object is that the object itself is responsible for emitting the events signaling the completion or failure of the async operation. In other words, the “sender” passed to the event handling function is the request. Users can store custom data with the request and easily get to it from the handler. This could be accomplished via composition (for example, a Tag property) or via inheritance (by allowing users to extend the request object).
Depending on the needs, the request object can have multiple events, for instance one for signaling completion and one for errors.
The request object could also optionally provide a “Cancel” method that would stop the asynchronous action, or at least prevent the events from being fired.
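Putting those pieces together, a self-submitting request object might look something like this sketch (hypothetical names; ThreadPool stands in for the real async work):

```csharp
using System;
using System.Threading;

class PriceRequest
{
    public string Symbol { get; set; }
    public object Tag { get; set; }          // user data travels with the request

    public event Action<PriceRequest, decimal> Completed;
    public event Action<PriceRequest, Exception> Failed;

    volatile bool cancelled;

    public void Submit()
    {
        ThreadPool.QueueUserWorkItem(_ =>
        {
            try
            {
                decimal price = 42.5m;       // pretend we fetched it
                var handler = Completed;
                if (!cancelled && handler != null)
                    handler(this, price);    // the "sender" is the request itself
            }
            catch (Exception ex)
            {
                var handler = Failed;
                if (!cancelled && handler != null)
                    handler(this, ex);
            }
        });
    }

    // At minimum, prevents the events from firing.
    public void Cancel() { cancelled = true; }
}
```

The handler can cast the sender back to PriceRequest and read Symbol or Tag, which is exactly the “identity” that plain delegate callbacks lack.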

One can classify request objects depending on how they are created and who actually starts the asynchronous operation. The main categories are briefly described below.

Self-created, self-submitting

The user creates the request object by calling its constructor directly. The method starting the async action is on the request object itself.
This variation of the pattern is convenient, as it allows for extending the request object through inheritance.

Factory-created, self-submitting

The user creates the request object by calling a factory method. Typically the factory is some kind of main “API” object that contains multiple operations. Once created, the request object is self-sufficient and can be “submitted” using its own method.
Since the request is constructed by a factory, it usually means that extending the request object is not allowed.

Self-created, factory-submitted

The user is able to create the request object by calling its constructor. This could potentially allow for inheritance scenarios.
Asynchronous action is started by calling a method on another object, which usually ends up being the main “API” object.

Hybrid – BeginInvoke/EndInvoke

The BeginInvoke/EndInvoke pattern that is built into the .NET Framework can be seen as a hybrid of the callback method and the request object. The usage is as follows:

  • Call a “BeginInvoke” method to start an async operation
    • The method accepts a callback delegate and user data
    • The method returns an IAsyncResult instance that the caller needs to hold on to
  • The IAsyncResult can be used to wait until the async operation completes
  • When the callback function gets called:
    • Call “EndInvoke” and pass it the IAsyncResult instance acquired from “BeginInvoke”
    • “EndInvoke” will return the outcome of the async operation
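Here is a quick sketch using a delegate’s built-in BeginInvoke/EndInvoke (Square is just a stand-in for real work):

```csharp
using System;

static class Demo
{
    static int Square(int x) { return x * x; }

    static void Main()
    {
        Func<int, int> work = Square;

        // Begin: starts the call on a thread-pool thread; returns IAsyncResult.
        IAsyncResult ar = work.BeginInvoke(7, asyncResult =>
        {
            // End: retrieves the outcome (or rethrows a captured exception).
            int result = work.EndInvoke(asyncResult);
            Console.WriteLine(result);       // 49
        }, null);

        // The IAsyncResult also lets us block until the operation completes.
        ar.AsyncWaitHandle.WaitOne();
    }
}
```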

Async patterns with C#

Lately I have been doing a lot of asynchronous development. At work I’m designing a trading API that is very asynchronous in nature, and in my spare time I have been messing around with Silverlight.

I would like to share a few different patterns for retrieving data asynchronously that I have encountered. Hopefully my blog posts will be a good resource for you.

Diamond – Java 7 new feature

I spotted an article on InfoQ summarizing the new features of the upcoming JDK 7. Coming from a Java background, I was really curious to see what’s in store. The article noted several “small language enhancements”, so I binged around (yes, I actually used Bing) and found the presentation slides from JavaOne by Joe Darcy.

While many of the enhancements make sense, I was really stunned by the “diamond operator” (page 64 of the slides). The point of it is to reduce typing when declaring generic types. Why specify the type twice, if you can do it in one spot?

This reminded me of the var keyword in C# and C++0x’s new incarnation of the auto keyword. The difference is that the diamond operator occurs on the right-hand side, while the type is specified on the left. It doesn’t seem like a big deal, but in my opinion this idea is very short-sighted.

By having the type on the right-hand side, both C# and C++ allow much more flexibility. They can handle type inference not only for generics, but for pretty much anything. Specifying the type on the right is very natural, and mimics assignment “from right to left”. It seems silly of Java to ignore this obvious and proven strategy and instead introduce a new operator. It will be confusing for programmers who work with many different languages, and will make Java feel out of place and underpowered. For comparison, this is what Java could have had if it adopted the “auto” keyword:
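To make the comparison concrete, here is what Java 7’s diamond actually gives you, with the “auto” alternative shown in a comment (that syntax is, of course, made up):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class DiamondDemo {
    public static void main(String[] args) {
        // Java 7 diamond: type arguments are inferred on the right-hand side,
        // but the full type must still be spelled out on the left.
        Map<String, List<Integer>> scores = new HashMap<>();

        List<Integer> aliceScores = new ArrayList<>();
        aliceScores.add(42);
        scores.put("alice", aliceScores);

        // What left-hand-side inference could have looked like instead
        // (hypothetical, mirroring C#'s var and C++0x's auto):
        //   auto scores = new HashMap<String, List<Integer>>();

        System.out.println(scores.get("alice").get(0));   // prints 42
    }
}
```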

The above (or their equivalents) are already available in C# 3.0 and in draft C++0x. Unfortunately (judging by Joe’s comments) Java 7 will be released without such type inference mechanisms. Disappointing, very disappointing…