Why Developer Marketplaces & Software Outsourcers No Longer Work

About Me: I’ve created three software companies: Five9 (IPO), DoctorBase (cash sale) and JetBridge (current co), built in large part with offshore software developers. I’ve made a lot of expensive mistakes you should avoid.

TLDR: Adopt something like our Developer Handbook and give it to every developer before the interview. After reading it, a majority of offshore developers decline to interview with us, which is nice because it saves my HR team lots of time by automatically weeding out the bottom 75% (marketplaces and outsourcing companies recruit the bottom 50%).

Most customers who go to marketplaces or outsourcers for competent software developers do not get what is advertised, often leading to expensive failure for non-technical founders and professional risk for enterprise managers. Here’s how the schemes work (and how to solve for them) –

“Hire ex-Googlers on our marketplace!”

Why would a software engineer who makes $250k+ for FAANG companies (and can WFH) lurk on marketplaces to work on an idiot’s idea with no stability or benefits at a fraction of the wages?

They don’t.

This marketing gimmick works because it’s human nature to go online and believe we’ve found a $100 bill for $20, but the international labor market, especially post-Covid, has become extremely efficient. Even average developers get multiple messages from recruiters every week, most of them for remote work – they no longer need to go to marketplaces.

So what kind of developers go onto marketplaces looking for work?

The kind that don’t know how to reverse a string (basic stuff) or will pull a bait-and-switch on you (the developer you talk to will not be the one doing the work).

Marketplaces can be very effective for sourcing UI/UX designers, graphic designers, mechanical turk tasks and more, but competent software developers are increasingly not on them.

“Two weeks free if you’re not satisfied!” is often a marketing gimmick by marketplaces. Two weeks is simply not enough (especially for non-technical founders) to determine if a developer is good.

If you find a developer you like on a marketplace, ask them if they’re willing to take a technical pair programming session with an onshore CTO you trust.

Try it – the response is often amusing.

And if you get an amusing response, walk away with your money still in your bank account.

“We take you from MVP to IPO!”

Almost every software outsourcer I know (except the huge ones) is trying to build their own software apps. Outsourcing can be a grueling business and many SMB outsourcers dream of becoming a SaaS company. If they knew how to build successful MVPs they wouldn’t still be in outsourcing.

Because outsourcing is a labor arbitrage game, outsourcing companies aim for a 3:1 ratio of fees to wages. Meaning if an outsourcing company charges $75/hour, they’re paying the developers about $25/hour.

So why would a competent developer work for an outsourcer that takes 66% of the revenue created when they’re doing all of the work post-sale?

They don’t.

Enterprise outsourcing companies are places for semi-competent software developers to hide on over-staffed teams (and not give a shit), the exact kind you don’t want. Besides, the large outsourcing companies only take “digital transformation” projects that have $1M+ budgets (just look at the earnings reports of the publicly traded outsourcing companies).

SMB outsourcing companies are a great place for project managers and junior developers to cut their teeth (on your dollar).

This gimmick works because enterprises often have legacy projects that smart, ambitious software developers don’t want to work on (like 20-year-old banking applications built on J2EE), and SMB outsourcing companies hire a “Product Manager” who can speak decent English (their developers mostly will not).

Most MVPs do not need a Product/Project Manager, they need a competent Tech Lead (a senior fullstack engineer who has led teams before). Ask any experienced Tech Lead and they will confirm.

If an SMB outsourcing company quotes you for both the developers’ time as well as a Project/Product Manager, walk away. It’s a technique used to hide the fact that they don’t have competent developers.

BTW, most competent developers speak English (since most of the educational content, workshops, conferences, etc are in English). Which also makes sense because good developers are supposed to be good at learning languages.

So what’s the solution(s)?

In my experience, and according to all of our data, anywhere from 3% – 9% of offshore software developers are commercially competent. Since software development is a non-licensed profession that often makes 10x more than most jobs in emerging economies, this makes sense.

Outsource the technical interviews to a highly competent 3rd party.

If you’re a technical manager working in an enterprise, you likely don’t have the time (or risk profile) to conduct 11 – 30 technical interviews for every 1 offer to an offshore developer. And if your company needs to make 2 offers for every 1 accepted offer, double that number of interviews. There are cost savings to be had going offshore, but the risk profile is much greater.

If you’re a non-technical founder, and if you have the money, first hire a technical co-founder or technical employee onshore (or an elite dev offshore). I get it, believe me: it’s incredibly difficult and expensive, and it’s easier to just hire someone on a marketplace to test your MVP idea.

The problem is this – it is highly unlikely your idea is the one with Product Market Fit. According to David Rusenko (sold his startup Weebly for $365M to Square) it will likely take 20-30 iterations of your idea before achieving a product that has any chance of making you money. This requires an in-house technical founder/employee who can then manage an offshore team. Relying solely on an offshore team just won’t work.

In 23 years of being in startups, I’ve never seen a non-technical founder without a technical founder ‘get rich’ by partnering with an outsourcing company. Not once.

And if you can’t find a technical founder, honestly ask yourself – how will you find your first 10 enterprise customers or your first 10,000 consumer users? How will you sell investors or sell your company?

And if you’re an enterprise hiring manager, outsource the professional risk by having a highly competent 3rd party conduct the first round of technical interviews for your offshore dev team.

Feel free to disagree (I’m always open to learning) at john.k@jetbridge.com

Mastering JavaScript Tree-Shaking

One fantastic feature of JavaScript (compared to, say, Python) is that it is possible to bundle your code for packaging, removing everything that is not needed to run it. We call this “tree-shaking” because the stuff you aren’t using falls out, leaving the strongly-attached bits. This reduces the size of your output, whether it’s an NPM module, code to run in a browser, or a NodeJS application.

By way of illustration, suppose we have a script that imports a function from a library. Here it calls apple1() from lib.ts:
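A minimal sketch of what these two files might look like (the function bodies here are just placeholders):

// lib.ts
export function apple1() {
  return "apple one";
}

export function apple2() {
  return "apple two";
}

// script.ts
import { apple1 } from "./lib";

console.log(apple1());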

If we bundle script.ts using esbuild:

esbuild --bundle script.ts > script-bundle.js

Then we get the resulting output:
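With esbuild’s defaults the bundle will look something like this (the exact output varies by esbuild version and settings):

// script-bundle.js
(() => {
  // lib.ts
  function apple1() {
    return "apple one";
  }

  // script.ts
  console.log(apple1());
})();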

The main point here being that only apple1 is included in the bundled script. Since we didn’t use apple2 it gets shaken out like an overripe piece of fruit.

Motivation

There are many reasons this is a valuable feature, the main one being performance. Less code means less time spent parsing JavaScript when your code is executed. It means your web app loads faster, your Docker image is smaller, your serverless function cold start time is reduced, and your NPM module takes up less disk space.

One robust application architecture is a serverless architecture in which your application is composed of discrete functions. These functions can act like an API if you put an API gateway or GraphQL server in front of them to invoke the functions for different routes and queries, or they can be triggered by messages in queues, files being uploaded to a bucket, or regularly scheduled events. In this setup each function is self-contained, containing only whatever code is needed for its specific functionality and no unrelated code. This is in contrast to a monolith or microservice where the entire application must be loaded up in order to handle a request. No matter how large your project gets, each function remains about the same size.

I build applications using Serverless Stack, which has a terrific developer experience focused on building serverless applications on AWS with TypeScript and CDK in a local development environment. Under the hood it uses esbuild. Let’s take a peek under that hood.

Mechanics

Conceptually tree-shaking is pretty straightforward; just throw away whatever code our application doesn’t use. However there are a great number of caveats and tricks needed to debug and finesse your bundling.

Tree-shaking is a feature provided by all modern JavaScript bundlers. These include tools like Webpack, Turbopack, esbuild, and rollup. If you ask them to produce a bundle they will do their best to remove unused code. But how do they know what is unused?

The fine details may vary from bundler to bundler and between targets but I’ll give an overview of salient properties and options to be aware of. I’ll use the example of using esbuild to produce node bundles for AWS Lambda but these concepts apply generally to anyone who wants to reduce their bundle size.

Measuring

Before trying to reduce your bundle size you need to look at what’s being bundled, how much space everything takes up, and why. There are a number of tools at our disposal which help visualize and trace the tree-shaking process.

Bundle Buddy

This is one of the best tools for analyzing bundles visually and it has very rich information. You will need to ask your bundler to produce a meta-analysis of the bundling process and run Bundle Buddy on it (it’s all local, browser-based). It supports webpack, create-react-app, rollup, rome, parcel, and esbuild. For esbuild you specify the --metafile=meta.json option.
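A typical invocation might look like this (the file names are just placeholders):

esbuild --bundle --metafile=meta.json script.ts --outfile=out.js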

When you upload your metafile to Bundle Buddy you will be presented with a great deal of information. Let’s go through what some of it indicates.

Bundle Buddy in action

Let’s start with the duplicate modules.

This section lets you know that you have multiple versions of the same package in your bundle. This can be due to your dependencies or your project depending on different versions of a package which cannot be resolved to use the same version for some reason.

Here you can see I have versions 3.266.0 and 3.272 of the AWS SDK and two versions of fast-xml-parser. The best way to hunt down why different versions may be included is to simply ask your package manager. For example you can ask pnpm:

$ pnpm why fast-xml-parser

dependencies:
@aws-sdk/client-cloudformation 3.266.0
├─┬ @aws-sdk/client-sts 3.266.0
│ └── fast-xml-parser 4.0.11
└── fast-xml-parser 4.0.11
@aws-sdk/client-cloudwatch-logs 3.266.0
└─┬ @aws-sdk/client-sts 3.266.0
  └── fast-xml-parser 4.0.11
...
@prisma/migrate 4.12.0
└─┬ mongoose 6.8.1
  └─┬ mongodb 4.12.1
    └─┬ @aws-sdk/credential-providers 3.282.0
      ├─┬ @aws-sdk/client-cognito-identity 3.282.0
      │ └─┬ @aws-sdk/client-sts 3.282.0
      │   └── fast-xml-parser 4.1.2
      ├─┬ @aws-sdk/client-sts 3.282.0
      │ └── fast-xml-parser 4.1.2
      └─┬ @aws-sdk/credential-provider-cognito-identity 3.282.0
        └─┬ @aws-sdk/client-cognito-identity 3.282.0
          └─┬ @aws-sdk/client-sts 3.282.0
            └── fast-xml-parser 4.1.2

So if I want to shrink my bundle I need to figure out how to get both @aws-sdk/client-* and @prisma/migrate to agree on a common version to share, so that only one copy of fast-xml-parser needs to end up in my bundle. Since this function shouldn’t even be importing @prisma/migrate (and certainly not mongodb) I can use that as a starting point for tracking down an unneeded import, which we will discuss shortly. Alternatively you can open a PR for one of the dependencies to use a looser version spec (e.g. ^4.0.0) for fast-xml-parser or @aws-sdk/client-sts.
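If you use pnpm, one way to force a single shared version is an override in your package.json; this is only a sketch, and the version shown is an assumption that both dependency trees can actually accept:

{
  "pnpm": {
    "overrides": {
      "fast-xml-parser": "4.1.2"
    }
  }
}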

With duplicate modules out of the way, the main meat of the report is the bundled modules. This will usually be broken up into your code and stuff from node_modules:

When viewing in Bundle Buddy you can click on any box to zoom in for a closer look. We can see that of the 1.63MB that comprises our bundle, 39K is for my actual function code:

This is interesting but not where we need to focus our efforts.

Clearly the prisma client and runtime are taking up sizable parcels of real-estate. There’s not much you can do about this besides file a ticket on GitHub (as I did here with much of this same information).

But looking at our node_modules we can see at a glance what is taking up the most space:

This is where you can survey what dependencies are not being tree-shaken out. You may have some intuitions about what belongs here, doesn’t belong here, or seems too large. For example in the case of my bundle the two biggest dependencies are on the left there: @redis-client (166KB) and gremlin (97KB). I do use redis as a caching layer for our Neptune graph database, and gremlin is the client library used to query that database. Because I know my application and this function, I know that this function never needs to talk to the graph database so it doesn’t need gremlin. This is another starting point for me to trace why gremlin is being required. We’ll look at that later on when we get into tracing. Also noteworthy is that even though I only use one redis command, the code for handling all redis commands gets bundled, adding a cost of 109KB to my bundle.

Finally the last section in the Bundle Buddy readout is a map of what files import other files. You can click in for what looks like a very interesting and useful graph but it seems to be a bit broken. No matter, we can see this same information presented more clearly by esbuild.

esbuild --analyze

Your bundler can also operate in a verbose mode where it tells you WHY certain modules are being included in your bundle. Once you’ve determined what is taking up the most space in your bundle or identified large modules that don’t belong, it may be obvious to you what the problem is and where and how to fix it. Oftentimes it may not be so clear why something is being included. In my example above of including gremlin, I needed to see what was requiring it.

We can ask our friend esbuild:

esbuild --bundle --analyze --analyze=verbose script.ts --outfile=tmp.js 2>&1 | less

The important bit here being the --analyze=verbose flag. This will print out all traces of all imports so the output gets rather large, hence piping it to less. It’s sorted by size so you can start at the top and see why your biggest imports are being included. A couple down from the top I can see what’s pulling in gremlin:

   ├ node_modules/.pnpm/gremlin@3.6.1/node_modules/gremlin/lib/process/graph-traversal.js ─── 13.1kb ─── 0.7%
   │  └ node_modules/.pnpm/gremlin@3.6.1/node_modules/gremlin/index.js
   │     └ backend/src/repo/gremlin.ts
   │        └ backend/src/repo/repository/skillGraph.ts
   │           └ backend/src/repo/repository/skill.ts
   │              └ backend/src/repo/repository/vacancy.ts
   │                 └ backend/src/repo/repository/candidate.ts
   │                    └ backend/src/api/graphql/candidate/list.ts

This is extremely useful information for tracking down exactly what in your code is telling the bundler to pull in this module. After a quick glance I realized my problem. The file repository/skill.ts contains a SkillRepository class with methods for loading a vacancy’s skills; it is used by the vacancy repository, which is eventually used by my function. Nothing in my function calls the SkillRepository methods which need gremlin, but it does import the SkillRepository class. What I foolishly assumed was that the methods on the class I don’t call would get tree-shaken out. They aren’t. If you import a class, you will be bringing in all possible dependencies any method of that class brings in. Good to know!
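A contrived sketch of the pattern that bit me (the file and method names are illustrative, not the actual repository code):

// repository/skill.ts
import gremlin from "gremlin"; // heavy dependency

export class SkillRepository {
  // Only this method actually needs gremlin...
  connect() {
    return new gremlin.driver.DriverRemoteConnection("ws://localhost:8182/gremlin");
  }

  // ...but methods that never touch it ride along with the class.
  skillNames(): string[] {
    return ["typescript", "go"];
  }
}

// list.ts never calls connect(), yet importing the class is enough
// for the bundler to keep the gremlin import above.
import { SkillRepository } from "./repository/skill";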

@next/bundle-analyzer

This is a colorful but limited tool for showing you what’s being included in your NextJS build. You add it to your next.config.js file and when you do a build it will pop open tabs showing you what’s being bundled in your backend, frontend, and middleware chunks.

The amount of bullshit @apollo/client pulls in is extremely aggravating to me.

Modularize Imports

This tool was helpful for learning that top-level Material-UI imports such as import { Button, Dialog } from "@mui/material" will pull in ALL of @mui/material into your bundle. Perhaps this is because NextJS is still stuck on CommonJS, although that is pure speculation on my part.

While you can fix this by assiduously writing import Button from "@mui/material/Button" everywhere, this is hard to enforce and tedious. There is a NextJS config option to rewrite such imports:

  modularizeImports: {
    "@mui/material": {
      transform: "@mui/material/{{member}}",
    },
    "@mui/icons-material": {
      transform: "@mui/icons-material/{{member}}",
    },
  },

Webpack Analyzer

Has a spiffy graph of imports and works with Webpack.

Tips and Tricks

CommonJS vs. ESM

One factor that can affect bundling is using CommonJS vs. EcmaScript Modules (ESM). If you’re not familiar with the difference, the TypeScript documentation has a nice summary and the NodeJS package docs are quite informative and comprehensive. But basically CommonJS is the “old, busted” way of defining modules in JavaScript and makes use of things like require() and module.exports, whereas ESM is the “cool, somewhat less busted” way to define modules and their contents using import and export keywords.
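For reference, here is the same tiny module written in both styles (the config-loading function is just an example):

// CommonJS
const { readFileSync } = require("fs");

function loadConfig() {
  return JSON.parse(readFileSync("config.json", "utf8"));
}

module.exports = { loadConfig };

// ESM
import { readFileSync } from "fs";

export function loadConfig() {
  return JSON.parse(readFileSync("config.json", "utf8"));
}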

Tree-shaking with CommonJS is possible but it is woollier due to the more procedural way of defining exports from a module, whereas ESM exports are more declarative. The esbuild tool is specifically built around ESM; the docs say:

This way esbuild will only bundle the parts of your packages that you actually use, which can sometimes be a substantial size savings. Note that esbuild’s tree shaking implementation relies on the use of ECMAScript module import and export statements. It does not work with CommonJS modules. Many packages on npm include both formats and esbuild tries to pick the format that works with tree shaking by default. You can customize which format esbuild picks using the main fields and/or conditions options depending on the package.

So if you’re using esbuild, it won’t even bother trying unless you’re using ESM-style imports and exports in your code and your dependencies. If you’re still typing require then you are a bad person and this is a fitting punishment for you.

As the documentation highlights, there is a related option called mainFields which affects which version of a package esbuild resolves. There is a complicated system for defining exports in package.json which allows a module to contain multiple versions of itself depending on how it’s being used. It can have one entrypoint if it’s require‘d, a different one if imported, or another if used in a browser.
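A simplified sketch of such a package.json (the condition names are real; the package name and paths are made up):

{
  "name": "some-package",
  "main": "./dist/index.cjs",
  "module": "./dist/index.esm.js",
  "exports": {
    ".": {
      "browser": "./dist/index.browser.js",
      "import": "./dist/index.esm.js",
      "require": "./dist/index.cjs"
    }
  }
}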

The upshot is that you may need to tell your bundler explicitly to prefer the ESM (“module”) version of a package instead of the fallback CommonJS version (“main”). With esbuild the option looks something like:

esbuild --main-fields=module,main --bundle script.ts 

Setting this will ensure the ESM version is preferred, which may result in improved tree-shaking.

Minification

Tree-shaking and minification are related but distinct optimizations for reducing the size of your bundle. Tree-shaking eliminates dead code, whereas minification rewrites the result to be smaller, for example replacing a function identifier like “frobnicateMajorBazball” with “a1”.

Usually enabling minification is a simple option in your bundler. This bundle is 2.1MB minified, but 4.5MB without minification:

❯ pnpm exec esbuild --format=esm --target=es2022 --bundle --platform=node --main-fields=module,main  backend/src/api/graphql/candidate/list.ts --outfile=tmp.js

  tmp.js  4.5mb ⚠️

⚡ Done in 239ms


❯ pnpm exec esbuild --minify --format=esm --target=es2022 --bundle --platform=node --main-fields=module,main  backend/src/api/graphql/candidate/list.ts --outfile=tmp-minified.js

  tmp-minified.js  2.1mb ⚠️

⚡ Done in 235ms

Side effects

Sometimes you may want to import a module not because it has a symbol your code makes use of but because you want some side effect to happen as a result of importing it. This may be an import that extends jest matchers, initializes a library like Google Analytics, or performs some other initialization when the file is imported.

Your bundler doesn’t always know what’s safe to remove. If you have:

import './lib/initializeMangoids'

In your source, what should your bundler do with it? Should it keep it or remove it in tree-shaking?

If you’re using Webpack (or terser) it will look for a sideEffects property in a module’s package.json to check if it’s safe to assume that simply importing a file does not do anything magical:

{
  "name": "your-project",
  "sideEffects": false
}
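If only a few files in a package have import-time side effects, you can list just those instead of disabling the check entirely (the paths below are examples):

{
  "name": "your-project",
  "sideEffects": ["./src/polyfills.js", "*.css"]
}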

Code can also be annotated with /*#__PURE__*/ to inform the minifier that this code has no side effects and can be tree-shaken if not referred to by included code.

var Button$1 = /*#__PURE__*/ withAppProvider()(Button);

Read about it in more detail in the Webpack docs.

Externals

Not every package you depend on needs to necessarily be in your bundle. For example in the case of AWS lambda the AWS SDK is included in the runtime. This is a fairly hefty dependency so it can shave some major slices off your bundle if you leave it out. This is done with the external flag:

❯ pnpm exec esbuild --minify --format=esm --target=es2022 --bundle --platform=node --main-fields=module,main  backend/src/api/graphql/candidate/list.ts --outfile=tmp-minified.js

  tmp-minified.js  2.1mb 


❯ pnpm exec esbuild --external:aws-sdk --minify --format=esm --target=es2022 --bundle --platform=node --main-fields=module,main  backend/src/api/graphql/candidate/list.ts --outfile=tmp-minified.js

  tmp-minified.js  1.8mb 

One thing worth noting here is that there are different versions of packages depending on your runtime language and version. Node 18 contains the AWS v3 SDK (--external:@aws-sdk/*) whereas previous versions contain the v2 SDK (--external:aws-sdk). Such details may be hidden from you if using the NodejsFunction CDK construct or SST Function construct.

On the CDK slack it was recommended to me to always bundle the AWS SDK in your function because you may be developing against a different version than what is available in the runtime. Or you can pin your package.json to use the exact version in the runtime.

Another reason to use externals is if you are using a layer. You can tell your bundler that the dependency is already available in the layer so it doesn’t need to be bundled. I use this for prisma and puppeteer.
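With esbuild that just means adding more --external flags; a sketch using the packages from my project:

esbuild --bundle --platform=node --external:@prisma/client --external:puppeteer backend/src/api/graphql/candidate/list.ts --outfile=tmp.js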

Performance Impacts

For web pages the performance impacts are instantly noticeable with a smaller bundle size. Your page will load faster both over the network and in terms of script parsing and execution time.

Another way to get an idea of what your node bundle is actually doing at startup is to profile it. I really like the 0x tool which can run a node script and give you a flame graph of where CPU time is spent. This can be an informative visualization and let you dig into what is being called when your script runs:
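Running it against a bundled entrypoint is a one-liner; something like this should work (tmp.js being the bundle produced earlier):

npx 0x -- node tmp.js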

For serverless applications you can inspect the cold start (“initialization”) time for your function on your cloud platform. I use the AWS X-Ray tracing tool. Compare before and after some aggressive bundle size optimizations:

The cold-start time went from 2.74s to 1.60s. Not too bad.

Serverless Bot. Infrastructure with Pulumi

I like to learn new things: technologies, languages, countries, food, and experiences. All of that is a big part of my life’s joy.

While thinking about my next OKR goal with folks from JetBridge, I came up with an idea for how to build something interesting that could help me learn new words and phrases in a more productive way.

The main idea sounds simple: implement a Telegram bot for learning new words using the Anki spaced-repetition algorithm. To make it more interesting (and complex 🙃) I decided that it needs to be serverless with AWS Lambda and have its infrastructure managed by Pulumi. This article describes my approach.

The infrastructure contains:

  • API Gateway to handle incoming requests and pass them forward
  • Lambda to process user commands
  • RDS Postgres database instance for storage
Infrastructure

Pulumi allows managing infrastructure as code in a convenient manner using any programming language. It’s great when you can define and configure the infrastructure using the same language as the app. In my case, it is Go.

Another benefit that Pulumi provides out of the box is using values directly from created resources. For example, you create an RDS instance, and then its endpoint and DB properties are available to be used as Lambda env variables. It means you don’t need to hard-code such variables or store them anywhere; Pulumi manages them for you. Even when some resource is changed (for example, a major RDS upgrade that creates a new instance) all related infrastructure will be kept up to date.

A bigger project would require a much more complex definition and resource stack, but for my purpose it’s rather small and can be kept in just one file.

Resources created by Pulumi

One more cool feature of Pulumi is managing secrets. Usually you need to configure and use some service for it (AWS Secrets Manager, Vault, etc.) but with Pulumi you get it out of the box with a simple command:

pulumi config set --secret DB_PASSWORD <password>

And then use it as an environment variable or any other specified property of your app. Additionally, you can configure a key for encryption, which is stored in Pulumi by default but can easily be changed to a key from AWS KMS or another available provider:
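For example, switching an existing stack to a KMS-backed secrets provider looks something like this (the key alias and region are placeholders):

pulumi stack change-secrets-provider "awskms://alias/pulumi-secrets?region=eu-west-1"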

Exported properties allow us to see the most important infra params directly in the CLI or pulumi interface:

ctx.Export("DB Endpoint", pulumi.Sprintf("%s", db.Endpoint))
Exported outputs

To deploy a Lambda update, the handler needs to be built and archived. Then Pulumi just sends this archive as the Lambda source:

 function, err := lambda.NewFunction(
   ctx,
   "talkToMe",
   &lambda.FunctionArgs{
    Handler: pulumi.String("handler"),
    Role:    role.Arn,
    Runtime: pulumi.String("go1.x"),
    Code:    pulumi.NewFileArchive("./build/handler.zip"),
    Environment: &lambda.FunctionEnvironmentArgs{
     Variables: pulumi.StringMap{
      "TG_TOKEN":    c.RequireSecret("TG_TOKEN"),
      "DB_PASSWORD": c.RequireSecret("DB_PASSWORD"),
      "DB_USER":     db.Username,
      "DB_HOST":     db.Address,
      "DB_NAME":     db.DbName,
     },
    },
   },
   pulumi.DependsOn([]pulumi.Resource{logPolicy, networkPolicy, db}),
  )

To have it as a simple command I use a Makefile:

build::
 GOOS=linux GOARCH=amd64 go build -o ./build/handler ./src/handler.go
 zip -j ./build/handler.zip ./build/handler

Then I can trigger a deploy with just make build && pulumi up -y

After that Pulumi determines whether the archive has changed and sends an update if so. For my simple project it takes about 20s, which is quite fast.

GOOS=linux GOARCH=amd64 go build -o ./build/handler ./src/handler.go
zip -j ./build/handler.zip ./build/handler
updating: handler (deflated 45%)

View Live: https://app.pulumi.com/<pulumi-url>

     Type                    Name                Status            Info
     pulumi:pulumi:Stack     studyAndRepeat-dev
 ~   └─ aws:lambda:Function  talkToMe            updated (12s)     [diff: ~code]


Outputs:
    DB Endpoint   : "<endpoint>.eu-west-1.rds.amazonaws.com:5432"
    Invocation URL: "https://<endpoint>.execute-api.eu-west-1.amazonaws.com/dev/{message}"

Resources:
    ~ 1 updated
    12 unchanged

Duration: 20s

Here you can take a look at the whole infrastructure code.

In conclusion, I can say that Pulumi is a great tool for writing and keeping infrastructure structured, consistent and readable. I used CloudFormation and Terraform previously, and Pulumi gives me a more user-friendly and pleasant way to have infrastructure as code.

5 tips to make your DevOps team thrive

Written By Vinicius Schettino

DevOps culture has made a real dent in the traditional way organizations used to make software. Heavily inspired by the Agile and Lean movements, the DevOps trend pushes against siloed knowledge, slow and manual deployment pipelines, and strong divisions between teams such as Development, Testing and Operations. The reported results have been more than eye-catching: faster deliveries and faster feedback, happier engineers and better outcomes from the customer/business perspective.

Background

Despite the incentives to adopt DevOps practices, companies still struggle to achieve their goals when transforming their internal and customer-facing processes. It turns out that the path is harder than most think. In order to really transform outdated practices into collaborative, agile and automated approaches, the shift cannot be confined to the technical aspects of software development. A DevOps transformation must encompass all organization levels, from developers to executive management.

Companies often try to achieve that by creating DevOps teams responsible for steering and introducing DevOps practices across the organization. This approach has several pitfalls and threats to its efficacy. Often the pushback from those maintaining power structures and the status quo diminishes the outcomes of that strategy. Instead of really transforming the development process, they often get a limited effect in specific phases (such as deployment) and automation. When you think of software development as a complex system, in practice this means the outcomes are quite limited and the investment might not even pay off.

There are several strategies that can help avoid said pitfalls and increase the impact of a DevOps team on an organization. In this article we present some tips that can be used to propose, improve and organize DevOps teams in order to make the most of them.

1) Make the effort collaborative

The first aspect of DevOps that must be made clear to everyone is that it’s not a thing for only one team to do. The whole idea behind DevOps is collaboration: avoiding handoffs and complex communication between those working on software. This means it’s impossible to expect a company to “be DevOps” when just one group is trying to put DevOps practices in place. Those practices should be widespread, from product management to security and monitoring. Each step towards working software must embrace the core concepts of DevOps.

Please note that this is not to say DevOps teams shouldn’t exist. On the contrary, the DevOps world has come a long way, with enough tools, techniques and approaches that we can’t expect everyone to master such a variety of skills. The intersection between “Dev” and “Ops” might be big enough to have people dedicated to it. It’s more about what the DevOps team wants to achieve and how they are going to get there. The posture is everything: instead of working alone in the dark and owning everything CI/CD/Ops/Security related, the team must pursue exactly the contrary: working alongside product teams, making decisions in the open, pair/mob programming and much more. It might sound simple, but the collaborative approach is probably the most important and most ignored aspect of a DevOps transformation. In summary, the DevOps team cannot be a silo for DevOps knowledge. Which leads us to the next topic…

2) Do not be the traditional Ops team

Prior to the DevOps era, most organizations had at least two different teams involved in software production: Development and Operations. While the first would be responsible for product thinking, coding, testing and quality assurance, the second would work on infrastructure, delivery, security, reliability and monitoring. That way ideas were transformed into working software through a process resembling an assembly line. Depending on factors such as size, maturity, reach, industry sector and even technology, those teams were subdivided into a handful of other, more specialized and shortsighted groups.

Back then, although the frontier between Development and Operations was naturally blurred, each team had quite specific skills and attributions, giving the whole streamlined process a front-desk, bureaucratic tone.

In that context we had, guess what? A (set of) team(s) dedicated to Operations, filled with the same people who today (mostly) form DevOps and similar teams, such as SRE and Platform Engineering. However, the new proposal has several differences in goals and practices, including the need to work collaboratively in cross-functional, small and focused teams. Sadly, far too often they are treated and viewed as the old Operations silo. That is why they must “lead by example” on the way up into DevOps transformation: not just through DevOps tooling and technical aspects, but also through their actions and drive.

Needless to say, working with teams organized like an assembly line is very far from the Agile mindset on which DevOps principles rely. That said, the DevOps team must stay as far as possible from the traditional, waterfall practices of the old Operations teams.

3) Find the topology that suits your context

There’s no such thing as a perfect architecture! The same can be said for how teams involved in DevOps should be organized. Every organization is a unique complex system (remember?) and because of that, each solution must be tailored for the place where it will be deployed. That said, there are some common patterns to be found in the wild. Team Topologies identifies the fundamental types of teams and the interactions between them. Following their taxonomy, DevOps-focused groups should lean towards the category of Enabling Teams, acting as advocates, evangelists and overall stimulators of DevOps practices among the stream-aligned teams. Other forms of organization can of course be useful; SRE and Platform Engineering related strategies gravitate towards Platform Teams. What’s the course of action? Just let the software people organize themselves, seeking to fulfill the organization/product goals. The resulting team topology will reflect the specific needs of that domain and the support for the initiative will be the strongest possible. That’s a huge success factor, since collaboration will play a big part when the DevOps team starts.

4) Tools, Templates and Documentation: ways of working together

A key decision when structuring a DevOps team is how they are going to collaborate with others in order to introduce DevOps thinking into software development. Remember, DevOps teams should be constantly in touch with other teams, presenting new tech, strategies and principles that will then be incorporated (heterogeneously, I would say) throughout the company. The first way one could think of is the traditional ways of collaborating: emails, sync meetings and in-person appointments.

While those have value, they are far from efficient for DevOps collaboration, and maybe even for knowledge sharing altogether. There are a few approaches that successful DevOps teams usually take advantage of:

  • Workshops: Yes, the traditional hands-on course! The idea is old, inspired by classrooms, but the execution has changed a lot. Today it can be a very efficient way of presenting new tools and techniques to a broader public. Emphasizing topics relevant to the audience and using real-world scenarios to train team members are key to the success of workshops.
  • Templates: Sometimes DevOps best practices can be translated into a set of basic tools and configurations to be used on new pieces of documentation and software. In these cases, templates can be a very efficient way of sharing knowledge. It might contain config files for CI/CD platforms, deployment instructions, Dockerfiles, security checks and much more! Just be sure there’s enough flexibility for the use cases you want to cover and enough information so people won’t blindly follow some mysterious, unchangeable standard.
  • Documentation: are you thinking about a boring, gigantic white document with ugly fonts and little to no figures? Think again! Many other forms of storing and sharing knowledge fall into this category: blog posts, videos, podcasts and even tutorials. The logic is simple: how much do you learn from tutorials and similar content you find online? Why wouldn’t a company create a knowledge source like that? Some content might be too specific or sensitive and must remain internal; other content can be public and contribute to increasing awareness of your brand from an employer/reference point of view.
  • Tools: I think most of the tools related to DevOps topics should be owned by Platform Engineering and SRE teams. It’s another type of collaboration that should be addressed with somewhat different strategies, which I intend to write about later.

In all those cases, it’s very important for the DevOps team members to actually understand what their colleagues need. Treat them as your clients and your work as a product, and ensure what you’re building really makes their lives better. Also, it’s unsurprisingly common to find all those (and other) approaches combined into a greater knowledge-sharing strategy.

5) DevOps team scalability

The harsh truth must be said: companies want as many of their software people as possible turning their attention to building value for clients. That pretty much translates to features and working software, no matter how hard it is to keep the big machine working. So we usually cannot afford (and maybe we shouldn’t anyway) to have DevOps people growing at the same pace as the rest of the teams. That’s ok, we can scale quite efficiently: the ways of collaborating described above are cheap, easy to maintain and can often reach hundreds of professionals at the same time. Onboarding, periodic study groups and tech talks also have a far reach. Another aspect that calls for scalability is how big the DevOps umbrella has become; we could add at least a few other specialties to the original terms.

As teams get more mature in DevOps principles, the support they demand becomes more specialized. That’s when the DevOps team can be split into smaller, focused groups, each with a certain skillset. This often leads towards Security, Observability, CI/CD and other specialties that will teach product teams and give them an in-depth view of the underlying field. As paradoxical as it might sound, DevOps teams wouldn’t be needed in a (hypothetical) perfect DevOps company.

Conclusion

Sure, we can all name the benefits of adopting DevOps principles in software development, but getting there is not easy. Some organizations try to form DevOps teams, expecting they’ll lead the way towards a brighter future. The original idea quite often falls short, manifesting anti-patterns that really limit its reach. In this brief article we presented some tips, each of which encompasses lots of further reading and experimentation, consolidating several principles that will make your DevOps teams thrive.

Demystifying Web 3 So Anyone Can Understand

Web3 is: read/write/execute with artificial scarcity and cryptographic identity. Should you care? Yes.

What?

Let’s break it down.

Back when I started my career, “web2.0” was the hot new thing.

Веб 2.0 — Википедия (“Web 2.0” on the Russian Wikipedia)
What?

The “2.0” part of it was supposed to capture a few things: blogs, rounded corners on buttons and input fields, sharing of media online, 4th st in SOMA. But what really distinguished it from “1.0” was user-generated content. In the “1.0” days if you wanted to publish content on the web you basically had to upload an HTML file, maybe with some CSS or JS if you were a hotshot webmaster, to a server connected to the internet. It was not a user-friendly process and certainly not accessible to mere mortals.

The user-generated content idea was that websites could allow users to type stuff in and then save it for anyone to see. This was first used mostly for making blogs possible, on platforms like LiveJournal and Movable Type, later MySpace and Facebook and Twitter and wordpress.com, where I’m still doing basically the same thing as back then. I don’t have to edit a file by hand and upload it to a server. You can even leave comments on my article! This concept seems so mundane to us now but it changed the web into an interactive medium where any human with an internet connection and a cheap computer can publish content to anyone else on the planet. A serious game-changer, for better or for worse.

If you asked most people who had any idea about any of this stuff what would be built with web 2.0 they would probably have said “blogs I guess?” Few imagined the billions of users of YouTube, or grandparents sharing genocidal memes on Facebook, or TikTok dances. The concept of letting normies post stuff on the internet was too new to foresee the applications that would be built with it or the frightful perils it invited, not unlike opening a portal to hell.

Web3

The term “web3” is designed to refer to a similar paradigm shift underway.

Before getting into it I want to address the cryptocurrency hype. Cryptocurrency draws in a lot of people, many of dubious character, who are lured by stories of getting rich without doing any work. This entire ecosystem is a distraction, although some of the speculation is based on organizations and products which may or may not have actual value and monetizable utility at some point in the present or future. This article is not about cryptocurrency, but about the underlying technologies which can power a vast array of new technologies and services that were not possible before. Cryptocurrency is just the first application of this new world but will end up being one of the most boring.

What powers the web3 world? What underlies it? With the help of blockchain technology a new set of primitives for building applications is becoming available. I would say the key interrelated elements are: artificial scarcity, cryptographic identity, and global execution and state. I’ll go into detail what I mean here, although to explain these concepts in detail in plain English is not trivial so I’m going to skip over a lot.

Cryptographic identity: your identity in web3-land consists of what is called a “keypair” (see Wikipedia), also known as a wallet. The only thing that gives you access to control your identity (and your wallet) is the fact that you are in physical or virtual possession of the “private key” half of the keypair. If you hold the private key, you can prove to anyone who’s asking that you own the “public key” associated with it, also known as your wallet address. So what?

Your identity is known to the world as your public key, or wallet address. There is an entire universe of possibilities that this opens up because only you, the holder of your private key, can prove that you own that identity. To list just a short number of examples:

  • No need to create a new account on every site or app you use.
  • No need for relying on Facebook, Google, Apple, etc to prove your identity (unless you want to).
  • People can encrypt messages for you that only you can read, without ever communicating with you, and post the message in public. Only the holder of the private key can decrypt such messages.
  • Sign any kind of message, for example voting over the internet or signing contracts.
  • Strong, verifiable identity. See my e-ID article for one such example provided by Estonia.
  • Anonymous, throwaway identities. Create a new identity for every site or interaction if you want.
  • Ownership or custody of funds or assets. Can require multiple parties to unlock an identity.
  • Link any kind of data to your identity, from drivers licenses to video game loot. Portable across any application. You own all the data rather than it living on some company’s servers.
  • Be sure you are always speaking to the same person. Impossible to impersonate anyone else’s identity without stealing their private key. No blue checkmarks needed.
Illustration from Wikipedia.

There are boundless other possibilities opened up with cryptographic identity, and some new pitfalls that will result in a lot of unhappiness. The most obvious is the ease with which someone can lose their private key. It is crucial that you back yours up. Like write the recovery phrase on a piece of paper and put it in a safe deposit box. Brace yourself for a flood of despairing clickbait articles about people losing their life savings when their computer crashes. Just as we have banks to relieve us of the need to stash money under our mattresses, trusted (and scammer) establishments with customer support phone numbers and backups will pop up to service the general populace and hold on to their private keys.

Artificial scarcity: this one should be the most familiar by now. With blockchain technology came various ways of limiting the creation and quantity of digital assets. There will only ever be 21 million bitcoins in existence. If your private key proves you own a wallet with some bitcoin attached you can turn it into a nice house or lambo. NFTs (read this great deep dive explaining WTF a NFT is) make it possible to limit ownership of differentiated unique assets. Again we’re just getting started with the practical applications of this technology and it’s impossible to predict what this will enable. Say you want to give away tickets to an event but only have room for 100 people. You can do that digitally now and let people trade the rights. Or resell digital movies or video games you’ve purchased. Or the rights to artwork. Elites will use it for all kinds of money laundering and help bolster its popularity.

Perhaps you require members of your community to hold a certain number of tokens to be a member of the group, as with Friends With Benefits to name one notable example. If there are a limited number of $FWB tokens in existence, it means these tokens have value. They can be transferred or resold from people who aren’t getting a lot out of their membership to those who more strongly desire membership. As the group grows in prestige and has better parties the value of the tokens increases. As the members are holders of tokens it’s in their shared interest to increase the value the group provides its members. A virtuous cycle can be created. Governance questions can be decided based on the amount of tokens one has, since people with more tokens have a greater stake in the project. Or not, if you want to run things in a more equitable fashion you can do that too. Competition between different organizational structures is a Good Thing.

This concept is crucial to understand and so amazingly powerful. When it finally clicked for me is when I got super excited about web3. New forms of organization and governance are being made possible with this technology.

The combination of artificial scarcity, smart contracts, and verifiable identity is a super recipe for new ways of organizing and coordinating people around the world. Nobody knows the perfect system for each type of organization yet but there will be countless experiments done in the years to come. No technology has more potential power than that which coordinates the actions of people towards a common goal. Just look at nation states or joint stock companies and how they’ve transformed the world, both in ways good and bad.

The tools and procedures are still in their infancy, though I strongly recommend this terrific writeup of different existing tools for managing these Decentralized Autonomous Organizations (DAOs). Technology doesn’t solve all the problems of managing an organization, of course; there are still necessary human layers, elements and interactions. However, some of the procedures that have until now rested on a reliable and impartial legal system (something most people in the world don’t have access to) for the management and ownership of corporations can now be partially handled with smart contracts (e.g. for voting, enacting proposals, gating access), and investment, membership, and participation can be spread to theoretically anyone in the world with a smartphone instead of being limited to the boundaries of a single country and (let’s be real) a small number of elites who own these things and can make use of the legal system.

Any group of like-minded people on the planet can associate, perhaps raise investment, and operate and govern themselves as they see fit. Maybe for business ventures, co-ops, nonprofits, criminal syndicates, micro-nations, art studios, or all sorts of new organizations that we haven’t seen before. I can’t predict what form any of this will take but we have already seen the emergence of DAOs with billions of dollars of value inside them and we’re at the very, very early stages. This is what I’m most juiced about.

Check out the DAO Dashboard. This is already happening and it’s for real.

And to give one more salient example: a series of fractional ownership investments can be easily distributed throughout the DAO ecosystem. A successful non-profit that sponsors open source development work, Gitcoin, can choose to invest some of its GTC token in a new DAO it wants to help get off the ground, Developer DAO. The investment proposal, open for everyone to see and members to vote on, would swap 5% of the newly created Developer DAO tokens (CODE being the leading symbol proposal right now) for 50,000 GTC tokens, worth $680,000 at the time of writing. Developer DAO plans to use this and other funds raised to sponsor new web3 projects acting as an incubator that helps engineers build their web3 skills up for free. Developer DAO can invest its own CODE tokens in new projects and grants, taking a similar fraction of token ownership in new projects spun off by swapping CODE tokens. In this way each organization can invest a piece of itself in new projects, each denominated in their own currency which also doubles as a slice of ownership. It’s like companies investing shares of their own stock into new ventures without having to liquidate (liquidity can be provided via Uniswap liquidity pools). In this case we’re talking about an organic constellation of non-profit and for-profit ventures all distributing risk, investment capital, and governance amongst themselves with minimal friction that anyone in the world can participate in.

Global execution and state: there are now worldwide virtual machines, imaginary computers which can be operated by anyone and the details of their entire history, operations, and usage is public. These computers can be programmed with any sort of logic and the programs can be uploaded and executed by anyone, for a fee. Such programs today are usually referred to as smart contracts although that is really just one possible usage of this tool. What will people build with this technology? It’s impossible to predict at this early age, like imagining what smartphones will look like when the PC revolution is getting started.

From Ethereum EVM illustrated.

These virtual machines are distributed across the planet and are extremely resilient and decentralized. No one person or company “owns” Ethereum (to use the most famous example) although there is a DAO that coordinates the standards for the virtual machine and related protocols. When a new proposal is adopted by the organization, the various software writers update their respective implementations of the Ethereum network to make changes and upgrades. It’s a voluntary process but one that works surprisingly well, and is not unlike the set of proposals and standards for the internet that have been managed for decades by the Internet Engineering Task Force (IETF).

A diagram showing where gas is needed for EVM operations
Ethereum virtual machine. More pictures here.

Also worth mentioning are zero-knowledge proofs which can enable privacy, things like anonymizing transactions and messaging. Of course these will for sure be used to nefarious ends, but they also open up possibilities for fighting tyranny and free exchange of information. Regardless of my opinion or anyone else’s, the cat’s out of the bag and these will be technologies that societies will need to contend with.

History of the Web Infographic: Web1, Web2, Web3.

Why should I care?

I didn’t care until recently, a month ago maybe. When I decided to take a peek to see what was going on in the web3 space, I found a whole new world. There are so many engineers out there who have realized the potential in this area, not to mention many of the smartest investors and technologists. The excitement is palpable and the amount of energy in the community is invigorating. I joined the Developer DAO, a new community of people who simply want to work on cool stuff together and help others learn how to program with this new technology. Purely focused on teaching and sharing knowledge. People from all over the world just magically appear and help each other build projects, not asking for anything in return. If you want to learn more about the web3 world you could do a lot worse than following @Developer_DAO on twitter.

As with all paradigm shifts, some older engineers will scoff and dismiss the new hotness as a stupid fad. There were those who pooh-poohed personal computers which could never match the power and specialized hardware of mainframes, those who mocked graphical interfaces as being for the weak, a grumpy engineer my mother knew who said the internet is “just a fad”, and people like Oracle’s CEO Larry Ellison saying the cloud is just someone else’s computer. Or me, saying the iPhone looks like a stupid idea.

The early phase of web3 is cryptocurrencies and blockchains (“layer 1”) solutions. Not something that non-technical people or really anyone can take full advantage of because there are few interfaces to interact with it. In the phase we’re in right now developer tools and additional layers of abstraction (“layer 2”) are starting to become standardized and accessible, and it’s just now starting to become possible to build web3 applications with user interfaces. Very soon we’ll start to see new types of applications appearing, to enable new kinds of communities, organizations, identity, and lots more nobody has dreamed up yet. There will be innumerable scams, a crash like after the first web bubble, annoying memesters and cryptochads. My advice is to ignore the sideshows and distractions and focus on the technology, tooling, and communities that weren’t possible until now and see what creative and world-changing things people build with web3.

For more information I recommend:

How to Trade Crypto in Your Sleep With Python

The defi revolution is in full swing if you know where to look. Serious efforts to build out and improve the underlying infrastructure for smart contracts as well as applications, art, and financial systems are popping up almost every week it seems. They use their own native tokens to power their networks, games, communities, transactions, NFTs and things that haven’t been thought up yet. As more decentralized autonomous organizations (DAOs) track their assets, voting rights, and ownership stakes on-chain, the market capitalization of tokens will only increase.

Avalanche is one new token of many that is an example of how new tokens can garner substantial support and funding if the community deems the project worthy.

There are as many potential uses for crypto tokens as there are for fiat money, except tokens in a sense “belong” to these projects and shared endeavours. If enough hype is built up, masses of people may speculate to the tune of hundreds of billions of dollars that the value of the tokens will increase. While many may consider their token purchases to be long-term investments in reputable projects with real utility, sometimes coming with rights or dividend payments, I believe a vast majority of people are looking to strike it rich quick. And some certainly have. The idea that you can get in early on the right coin and buy at a low price, and then sell it to someone not as savvy later on for way more money is a tempting one. Who doesn’t want to make money without doing any real work? I sure do.

Quickstart

If you want to skip all of the explanations and look at code you can run, you can download the JupyterLab Notebook that contains all of the code for creating and optimizing a strategy.

Now for some background.

Trading and Volatility

These tokens trade on hundreds of exchanges around the world, from the publicly-held and highly regulated Coinbase to fly-by-night shops registered in places like the Seychelles and the Cayman Islands. Traders buy and sell the tokens themselves as well as futures and leveraged tokens to bet on price movement up and down, lend tokens to other speculators making leveraged bets, and sometimes actively coordinate pump-and-dump campaigns on disreputable Discords. Prices swing wildly for everything from the most established and institutionally supported Bitcoin to my own MishCoin. This volatility is an opportunity to make money.

With enough patience anyone can try to grab some of these many billions of dollars flowing through the system by buying low and selling higher. You can do it on the timeframe of seconds or years, depending on your style. While many of the more mainstream coins have a definite upwards trend, all of them vacillate in price on some time scale. If you want to try your hand at this game what you need to do is define your strategy: decide what price movement conditions should trigger a buy or a sell.

Since it’s impossible to predict exactly how any coin will move in price in the future, this is of course based on luck. It’s gambling. But you do have full control over your strategy, and some people do quite well for themselves, making gobs of money betting on some of the stupidest things you can imagine. Some may spend months researching the companies behind a new platform, digging into the qualifications of everyone on the team, the problems they’re trying to solve, the expected return, the competitive landscape, technical pitfalls and the track record of the founders. Others invest their life savings into an altcoin based on chatting to a pilled memelord at a party.

Automating Trading

Anyone can open an account at an exchange and start clicking Buy and Sell. If you have the time to watch the market carefully and look for opportunities this can potentially make some money, but it can demand a great deal of attention. And you have to sleep sometime. Of course we can write a program to perform this simple task for us, as long as we define the necessary parameters.

I decided to build myself a crypto trading bot using Python and share what I learned. It was not so much a project for making real money (right now I’m up about $4 if I consider my time worth nothing) as a learning experience to teach myself more about automated trading and scientific Python libraries and tools. Let’s get into it.

To create a bot to trade crypto for yourself you need to do the following steps:

  1. Get an API key for a crypto exchange you want to trade on
  2. Define, in code, the trading strategy you wish to use and its parameters
  3. Test your strategy on historical data to see if it would have hypothetically made money had your bot been actively trading during that time (called “backtesting”)
  4. Set your bot loose with some real money to trade

Let’s look at how to implement these steps.

Interfacing With an Exchange

To connect your bot to an exchange to read crypto prices, both historical and real-time, you will need an API key for the exchange you’ve selected.


Fortunately you don’t need to use a specialized library for your exchange, because there is a terrific project called CCXT (CryptoCurrency eXchange Trading library) which provides an abstraction layer over most exchanges (111 at the time of this writing) in multiple programming languages.

This means our bot can use a standard interface to buy and sell and to fetch price ticker data (called “OHLCV” in the jargon – open/high/low/close/volume data) in an exchange-agnostic way.
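
As a minimal sketch of what that looks like with CCXT (the exchange, symbol, and timeframe below are placeholders – use whatever your exchange supports):

import ccxt

# instantiate a client for your chosen exchange with your API credentials
exchange = ccxt.binance({'apiKey': 'YOUR_KEY', 'secret': 'YOUR_SECRET'})

# fetch the most recent 100 candles for a symbol;
# each row is [timestamp, open, high, low, close, volume]
candles = exchange.fetch_ohlcv('BTC/USDT', timeframe='15m', limit=100)

for ts, open_, high, low, close, volume in candles[-3:]:
    print(ts, open_, high, low, close, volume)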

Now, the even better news is that we don’t really even have to use CCXT directly; we can use a further abstraction layer to perform most of the grunt work of trading for us. There are a few such trading frameworks out there; I chose to build my bot using one called PyJuque, but feel free to try others and let me know if you like them. What this framework does for you is provide the nuts and bolts of keeping track of open orders and buying and selling when certain triggers are met. It also provides backtesting and test-mode features so you can test out your strategy without using real money. You still need to connect to your exchange, though, in order to fetch the OHLCV data.

Configuring the Trading Engine

PyJuque contains a number of configuration parameters:

  • Exchange API key
  • Symbols to trade (e.g. BTC/USD, ETH/BTC, etc)
  • Timescale (I use 15 seconds with my exchange)
  • How much money to start with (in terms of the quote currency, so if you’re trading BTC/USD then this value will be in USD)
  • What fraction of the starting balance to commit in each trade
  • How far below the current price to place a buy order when a “buy” signal is triggered by your strategy
  • How much you want the price to go up before selling (aka “take profit” aka “when to sell”)
  • When to sell your position if the price drops (“stop loss”)
  • What strategy to use to determine when buy signals get triggered
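
To make the list above concrete, here is a rough sketch of those parameters as a plain Python dict. The key names mirror the ones used by the optimizer code later in this post, but they are illustrative only and not PyJuque’s exact config schema – check the PyJuque examples for the real format:

# illustrative parameter set (hypothetical key names, not PyJuque's exact schema)
bot_params = {
    'exchange_api_key': 'YOUR_KEY',     # exchange API credentials
    'symbols': ['BTC/USD', 'ETH/BTC'],  # markets to trade
    'timescale': '15s',                 # candle interval used for signals
    'starting_balance': 1000,           # in the quote currency (USD for BTC/USD)
    'trade_fraction': 0.25,             # fraction of balance committed per trade
    'signal_distance': 0.3,             # % below current price to place buy orders
    'take_profit': 3.0,                 # % gain at which to sell
    'stop_loss_value': 10.0,            # % drop at which to bail out
    'strategy': 'bollinger_rsi',        # which signal strategy to use
}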

Selecting a Strategy

Here we also have good news for lazy programmers such as myself: there is a venerable library called ta-lib that contains implementations of 200 different technical analysis routines. It’s a C library, so you will need to install it separately (on macOS: brew install ta-lib). There is a Python wrapper called pandas-ta.


All you have to do is pick a strategy that you wish to use and input parameters for it. For my simple strategy I used the classic “bollinger bands” in conjunction with a relative strength index (RSI). You can pick and choose your strategies or implement your own as you see fit, but ta-lib gives us a very easy starting point. A future project could be to automate trying all 200 strategies available in ta-lib to see which work best.
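
As a rough sketch of what the indicator side of such a strategy looks like – this is not PyJuque’s built-in implementation, just the raw pandas-ta calls, and the thresholds are placeholder values – assuming `candles` is the list of OHLCV rows fetched via CCXT earlier:

import pandas as pd
import pandas_ta as ta

# build a DataFrame from the CCXT candles fetched earlier
df = pd.DataFrame(candles, columns=['timestamp', 'open', 'high', 'low', 'close', 'volume'])

# compute the indicators
df['rsi'] = ta.rsi(df['close'], length=14)
bbands = ta.bbands(df['close'], length=20, std=2)
df = pd.concat([df, bbands], axis=1)

# toy buy signal: price at or below the lower bollinger band while RSI says
# the asset is oversold (30 is the conventional threshold)
lower_band = bbands.iloc[:, 0]  # first bbands column is the lower band (BBL)
df['buy_signal'] = (df['close'] <= lower_band) & (df['rsi'] < 30)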

Tuning Strategy Parameters

The final step before letting your bot loose is to configure the bot and strategy parameters. For the bollinger bands/RSI strategy we need to provide at least the slow and fast moving-average windows. For the general bot parameters noted above we need to decide the optimal buy signal distance, stop loss price, and take profit percentage. What numbers do you plug in? Which values work best for the coin you want to trade?

scikit-optimize: sequential model-based optimization in Python (scikit-optimize 0.8.1 documentation)

Again we can make our computer do all the work of figuring this out for us with the aid of an optimizer. An optimizer lets us find the optimum inputs for a given fitness function, testing different inputs in multiple dimensions in an intelligent fashion. For this we can use scikit-optimize.

To use the optimizer we need to provide two things:

  1. The domain of the inputs, which will be reasonable ranges of values for the aforementioned parameters.
  2. A function which returns a “loss” value between 0 and 1. The lower the value the more optimal the solution.
import math

from skopt.space import Real, Integer
from skopt.utils import use_named_args

# here we define the input ranges for our strategy
fast_ma_len = Integer(name='fast_ma_len', low=1, high=12)
slow_ma_len = Integer(name='slow_ma_len', low=12, high=40)
# percent value - 1 means that when we get a buy signal, we place a buy
# order 1% below the current price. if 0, we place a market order
# immediately upon receiving the signal
signal_distance = Real(name='signal_distance', low=0.0, high=1.5)
# take profit value between 0 and infinity, 3% means we place our sell
# orders 3% above the prices that our buy orders filled at
take_profit = Real(name='take_profit', low=0.01, high=0.9)
# if our value dips by this much then sell so we don't lose everything
stop_loss_value = Real(name='stop_loss_value', low=0.01, high=4.0)
dimensions = [fast_ma_len, slow_ma_len, signal_distance, take_profit, stop_loss_value]
def calc_strat_loss(backtest_res) -> float:
    """Given backtest results, calculate loss.
    
    Loss is a measure of how badly we're doing.
    """
    score = 0
    
    for symbol, symbol_res in backtest_res.items():
        symbol_bt_res = symbol_res['results']
        profit_realised = symbol_bt_res['profit_realised']
        profit_after_fees = symbol_bt_res['profit_after_fees']
        winrate = symbol_bt_res['winrate']
        if profit_after_fees <= 0:
            # failed to make any money.
            # bad.
            return 1
        # how well we're doing (positive)
        # money made * how many of our trades made money
        score += profit_after_fees * winrate
        
    if score <= 0:
        # not doing so good
        return 1
    # return loss; lower number is better.
    # 0.99 ** score maps any positive score into (0, 1):
    # e.g. score = 10 -> ~0.90, score = 100 -> ~0.37
    return math.pow(0.99, score)
@use_named_args(dimensions=dimensions)
def objective(**params):
    """This is our fitness function.
    
    It takes a set of parameters and returns the "loss" - an objective single scalar to minimize.
    """
    # take optimizer input and construct bot with config - see notebook
    bot_config = params_to_bot_config(params)
    backtest_res = backtest(bot_config)
    return calc_strat_loss(backtest_res)

Once you have your inputs and objective function you can run the optimizer in a number of ways. The more iterations it runs for, the better an answer you will get. Unfortunately, in my limited experiments each iteration appears to take longer to decide which inputs to test next, so there may be something wrong with my implementation, or simply diminishing returns from the optimizer.
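
For example, one straightforward way is Gaussian-process-based minimization via gp_minimize. A minimal sketch, assuming the objective and dimensions defined above (the notebook may use a different variant or settings):

from skopt import gp_minimize

# run the optimizer; each call to the objective performs one backtest
res = gp_minimize(
    objective,            # fitness function defined above
    dimensions,           # search space defined above
    n_calls=200,          # total number of strategy evaluations
    n_initial_points=20,  # random exploration before model-guided picks
    random_state=1,       # for reproducibility
)

print('best loss:', res.fun)
print('best parameters:', res.x)  # in the same order as `dimensions`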

Asking for new points to test gets slower as time goes on. I don’t understand why and it would be nice to fix this.

The package contains various strategies for selecting points to test, depending on how expensive your objective function is to evaluate. If the optimizer is doing a good job exploring the input space you should hopefully see the loss trending downwards over time, which represents more profitable strategies being found as the search goes on.

After you’ve run the optimizer for some time you can visualize the search space. A very useful visualization is to take a pair of parameters to see in two dimensions the best values, looking for ranges of values which are worth exploring more or obviously devoid of profitable inputs. You can use this information to adjust the ranges on the input domains.

The green/yellow islands represent local maxima and the red dot is the global maximum. The blue/purple islands are local minima.

You can also visualize all combinations of pairs of inputs and their resulting loss at different points:

Note that the integer inputs slow_ma_len and fast_ma_len have distinct steps in their inputs vs. the more “messy” real number inputs.
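
Both of the visualizations described above can be generated from the optimizer’s result object using scikit-optimize’s built-in plotting helpers – a minimal sketch, assuming `res` is the result returned by the optimizer run shown earlier:

import matplotlib.pyplot as plt
from skopt.plots import plot_objective, plot_evaluations

# partial dependence of the loss over each pair of parameters
plot_objective(res)
plt.show()

# where in the input space the optimizer actually sampled points
plot_evaluations(res)
plt.show()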

After running the optimizer for a few hundred or thousand iterations it spits out the best inputs. You can then visualize the buying and selling the bot performed during backtesting. This is a good time to sanity-check the strategy and see if it appears to be buying low and selling high.

Run the Bot

Armed with the parameters the optimizer gave us we can now run our bot. You can see a full script example here. Set SIMULATION = False to begin trading real coinz.

Trades placed by the bot.

All of the code to run a functioning bot and a JupyterLab Notebook to perform backtesting and optimization can be found in my GitHub repo.

I want to emphasize that this system is not a meaningfully intelligent way to automatically trade crypto. It’s basically my attempt at a trader “hello world” application – a good first step, but nothing more than the absolute bare minimum. There is vast room for improvement: creating one model for volatility data and another for price spikes, trying to overcome overfitting, hyperparameter optimization, and lots more. Also be aware that you will need a service such as CoinTracker to keep track of your trades so you can report them on your taxes.