Will We Still Need Programmers?

Based Jensen Huang, CEO of Nvidia, now one of the largest companies in the world by market cap, said this February that people should stop learning to code. He was making the point that people will soon be able to instruct computers with natural human language to get what they want out of them. If you’re a younger person or a parent, it’s worth asking now: is there still value in learning to program a computer or in getting a computer science education?

I respect Jensen enormously and believe he is worth taking seriously, even if he used to slang 3D-Sexy-Elf boxes to gamers. It’s also worth recognizing his bias as the manufacturer of the picks and shovels for the current venture gold rush, and his incentive to pump up the future perceived value of his company’s stock, if such a thing is even possible at a P/E of 73.

My perspective is that of a software practitioner: I’ve been a professional software “engineer” for two decades now, have consulted and built companies on software of my own design, and write code every day, something I do not believe Mr. Huang is doing. Let’s start with a little historical perspective.

Past Promises To Eliminate Programmers

In the beginning, programming real physical computers was a laborious process: in the early ENIAC and UNIVAC days, rewiring circuitry with plugboards and switches could take weeks.

As stored-program machines became more sophisticated and flexible, they could be fed programs in various computer-friendly formats, often via punched cards. These programs were very specific to the architecture of the computer system they were controlling and required specialized knowledge about the computer to make it do what you wanted.

As the utility of computers grew and businesses and government began seeing a need to automate operations, in the 1950s the U.S. Department of Defense, along with industry and academia, set up a committee to design a COmmon Business-Oriented Language (COBOL). It was pitched as a temporary “short-range” solution, “good for a year or two,” to make the design of business automation software accessible to non-computer-specialists across a range of industries and fields, and to promote standards and reusability. The idea was:

Representatives enthusiastically described a language that could work in a wide variety of environments, from banking and insurance to utilities and inventory control. They agreed unanimously that more people should be able to program and that the new language should not be restricted by the limitations of contemporary technology. A majority agreed that the language should make maximal use of English, be capable of change, be machine-independent and be easy to use, even at the expense of power.

The general idea was that computers could now be instructed in the English language instead of being talked to at their beep-boop level, and programming could be made widely accessible. High-level business requirements could be spelled out in precise business-y language rather than fussing about with operands and memory locations and instruction pointers.

In one sense this certainly was a huge leap forward and did achieve this aim, although to what degree depends on your idea of “widely accessible.” COBOL for business, FORTRAN for science, and many, many other languages did make programming far more widely available than before, although the explosion of personal computers and even simpler languages like BASIC helped speed things along immensely.

Statistically, the number of people engaged in programming and related computer science fields grew substantially during this period, from the 1960s to the year 2000. In the early 1980s, there were approximately 500,000 professional computer programmers in the United States. By the end of the 20th century, this number had grown to approximately 2.3 million programmers due to the tech boom and the widespread adoption of personal computers. The percentage of households owning a computer in the United States increased from 8% in 1984 to 51% in 2000, according to the U.S. Census Bureau. This increase in computer ownership correlates with greater exposure to programming among the general population.

Every advancement in programming has made the operation of computers more abstract and accessible, which in my opinion is a good thing. It allows programmers to focus on problems closer to providing some value to someone rather than doing undifferentiated mucking about with technical bits and bobs.

As software continues to become more deeply integrated into every aspect of human life, from medicine to agriculture, to entertainment, the military, art, education, transportation, manufacturing, and everything else besides, there has been a need for people to create this software. How many specialists will be needed in the future though?

The U.S. Bureau of Labor Statistics projects demand for computer programmer jobs in the U.S. to decline 11% from 2022 to 2032, though it notes that about 6,700 openings will still be created each year, on average, due to the need to replace programmers who retire (frequently COBOL programmers, incidentally) or exit the field.

So how will AI change this?

Current AI Tooling For Software Development

I’ve been using AI assistants for my daily work, including GitHub’s Copilot for two and a half years, and ChatGPT for the better part of a year. I happily pay for them because they make my life easier and serve valuable functions in helping me to perform some tasks more rapidly. Here are some of the benefits I get from them:

Finishing what I was going to write anyway.

It’s often the case that I know what code needs to be added to perform some small feature I’m building. Sometimes function by function or line by line, what I’m going to write is mostly obvious based on the surrounding code. There is frequently a natural progression of events, since I’m giving the computer instructions to perform some relatively common task that has been done thousands of times before me by other programmers. Perhaps it’s updating a database record to mark an email as being sent, or updating a customer’s billing status based on the payment of an invoice, or disabling a button while an operation is proceeding. Most of what I’m doing is not groundbreaking never-before-seen stuff, so the next steps in what I’m doing can be anticipated. Giving descriptive names to my files, functions, and variables, and adding descriptive comments is already a great practice for anyone writing maintainable code but also really helps to push Copilot in the right direction. The greatest value here is simply saving me time. Like when you’re writing a reply to an email in Gmail, and it knows how you were going to finish your sentence, so you just tell it to autocomplete what you were going to write anyway, like adding “a great weekend!” to “Have” as a sign-off. Lazy programmers are the most productive programmers and this is a great aid in this pursuit.

Saving me a trip to the documentation.

Most of modern programming is really just gluing other libraries and services together. All of these components you must interface with have their own semantics and vocabulary. If you want to accomplish a task, whether it’s resizing an image or sending a notification to Slack, you typically need to do a quick Google search and then dig through the documentation of whatever you’re using, which is often in a multitude of formats and page structures and of highly varying quality. Another time saver is having your AI assistant already aware of how to interface with the particular library or service you’re using, saving you a trip to the documentation.

Writing tests and verifying logic.

It should be mentioned that writing code is actually not all that hard. The real challenge in building software comes in making sure the code you wrote actually does what you expected it to do and doesn’t have bugs. One way we solve this is by writing automated tests that verify the behavior of our code. It doesn’t solve all of our problems but it does add a useful safety net and helps later when you need to modify the code without breaking it. Copilot is useful for automating some of the tedium of this process.
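As a tiny illustration of the kind of test I mean, here’s a hypothetical pure function checked with Node’s built-in test runner (any test framework would do):

import { test } from "node:test";
import assert from "node:assert/strict";

// Hypothetical function under test: total up an invoice's line items.
function invoiceTotal(lineItems: { quantity: number; unitPrice: number }[]): number {
  return lineItems.reduce((sum, item) => sum + item.quantity * item.unitPrice, 0);
}

test("sums quantity times unit price across line items", () => {
  assert.equal(invoiceTotal([{ quantity: 2, unitPrice: 5 }, { quantity: 1, unitPrice: 3 }]), 13);
});

test("returns zero for an empty invoice", () => {
  assert.equal(invoiceTotal([]), 0);
});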

Also, sometimes if I have a piece of code with a number of conditions and not very straightforward logic, I will just paste it into ChatGPT and ask it whether the logic looks correct. Sometimes it points out potential issues I hadn’t considered or suggests how to rewrite the code to be simpler or more readable.

Finding the appropriate algorithm or formula.

This is less common, but sometimes I need to perform an operation on data and I don’t know the appropriate algorithm. Suppose I want to get a rough distance between two points on the globe, and I’m more interested in speed than accuracy. I don’t know the appropriate formula off the top of my head, but I can at least get one suggested to me, already adapted to the data structures I have available and my desired output format. AIs are great at this.
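For instance, asking for a fast, rough distance between two coordinates tends to produce something like the equirectangular approximation; a sketch of that might look like this:

// Rough distance between two points on the globe, favoring speed over accuracy.
// This is the equirectangular approximation, fine for cases where
// haversine-level precision isn't needed.
const EARTH_RADIUS_KM = 6371;

interface LatLng {
  lat: number; // degrees
  lng: number; // degrees
}

function roughDistanceKm(a: LatLng, b: LatLng): number {
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const x = toRad(b.lng - a.lng) * Math.cos(toRad((a.lat + b.lat) / 2));
  const y = toRad(b.lat - a.lat);
  return Math.sqrt(x * x + y * y) * EARTH_RADIUS_KM;
}

// Example: Berlin to Paris, roughly 880 km.
console.log(roughDistanceKm({ lat: 52.52, lng: 13.405 }, { lat: 48.857, lng: 2.352 }));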

Problems With “AI” Tooling For Building Software

While the term artificial intelligence is all the buzz right now, there is no intelligence to speak of in any of these commercial tools. They are large language models, which are good at predicting what comes next in a sequence of words, code, etc. I could probably ask one to finish this article for me and maybe it would do a decent job, but I’m not going to.

Fancy autocomplete is powerful and helps with completing something you at least have some idea of how to begin. This sort of tool has fundamental limitations though especially when it comes to software. For one thing, these tools are trained with a finite context window size, meaning they have very small limits in terms of how much information they can work with at any time, can’t really do recursion, and can’t “understand” (using the term very loosely) larger structures of a project.

There are nifty demos of new software being created by AI, which is not such a terribly difficult task. I’ve started hundreds of software projects; there’s no challenge in that. I admit it is very neat to see a napkin drawing of a simple web app turned into code, and this may very well one day in the medium term allow non-technical people to build simple, limited applications. However, modifying an existing project is a problem that gets drastically harder as the project grows and evolves. Precisely because of the limited context window, an AI agent can’t understand the larger structures that develop in your program. A software program ends up with its own internal representations of data, nomenclatures, and interfaces, in essence its own domain-specific programming language for solving its functional requirements. The complexity explodes as each project turns into its own deviation from whatever the LLM has been previously trained on.

Speaking of training, another glaring issue is that computer programmers (I mean, computers that program) only know about what they’ve seen before. Much of that training input is out of date for one thing. Often when I do use a LLM to help me use an API, it will give me commands for an older version of that API which are no longer correct or relevant. It has particular trouble with new technologies, for example giving me all kinds of nonsense about the AWS CDK. But worse than that, anything created after about 2022 or 2023 will be very hard for LLMs to train on because of the “closed loop” problem. This occurs when LLMs begin to learn from their own outputs, a scenario that becomes increasingly likely as they are used to generate more content, including code and documentation. The risk here is twofold: first, there’s the potential for perpetuating inaccuracies. An LLM might generate an incorrect or suboptimal piece of code or explanation, and if this output is ingested back into the training set, the error becomes part of the learning material, potentially leading to a feedback loop of misinformation. Second, this closed loop can lead to a stagnation of knowledge, where the model becomes increasingly confident in its outputs, regardless of their accuracy, because it keeps “seeing” similar examples that it itself generated. This self-reinforcement makes it particularly challenging for LLMs to learn about new libraries, languages, or technologies introduced after their last training cut-off. As a result, the utility of LLMs in programming could be significantly hampered by their inability to stay current with the latest developments, compounded by the risk of circulating and reinforcing their own misconceptions.

If we get rid of most of the programmer specialists as Mr. Huang foresees, then more code generated in the future will be the product of computery-type programmers. They learn by example, but will not always produce the best solution to a given problem, or maybe a great solution but for a slightly different problem. Sure, this can be said about human-style programmers as well, but we’re capable of abstract reasoning and symbolic manipulation in a way that LLMs will never be capable of, meaning we can interrogate the fundamental assumptions being made about how things are done, rather than parroting variations on what’s been seen before and assuming our training data is probably correct because that’s how it was done in the past. Their responses are generated based on patterns in the data they’ve been trained on, not from a process of logical deduction or understanding of a computational problem in the way a human or even a conventional program does. This means that while a LLM can produce code that looks correct, it may not actually function as intended when put into practice, because the model doesn’t truly “understand” the code it’s writing, it’s essentially making educated guesses based on the data it’s seen. While LLMs can replicate patterns and follow templates, they fall short when it comes to genuine innovation or creative problem-solving. The essence of programming often involves thinking outside the box, something that’s inherently difficult for a model that operates by mimicking existing boxes. Even more, programming at its core often involves understanding and manipulating complex systems in ways that are fundamentally abstract and symbolic. This level of abstraction and symbolic manipulation is something that LLMs, as they currently stand, are fundamentally incapable of achieving in a meaningful way.

If these programmer agents do become capable of reasoning and symbolic manipulation, and if they can reason about large amounts of data, then we will be truly living in a different world. People with money and brains are actively working on these projects and there are financial and reputational incentives now motivating this work in a major way, so I absolutely expect advancements one day. How far off that day is, I dare not speculate.

The Users Of Computer-Aided Programming Tools

One of the coolest features and biggest footguns about programming is that you can give a computer instructions that it will follow perfectly and without fail exactly as you specify them. I think it’s amazingly neat to have a robot that will do whatever you ask it, never get tired, never need oiling, can be shared and tinkered with, worked on in maximum comfort from anywhere. That’s partly what I love about programming, I enjoy bossing a machine around and imposing my will on it, and it almost never complains. But the problem anyone learns almost immediately when they start writing code is that computers will do exactly what you tell them to do without any idea of what you want them to do. When people communicate with each other there is a huge background of shared context, not to mention gestures, facial expressions, tones, and other verbal and non-verbal channels to help get the message across more accurately. ChatGPT does not seem to get the hint even when I get so frustrated I start swearing at it. Colleagues that work together on a shared task in the context of a larger organization have a shared understanding of purpose, risks, culture, legal environment, etc. that they are operating inside of. Even with all of this, human programmers still err constantly in translating things like business requirements into computer programs. A LLM with none of this out-of-band information will have a much harder time and make many more assumptions than a colleague performing the same task. The LLM may however be better suited for narrowly-defined, well-scoped tasks which are not doing anything new. I hope and believe LLMs will reduce drudgery, boilerplate, and wheel reinventing. But I wouldn’t count on them excelling at building anything new, partly because of how non-specialists will try to communicate with them.

In most professions a sort of jargon develops. Practitioners learn the special meanings of words in their fields, for example in medicine or law or baseball. In the world of software we love applying 10-dollar words like “polymorphism” or “atomicity” to justify our inflated paychecks, but there is another benefit to this jargon, which is that it can help us describe common concepts and ideas more precisely. If you tell a coder to “add up all the stuff we sold yesterday”, you may or may not get the answer to the question you think you’re asking. If you say “sum the invoice totals for all transactions from 2024-04-01 at 00:00 UTC up to 2024-04-02 00:00 UTC for accounts in the USA” you are a lot more likely to get the answer you want and be more confident that you got a reliable number. Less precise instructions leave huge mists of ambiguity which programming computers will happily fill in with assumptions. This is where much of the value of specialists comes in, namely fluency in a dialect that enables greater precision of expression and intent than the verbiage someone in the Sales or HR department will reliably deploy. In plenty of situations, say if you’re building a social media network for parakeets, maybe it’s okay. But when real money is on the line, or health and safety, or decisions with liability ramifications are introduced, I predict companies will still want to keep one or two specialists around to make sure they aren’t playing a game of telephone, with a coworker talking to a LLM and who knows what logic emerging out the other end.

Introducing AI agents into your software development flow may create more problems than it solves, at least with what is likely to be around in the near term. On the terrific JS Party podcast they recently took a look at “Devin”, a brand new tool which claims the ability to go off by itself and complete tickets like a software engineer in your team might do. It received heaps of breathless press. But until these agents are actually reliable, other engineers still will have to debug the work done by the agent, which could end up wasting more time than is saved.

But the number they published, I think, was 13.86% of issues unresolved. So that’s about one in seven. So you pointed out a list of issues, and it can independently go and solve one in seven. And first off, to me, I’m like “That is not an independent software developer.” And furthermore, I find myself asking “If its success rate is one in seven, how do you know which one?” Are the other six those “It just got stuck”? Or has it submitted something broken? Because if it sets up something broken, that doesn’t actually solve the issue, not only do you have it only actually solving one in seven, but you’ve added load, because you have to go and debug and figure out which things are broken. So I think the marketing stance there is a little over the top relative to what’s being delivered.

The software YouTuber Internet of Bugs went on to accuse the creators of Devin of misleading the public about its capabilities in a detailed takedown.

And in practice, a lot of code a LLM gives me is just wrong. A couple of weeks ago I think I spent around four hours trying to write a script with ChatGPT when I probably should have just looked up the relevant documentation myself. It kept going in circles and giving me code that just didn’t do what it said it did, even with me feeding it back output and error messages.

What’s The Future?

While there are many reasons for skepticism about AI-assisted programming, I am guardedly optimistic. I do get value out of the tools I use today which I believe will improve rapidly. I know there are very smart people out there working on them, assisted by computers making the tools even smarter, perhaps not unlike how computers started to be used to design better processors in a powerful feedback loop. I have limited expectations for LLMs but they are certainly not the only tool in the AI toolbox and there are vast amounts of money flowing into research and development, some of which will undoubtedly produce results. Computers will be able to drop some of the arbitrary strictness that made programming so tedious in the past and present (perhaps a smarter IDE will not bother you ever again about forgetting a ; or about an obviously mistyped variable name). I have high hopes for better static analysis tools or automated code reviews. Long-term I do believe how most people interact with computers will fundamentally change. AI agents will be trusted with more and more real-world tasks in the way that more and more vital societal functions have been allowed to be conducted over the internet. But someone will still have to create those agents and the systems they interact with and it won’t be only AI agents talking to themselves. At least I sure hope not.

So I’m not too concerned for my job security just yet. Software continues to eat the world and I’m not convinced the current state of the art can do the job of programmers without specialist intermediaries. As in the past, new development tools and paradigms will make programmers more productive and able to spend more time focusing on the problems that need solving rather than on mundane tasks. Maybe we will be able to let computers automate their automation of our lives but it will be on the less-mission-critical margins for quite some time.

Mastering JavaScript Tree-Shaking

One fantastic feature of JavaScript (compared to, say, Python) is that it is possible to bundle your code for packaging, removing everything that is not needed to run your code. We call this “tree-shaking” because the stuff you aren’t using falls out, leaving the strongly-attached bits. This reduces the size of your output, whether it’s an NPM module, code to run in a browser, or a NodeJS application.

By way of illustration, suppose we have a script that imports a function from a library. Here it calls apple1() from lib.ts:
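Something along these lines, with lib.ts exporting two functions and script.ts using only one of them:

// lib.ts
export function apple1() {
  console.log("apple1");
}

export function apple2() {
  console.log("apple2");
}

// script.ts
import { apple1 } from "./lib";

apple1();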

If we bundle script.ts using esbuild:

esbuild --bundle script.ts > script-bundle.js

Then we get the resulting output:
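It will be roughly the following (the exact output depends on your esbuild version and settings):

// script-bundle.js
(() => {
  // lib.ts
  function apple1() {
    console.log("apple1");
  }

  // script.ts
  apple1();
})();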

The main point here being that only apple1 is included in the bundled script. Since we didn’t use apple2 it gets shaken out like an overripe piece of fruit.

Motivation

There are many reasons this is a valuable feature, the main one being performance. Less code means less time spent parsing JavaScript when your code is executed. It means your web app loads faster, your Docker image is smaller, your serverless function cold start time is reduced, and your NPM module takes up less disk space.

A robust application architecture can be a serverless one, where your application is composed of discrete functions. These functions can act like an API if you put an API gateway or GraphQL server in front of them to invoke the functions for different routes and queries, or they can be triggered by messages in queues, files being uploaded to a bucket, or regularly scheduled events. In this setup each function is self-contained, containing only the code needed for its specific functionality and nothing unrelated. This is in contrast to a monolith or microservice, where the entire application must be loaded up in order to handle a request. No matter how large your project gets, each function remains about the same size.

I build applications using Serverless Stack, which has a terrific developer experience focused on building serverless applications on AWS with TypeScript and CDK in a local development environment. Under the hood it uses esbuild, so let’s take a peek at what it’s doing.

Mechanics

Conceptually tree-shaking is pretty straightforward: just throw away whatever code your application doesn’t use. However, there are a great number of caveats and tricks needed to debug and finesse your bundling.

Tree-shaking is a feature provided by all modern JavaScript bundlers. These include tools like Webpack, Turbopack, esbuild, and rollup. If you ask them to produce a bundle they will do their best to remove unused code. But how do they know what is unused?

The fine details may vary from bundler to bundler and between targets but I’ll give an overview of salient properties and options to be aware of. I’ll use the example of using esbuild to produce node bundles for AWS Lambda but these concepts apply generally to anyone who wants to reduce their bundle size.

Measuring

Before trying to reduce your bundle size you need to look at what’s being bundled, how much space everything takes up, and why. There are a number of tools at our disposal which help visualize and trace the tree-shaking process.

Bundle Buddy

This is one of the best tools for analyzing bundles visually and it has very rich information. You will need to ask your bundler to produce a meta-analysis of the bundling process and run Bundle Buddy on it (it’s all local browser based). It supports webpack, create-react-app, rollup, rome, parcel, and esbuild. For esbuild you specify the --metafile=meta.json option.
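For example, a bundling command that also emits the metafile might look like this (the entrypoint path is just illustrative):

esbuild --bundle --metafile=meta.json backend/src/api/graphql/candidate/list.ts --outfile=tmp.js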

When you upload your metafile to Bundle Buddy you will be presented with a great deal of information. Let’s go through what some of it indicates.

Bundle Buddy in action

Let’s start with the duplicate modules.

This section lets you know that you have multiple versions of the same package in your bundle. This can be due to your dependencies or your project depending on different versions of a package which cannot be resolved to use the same version for some reason.

Here you can see I have versions 3.266.0 and 3.272 of the AWS SDK and two versions of fast-xml-parser. The best way to hunt down why different versions are included is to simply ask your package manager. For example you can ask pnpm:

$ pnpm why fast-xml-parser

dependencies:
@aws-sdk/client-cloudformation 3.266.0
├─┬ @aws-sdk/client-sts 3.266.0
│ └── fast-xml-parser 4.0.11
└── fast-xml-parser 4.0.11
@aws-sdk/client-cloudwatch-logs 3.266.0
└─┬ @aws-sdk/client-sts 3.266.0
  └── fast-xml-parser 4.0.11
...
@prisma/migrate 4.12.0
└─┬ mongoose 6.8.1
  └─┬ mongodb 4.12.1
    └─┬ @aws-sdk/credential-providers 3.282.0
      ├─┬ @aws-sdk/client-cognito-identity 3.282.0
      │ └─┬ @aws-sdk/client-sts 3.282.0
      │   └── fast-xml-parser 4.1.2
      ├─┬ @aws-sdk/client-sts 3.282.0
      │ └── fast-xml-parser 4.1.2
      └─┬ @aws-sdk/credential-provider-cognito-identity 3.282.0
        └─┬ @aws-sdk/client-cognito-identity 3.282.0
          └─┬ @aws-sdk/client-sts 3.282.0
            └── fast-xml-parser 4.1.2

So if I want to shrink my bundle I need to figure out how to get @aws-sdk/client-* and @prisma/migrate to agree on a common version to share, so that only one copy of fast-xml-parser needs to end up in my bundle. Since this function shouldn’t even be importing @prisma/migrate (and certainly not mongodb) I can use that as a starting point for tracking down an unneeded import, which I will discuss shortly. Alternatively you can open a PR for one of the dependencies to use a looser version spec (e.g. ^4.0.0) for fast-xml-parser or @aws-sdk/client-sts.

With duplicate modules out of the way, the main meat of the report is the bundled modules. This will usually be broken up into your code and stuff from node_modules:

When viewing in Bundle Buddy you can click on any box to zoom in for a closer look. We can see that of the 1.63MB that comprises our bundle, 39K is for my actual function code:

This is interesting but not where we need to focus our efforts.

Clearly the prisma client and runtime are taking up sizable parcels of real-estate. There’s not much you can do about this besides file a ticket on GitHub (as I did here with much of this same information).

But looking at our node_modules we can see at a glance what is taking up the most space:

This is where you can survey what dependencies are not being tree-shaken out. You may have some intuitions about what belongs here, doesn’t belong here, or seems too large. For example in the case of my bundle the two biggest dependencies are on the left there, @redis-client (166KB) and gremlin (97KB). I do use redis as a caching layer for our Neptune graph database, and gremlin is the client library used to query that database. Because I know my application and this function, I know that this function never needs to talk to the graph database, so it doesn’t need gremlin. This is another starting point for me to trace why gremlin is being required. We’ll look at that later on when we get into tracing. Also noteworthy is that even though I only use one redis command, the code for handling all redis commands gets bundled, adding a cost of 109KB to my bundle.

Finally the last section in the Bundle Buddy readout is a map of what files import other files. You can click in for what looks like a very interesting and useful graph but it seems to be a bit broken. No matter, we can see this same information presented more clearly by esbuild.

esbuild --analyze

Your bundler can also operate in a verbose mode where it tells you WHY certain modules are being included in your bundle. Once you’ve determined what is taking up the most space in your bundle or identified large modules that don’t belong, it may be obvious to you what the problem is and where and how to fix it. Oftentimes it may not be so clear why something is being included. In my example above of including gremlin, I needed to see what was requiring it.

We can ask our friend esbuild:

esbuild --bundle --analyze --analyze=verbose script.ts --outfile=tmp.js 2>&1 | less

The important bit here being the --analyze=verbose flag. This will print out all traces of all imports so the output gets rather large, hence piping it to less. It’s sorted by size so you can start at the top and see why your biggest imports are being included. A couple down from the top I can see what’s pulling in gremlin:

   ├ node_modules/.pnpm/gremlin@3.6.1/node_modules/gremlin/lib/process/graph-traversal.js ─── 13.1kb ─── 0.7%
   │  └ node_modules/.pnpm/gremlin@3.6.1/node_modules/gremlin/index.js
   │     └ backend/src/repo/gremlin.ts
   │        └ backend/src/repo/repository/skillGraph.ts
   │           └ backend/src/repo/repository/skill.ts
   │              └ backend/src/repo/repository/vacancy.ts
   │                 └ backend/src/repo/repository/candidate.ts
   │                    └ backend/src/api/graphql/candidate/list.ts

This is extremely useful information for tracking down exactly what in your code is telling the bundler to pull in this module. After a quick glance I realized my problem. The file repository/skill.ts contains a SkillRepository class with methods for loading a vacancy’s skills; it is used by the vacancy repository, which is eventually used by my function. Nothing in my function calls the SkillRepository methods that need gremlin, but it does include the SkillRepository class. What I foolishly assumed was that the methods on the class I don’t call would get tree-shaken out. This means that if you import a class, you will be bringing in all possible dependencies any method of that class brings in. Good to know!
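A stripped-down sketch of the situation (not the real repository code) shows why: importing the class drags in everything any of its methods import, even if you only ever call a method that doesn’t need it.

// skillRepository.ts (simplified sketch)
import gremlin from "gremlin"; // heavy graph-database client

export class SkillRepository {
  // Only this method actually needs gremlin...
  async loadSkillGraph() {
    return new gremlin.driver.DriverRemoteConnection("wss://example.invalid/gremlin");
  }

  // ...this one doesn't, but it lives on the same class.
  async loadSkillNames(): Promise<string[]> {
    return ["typescript", "graph databases"];
  }
}

// vacancyRepository.ts (also a sketch)
import { SkillRepository } from "./skillRepository";

// Even though only loadSkillNames() is ever called here, bundling this file
// still pulls gremlin in, because the whole class comes along with the import.
export async function loadVacancySkills(): Promise<string[]> {
  return new SkillRepository().loadSkillNames();
}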

@next/bundle-analyzer

This is a colorful but limited tool for showing you what’s being included in your NextJS build. You add it to your next.config.js file and when you do a build it will pop open tabs showing you what’s being bundled in your backend, frontend, and middleware chunks.

The amount of bullshit @apollo/client pulls in is extremely aggravating to me.

Modularize Imports

This tool was helpful for learning that top-level Material-UI imports such as import { Button, Dialog } from “@mui/material” will pull ALL of @mui/material into your bundle. Perhaps this is because NextJS is still stuck on CommonJS, although that is pure speculation on my part.

While you can fix this by assiduously writing import Button from “@mui/material/Button” everywhere, this is hard to enforce and tedious. There is a NextJS config option to rewrite such imports:

  modularizeImports: {
    "@mui/material": {
      transform: "@mui/material/{{member}}",
    },
    "@mui/icons-material": {
      transform: "@mui/icons-material/{{member}}",
    },
  },

Webpack Analyzer

Has a spiffy graph of imports and works with Webpack.

Tips and Tricks

CommonJS vs. ESM

One factor that can affect bundling is using CommonJS vs. EcmaScript Modules (ESM). If you’re not familiar with the difference, the TypeScript documentation has a nice summary and the NodeJS package docs are quite informative and comprehensive. But basically CommonJS is the “old, busted” way of defining modules in JavaScript and makes use of things like require() and module.exports, whereas ESM is the “cool, somewhat less busted” way to define modules and their contents using import and export keywords.
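For a concrete side-by-side, here is the same tiny module written both ways:

// math.cjs (CommonJS): exports are assigned procedurally at runtime
function add(a, b) {
  return a + b;
}
module.exports = { add };

// math.mjs (ESM): exports are declared statically, which is what lets a
// bundler see exactly which symbols are used and drop the rest
export function add(a, b) {
  return a + b;
}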

Tree-shaking with CommonJS is possible but it is more woolly, due to the more procedural format of defining exports from a module, whereas ESM exports are more declarative. The esbuild tool is specifically built around ESM; the docs say:

This way esbuild will only bundle the parts of your packages that you actually use, which can sometimes be a substantial size savings. Note that esbuild’s tree shaking implementation relies on the use of ECMAScript module import and export statements. It does not work with CommonJS modules. Many packages on npm include both formats and esbuild tries to pick the format that works with tree shaking by default. You can customize which format esbuild picks using the main fields and/or conditions options depending on the package.

So if you’re using esbuild, it won’t even bother trying unless you’re using ESM-style imports and exports in your code and your dependencies. If you’re still typing require then you are a bad person and this is a fitting punishment for you.

As the documentation highlights, there is a related option called mainFields which affects which version of a package esbuild resolves. There is a complicated system for defining exports in package.json which allows a module to contain multiple versions of itself depending on how it’s being used. It can have one entrypoint if it’s require'd, a different one if imported, or another if used in a browser.

The upshot is that you may need to tell your bundler explicitly to prefer the ESM ("module") version of a package instead of the fallback CommonJS version ("main"). With esbuild the option looks something like:

esbuild --main-fields=module,main --bundle script.ts 

Setting this will ensure the ESM version is preferred, which may result in improved tree-shaking.

Minification

Tree-shaking and minification are related but distinct optimizations for reducing the size of your bundle. Tree-shaking eliminates dead code, whereas minification rewrites the result to be smaller, for example replacing a function identifier like “frobnicateMajorBazball” with something like “a1”.

Usually enabling minification is a simple option in your bundler. This bundle is 2.1MB minified, but 4.5MB without minification:

❯ pnpm exec esbuild --format=esm --target=es2022 --bundle --platform=node --main-fields=module,main  backend/src/api/graphql/candidate/list.ts --outfile=tmp.js

  tmp.js  4.5mb ⚠️

⚡ Done in 239ms


❯ pnpm exec esbuild --minify --format=esm --target=es2022 --bundle --platform=node --main-fields=module,main  backend/src/api/graphql/candidate/list.ts --outfile=tmp-minified.js

  tmp-minified.js  2.1mb ⚠️

⚡ Done in 235ms

Side effects

Sometimes you may want to import a module not because it has a symbol your code makes use of but because you want some side-effect to happen as a result of importing it. This may be an import that extends jest matchers, initializes a library like Google Analytics, or performs some other initialization when the file is imported.

Your bundler doesn’t always know what’s safe to remove. If you have:

import './lib/initializeMangoids'

In your source, what should your bundler do with it? Should it keep it or remove it in tree-shaking?

If you’re using Webpack (or terser) it will look for a sideEffects property in a module’s package.json to check if it’s safe to assume that simply importing a file does not do anything magical:

{
  "name": "your-project",
  "sideEffects": false
}

Code can also be annotated with /*#__PURE__ */ to inform the minifier that this code has no side effects and can be tree-shaken if not referred to by included code.

var Button$1 = /*#__PURE__*/ withAppProvider()(Button);

Read about it in more detail in the Webpack docs.

Externals

Not every package you depend on necessarily needs to be in your bundle. For example, in the case of AWS Lambda the AWS SDK is included in the runtime. This is a fairly hefty dependency, so it can shave some major slices off your bundle if you leave it out. This is done with the --external flag:

❯ pnpm exec esbuild --minify --format=esm --target=es2022 --bundle --platform=node --main-fields=module,main  backend/src/api/graphql/candidate/list.ts --outfile=tmp-minified.js

  tmp-minified.js  2.1mb 


❯ pnpm exec esbuild --external:aws-sdk --minify --format=esm --target=es2022 --bundle --platform=node --main-fields=module,main  backend/src/api/graphql/candidate/list.ts --outfile=tmp-minified.js

  tmp-minified.js  1.8mb 

One thing worth noting here is that there are different versions of packages depending on your runtime language and version. Node 18 contains the AWS v3 SDK (--external:@aws-sdk/*) whereas previous versions contain the v2 SDK (--external:aws-sdk). Such details may be hidden from you if you’re using the NodejsFunction CDK construct or SST Function construct.

On the CDK slack it was recommended to me to always bundle the AWS SDK in your function because you may be developing against a different version than what is available in the runtime. Or you can pin your package.json to use the exact version in the runtime.

Another reason to use externals is if you are using a layer. You can tell your bundler that the dependency is already available in the layer so it’s not needed to bundle it. I use this for prisma and puppeteer.

Performance Impacts

For web pages the performance impacts are instantly noticeable with a smaller bundle size. Your page will load faster both over the network and in terms of script parsing and execution time.

Another way to get an idea of what your node bundle is actually doing at startup is to profile it. I really like the 0x tool, which can run a node script and give you a flame graph of where CPU time is spent. This can be an informative visualization and lets you dig into what is being called when your script runs:
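Running it is a one-liner against a bundled entrypoint, something like this (file name illustrative):

npx 0x tmp.js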

For serverless applications you can inspect the cold start (“initialization”) time for your function on your cloud platform. I use the AWS X-Ray tracing tool. Compare before and after some aggressive bundle size optimizations:

The cold-start time went from 2.74s to 1.60s. Not too bad.

Demystifying Web 3 So Anyone Can Understand

Web3 is: read/write/execute with artificial scarcity and cryptographic identity. Should you care? Yes.

What?

Let’s break it down.

Back when I started my career, “web2.0” was the hot new thing.

Web 2.0 — Wikipedia

The “2.0” part of it was supposed to capture a few things: blogs, rounded corners on buttons and input fields, sharing of media online, 4th st in SOMA. But what really distinguished it from “1.0” was user-generated content. In the “1.0” days if you wanted to publish content on the web you basically had to upload an HTML file, maybe with some CSS or JS if you were a hotshot webmaster, to a server connected to the internet. It was not a user-friendly process and certainly not accessible to mere mortals.

The user-generated content idea was that websites could allow users to type stuff in and then save it for anyone to see. This was mostly first used for making blogs like LiveJournal and Movable Type possible, and later MySpace and Facebook and Twitter and wordpress.com, where I’m still doing basically the same thing as back then. I don’t have to edit a file by hand and upload it to a server. You can even leave comments on my article! This concept seems so mundane to us now but it changed the web into an interactive medium where any human with an internet connection and a cheap computer can publish content to anyone else on the planet. A serious game-changer, for better or for worse.

If you asked most people who had any idea about any of this stuff what would be built with web 2.0 they would probably have said “blogs I guess?” Few imagined the billions of users of YouTube, or grandparents sharing genocidal memes on Facebook, or TikTok dances. The concept of letting normies post stuff on the internet was too new to foresee the applications that would be built with it or the frightful perils it invited, not unlike opening a portal to hell.

Web3

The term “web3” is designed to refer to a similar paradigm shift underway.

Before getting into it I want to address the cryptocurrency hype. Cryptocurrency draws in a lot of people, many of dubious character, who are lured by stories of getting rich without doing any work. This entire ecosystem is a distraction, although some of the speculation is based on organizations and products which may or may not have actual value and monetizable utility at some point in the present or future. This article is not about cryptocurrency, but about the underlying technologies which can power a vast array of new technologies and services that were not possible before. Cryptocurrency is just the first application of this new world but will end up being one of the most boring.

What powers the web3 world? What underlies it? With the help of blockchain technology a new set of primitives for building applications is becoming available. I would say the key interrelated elements are: artificial scarcity, cryptographic identity, and global execution and state. I’ll go into detail about what I mean here, although explaining these concepts in plain English is not trivial, so I’m going to skip over a lot.

Cryptographic identity: your identity in web3-land consists of what is called a “keypair” (see Wikipedia), also known as a wallet. The only thing that gives you access to control your identity (and your wallet) is the fact that you are in physical or virtual possession of the “private key” half of the keypair. If you hold the private key, you can prove to anyone who’s asking that you own the “public key” associated with it, also known as your wallet address. So what?
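To make that concrete, here’s a minimal sketch of the sign-and-verify idea using Node’s built-in crypto module and an Ed25519 keypair (real wallets use different curves and formats, e.g. secp256k1 for Ethereum):

import { generateKeyPairSync, sign, verify } from "node:crypto";

// Generate a keypair: the public key can be shared with the world,
// the private key is the secret that proves ownership.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// Signing a message with the private key...
const message = Buffer.from("I control this address and I vote yes on proposal 42");
const signature = sign(null, message, privateKey);

// ...lets anyone holding only the public key verify who produced it.
console.log(verify(null, message, publicKey, signature)); // true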

Your identity is known to the world as your public key, or wallet address. There is an entire universe of possibilities that this opens up because only you, the holder of your private key, can prove that you own that identity. To list just a short number of examples:

  • No need to create a new account on every site or app you use.
  • No need for relying on Facebook, Google, Apple, etc to prove your identity (unless you want to).
  • People can encrypt messages for you that only you can read, without ever communicating with you, and post the message in public. Only the holder of the private key can decrypt such messages.
  • Sign any kind of message, for example voting over the internet or signing contracts.
  • Strong, verifiable identity. See my e-ID article for one such example provided by Estonia.
  • Anonymous, throwaway identities. Create a new identity for every site or interaction if you want.
  • Ownership or custody of funds or assets. Can require multiple parties to unlock an identity.
  • Link any kind of data to your identity, from drivers licenses to video game loot. Portable across any application. You own all the data rather than it living on some company’s servers.
  • Be sure you are always speaking to the same person. Impossible to impersonate anyone else’s identity without stealing their private key. No blue checkmarks needed.
Illustration from Wikipedia.

There are boundless other possibilities opened up with cryptographic identity, and some new pitfalls that will result in a lot of unhappiness. The most obvious is the ease with which someone can lose their private key. It is crucial that you back yours up. Like write the recovery phrase on a piece of paper and put it in a safe deposit box. Brace yourself for a flood of despairing clickbait articles about people losing their life savings when their computer crashes. Just as we have banks to relieve us of the need to stash money under our mattresses, trusted (and scammer) establishments with customer support phone numbers and backups will pop up to service the general populace and hold on to their private keys.

Artificial scarcity: this one should be the most familiar by now. With blockchain technology came various ways of limiting the creation and quantity of digital assets. There will only ever be 21 million bitcoins in existence. If your private key proves you own a wallet with some bitcoin attached you can turn it into a nice house or lambo. NFTs (read this great deep dive explaining WTF a NFT is) make it possible to limit ownership of differentiated unique assets. Again we’re just getting started with the practical applications of this technology and it’s impossible to predict what this will enable. Say you want to give away tickets to an event but only have room for 100 people. You can do that digitally now and let people trade the rights. Or resell digital movies or video games you’ve purchased. Or the rights to artwork. Elites will use it for all kinds of money laundering and help bolster its popularity.

Perhaps you require members of your community to hold a certain number of tokens to be a member of the group, as with Friends With Benefits to name one notable example. If there are a limited number of $FWB tokens in existence, it means these tokens have value. They can be transferred or resold from people who aren’t getting a lot out of their membership to those who more strongly desire membership. As the group grows in prestige and has better parties the value of the tokens increases. As the members are holders of tokens it’s in their shared interest to increase the value the group provides its members. A virtuous cycle can be created. Governance questions can be decided based on the amount of tokens one has, since people with more tokens have a greater stake in the project. Or not, if you want to run things in a more equitable fashion you can do that too. Competition between different organizational structures is a Good Thing.

This concept is crucial to understand and so amazingly powerful. When it finally clicked for me is when I got super excited about web3. New forms of organization and governance are being made possible with this technology.

The combination of artificial scarcity, smart contracts, and verifiable identity is a super recipe for new ways of organizing and coordinating people around the world. Nobody knows the perfect system for each type of organization yet but there will be countless experiments done in the years to come. No technology has more potential power than that which coordinates the actions of people towards a common goal. Just look at nation states or joint stock companies and how they’ve transformed the world, both in ways good and bad.

The tools and procedures are still in their infancy, though I strongly recommend this terrific writeup of different existing tools for managing these Decentralized Autonomous Organizations (DAOs). Technology doesn’t solve all the problems of managing an organization, of course; there are still necessary human layers, elements, and interactions. However, some of the procedures for the management and ownership of corporations that have until now rested on a reliable and impartial legal system (something most people in the world don’t have access to) can now be partially handled with smart contracts (e.g. for voting, enacting proposals, gating access), and investment, membership, and participation can be spread to theoretically anyone in the world with a smartphone, instead of being limited to the boundaries of a single country and (let’s be real) a small number of elites who own these things and can make use of the legal system.

Any group of like-minded people on the planet can associate, perhaps raise investment, and operate and govern themselves as they see fit. Maybe for business ventures, co-ops, nonprofits, criminal syndicates, micro-nations, art studios, or all sorts of new organizations that we haven’t seen before. I can’t predict what form any of this will take but we have already seen the emergence of DAOs with billions of dollars of value inside them and we’re at the very, very early stages. This is what I’m most juiced about.

Check out the DAO Dashboard. This is already happening and it’s for real.

And to give one more salient example: a series of fractional ownership investments can be easily distributed throughout the DAO ecosystem. A successful non-profit that sponsors open source development work, Gitcoin, can choose to invest some of its GTC token in a new DAO it wants to help get off the ground, Developer DAO. The investment proposal, open for everyone to see and members to vote on, would swap 5% of the newly created Developer DAO tokens (CODE being the leading symbol proposal right now) for 50,000 GTC tokens, worth $680,000 at the time of writing. Developer DAO plans to use this and other funds raised to sponsor new web3 projects acting as an incubator that helps engineers build their web3 skills up for free. Developer DAO can invest its own CODE tokens in new projects and grants, taking a similar fraction of token ownership in new projects spun off by swapping CODE tokens. In this way each organization can invest a piece of itself in new projects, each denominated in their own currency which also doubles as a slice of ownership. It’s like companies investing shares of their own stock into new ventures without having to liquidate (liquidity can be provided via Uniswap liquidity pools). In this case we’re talking about an organic constellation of non-profit and for-profit ventures all distributing risk, investment capital, and governance amongst themselves with minimal friction that anyone in the world can participate in.

Global execution and state: there are now worldwide virtual machines, imaginary computers which can be operated by anyone and whose entire history, operations, and usage are public. These computers can be programmed with any sort of logic and the programs can be uploaded and executed by anyone, for a fee. Such programs today are usually referred to as smart contracts, although that is really just one possible usage of this tool. What will people build with this technology? It’s impossible to predict at this early stage, like trying to imagine smartphones when the PC revolution was getting started.

From Ethereum EVM illustrated.

These virtual machines are distributed across the planet and are extremely resilient and decentralized. No one person or company “owns” Ethereum (to use the most famous example) although there is a DAO that coordinates the standards for the virtual machine and related protocols. When a new proposal is adopted by the organization, the various software writers update their respective implementations of the Ethereum network to make changes and upgrades. It’s a voluntary process but one that works surprisingly well, and is not unlike the set of proposals and standards for the internet that have been managed for decades by the Internet Engineering Task Force (IETF).

A diagram showing where gas is needed for EVM operations
Ethereum virtual machine. More pictures here.

Also worth mentioning are zero-knowledge proofs, which can enable privacy features like anonymizing transactions and messaging. Of course these will for sure be used for nefarious ends, but they also open up possibilities for fighting tyranny and the free exchange of information. Regardless of my opinion or anyone else’s, the cat’s out of the bag and these will be technologies that societies will need to contend with.

History of the Web Infographic: Web1, Web2, Web3.

Why should I care?

I didn’t care until recently, a month ago maybe. When I decided to take a peek to see what was going on in the web3 space, I found a whole new world. There are so many engineers out there who have realized the potential in this area, not to mention many of the smartest investors and technologists. The excitement is palpable and the amount of energy in the community is invigorating. I joined the Developer DAO, a new community of people who simply want to work on cool stuff together and help others learn how to program with this new technology. Purely focused on teaching and sharing knowledge. People from all over the world just magically appear and help each other build projects, not asking for anything in return. If you want to learn more about the web3 world you could do a lot worse than following @Developer_DAO on twitter.

As with all paradigm shifts, some older engineers will scoff and dismiss the new hotness as a stupid fad. There were those who pooh-poohed personal computers which could never match the power and specialized hardware of mainframes, those who mocked graphical interfaces as being for the weak, a grumpy engineer my mother knew who said the internet is “just a fad”, and people like Oracle’s CEO Larry Ellison saying the cloud is just someone else’s computer. Or me, saying the iPhone looks like a stupid idea.

The early phase of web3 is cryptocurrency and blockchain (“layer 1”) solutions, which are not something non-technical people (or really anyone) can take full advantage of because there are few interfaces for interacting with them. In the phase we’re in right now, developer tools and additional layers of abstraction (“layer 2”) are starting to become standardized and accessible, and it’s just now starting to become possible to build web3 applications with user interfaces. Very soon we’ll start to see new types of applications appearing, to enable new kinds of communities, organizations, identity, and lots more nobody has dreamed up yet. There will be innumerable scams, a crash like after the first web bubble, annoying memesters and cryptochads. My advice is to ignore the sideshows and distractions and focus on the technology, tooling, and communities that weren’t possible until now, and see what creative and world-changing things people build with web3.

For more information I recommend:

How to Trade Crypto in Your Sleep With Python

The defi revolution is in full swing if you know where to look. Serious efforts to build out and improve the underlying infrastructure for smart contracts as well as applications, art, and financial systems are popping up almost every week it seems. They use their own native tokens to power their networks, games, communities, transactions, NFTs and things that haven’t been thought up yet. As more decentralized autonomous organizations (DAOs) track their assets, voting rights, and ownership stakes on-chain, the market capitalization of tokens will only increase.

Avalanche is one of many new tokens, and an example of how a new token can garner substantial support and funding if the community deems the project worthy.

There are as many potential uses for crypto tokens as there are for fiat money, except tokens in a sense “belong” to these projects and shared endeavours. If enough hype is built up, masses of people may speculate to the tune of hundreds of billions of dollars that the value of the tokens will increase. While many may consider their token purchases to be long-term investments in reputable projects with real utility, sometimes coming with rights or dividend payments, I believe a vast majority of people are looking to strike it rich quick. And some certainly have. The idea that you can get in early on the right coin and buy at a low price, and then sell it to someone not as savvy later on for way more money is a tempting one. Who doesn’t want to make money without doing any real work? I sure do.

Quickstart

If you want to skip all of the explanations and look at code you can run, you can download the JupyterLab notebook that contains all of the code for creating and optimizing a strategy.

Now for some background.

Trading and Volatility

These tokens trade on hundreds of exchanges around the world, from the publicly held and highly regulated Coinbase to fly-by-night shops registered in places like the Seychelles and the Cayman Islands. Traders buy and sell the tokens themselves as well as futures and leveraged tokens to bet on price movements up and down, lend tokens to other speculators making leveraged bets, and sometimes actively coordinate pump-and-dump campaigns on disreputable Discords. Prices swing wildly for everything from the most established and institutionally supported Bitcoin to my own MishCoin. This volatility is an opportunity to make money.

With enough patience anyone can try to grab some of these many billions of dollars flowing through the system by buying low and selling higher. You can do it on the timeframe of seconds or years, depending on your style. While many of the more mainstream coins have a definite upwards trend, all of them vacillate in price on some time scale. If you want to try your hand at this game what you need to do is define your strategy: decide what price movement conditions should trigger a buy or a sell.

Since it’s impossible to predict exactly how any coin will move in price in the future, this is of course based on luck. It’s gambling. But you do have full control over your strategy and some people do quite well for themselves, making gobs of money betting on some of the stupidest things you can imagine. Some may spend months researching the companies behind a new platform, digging into the qualifications of everyone on the team, the problems they’re trying to solve, the expected return, the competitive landscape, technical pitfalls, and the track record of the founders. Others invest their life savings into an altcoin based on chatting to a pilled memelord at a party.

Automating Trading

Anyone can open an account at an exchange and start clicking Buy and Sell. If you have the time to watch the market carefully and look for opportunities this can potentially make some money, but it can demand a great deal of attention. And you have to sleep sometime. Of course we can write a program to perform this simple task for us, as long as we define the necessary parameters.

I decided to build myself a crypto trading bot using Python and share what I learned. It was not so much a project for making real money (right now I’m up about $4 if I consider my time worth nothing) as a learning experience to teach myself more about automated trading and scientific Python libraries and tools. Let’s get into it.

To create a bot to trade crypto for yourself you need to do the following steps:

  1. Get an API key for a crypto exchange you want to trade on
  2. Define, in code, the trading strategy you wish to use and its parameters
  3. Test your strategy on historical data to see if it would have hypothetically made money had your bot been actively trading during that time (called “backtesting”)
  4. Set your bot loose with some real money to trade

Let’s look at how to implement these steps.

Interfacing With an Exchange

To connect your bot to an exchange to read crypto prices, both historical and real-time, you will need an API key for the exchange you’ve selected.

Fortunately you don’t need to use a specialized library for your exchange because there is a terrific project called CCXT (CryptoCurrency eXchange Trading Library) which provides an abstraction layer to most exchanges (111 at the time of this writing) in multiple programming languages.

This means our bot can use a standard interface to buy and sell and fetch price ticker data (called “OHLCV” in the jargon: open/high/low/close/volume) in an exchange-agnostic way.
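
As a minimal sketch of what fetching candles through CCXT looks like in Python (assuming the ccxt package is installed; Binance and the BTC/USDT pair are just arbitrary examples here, not a recommendation):

import ccxt

# instantiate an exchange client; API keys are only needed for private calls
# like placing orders, not for fetching public OHLCV data
exchange = ccxt.binance()

# fetch the last 100 fifteen-minute candles;
# each candle is [timestamp, open, high, low, close, volume]
candles = exchange.fetch_ohlcv("BTC/USDT", timeframe="15m", limit=100)

for ts, o, h, l, c, v in candles[-3:]:
    print(ts, o, h, l, c, v)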

Now, the even better news is that we don’t really even have to use CCXT directly and can use a further abstraction layer to perform most of the grunt work of trading for us. There are a few such trading frameworks out there; I chose to build my bot using one called PyJuque, but feel free to try others and let me know if you like them. What this framework does for you is provide the nuts and bolts of keeping track of open orders and buying and selling when certain triggers are met. It also provides backtesting and test-mode features so you can test out your strategy without using real money. You still need to connect to your exchange though in order to fetch the OHLCV data.

Configuring the Trading Engine

PyJuque’s trading engine takes a number of configuration parameters (sketched together just after this list):

  • Exchange API key
  • Symbols to trade (e.g. BTC/USD, ETH/BTC, etc)
  • Timescale (I use 15 seconds with my exchange)
  • How much money to start with (in terms of the quote, so if you’re trading BTC/USD then this value will be in USD)
  • What fraction of the starting balance to commit in each trade
  • How far below the current price to place a buy order when a “buy” signal is triggered by your strategy
  • How much you want the price to go up before selling (aka “take profit” aka “when to sell”)
  • When to sell your position if the price drops (“stop loss”)
  • What strategy to use to determine when buy signals get triggered
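
To make the list above concrete, here is a hypothetical sketch of those parameters gathered into a single config dict. The key names are illustrative only, not PyJuque’s actual configuration schema, so check the project’s README for the real structure:

# illustrative sketch only -- key names are invented, not PyJuque's real API
bot_config = {
    "exchange": "binance",                # which exchange to connect to
    "api_key": "YOUR_API_KEY",            # credentials from your exchange
    "api_secret": "YOUR_API_SECRET",
    "symbols": ["BTC/USDT", "ETH/USDT"],  # pairs to trade
    "timeframe": "15m",                   # candle timescale
    "starting_balance": 100,              # in the quote currency (USDT here)
    "trade_fraction": 0.1,                # fraction of the balance per trade
    "signal_distance": 0.3,               # % below current price to place a buy
    "take_profit": 0.5,                   # % gain at which to sell
    "stop_loss": 2.0,                     # % drop at which to cut losses
    "strategy": "bbands_rsi",             # what triggers buy signals
}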

Selecting a Strategy

Here we also have good news for lazy programmers such as myself: there is a venerable library called ta-lib that contains implementations of 200 different technical analysis routines. It’s a C library so you will need to install it (macOS: brew install ta-lib). There is a Python wrapper called pandas-ta.

All you have to do is pick a strategy that you wish to use and input parameters for it. For my simple strategy I used the classic Bollinger Bands in conjunction with the relative strength index (RSI). You can pick and choose your strategies or implement your own as you see fit, but ta-lib gives us a very easy starting point. A future project could be to automate trying all 200 routines available in ta-lib to see which work best.
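
As a rough sketch of what the signal computation for that strategy might look like with pandas-ta (assuming you already have a DataFrame of candles with a “close” column; the exact bbands output column names vary between pandas-ta releases, so verify them for your version):

import pandas as pd
import pandas_ta as ta

def buy_signals(df: pd.DataFrame, bb_length: int = 20, rsi_length: int = 14) -> pd.Series:
    """Return a boolean Series that is True wherever a naive buy signal fires."""
    bb = ta.bbands(df["close"], length=bb_length, std=2)
    rsi = ta.rsi(df["close"], length=rsi_length)
    # the lower band is the first column of the bbands result
    # (named like "BBL_20_2.0" in recent pandas-ta releases)
    lower_band = bb.iloc[:, 0]
    # naive rule: price dipped below the lower band while RSI says "oversold"
    return (df["close"] < lower_band) & (rsi < 30)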

Tuning Strategy Parameters

The final step before letting your bot loose is to configure the bot and strategy parameters. For the Bollinger Bands/RSI strategy we need to provide at least the slow and fast moving average windows. For the general bot parameters noted above we need to decide the optimal buy signal distance, stop loss price, and take profit percentage. What numbers do you plug in? What works best for the coin you want to trade?

Again we can make our computer do all the work of figuring this out for us with the aid of an optimizer. An optimizer lets us find the optimum inputs for a given fitness function, testing different inputs in multiple dimensions in an intelligent fashion. For this we can use scikit-optimize.

To use the optimizer we need to provide two things:

  1. The domain of the inputs, which will be reasonable ranges of values for the aforementioned parameters.
  2. A function which returns a “loss” value between 0 and 1. The lower the value the more optimal the solution.

import math  # used below to turn the raw score into a loss value
from skopt.space import Real, Integer
from skopt.utils import use_named_args
# here we define the input ranges for our strategy
fast_ma_len = Integer(name='fast_ma_len', low=1, high=12)
slow_ma_len = Integer(name='slow_ma_len', low=12, high=40)
# number between 0 and 100 - 1% means that when we get a buy signal, 
# we place buy order 1% below current price. if 0, we place a market 
# order immediately upon receiving signal
signal_distance = Real(name='signal_distance', low=0.0, high=1.5)
# take profit value between 0 and infinity, 3% means we place our sell 
# orders 3% above the prices that our buy orders filled at
take_profit = Real(name='take_profit', low=0.01, high=0.9)
# if our value dips by this much then sell so we don't lose everything
stop_loss_value = Real(name='stop_loss_value', low=0.01, high=4.0)
dimensions = [fast_ma_len, slow_ma_len, signal_distance, take_profit, stop_loss_value]
def calc_strat_loss(backtest_res) -> float:
    """Given backtest results, calculate loss.
    
    Loss is a measure of how badly we're doing.
    """
    score = 0
    
    for symbol, symbol_res in backtest_res.items():
        symbol_bt_res = symbol_res['results']
        profit_realised = symbol_bt_res['profit_realised']
        profit_after_fees = symbol_bt_res['profit_after_fees']
        winrate = symbol_bt_res['winrate']
        if profit_after_fees <= 0:
            # failed to make any money.
            # bad.
            return 1
        # how well we're doing (positive)
        # money made * how many of our trades made money
        score += profit_after_fees * winrate
        
    if score <= 0:
        # not doing so good
        return 1
    # return loss; lower number is better
    # 0.99**score maps any positive score into (0, 1): the higher the score, the lower the loss
    return math.pow(0.99, score)
@use_named_args(dimensions=dimensions)
def objective(**params):
    """This is our fitness function.
    
    It takes a set of parameters and returns the "loss" - an objective single scalar to minimize.
    """
    # take optimizer input and construct bot with config - see notebook
    bot_config = params_to_bot_config(params)
    backtest_res = backtest(bot_config)
    return calc_strat_loss(backtest_res)

Once you have your inputs and objective function you can run the optimizer in a number of ways. The more iterations it runs for, the better an answer you will get. Unfortunately in my limited experiments it appears to take longer to decide on what inputs to pick next with each iteration, so there may be something wrong with my implementation or diminishing returns with the optimizer.

Asking for new points to test gets slower as time goes on. I don’t understand why and it would be nice to fix this.
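
One straightforward way to run the optimizer is scikit-optimize’s Gaussian-process-based minimizer, gp_minimize; a minimal sketch, with an arbitrary iteration count:

from skopt import gp_minimize

# search for the parameters that minimize the loss returned by objective()
result = gp_minimize(
    func=objective,          # the fitness function defined above
    dimensions=dimensions,   # the input ranges defined above
    n_calls=200,             # total number of backtests to run
    n_initial_points=20,     # random exploration before the surrogate model kicks in
    verbose=True,
)

print("best loss:", result.fun)
print("best parameters:", result.x)  # in the same order as `dimensions`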

The package contains various strategies for selecting points to test, depending on how expensive your objective function is to evaluate. If the optimizer is doing a good job exploring the input space you should hopefully see the loss trending downwards over time, which represents more profitable strategies being found as time goes on.

After you’ve run the optimizer for some time you can visualize the search space. A very useful visualization is to take a pair of parameters to see in two dimensions the best values, looking for ranges of values which are worth exploring more or obviously devoid of profitable inputs. You can use this information to adjust the ranges on the input domains.

The green/yellow islands represent local maxima and the red dot is the global maximum. The blue/purple islands are local minima.

You can also visualize all combinations of pairs of inputs and their resulting loss at different points:

Note that the integer inputs slow_ma_len and fast_ma_len have distinct steps in their inputs vs. the more “messy” real number inputs.
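
scikit-optimize ships plotting helpers that can produce similar visualizations; a minimal sketch, assuming result is the object returned by an optimizer run like the one above:

import matplotlib.pyplot as plt
from skopt.plots import plot_objective, plot_evaluations

# pairwise partial-dependence plots of the loss across every pair of inputs
plot_objective(result)
plt.show()

# where the optimizer actually sampled points in the search space
plot_evaluations(result)
plt.show()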

After running the optimizer for a few hundred or thousand iterations it spits out the best inputs. You can then visualize the buying and selling the bot performed during backtesting. This is a good time to sanity-check the strategy and see if it appears to be buying low and selling high.

Run the Bot

Armed with the parameters the optimizer gave us we can now run our bot. You can see a full script example here. Set SIMULATION = False to begin trading real coinz.

Trades placed by the bot.

All of the code to run a functioning bot and a JupyterLab Notebook to perform backtesting and optimization can be found in my GitHub repo.

I want to emphasize that this system is not a meaningfully intelligent way to automatically trade crypto. It’s basically my attempt at a trader “hello world” type of application: a good first step but nothing more than the absolute bare minimum. There is vast room for improvement, with things like creating one model for volatility data and another for price spikes, working to overcome overfitting, hyperparameter optimization, and lots more. Also be aware you will need a service such as CoinTracker to keep track of your trades so you can report them on your taxes.

Frameworkless Web Applications

Since we have (mostly) advanced beyond CGI scripts and PHP the default tool many people reach for when building a web application is a framework. Like drafting a standard legal contract or making a successful Hollywood film, it’s good to have a template to work off of. A framework lends structure to your application and saves you from having to reinvent a bunch of wheels. It’s a solid foundation to build on which can be a substantial “batteries included” model (Rails, Django, Spring Boot, Nest) or a lightweight “slap together whatever shit you need outta this” sort of deal (Flask, Express).

Foundations can be handy.

The idea of a web framework is that there are certain basic features that most web apps need and that these services should be provided as part of the library. Nearly all web frameworks will give you some custom implementation of some or all of:

  • Configuration
  • Logging
  • Exception trapping
  • Parsing HTTP requests
  • Routing requests to functions
  • Serialization
  • Gateway adaptor (WSGI, Rack, WAR)
  • Authentication
  • Middleware architecture
  • Plugin architecture
  • Development server

There are many other possible features but these are extremely common. Just about every framework has its own custom code to route a parsed HTTP request to a handler function, as in “call hello() when a GET request comes in for /hello.”
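
In Flask, for example, that mapping looks roughly like this:

from flask import Flask

app = Flask(__name__)

# "call hello() when a GET request comes in for /hello"
@app.route("/hello", methods=["GET"])
def hello():
    return "Hello, world"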

There are many great things to say about this approach. The ability to run your application on any sort of host from DigitalOcean to Heroku to EC2 is something we take for granted, as is being able to easily run a web server on your local environment for testing. There is always some learning curve as you pick up the ins and outs of registering a URL route in this framework, logging a debug message in that framework, or adding a custom serializer field.

But maybe we shouldn’t assume that our web apps always need to be built with a framework. Instead of being the default tool we grab without a moment’s reflection, now is a good time to reevaluate our assumptions.

Serverless

What struck me is that a number of the functions that frameworks provide are not needed if I go all-in on AWS. Long ago I decided I’m fine with Bezos owning my soul and acceded to writing software for this particular vendor, much as many engineers have built successful applications locked in to various layers of software abstraction. Early programmers had to decide which ISA or OS they wanted to couple their application to; today we’re still forced to make non-portable decisions, just at a higher layer of abstraction. My Python or JavaScript code will run on any CPU architecture or UNIX OS, but features from my cloud provider may restrict me to that cloud. Which I am totally fine with.

I’ve long been a fan of and written about serverless applications on my blog because I enjoy abstracting out as much of my infrastructure as possible so as to focus on the logic of my application that I’m interested in. My time is best spent concerning myself with business logic and not wrangling containers or deployments or load balancer configurations or gunicorn.

We’ve had a bit of a journey over the years adopting the serverless mindset, but one thing has been holding me back and it’s my attachment to web frameworks. While it’s quite common and appropriate to write serverless functions as small self-contained scripts in AWS Lambda, building a larger application in this fashion feels like trying to build a house without a foundation. I’ve done considerable experimentation mostly with trying to cram Flask into Lambda, where you still have all the comforts of your familiar framework and it handles all the routing inside a single function. You also have the flexibility to easily take your application out of AWS and run it elsewhere.

There are a number of issues with the approach of putting a web framework into a Lambda function. For one, it’s cheating. For another, when your application grows large enough the cold start time becomes a real problem. Web frameworks have the side-effect of loading your entire application code on startup, so any time a request comes in and there isn’t a warm handler to process it, the client must wait for your entire app to be imported before handling the request. This means users occasionally experience an extra few seconds of delay on a request, not good from a performance standpoint. There are simple workarounds like provisioned concurrency but it is a clear sign there is a flaw in the architecture.

Classic web frameworks are not appropriate for building a truly serverless application. It’s the wrong tool for the architecture.

The Anti-Framework

Assuming you are fully bought in to AWS and have embraced the lock-in lifestyle, life is great. AWS acts like a framework of its own providing all of the facilities one needs for a web application but in the form of web services of the Amazonian variety. If we’re talking about RESTful web services, it’s possible to put together an extremely scalable, maintainable, and highly available application.

No Docker, Kubernetes, or load balancers to worry about. You can even skip the VPC if you use the Aurora Data API to run SQL queries.

This list could go on for a very long time but you get the point. If we want to be as lazy as possible and leverage cloud services as much as possible then what we really want is a tool for composing these services in an expressive and familiar fashion. Amazon’s new Cloud Development Kit (CDK) is just the tool for that. If you’ve never heard of CDK you can read a friendly introduction here or check out the official docs.

In short CDK lets you write high-level code in Python, TypeScript, Java or .NET, and compile it to a CloudFormation template that describes your infrastructure. A brief TypeScript example from cursed-webring:

// API Gateway with CORS enabled
const api = new RestApi(this, "cursed-api", {
  restApiName: "Cursed Service",
  defaultCorsPreflightOptions: {
    allowOrigins: apigateway.Cors.ALL_ORIGINS,
  },
  deployOptions: { tracingEnabled: true },
});

// defines the /sites/ resource in our API
const sitesResource = api.root.addResource("sites");

// get all sites handler, GET /sites/
const getAllSitesHandler = new NodejsFunction(
  this,
  "GetCursedSitesHandler",
  {
    entry: "resources/cursedSites.ts",
    handler: "getAllHandler",
    tracing: Tracing.ACTIVE,
  }
);
sitesResource.addMethod("GET", new LambdaIntegration(getAllSitesHandler));

Is CDK a framework? It depends on how you define “framework,” but I consider it more to be infrastructure as code. By letting you effortlessly wire up the services you want in your application, CDK removes the need for any sort of traditional web framework when it comes to features like routing or responding to HTTP requests.

While CDK provides a great way to glue AWS services together it has little to say when it comes to your application code itself. I believe we can sink even lower into the proverbial couch by decorating our application code with metadata that generates the CDK resources our application declares, specifically Lambda functions and API Gateway routes. I call it an anti-framework.

@JetKit/CDK

To put this into action we’ve created an anti-framework called @jetkit/cdk, a TypeScript library that lets you decorate functions and classes as if you were using a traditional web framework, with AWS resources automatically generated from application code.

The concept is straightforward. You write functions as usual, then add metadata with AWS-specific integration details such as Lambda configuration or API routes:

import { HttpMethod } from "@aws-cdk/aws-apigatewayv2"
import { Lambda, ApiEvent } from "@jetkit/cdk"

// a simple standalone function with a route attached
export async function aliveHandler(event: ApiEvent) {
  return "i'm alive"
}
// define route and lambda properties
Lambda({
  path: "/alive",
  methods: [HttpMethod.GET],
  memorySize: 128,
})(aliveHandler)

If you want a Lambda function to be responsible for related functionality you can build a function with multiple routes and handlers using a class-based view. Here is an example:

import { HttpMethod } from "@aws-cdk/aws-apigatewayv2"
import { badRequest, methodNotAllowed } from "@jdpnielsen/http-error"
import { ApiView, SubRoute, ApiEvent, ApiResponse, ApiViewBase, apiViewHandler } from "@jetkit/cdk"

@ApiView({
  path: "/album",
  memorySize: 512,
  environment: {
    LOG_LEVEL: "DEBUG",
  },
  bundling: { minify: true, metafile: true, sourceMap: true },
})
export class AlbumApi extends ApiViewBase {
  // define POST handler
  post = async () => "Created new album"

  // custom endpoint in the view
  // routes to the ApiViewBase function
  @SubRoute({
    path: "/{albumId}/like", // will be /album/123/like
    methods: [HttpMethod.POST, HttpMethod.DELETE],
  })
  async like(event: ApiEvent): ApiResponse {
    const albumId = event.pathParameters?.albumId
    if (!albumId) throw badRequest("albumId is required in path")

    const method = event.requestContext.http.method

    // POST - mark album as liked
    if (method == HttpMethod.POST) return `Liked album ${albumId}`
    // DELETE - unmark album as liked
    else if (method == HttpMethod.DELETE) return `Unliked album ${albumId}`
    // should never be reached
    else return methodNotAllowed()
  }
}

export const handler = apiViewHandler(__filename, AlbumApi)

The decorators aren’t magical; they simply save your configuration as metadata on the class. The @ApiView() decorator does the same thing as the Lambda() call shown above. This metadata is later read when the corresponding CDK constructs are generated for you. ApiViewBase contains some basic functionality for dispatching to the appropriate method inside the class based on the incoming HTTP request.

Isn’t this “routing?” Sort of. The AlbumApi class is a single Lambda function for the purposes of organizing your code and keeping the number of resources in your CloudFormation stack at a more reasonable size. It does however create multiple API Gateway routes, so API Gateway is still handling the primary HTTP parsing and routing. If you are a purist you can of course create a single Lambda function per route with the Lambda() wrapper if you desire. The goal here is simplicity.

The reason Lambda() is not a decorator is that function decorators do not currently exist in TypeScript due to complications arising from function hoisting.

Why TypeScript?

As an aside, TypeScript is now my preferred choice for backend development. JavaScript no, but TypeScript yes. The rapid evolution and improvements in the language with Microsoft behind it have been impressive. The language is as strict as you want it to be. Having one set of tooling, CI/CD pipelines, docs, libraries and language experience in your team is much easier than supporting two. All the frontends we work with are React and TypeScript, so why not use the same linters, type checking, commit hooks, package repository, formatting configuration, and build tools instead of maintaining, say, one set for a Python backend and another for a TypeScript frontend?

Python is totally fine except for its lack of type safety. Do not even attempt to blog at me ✋🏻 about mypy or pylance. It is like saying a Taco Bell is basically a real taqueria. Might get you through the day but it’s not really the same thing 🌮

Construct Generation

So we’ve seen the decorated application code, how does it get turned into cloud resources? With the ResourceGeneratorConstruct, a CDK construct that takes your functions and classes as input and generates AWS resources as output.

import { CorsHttpMethod, HttpApi } from "@aws-cdk/aws-apigatewayv2"
import { Construct, Duration, Stack, StackProps, App } from "@aws-cdk/core"
import { ResourceGeneratorConstruct } from "@jetkit/cdk"
import { aliveHandler, AlbumApi } from "../backend/src"  // your app code

export class InfraStack extends Stack {
  constructor(scope: App, id: string, props?: StackProps) {
    super(scope, id, props)

    // create API Gateway
    const httpApi = new HttpApi(this, "Api", {
      corsPreflight: {
        allowHeaders: ["Authorization"],
        allowMethods: [CorsHttpMethod.ANY],
        allowOrigins: ["*"],
        maxAge: Duration.days(10),
      },
    })

    // transmute your app code into infrastructure
    new ResourceGeneratorConstruct(this, "Generator", {
      resources: [AlbumApi, aliveHandler], // supply your API views and functions here
      httpApi,
    })
  }
}

It is necessary to explicitly pass the functions and classes you want resources for to the generator because otherwise esbuild will optimize them out of existence.

Try It Out

@jetkit/cdk is MIT-licensed, open-source, and has documentation and great tests. It doesn’t actually do much at all and that’s the point.

If you want to try it out as fast as humanly possible you can clone the TypeScript project template to get a modern serverless monorepo using NPM v7 workspaces.

Woodworker Designs and Builds the Perfect Tiny House Boat called the Le Koroc
Maybe a foundation isn’t needed after all