Posts

Stablecoins Explained: Why They’re Becoming Core Financial Infrastructure in 2026

   Stablecoins were once treated as a crypto side story: useful for traders, interesting for DeFi, but not central to the future of money. That view no longer fits reality.

   Payments, meanwhile, are built for a world that no longer exists. They are slow when they should be instant, fragmented when they should be unified, and constrained by operating hours in a world that runs continuously. Cross-border transfers remain complex, settlement is often delayed, and liquidity gets trapped in the gaps between systems.

   Stablecoins are not interesting because they are digital assets, but because they address these operational limits directly. In 2026, stablecoins are increasingly being treated as infrastructure: a digital cash layer that moves value continuously, settles quickly, works across borders, and can plug directly into software. What changed ...

Setting up Python...

Install pyenv

1. Download and run the installer:

   curl -L https://raw.github.com/yyuu/pyenv-installer/master/bin/pyenv-installer | bash

2. Set up the environment variables and initialize pyenv in your profile:

   $ echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bash_profile
   $ echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bash_profile
   $ echo 'eval "$(pyenv init -)"' >> ~/.bash_profile

3. Restart your shell:

   $ exec $SHELL

4. Install your choice of Python:

   pyenv install 3.5.2

5. Set it as the current (global) version of Python:

   pyenv global 3.5.2

6. Install the virtual environment plugin, pyenv-virtualenv.

7. Create a virtual environment:

   pyenv virtualenv protovima

8. Activate it:

   pyenv activate protovima

NOTE: With Python 3.5 you should install the build dependencies first:

   sudo apt-get install curl git-core gcc make zlib1g-dev libbz2-dev libreadline-dev libsqlite3-dev libssl-dev
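Once the steps above are done, a quick sanity check from inside the interpreter confirms that the pyenv-managed Python is the one being picked up (a minimal sketch; the exact version tuple depends on which version you installed):

```python
import sys

# If `pyenv global 3.5.2` took effect, `python` resolves to the pyenv shim
# and the interpreter reports that version.
print(sys.version_info[:3])  # e.g. (3, 5, 2) when the pyenv version is active
print(sys.executable)        # should point under ~/.pyenv when shims are active
```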

JSON Performance

I have some code that processes tweets, about 5 million a day, in real time. They are currently stored in MongoDB and also posted on various celery/rabbitmq work queues. The average message size is 5524 bytes, so encoding and decoding these messages is an issue. Using the test code below, a standard tweet message is encoded and decoded with the Python built-in json package compared against cjson, bson, and ujson:

Test                       Msg Size  De-serialize (cjson, bson, ujson)  Obj Size  Serialize (cjson, bson, ujson)  Storage    Cost  Sample
Empty object               41        14.790, 37.565, 0.970              54        6.341, 41.856, 1.249            2050Mb     0.21  {}
Empty list                 41        15.069, 38.021, 1.005              54        6.675, 41.475, 1.400            2050Mb     0.21  []
Object of objects          843       107.750, 145.440, 25.525           3226      63.555, 828.235, 28.051         42150Mb    4.21
List of lists              563       58.805, 81.950, 16.960             104       43.426, 815.965, 18.311         28150Mb    2.81
Object with only tweet id  93        25.030, 53.360, 2.280              422       23.570, 83.445, 3.295           4650Mb     0.47
Full tweet message         4386      697.221, 867.780, 188.560          12606     360.290, 5847.335, 201.610      219300Mb   21...
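The shape of the benchmark can be sketched with the stdlib alone. This is a minimal sketch, not the original test harness: it times encode/decode of a stand-in message (not a real tweet) with the built-in json module; the cjson/bson/ujson numbers in the table come from swapping those encoders into the same loop.

```python
import json
import timeit

# Stand-in message; the real benchmark used full tweet payloads.
message = {"id": 123456789, "text": "hello world", "user": {"screen_name": "demo"}}
encoded = json.dumps(message)

n = 10_000
encode_s = timeit.timeit(lambda: json.dumps(message), number=n)
decode_s = timeit.timeit(lambda: json.loads(encoded), number=n)
print(f"encode: {encode_s:.3f}s  decode: {decode_s:.3f}s  size: {len(encoded)} bytes")
```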

Replacing ARRI lights

My aim is to replace my very hot ARRI lights. I've bought on eBay a bi-color 1500W LED Fresnel video spotlight from Xiamen Came Photographic Equipment Co., Ltd. (http://stores.ebay.com/PhotoLight). It came with two problems:

1) It didn't have a Fresnel lens as advertised. Okay, I could live with this if it wasn't for problem number two:

2) The focus adjust button didn't work. It feels as if it's popped out of its thread.

Now I'm talking to Came and hopefully they will resolve these problems. If so, it will be a great light!

Some PiCloud tests...

I'm using their Pi example. The performance gains are great, and when you see the code below you will realize that PiCloud is really easy and intuitive to use. I'll be moving some of my Python jobs to them.

Process      Location  Number in Parallel  Samples  Wall Clock Time (sec)  Pi
calcPiLocal  local     1                   10^2     0.00                   3.16000000
calcPiCloud  cloud     8                   10^2     30.37                  3.04000000
calcPiLocal  local     1                   10^3     0.00                   3.13200000
calcPiCloud  cloud     8                   10^3     5.31                   3.08000000
calcPiLocal  local     1                   10^4     0.02                   3.13640000
calcPiCloud  cloud     8                   10^4     4.31                   3.13200000
calcPiLocal  local     1                   10^5     0.05                   3.13664000
calcPiCloud  cloud     8                   10^5     1.22                   3.13840000
calcPiLocal  local     1                   10^6     0.48                   3.14185200
calcPiCloud  cloud     8                   10^6     2.30                   3.14116000
calcPiLocal  local     1                   10^7     4.64                   3.14092240
calcPiCloud  cloud     8                   10^7     2.31                   3.14099920
calcPiLocal  local     1                   10^8     46.52                  3.14138168
calcPiCloud  cloud     8                   10^8     8.50                   3.14139276
calcPiLocal  local     1                   10...
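The parallel split in the table can be sketched with the stdlib. This is not PiCloud's code: it replaces PiCloud's remote calls with a local multiprocessing.Pool, but the structure is the same — divide the dart-throwing Monte Carlo samples across workers and combine the hit counts.

```python
import random
from multiprocessing import Pool


def calc_pi_chunk(args):
    """Count how many of n random points in the unit square fall in the quarter circle."""
    n, seed = args
    rng = random.Random(seed)
    return sum(1 for _ in range(n) if rng.random() ** 2 + rng.random() ** 2 <= 1.0)


def calc_pi_parallel(total_samples, jobs=8):
    """Split the sample count across `jobs` worker processes and combine the hits."""
    per_job = total_samples // jobs
    with Pool(jobs) as pool:
        hits = pool.map(calc_pi_chunk, [(per_job, seed) for seed in range(jobs)])
    return 4.0 * sum(hits) / (per_job * jobs)


if __name__ == "__main__":
    print(calc_pi_parallel(10 ** 6))  # approaches 3.14159... as samples grow
```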

Serialization Performance

Last week I stuck my head out in a meeting and declared that XML is verbose and slow to parse and that we should move to something like Google's Protocol Buffers, or something readable such as JSON or YAML, which are easier to parse, etc. Well, is this really true? The statement seems logical considering how verbose XML can be. Still, after the meeting, some questions stayed in my mind, so I thought I would do some tests. I used a FIX Globex (CME) swap trade confirmation message to test my theory.

Format  Parser        Size  From Python (s)  To Python (s)
json    cjson         2332  0.222238063812   0.0943419933319
pickle  cPickle       1778  0.233518123627   0.128826141357
XML     cElementTree  2083  0.407706975937   2.77832698822
json    simplejson    2332  3.37723612785    5.11316084862

So this simple test shows that using XML with the cElementTree parser is not so slow; cjson wins in speed, and the conclusion must be: your performance will ultimately depend on your data and the quality of the l...
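A stdlib-only sketch of the same comparison, under stated assumptions: a toy trade dict stands in for the FIX Globex message, and today's json, pickle, and xml.etree.ElementTree replace cjson, cPickle, and cElementTree (their modern successors).

```python
import json
import pickle
import timeit
import xml.etree.ElementTree as ET

# Toy stand-in for the swap trade confirmation message.
trade = {"symbol": "GE", "qty": 100, "price": 99.5, "side": "BUY"}

json_msg = json.dumps(trade)
pickle_msg = pickle.dumps(trade)
xml_msg = "<trade symbol='GE' qty='100' price='99.5' side='BUY'/>"

n = 10_000
for name, msg, parse in [
    ("json", json_msg, json.loads),
    ("pickle", pickle_msg, pickle.loads),
    ("xml", xml_msg, ET.fromstring),
]:
    secs = timeit.timeit(lambda: parse(msg), number=n)  # "to Python" direction
    print(f"{name:6} size={len(msg):3}  parse {n}x: {secs:.3f}s")
```

As in the post, the interesting number is the "to Python" (parse) direction, where XML pays the biggest penalty.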

My Second Super Computer

Cluster GPU Quadruple Extra Large

Memory: 22 GB
EC2 Compute Units: 33.5
GPU: 2 x NVIDIA Tesla "Fermi" M2050 GPUs (448 cores each)
Local instance storage: 1690 GB
Platform: 64-bit, 10 Gigabit Ethernet
OS: CentOS 64-bit

Monte Carlo on one Tesla device (256 options):

Simulation paths  CPU Time (ms.)  CPU options/sec.  GPU Time (ms.)  GPU options/sec.
262144            6000            42                3.586           71388

Monte Carlo on two Tesla devices (256 options split across two Tesla boards):

Simulation paths  CPU Time (ms.)  CPU options/sec.  GPU Time (ms.)  GPU options/sec.
262144            6000            42                3.405           151999

TOTAL cost: $0.04, including building the environment and sample code from scratch.

CUDA Device Query (Runtime API) version (CUDART static linking)
There are 2 devices supporting CUDA
Device 0: "Tesla M2050"
  CUDA Driver Version:                        3.20
  CUDA Runtime Version:                       3.10
  CUDA Capability Major/Minor version number: 2.0
  Total amount of...
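The kind of computation being benchmarked can be sketched on the CPU in a few lines. This is not the CUDA sample from the post — it is a plain-Python Monte Carlo pricer for a single European call under Black-Scholes dynamics, with hypothetical contract parameters, just to show what each of the 262144 simulation paths per option is doing:

```python
import math
import random


def mc_call_price(s0, k, r, sigma, t, n_paths, seed=42):
    """Monte Carlo price of a European call.

    Each path draws one standard normal z and computes the terminal price
    S_T = S0 * exp((r - sigma^2/2)*t + sigma*sqrt(t)*z); the discounted
    mean payoff max(S_T - K, 0) estimates the option price.
    """
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        st = s0 * math.exp(drift + vol * z)
        total += max(st - k, 0.0)
    return math.exp(-r * t) * total / n_paths


# Hypothetical contract: S0=100, K=100, r=5%, sigma=20%, 1 year to expiry.
price = mc_call_price(100.0, 100.0, 0.05, 0.2, 1.0, 100_000)
print(price)  # near the Black-Scholes value of ~10.45
```

The GPU wins in the tables above come from running these independent paths across the Tesla cores in parallel; the math per path is the same.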