ICO 2.0

ICOs have become a huge opportunity for entrepreneurs who want to raise capital for their companies without issuing shares. Think about this: raise $50M now in Bitcoin, give away some valueless tokens to speculators, and keep 100% of your shares in the company. Do you want that deal?

Sure enough, the party needed to end. It turns out those valueless tokens had value, because investors were able to trade them on shady exchanges for more than they paid. How much more? Think thousands of percent more. Invest in Apple this year and you might get 31%; invest in an ICO and you might get 1,000%. You get the picture. Recently the SEC issued some very important bulletins and press releases about ICOs:

  • The SEC concluded the DAO token was a security.
  • It warned investors to be very careful when investing in ICOs.
  • It created a new cyber enforcement unit to fight cyber crime and ICO fraud.
  • It announced its first enforcement action against a CEO and two companies.

In other countries such as China and South Korea, ICOs have been entirely banned. Australia, meanwhile, came out with clear guidelines.

More is coming. Many of the tokens that were previously issued will probably be viewed as securities by the SEC. So what does this mean for the ICO market?

Well, it is actually a really good thing.

How so? Think of it this way: investors will be better informed in their investment decisions through the proper disclosures required by the SEC. This is the least companies can offer them. If a company decides to launch an ICO, writes a whitepaper full of nonsense (I have read many of those), and then goes on to raise $50M, it should be required to register its offering with the SEC and let the commission qualify it. This way, investors are reassured that the offering was well reviewed and the financial disclosures are clear. (This is nonexistent right now.)

When the JOBS Act was voted into law in 2012, Congress had no idea that Bitcoin existed. Its motivation was to enact modern securities legislation to update the 1934 securities rules. Why? Because the internet changed everything, and crowdfunding became a very important new way to raise capital, most of it through donations on Kickstarter and Indiegogo. But in reality, everyone was able to see the future: people want to invest in new ideas. The JOBS Act introduced four parts, each providing a new way for small companies to raise capital for their businesses. The three parts most relevant to ICOs are:

  • Regulation D 506(c) through which a company can raise unlimited capital from accredited investors on the internet. To comply, the company checks the accredited status of every investor. Total legal costs are around $50K.
  • Regulation A+ through which companies can raise up to $50M from the general public every year. To comply, the company registers the offerings with the SEC. Total filing costs range from $100K to $300K.
  • Regulation Crowdfunding through which a company can raise up to $1.07M from the general public every year. To comply, the company uses a funding portal and files a notice to the SEC. Total filing costs range from nothing to under $10K.

So what is the big deal for companies who want to raise capital with their ICO? They simply can use a Regulation A+ offering to raise $50M.

With a Regulation A+ offering, Bank Secrecy Act compliance requires reviewing large investments against the OFAC list to prevent money laundering and terrorism financing.

The secondary market for these tokens also needs to change to allow for tokens issued under one of the JOBS Act rules. Any exchange currently operating with these shadow tokens will not want to list the new security tokens: doing so would violate a whole list of laws regulating the trading of securities. These exchanges will need to become broker-dealers or trade only regular tokens. However, many of the regular tokens they are trading now are securities. Oops! They will need to hire a good law firm and pray.

China and South Korea ordered exchanges to shut down, and Japan regulated 11 of them. So there is a path forward for exchanges, but it might not be the current ones who survive.

The good news for investors in ICOs is that some companies have announced they will trade security tokens. The first is T0 (T-Zero), a division of Overstock. These new trading marketplaces are regulated as broker-dealers. T0 operates as an alternative trading system (ATS), which is not an exchange but a way to trade assets without being licensed as one. Getting licensed as an exchange à la NASDAQ or NYSE is hard; becoming an ATS only requires a broker-dealer to file with the SEC.

The new ICO market is going to keep growing, but this time it will happen in regulated marketplaces. Entrepreneurs will have to jump through a few hoops to get their ICOs launched within a clear legal framework.

 


What does the new Docker Swarm announcement mean for Kubernetes?

This field of container orchestration is moving incredibly fast, even by the normal standards of software development. There has been a Cambrian explosion of container startups, and competition is heating up in a really big way. This is good for innovation, but it makes it difficult to choose a technology. As such, I'm keeping my eye on both Docker Swarm and Kubernetes.

 

My goal was to commit to an orchestration technology that is innovative, stable, and will be maintained for a while. I decided that working in a healthy community was critical to fulfilling all three objectives. I chose Kubernetes after a long technical, community, and business evaluation (with Kismatic, and previously at Mesosphere) of different container orchestration solutions. However, as other container cluster management options become available, it's important to recognize what capabilities they provide and compare them to the strengths of Kubernetes.

 

So let's take a moment and look at the most recent release of Docker (version 1.12), which now competes directly with Kubernetes: SwarmKit (based on Swarm) is now part of the core of Docker and provides the ability to instantiate a Swarm cluster directly from the console.

 

Worth noting is that when you create a new Swarm cluster, Docker also creates the swarm manager, which in turn creates a certificate authority (if an external store isn't provided), so transparent security is now built in directly.
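
As a rough sketch, bootstrapping a cluster on Docker 1.12 looks something like this (the IP address is just a placeholder for your manager host):

docker swarm init --advertise-addr 192.168.99.100   # on the first manager; also creates the CA
docker swarm join-token worker                      # prints the join command (with token) for workers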

 

The command line console can also join a node to an existing Swarm cluster as either a manager or a worker. A worker can be seamlessly promoted to a manager, and a manager demoted back to the role of worker as needed, providing much-needed additional flexibility. The Swarm manager uses the Raft protocol to elect a leader and determine consensus, which is very similar to how Kubernetes works today with its internal use of etcd. Also worth pointing out is that Swarm workers use a gossip protocol to communicate their respective state amongst themselves, so Docker users no longer require external entities or key-value stores to keep track of the cluster topology.
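
A quick sketch of that workflow (the token and address come from docker swarm join-token on a manager):

docker swarm join --token <worker-token> 192.168.99.100:2377   # on the new node
docker node ls                                                 # on a manager: list cluster nodes
docker node promote node-2                                     # worker -> manager
docker node demote node-2                                      # manager -> worker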

Also new to this most recent Docker release is the concept of a logical service, consisting of one to many container instances; this logical view makes management of services much easier overall. You can now create, update, and scale a service, which results in containers being deployed, updated, or ultimately destroyed when no longer required.
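
For example, the service lifecycle looks roughly like this (the service and image names are arbitrary):

docker service create --name web --replicas 3 -p 80:80 nginx:1.10   # deploy three container instances
docker service scale web=5                                          # deploy two more to match the new count
docker service update --image nginx:1.11 web                        # roll the service to a new image
docker service ps web                                               # inspect the service's tasks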


Yet one weakness in the Docker 1.12 release, in my opinion, is its service discovery, which works quite elegantly in Kubernetes. It is important to note that the notion of a "Service" proxy for containers has existed in Kubernetes since the beginning of the project: you simply connect to a service name in your cluster, and Kubernetes will make sure you reach the correct pod (one or more containers) behind the service. Kubernetes is also designed to be modular and extensible, with easily swappable components, which allows for some interesting opportunities to tailor its use to your needs.
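
To make the comparison concrete, here is a minimal sketch of Kubernetes service discovery using kubectl (the names are arbitrary):

kubectl run nginx --image=nginx --replicas=2   # a deployment backing two pods
kubectl expose deployment nginx --port=80      # a Service proxying to those pods

# from any pod, the service is reachable by its name via cluster DNS
kubectl run test -i --tty --image=busybox --restart=Never -- wget -qO- http://nginx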

 

This new release from Docker will definitely face competition from Kubernetes, which is intended to help automate the deployment, scaling, and operation of containers across clusters of hosts. Many companies are already using Kubernetes because of its extremely strong community. Kube, as the community calls it, is also gaining widespread acceptance from enterprise customers looking to build containerized applications using the new cloud-native paradigms.

 

Kubernetes describes itself as a way to "manage a cluster of containers as a single system to accelerate development and simplify operations". Kubernetes is open source, but it is also community developed and stewarded by the CNCF. This is fundamentally different from Docker/Swarm, which is ultimately controlled by a single startup and is not governed by an open source community. Kubernetes is awesome because it brings Google's decade-plus experience of running containers at scale, Red Hat's years of experience deploying and managing open source in the enterprise, the nimble development experience of CoreOS, and advantages from many, many other organizations and community members.

 

Because of its powerful and diverse community, Kubernetes is as flexible as a Swiss Army chainsaw. You can run Kubernetes on bare metal or on just about any cloud provider out there. Another amazing feature of Kubernetes is that it supports both Docker and rkt (Rocket) containers, with the ability to support additional container runtimes moving forward.

 

The wonderful experience and drive of the community cements our dedication to our choice and its place in the overall container orchestration space. The sheer velocity of the project is amazing, and the community is extremely vibrant.

 

So, in the end, I'm choosing to rally behind Kubernetes. It was the most robust solution we tried, and we're confident that it'll support us as we grow in the future. Red Hat, along with others, is looking forward to providing Windows support for Kubernetes and the ability to run Windows containers directly as well. But it's important to keep in mind that the other cluster orchestration services aren't necessarily bad; as I stated earlier, this field is moving quite fast, and we want to ensure that we're working with the most active, stable, and mature project available to us. We've been extremely happy with Kubernetes and have been using it in production for a while now, in fact ever since the 1.0 release.

 

We're excited about the 1.3 release of Kubernetes and the new PetSet feature (formerly nominal services), which provides new stateful primitives for running pods that need strong identity and storage capabilities. I'm looking forward to everything to come with the addition of cluster federation (a.k.a. "Ubernetes") in Kubernetes 1.3 as well. I for one am very grateful to the entire Kubernetes community for everything they've done on this project thus far and everything that they continue to do. It's truly an amazing piece of technology and a great building block for my needs.

Excerpts on the new Swarm capabilities from https://lostechies.com/gabrielschenker/2016/06/21/dockercon-2016-day-2-presentations/ by Gabriel Schenker, used with express permission.

Docker and Vagrant Development on OS X Yosemite

Vagrant

Vagrant is an amazing tool for managing virtual machines via a simple-to-use command line interface.

Install

By default, Vagrant uses VirtualBox to manage its virtual machines. (You can download and install VirtualBox directly, or install it with Homebrew.) But I like using VMware Fusion 7 Professional with the Vagrant VMware provider.

I’m assuming you know how to download and install VMware Fusion the typical way.

# install Homebrew Cask, then Vagrant and Vagrant Manager
brew install caskroom/cask/brew-cask
brew cask install vagrant
brew cask install vagrant-manager

# install and license the VMware Fusion provider plugin
vagrant plugin install vagrant-vmware-fusion
vagrant plugin license vagrant-vmware-fusion license.lic

# add a VMware-compatible Ubuntu 12.04 box, then boot it and log in
vagrant box add precise64_vmware http://files.vagrantup.com/precise64_vmware.box
vagrant init precise64_vmware
vagrant up
vagrant ssh

SSHFS

Installation

An easy-to-use installer package for the latest version of SSHFS can be downloaded from the SSHFS repository’s download section. The package installs a self-contained (as in “does not depend on external libraries”) version of SSHFS. It supports Mac OS X 10.5 (Intel, PowerPC) and later.

Note: This build of SSHFS is based on the "FUSE for OS X" software, which is not contained in the installer package and has to be installed separately. The latest release of "FUSE for OS X" can be downloaded from http://osxfuse.github.com.

Macfusion

To use Macfusion with the newer "FUSE for OS X"-based version of SSHFS, put Macfusion in your Applications folder and run the following commands in Terminal. See item 3 under "Frequently Asked Questions" for more information on why you might want to use Macfusion.

# replace Macfusion's bundled sshfs binary with a symlink to the FUSE for OS X build
cd /Applications/Macfusion.app/Contents/PlugIns/sshfs.mfplugin/Contents/Resources
mv sshfs-static sshfs-static.orig
ln -s /usr/local/bin/sshfs sshfs-static

I ran into a problem though. I mount some of my servers via SSH, and even though the SSH account has write access to some files, OS X refuses to open them, giving the standard "permission denied" error. This is because the user on the server has a different UID than the local user on my Mac. To get around this issue, I entered the following line into the Extra Options (Advanced) field of Macfusion:

-o idmap=user -o uid=501 -o gid=501

This maps the remote UIDs to match those of the local system. If you're on a Mac, your user ID will most likely be 501. If not, make sure you enter the right ID.

A few more customizations:

# on the Mac: enter the Vagrant VM
cd ~/vagrant/
vagrant ssh

# inside the VM: create my user and rename the host
sudo useradd -d /home/preilly -m preilly -s /bin/bash -c "Patrick Reilly"
sudo vim /etc/hostname #change to vagrant
sudo vim /etc/hosts #change to vagrant
ifconfig | grep "inet addr" #take note of address (non loopback)
exit

# back on the Mac: make the VM reachable by name, then log in as the new user
sudo vim /etc/hosts #add vagrant entry with previous ip from ifconfig
ssh vagrant

# inside the VM: set up key-based SSH access
mkdir -p /home/preilly/.ssh
vim /home/preilly/.ssh/authorized_keys #paste public key
chmod 700 /home/preilly/.ssh/
chmod 640 /home/preilly/.ssh/authorized_keys
exit
ssh -A vagrant

 

Git

Git uses your username to associate commits with an identity. The git config command can be used to change your Git configuration, including your username.

# install Git and set my identity for commits
sudo apt-get install git
git config --global user.name "Patrick Reilly"
git config --global user.email "patrick@kismatic.io"

So now I can use the Macfusion menu item to mount my Vagrant image as a local volume:

cd /Volumes/vagrant/

and use the editor of my choice to work with my home directory in Vagrant.

Docker

Prerequisites

Docker requires a 64-bit installation regardless of your Ubuntu version. Additionally, your kernel must be 3.10 at minimum. The latest 3.10 minor version or a newer maintained version are also acceptable.

Kernels older than 3.10 lack some of the features required to run Docker containers. These older versions are known to have bugs which cause data loss and frequently panic under certain conditions.

To check your current kernel version, open a terminal and run uname -r:

$ ssh -A vagrant
$ uname -r
3.2.0-29-virtual
# 3.2 is too old for Docker, so upgrade to the LTS-backported Trusty kernel
$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install linux-image-generic-lts-trusty
$ sudo reboot

Get the latest Docker package

$ sudo apt-get install apparmor

# install Docker via the convenience script, then let my user run it without sudo
$ wget -qO- https://get.docker.com/ | sh
$ sudo usermod -aG docker preilly

Verify Docker is installed correctly:

$ sudo docker run hello-world

Installing Go

# install build dependencies, then gvm (the Go version manager)
$ sudo apt-get install curl git mercurial make binutils bison gcc build-essential
$ bash < <(curl -s -S -L https://raw.githubusercontent.com/moovweb/gvm/master/binscripts/gvm-installer)
$ sudo apt-get install bison
$ gvm install go1.4
$ gvm use go1.4 --default
# I got bit by this issue: https://github.com/moovweb/gvm/issues/124
# upgrading git from the ppa and re-running gvm fixed it for me
$ sudo apt-get install python-software-properties
$ sudo add-apt-repository ppa:git-core/ppa
$ sudo apt-get update
$ sudo apt-get install git
$ gvm use go1.4 --default

This is my current development environment on my MacBook. I'd really like to get others' feedback or suggestions as well.

The Datacenter is the Computer

Using containers, I can easily ship applications between machines and start to think of my cluster as a single computer. Each machine contributes additional CPU cores able to execute my applications, and each runs an operating system, but the goal is not to interact with the locally installed OS directly. Instead, we want to treat the local OS as firmware for the underlying hardware resources.

Now we just need a good scheduler.

The Linux kernel does a wonderful job of scheduling applications on a single host system. Chances are if we run multiple applications on a single system the kernel will attempt to use as many CPU cores as possible to ensure that our various applications run in parallel.

When it comes to a cluster of machines, the job of scheduling applications becomes an exercise for the operations team. Today, for many organizations, scheduling is handled by the fine folks on that team. Yet, unfortunately, the use of a human scheduler requires humans to keep track of where applications are running. Sometimes this means using complicated, error-prone spreadsheets or a configuration management tool like Puppet. Either way, these tools don't really offer the robust scheduling necessary to react to real-time events. This is where Kubernetes fits in.
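
As a sketch of the difference: with Kubernetes you declare how many copies of an application you want, and the scheduler, not a human, decides which machines run them:

kubectl run web --image=nginx --replicas=10   # ask for ten replicas
kubectl get pods -o wide                      # see which node each one landed on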

If you think of the datacenter in this way, then Kubernetes would be its datacenter operating system.

Kubernetes on Mesos: Try It Now

The inspiration for this post came from Kelsey Hightower (@kelseyhightower).

Myriad is a framework for scaling YARN clusters on Mesos

Myriad is a Mesos framework designed for scaling YARN clusters on Mesos. Myriad can expand or shrink one or more YARN clusters in response to events, as per configured rules and policies.

The name Myriad means "countless" or "an extremely great number". In the context of the project, it allows one to expand the overall resources managed by Mesos, even when the cluster under Mesos management runs other cluster managers like YARN.

Myriad allows Mesos and YARN to co-exist and share resources, with Mesos as the resource manager for the datacenter. Sharing resources between these two resource allocation systems improves overall cluster utilization and avoids statically partitioning resources between two separate clusters/resource managers.

Roadmap

Myriad is a work in progress.

  • Support multiple clusters
  • Custom Executor for managing NodeManager
  • Support multi-tenancy for node-managers
  • Support unique constraint to let only one node-manager run on a slave
  • Configuration store for storing rules and policies for clusters managed by Myriad
  • NodeManager Profiles for each cluster
  • High Availability mode for framework
  • Framework checkpointing
  • Framework re-conciliation

https://github.com/mesos/myriad

Open-Source Service Discovery

The problem seems simple at first: how do clients determine the IP and port for a service that exists on multiple hosts?

When developing and running resource-efficient distributed systems on a platform like Apache Mesos (a cluster manager that simplifies the complexity of running applications on a shared pool of servers), this is a very important decision to make.

Jason Wilder has looked at a number of general-purpose, strongly consistent registries (ZooKeeper, Doozer, etcd) as well as many custom-built, eventually consistent ones (SmartStack, Eureka, NSQ, Serf, Spotify's DNS, SkyDNS).

Many use embedded client libraries (Eureka, NSQ, etc.) and some use separate sidekick processes (SmartStack, Serf).
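
To make the registry idea concrete, here is a minimal sketch using etcd's v2 CLI (the key paths and address are hypothetical): each instance registers itself under a well-known key with a TTL so dead entries expire, and clients look instances up by prefix.

etcdctl set /services/web/instance-1 '10.0.1.5:8080' --ttl 60   # instance registers itself, refreshing periodically
etcdctl ls /services/web                                        # client lists the live instances
etcdctl get /services/web/instance-1                            # client resolves one instance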

Interestingly, of the dedicated solutions, all of them have adopted a design that prefers availability over consistency.

Please read this really nice writeup by Jason Wilder to learn more.

http://jasonwilder.com/blog/2014/02/04/service-discovery-in-the-cloud/

PHP Next Generation

The PHP Group has put up a post about the future of PHP. They say, ‘Over the last year, some research into the possibility of introducing JIT compilation capabilities to PHP has been conducted. During this research, the realization was made that in order to achieve optimal performance from PHP, some internal API’s should be changed. This necessitated the birth of the phpng branch, initially authored by Dmitry Stogov, Xinchen Hui, and Nikita Popov. This branch does not include JIT capabilities, but rather seeks to solve those problems that prohibit the current, and any future implementation of a JIT capable executor achieving optimal performance by improving memory usage and cleaning up some core API’s. By making these improvements, the phpng branch gives us a considerable performance gain in real world applications, for example a 20% increase in throughput for WordPress. The door may well now be open for a JIT capable compiler that can perform as we expect, but it’s necessary to say that these changes stand strong on their own, without requiring a JIT capable compiler in the future to validate them.’