Obol is a project by a small international team of tech entrepreneurs, mathematicians, and computer scientists who recognise that a series of very significant challenges looms with regard to safely integrating autonomous systems into human society.


Recent advances in blockchain technology will soon allow for the deployment of self-enforcing smart contracts (such as joint savings accounts, financial exchange markets, or even trust funds), as well as decentralized autonomous organizations (DAOs) that subsist independently of any natural or legal person.
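The self-enforcing character of such a contract can be illustrated with a toy model. The sketch below (purely illustrative Python, not real smart-contract code; the class and method names are hypothetical) shows a joint savings account whose withdrawal rule enforces itself: funds move only with unanimous approval from the co-owners, with no intermediary needed.

```python
# Toy sketch of a self-enforcing joint savings account.
# Illustrative only: the rule "unanimous approval or nothing moves"
# is enforced by the code itself, not by a trusted third party.

class JointSavings:
    def __init__(self, owners):
        self.owners = set(owners)   # co-owners of the account
        self.balance = 0
        self.approvals = set()      # owners who approved the next withdrawal

    def deposit(self, owner, amount):
        if owner not in self.owners or amount <= 0:
            raise ValueError("only owners may deposit a positive amount")
        self.balance += amount

    def approve_withdrawal(self, owner):
        if owner in self.owners:
            self.approvals.add(owner)

    def withdraw(self, amount):
        # Self-enforcement: every owner must have approved, and funds
        # must suffice; otherwise the withdrawal simply cannot happen.
        if self.approvals != self.owners or amount > self.balance:
            return False
        self.balance -= amount
        self.approvals.clear()  # approvals are spent once used
        return True
```

A withdrawal attempted with only one of two approvals fails; once the second owner approves, the same call succeeds.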

These algorithmic entities are both autonomous and self-sufficient: they charge users directly for services rendered, and so can sustain their own operations with no further assistance required. They may even hire humans to perform tasks.
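The self-sufficiency loop described above can be sketched as a toy model (illustrative only; all names are hypothetical): the entity collects fees for each service, pays its own operating costs from the proceeds, and can escrow a bounty to "hire" a human for a task it cannot perform itself.

```python
# Toy model of a self-sufficient algorithmic entity.
# Illustrative only: income from service fees covers operating costs,
# with surplus available to post bounties for human labour.

class AutonomousService:
    def __init__(self, service_fee, operating_cost):
        self.service_fee = service_fee        # charged per request
        self.operating_cost = operating_cost  # due per period (e.g. hosting)
        self.balance = 0                      # the entity's own funds

    def render_service(self, payment):
        """Serve a user only if the fee is covered; keep the payment."""
        if payment < self.service_fee:
            return False
        self.balance += payment
        return True

    def pay_operating_costs(self):
        """Self-sustain: cover one period's costs from its own balance."""
        if self.balance < self.operating_cost:
            return False  # insolvent: the entity can no longer operate
        self.balance -= self.operating_cost
        return True

    def post_bounty(self, amount):
        """Escrow funds to hire a human for a task the entity cannot do."""
        if self.balance < amount:
            return False
        self.balance -= amount
        return True
```

As long as fee income outpaces costs, the entity persists indefinitely without any owner or operator topping it up.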

Once deployed, DAOs are not controlled, owned, or operated by any given entity. They exist in a legal limbo: sovereigns capable of holding property and possessing agency, yet remaining legal non-persons.

With agency comes responsibility, along with the potential for unethical practices and the commission of torts; yet enforcing a judgment against such an entity would be severely difficult. Hybrid human-and-AI agents present especially significant challenges, as they combine the strengths of both platforms. The challenges posed by DAOs are imminent, and we must therefore act now.


The achievement of a practical computational ethics system will be a crucially important milestone, for it enables:

  1. Objective ethical standards that transcend time, geography, and culture.
  2. Streamlining and decentralising the legal system through smart contracts and smart insurance overseen by Obol's verification engine.
  3. Managing existential risk from both narrow and general artificial intelligences, whether amoral or over-moral.