De La Soul gave away entire catalog

I mentioned in class that De La Soul gave away their entire catalog on Valentine’s Day. This is fascinating in the context of Lanier’s work. Arguably, De La Soul is frustrated with a predicament: they have not been able to get their music released digitally. Their early work used many samples, so in order to release it digitally, their record label must get permission and negotiate a payment for the use of each sample. And digital sale of music that includes samples is frequently treated differently from physical sales. So their music has languished, available only in physical form, while music sales have shifted to digital.

This is caused in part by the lack of a “statutory rate” for music sample clearance. In covering another artist’s song, there is a Congressionally set rate that you must pay to the copyright holder of the song for each sale of your cover: this is a statutory rate. Along with that, after the first authorized release of a new song, the copyright owner cannot prohibit others from covering it (this is a “compulsory license”). There is also a “performance” compulsory license and statutory rate (for using a song on the radio, on TV, or in streaming), and several clearinghouses (ASCAP and BMI) that audit radio and TV stations to get them to pay the performance royalties to copyright holders. It’s basically a half-century-old micropayment system. Given today’s NYT article on the Content Creators Coalition protesting the way these royalties get paid, it seems that the jury is still out on whether this system is a good thing.
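
For concreteness, here is a minimal sketch of the cover-royalty arithmetic, assuming the 2014-era US statutory mechanical rates (roughly 9.1¢ per copy for songs of five minutes or less, 1.75¢ per minute rounded up for longer songs; these change over time, so treat the numbers as an assumption, and the function is my own illustration):

```python
import math

# Approximate 2014-era US statutory mechanical royalty (assumption; the
# Copyright Royalty Board adjusts these): 9.1 cents per copy for songs of
# 5 minutes or less, else 1.75 cents per minute, rounded up.
PER_COPY = 0.091
PER_MINUTE = 0.0175

def mechanical_royalty(copies_sold: int, song_minutes: float) -> float:
    """Royalty owed to the song's copyright holder for selling a cover."""
    if song_minutes <= 5.0:
        rate = PER_COPY
    else:
        rate = PER_MINUTE * math.ceil(song_minutes)
    return copies_sold * rate

# A 10,000-copy release of a 4-minute cover owes about $910.
print(mechanical_royalty(10_000, 4.0))   # 910.0
# A 7.5-minute song rounds up to 8 minutes: 14 cents per copy.
print(mechanical_royalty(10_000, 7.5))   # 1400.0
```

There is no analogous function you could write for samples: the “rate” is whatever each individual negotiation produces, if permission is granted at all.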

However, sampling is not treated the same way. There is no set rate for sample use, so each use must be negotiated individually. Also, the copyright holder of a work does not have to allow anyone to use a sample at all. The Beatles, for example, have never allowed anyone to use samples of their music. De La Soul happened to sample The Beatles, so I imagine that’s part of why a digital release has been a non-starter.

Lanier might argue that sample clearance should be part of his universal micropayment system. It’s an interesting case, because supply and demand runs into a basic math problem: when supply is equal to zero, there is no way to set a price. If The Beatles have a valuable and popular catalog but will not sell sample rights for any amount of money, then their product is fundamentally unsuitable for the micropayment system unless the system can coerce them to provide it. In this regard, it’s similar to the camera surveillance proposal that Lanier mentioned–many people will never consent to having their physical presence constantly monitored, not for any amount of money, so their data is fundamentally unsuitable for the micropayment system. For a micropayment system like his to work on a broad scale, it practically must have compulsory licensing of the content it sells. So a big question emerges: is compulsory licensing suitable for all kinds of data and content? Would you consent to compulsory licensing of your child’s physical location? Compulsory licensing of your psychological health records? Compulsory licensing of your art?

As the article also mentioned, it is interesting that (so far) De La Soul has not been sued. It’s possible that this is an industry stunt, engineered by their labels for maximum effect. But there are certainly other recent cases of artists making saleable music that samples other music without clearance (e.g. Girl Talk). Is this just the race to the bottom that Lanier predicts will happen to all content as it is devalued by our system, or is it something else, something better?

Dark Pools

A quick search in the NYU Library’s Journals system for articles from The Economist that contained the words “dark pool” yielded the following:

  • “Big headache or Big Bang?” 2007
  • “The battle of the bourses,” 2008
  • “Attack of the clones,” 2009
  • “Rise of the machines,” 2009
  • “Some like it not,” 2011
  • “The fast and the furious,” 2012
  • “Going broke in stocks,” 2013
  • “Code blue,” 2013
  • “The end of the street,” 2013

Over approximately seven years, dark pools evolved from a curiosity to a major upheaval in securities trading, and may actually be on the wane lately. Let’s take a minute to define the term. From “Rise of the machines” in 2009: “dark pools [are] electronic trading venues that conceal an order’s size and origin.” Here’s a longer definition:

Dark pools are places where anonymous buyers and sellers can trade directly with each other away from normal, “lit” exchanges. Because data are published only once trades are complete, institutional investors such as pension funds are meant to be able to take, or offload, large positions in quoted companies without alerting the wider market. (“Some like it not,” 2011)

Before dark pools, a buyer/seller with a large position in a stock would have the following options:

  1. just put the bid/ask on the stock market, taking any adverse price movement on the chin–this is the riskiest option because a large supply/demand shift can make a stock temporarily unsaleable (or permanently unsaleable in the worst case)
  2. have a broker or specialist do the bid/ask on the exchange but in small batches, say 100 shares at a time over a longer period, to avoid a sudden shift in supply/demand that sends the price rocketing in the wrong direction
  3. write an algorithm that does the bid/ask in small batches on the electronic markets (possibly on many electronic markets)
  4. call up colleagues that are known to have large positions in similar stocks and work out a private deal (a practice known as the “third market” or “block desk”)

Dark pools can be thought of as an automation of #4. Basically, these large traders form a consortium and agree to use custom software to search through their internal order books to match potential buyers with sellers. The stock bids/asks are never posted to the public exchanges, and the identities of the buyer and seller are not known to each other until the sale is reported, which happens only after the sale is completed. A toy sketch of this kind of internal crossing follows below.
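
Here is a minimal sketch of that crossing logic, assuming (as many real pools do) that fills print at the midpoint of the public best bid/offer. Everything here, from class names to prices, is my own illustration, not any real venue’s system:

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Order:
    trader: str   # identity, hidden from the counterparty until after the fill
    side: str     # "buy" or "sell"
    qty: int

class ToyDarkPool:
    """Hidden order book: no quotes are published; fills print at the
    midpoint of the public exchange's best bid/offer (a common dark pool
    convention, heavily simplified here)."""
    def __init__(self):
        self.buys, self.sells = deque(), deque()

    def submit(self, order: Order, public_bid: float, public_ask: float):
        book, contra = ((self.buys, self.sells) if order.side == "buy"
                        else (self.sells, self.buys))
        fills = []
        while order.qty and contra:
            resting = contra[0]
            qty = min(order.qty, resting.qty)
            order.qty -= qty
            resting.qty -= qty
            # Price discovery is borrowed from the lit market: midpoint cross.
            fills.append((qty, (public_bid + public_ask) / 2))
            if resting.qty == 0:
                contra.popleft()
        if order.qty:          # unmatched remainder rests, still invisible
            book.append(order)
        return fills           # reported only after execution

pool = ToyDarkPool()
pool.submit(Order("pension_fund", "sell", 50_000), 10.00, 10.02)
print(pool.submit(Order("mutual_fund", "buy", 30_000), 10.00, 10.02))
# [(30000, 10.01)]: 30,000 shares traded without ever posting a quote
```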

Several interesting things can be teased out of this list of options. Option #1 presented a critical difficulty: moving large positions without negatively affecting the price you get; that difficulty was ameliorated by option #2. But option #2 forced investors to rely on expensive brokers and specialists, whose intentions were never assured:

Institutional investors may complain about being forced into “dark pools” (off-exchange venues where they can deal anonymously) to avoid HFTs, but these pools existed before HFTs and were set up in part to avoid being scalped by brokers or floor traders. (“The fast and the furious,” 2012)

Along came option #3, once the rise of electronic trading made it easier to move large positions by clever automation of the role formerly occupied by the people on the trading floor. But electronic trading also made it easier for other algorithms to analyze activities like the series of small orders that indicated an attempt to move a large position:

The basic idea of HFT is to use clever algorithms and super-fast computers to detect and exploit market movements. To avoid signalling their intentions to the market, institutional investors trade large orders in small blocks–often in lots of 100 to 500 shares–and within specified price ranges. High-frequency traders attempt to uncover how much an investor is willing to pay (or sell for) by sending out a stream of probing quotes that are swiftly cancelled until they elicit a response. The traders then buy or short the targeted stock ahead of the investor, offering it to them a fraction of a second later for a tiny profit. (“Rise of the machines,” 2009)
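
To make the probing concrete, here is a toy model of my own devising (not an actual HFT strategy): the algorithm re-quotes at successively lower prices, cancelling each unfilled quote, until it locates the hidden buyer’s limit:

```python
def discover_reserve_price(hidden_limit: float, start: float, tick: float = 0.01):
    """Toy model of the probing described above: offer at descending prices,
    cancelling each quote that gets no response, until the hidden buyer's
    limit price is found."""
    offer = start
    probes = 0
    while offer > hidden_limit:   # no response: cancel and re-quote lower
        probes += 1
        offer = round(offer - tick, 2)
    return offer, probes

# An institutional buyer is secretly willing to pay up to $10.05.
price, probes = discover_reserve_price(hidden_limit=10.05, start=10.20)
print(f"limit discovered at ${price} after {probes} cancelled quotes")
# The prober can now buy on the lit market and re-offer just under $10.05,
# pocketing the difference a fraction of a second later.
```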

So dark pools evolved as a sort of peer to peer stock trading model, avoiding the risk and scrutiny of public exchanges. However, it wasn’t long before banks got in on the game:

Dark pools have proliferated over the past five years, winning volume from exchanges. Broker-dealers including Goldman Sachs have muscled in, setting up their own dark pools to capture transaction fees their clients would otherwise pay to exchanges. Pools owned by broker-dealers now dominate the sector, but as a result many of them have become much less attractive to large investors. (“Some like it not,” 2011)

And it wasn’t long before HFT got into the game:

Many dark pools that are owned by broker-dealers now welcome high-frequency traders (HFTs), who can use their speed to exploit price differences between exchanges and dark pools. An HFT might, for instance, lock in a high price by submitting a large sell order to a dark pool just as the exchange price of a share begins to fall. (“Some like it not,” 2011)

And now dark pools are showing various signs of declining popularity:

There are signs of a backlash from institutional investors. After several years of steep rises, trading volumes on dark pools have started to level off this year, according to the TABB Group, which tracks the industry. Some investors are only routing orders to pools that exclude HFTs. (“Some like it not,” 2011)

Alongside this is the decline of the public exchanges, and the increasingly marginal role of floor-based trading. In 2007, the New York Stock Exchange was riding high as a profitable, publicly traded company, buying its competitors, basking in the light of its deregulators. But fast forward to November 2013 to see the NYSE bought by an upstart electronic market competitor (ICE).

What does this brief history of dark pools tell us? Well:

  • investors are not satisfied with limited liability–they want anonymity too
  • large investors want to obscure the details of trades as much as possible
  • HFT will infiltrate any electronic trading system that becomes available, particularly any that achieve high volume or high liquidity
  • in the absence of real time public exchanges, banks will move to implement their own trading exchanges

From these trends, it seems clear to me that some form of cryptocurrency will succeed in this space, because it has the two investor goals (anonymity and obscured transaction details) built in.

It also seems clear that the plain old stock market used by regular people has become a second-class place to trade. HFT (which can be thought of as meta-trading) and dark pools (analogous to Internet peering agreements) are the first-class trading venues now. Regular people exist to pour money into the system by buying and selling, but also to pay fund manager commissions, provide the HFT algorithms with liquidity rebates, and pay HFT systems the “penny toll” that they extract from most trades they participate in. These first-class trading platforms will only be accessible to large players who have access to the best market data, research, and technology. Those large players will use these platforms to limit their risk, while the systemic risks are borne by the aggregated small investors on public exchanges/markets.

Finally, the typically cited primary goal of the stock market system (capital formation) has been pushed so far aside that one must wonder whether the model will persist. Stock prices have a diminishing connection to the performance of the company that issued the stock. Investors typically care only about making good bets–mostly short term bets, at that.

Let’s assume the stock market doesn’t exist. How would a company get capital without it? One answer is that it wouldn’t, at least not in the millions and billions of dollars. Micro loans and other peer to peer lending systems seem like entirely more sustainable options for small business. Just as companies formed consortia to allow them to trade in dark pools, it is entirely possible that small businesses could form similar mutual lending arrangements within their geographic area, or even within their industry networks. For example, the owner of a local trucking company might recognize the benefit of a nearby warehousing company or a wholesaler, and might be willing to assist those other parts of the economic ecosystem with small loans.

As for regular investors, the question is how they’ll get the sort of investment returns they have come to expect (and plan their retirement around). Again, the answer might be that they won’t, at least not at the scale of the boom parts of our stock market cycle. Again, micro lending or geographically/economically local lending systems might be a viable option if they are able to pool enough capital, manage the overhead costs of a lending program, structure for small purposeful loans, and insure against default risks.

I’m not convinced that micro loans and peer to peer lending are the right path for a post stock market age, but they are an interesting start. My next dive will probably be into the micro loan world, starting with the work of Muhammad Yunus.

Interesting Article on Bitcoin

Why Bitcoin Matters

It makes some interesting points, including:

– Think about the implications for protest movements. Today protesters want to get on TV so people learn about their cause. Tomorrow they’ll want to get on TV because that’s how they’ll raise money, by literally holding up signs that let people anywhere in the world who sympathize with them send them money on the spot.

– Switching to Bitcoin, which charges no or very low fees, for these remittance payments will therefore raise the quality of life of migrant workers and their families significantly.

Check it out, and share your thoughts.

The Second Machine Age: Work, Progress and Prosperity In A Time Of Brilliant Technologies

By Myriam Melki, Pam Liou, Sam Lavigne, Jon Wasserman

Erik Brynjolfsson and Andrew McAfee begin The Second Machine Age with a broad discussion of human progress, asking: “What have been the most important developments in human history?” They argue that human social progress corresponds with technological progress, and that the last great leap in human progress can be directly traced to the invention of the steam engine. Furthermore, we, in our current historical moment, are in the early stage of what will become the next great technological and social breakthrough. Where the previous leap had been brought about by our ability to leverage the power of steam (and fully expressed itself in the industrial revolution), the current leap is catalyzed by advances in computer technology, and will lead to what they call a “second machine age.” And what, exactly, will human progress look like? “We’re heading into an era that won’t just be different; it will be better, because we’ll be able to increase both the variety and the volume of our consumption.”

In chapter 2 Brynjolfsson and McAfee point out how computers have become increasingly good at performing tasks that were previously assumed to be impossible for a computer to complete. For example, self-driving cars, once thought to be beyond computation, have now become a technical reality and will likely soon enter the consumer space. At the same time, computers like the Jeopardy machine “Watson” are getting better at processing natural language. The authors argue that three key characteristics are most foundational in the “second machine age”: technology is becoming increasingly “exponential, digital, and combinatorial.”

Chapter 3 tackles the exponential quality of technology. Brynjolfsson and McAfee reference Moore’s law and describe how computer technology advances at a consistently exponential rate, due mostly, they claim, to the ingenuity of computer engineers and designers. They argue that exponential advances accelerate so quickly that they are difficult to fully comprehend.

Chapter 4 describes the power of digitization. Brynjolfsson and McAfee’s main point is that information is becoming increasingly digitized, which increases overall understanding “by making huge amounts of data readily accessible.” Large quantities of cheap data can be analyzed and collated. As an example they cite Waze, a GPS app that collects information about road conditions from everyone who has the app installed, thereby converting smartphones into data collection devices.

Some thoughts: the description of our current state of technology seems accurate. This is, however, a utopian, pro-capitalist, pro-consumerist book that enshrines a new form of exploitation, one that is distributed and crowdsourced. Technological advances will continue to replace human labor. The authors give this march of innovation an uncritical positive value. Unfortunately, the utopian goal to free up human time can never happen because the owners of innovative technology don’t distribute the surplus value they create. It’s difficult to benefit from “exponential, digital, and combinatorial” advancements unless you control the robot that replaces you.

Chapter 5 starts with a very interesting quote, one the whole book seems to revolve around: “Productivity isn’t everything, but in the long run it is almost everything” (Paul Krugman). The authors emphasize the importance of general purpose technologies (GPTs). GPTs such as steam and electricity impacted more than just their respective industries; they spread quickly to other sectors of the economy and revolutionized the industrial world. Information and Communications Technologies are the new GPT. Although innovations get used up, causing the economy to stagnate at times, recombinant growth generates new ideas, thus boosting the economy once again. All inventions are a mishmash of things invented in the past and overlooked. There are endless possibilities for new ideas because there are always ways to recombine things and ideas in new ways, especially in the internet age. In the early stages of development, growth is constrained by the number of potential new ideas, but later on it is constrained only by the ability to process them. The solution would presumably be to bring in more eyeballs in order to process more ideas. According to the authors, “Plenty of building blocks are in place, and they’re being recombined in better and better ways all the time”.

Chapter 6 is entitled “Artificial and Human Intelligence in the Second Machine Age.” The authors seem confident about the future, because “We’re going to see artificial intelligence do more and more, and as this happens costs will go down, outcomes will improve, and our lives will get better. Soon countless pieces of AI will be working on our behalf, often in the background. They’ll help us in areas ranging from trivial to substantive to life changing.” IBM is building the world’s best diagnostician, a robot. And C-Path is a computational pathologist that is supposedly more accurate and less biased than human pathologists. Moreover, the digital network has supposedly led to an overall improvement in all fields, including the environment (air quality, for example).

Chapter 7 gives an overview of the important productivity growth that followed the introduction of Information and Communications Technologies into our lives. Despite its “productivity paradox”, this GPT has led to improvement in various sectors of the economy. The authors then explain how the introduction of the internet, and sometimes even just of organizational software, has improved the long-run productivity of firms and industries.

The Second Machine Age also details the shortcomings of the current model for measuring economic growth: GDP. Are there alternative metrics to articulate the productivity and wealth of a nation other than GDP? What does GDP fail to capture?

Free digital goods pose a profound challenge to qualitative measurement because they offer value and improvements to quality of life without driving revenue. These innovations create efficiencies; however, the jury is still out on whether these benefits outweigh the effect of an exponentially increasing number of new, free digital goods flooding the market.

Various alternative approaches include:

Would You Rather? Method-
Compare the choice between comparable goods and services over time. Physical products mark rather insubstantial qualitative gains, while digital services have improved by leaps and bounds.

Measuring Consumer Surplus-
“If you would happily pay one dollar to read the morning newspaper but instead you get it for free, then you’ve just gained one dollar of consumer surplus.” Measured in money AND time; “rapidly growing consumer surplus from price declines in computers increased economic welfare by about $50 billion each year.”
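
The arithmetic behind that definition is simple enough to sketch (my own illustration, not the book’s; the willingness-to-pay figures are made up):

```python
def consumer_surplus(willingness_to_pay: list[float], price: float) -> float:
    """Sum of (what each person would happily pay - what they actually pay),
    counting only the people who buy at the given price."""
    return sum(wtp - price for wtp in willingness_to_pay if wtp >= price)

# Three readers value the morning paper at $1.00, $0.60, and $0.25.
print(consumer_surplus([1.00, 0.60, 0.25], price=0.0))   # 1.85: free paper
print(consumer_surplus([1.00, 0.60, 0.25], price=0.50))  # 0.60: paid paper
```

The point the book makes is that none of this surplus shows up in GDP when the price is zero.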

Gross National Happiness- Bhutanese index to measure quality of life per capita across a wide range of categories.

New Products- Measuring SKUs, or the introduction of new items into the market. Think of each product as having always existed, but at an infinitely high price prior to its invention.

Our global economy has reached an event horizon where the rate of innovation in digital tools and commodities has outpaced the rate of innovation in physical/engineered ones. While digitalization has made marked improvements to the manufacture of goods and production, there are limits to those gains (um–like, physics).  Pushing these limits through globalization and aggressive (but common) Supply Chain Management tactics has led to tenuous infrastructures and complete collapses of entire verticals.

How do you account for an economy where the loftiest, highest creative needs are met while the basic needs are not? Maslow would be reeling. SMA treats digital innovations the same as physical ones, but arguably they should be weighted differently.

Here’s a perfect illustration of this: Korean parents let baby starve as they play with virtual child.

This chapter covers the disproportionate distribution of wealth that privileges the leader in a category and front-runners of all kinds. That’s pretty much all it says, and it supports this claim with lots of statistics.

Technology is not a scalar that benefits a cross section of the population equally.

Chapter 12 is basically about the future. We worry or expect that machines will take over completely, they say, and they probably will, given exciting developments like Google’s autonomous car. But never fear: humans still have the upper hand at deviating from rigid prescribed operations. This is, for now, the thing that robots can’t do better than us. They suggest that you should find an area of industry where people are becoming obsolete, and then figure out the one adjacent space where you can provide human expertise, thus capitalizing on the scarcity. Seems like sound advice.

Chapter 13 makes broad suggestions about forward-looking policy decisions to prolong the human ability to be productive and desirable to an economy. The authors assert that we can “encourage technology to race ahead while ensuring that as few people as possible are left behind.” They start off the chapter saying, “With sci-fi technology becoming reality it might seem that radical steps are necessary, but… many recommendations for growth and prosperity found in any ‘Economy 101’ textbook are the right place to start now and for a while.” The reason for this is that humans can still manage enough logical work better than machines. This could be interpreted as the obvious way to continue participating in this economic system, but NOT because the current economic philosophy is sound.

The way to beat the labor force challenge is to grow the economy. There are a number of ways to do that, not least of which is to incorporate more technology into education. That’s good, because Education has been a “laggard” compared to other Industries. Therefore, if we stop being laggards about learning better and using more technology, then by the transitive property we will naturally (guaranteed and obviously) catch up to other industries. Because Education is an Industry, and in this economic model it is bound to compete with other Industries.

Chapter 14 addresses long-term strategies and challenges. While admitting that History (not human choice) is “littered with unintended… side effects of well intentioned social and economic policies”, the authors cite Tim O’Reilly in pushing forward with technology’s momentum rather than clinging to the past or present. They suggest that we should let the technologies do their work and just find ways to deal with the consequences. They are “skeptical of efforts to come up with fundamental alternatives to capitalism”. The features of private control over production, as opposed to governmental control, work so well that they’re even used in China, “which is still officially communist”! Their existence is proof of their success and durability, and doesn’t need to be challenged or evaluated.

Most earners are laborers. As androids take over our jobs, employers will be forced to pay lower wages and ultimately cut jobs. One solution that has been floating around for a few hundred years is Basic Income, a scheme in which everyone is paid enough to maintain a minimum standard of living. It was endorsed even by Richard Nixon, whose attempt was thwarted by the caseworkers and administrators of welfare programs, notorious for their large numbers and unbalanced influence, who were worried about losing their jobs. But the authors are wary of endorsing that solution, because “work” provides more than just money; it provides self-worth, among other noble attributes.

Chapter 15 reasserts that we are at an “inflection point”, the precipice of another Industrial Revolution. But with technological advancements, we open ourselves up to accidents and malice of greater magnitude. The internet of things amplifies this further. “There’s a genuine tension between our ability to know more and our ability to prevent others from knowing about us. When information was mostly analog and local, the laws of physics created an automatic zone of privacy.”

So what’s in store for us? Utopian future or dystopian future? The Singularity or Terminator? “It’s wise to never say never, but we still have a long way to go” they say. At the end, it all boils down to Uncle Ben.

Who Owns The Future?

The Internet’s major players tend to share a similar business plan (when they have a business plan at all), or at least have much in common. They offer some service for free, e.g. Google, Facebook, Tumblr, Reddit, Twitter. The majority of the content created on these platforms is user generated. Along with hosting the content, the platforms also track the people who visit their websites and sell the data they collect onward.

The most obvious revenue stream from this data comes from advertisers, who use it to better target their audiences. But more interesting situations can also arise: banks and insurance companies, for example, use big-data correlations to determine to whom, and under what conditions, to offer credit and insurance. It’s these sorts of correlations that give an unfair advantage to organizations with access to this data. Lanier refers to companies that operate this way as Siren Servers: “Siren Servers gather data from the network, often without having to pay for it. The data is analyzed using the most powerful available computers, run by the very best available technical people.”

The problem with having Siren Servers is that they centralize the majority of power and wealth. For example, at the height of its power, Kodak employed 140,000 people. Snapchat employs ~30.

As it is now, most of the Internet is set up for only one-way linking. Lanier calls his two-way linking system a Nelsonian network in homage to the man who first came up with the idea: Ted Nelson. In a network with two-way links, each node knows what other nodes are linked to it – you’d know all the websites that point to yours… you’d know all the videos that used your music. Two-way linking would preserve context and also create a structure necessary for compensation and accreditation.
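
A minimal sketch of the difference (my own illustration of the idea, not Nelson’s or Lanier’s actual design): in a Nelsonian network the link registry updates both endpoints, so backlinks become a first-class query.

```python
from collections import defaultdict

class TwoWayLinkGraph:
    """Every link is recorded at both endpoints, so any node can answer the
    question 'who points at me?', which the one-way web cannot answer locally."""
    def __init__(self):
        self.outbound = defaultdict(set)
        self.inbound = defaultdict(set)

    def link(self, src: str, dst: str):
        self.outbound[src].add(dst)
        self.inbound[dst].add(src)   # the extra bookkeeping the web skipped

    def backlinks(self, node: str) -> set:
        return self.inbound[node]

web = TwoWayLinkGraph()
web.link("remix_video", "original_song")
web.link("fan_page", "original_song")
# The song's owner can see every use: the hook for credit and payment.
print(web.backlinks("original_song"))  # {'remix_video', 'fan_page'} (set order may vary)
```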

With a two-way linking system, each person can remain the proprietary holder of any data that originates with them, and sell this data directly to other people and companies. Fede from the year before us made a go of this.

The alternative is to pay people directly for their contributions to the information economy, and thereby preserve capitalism. Lanier argues we must find a way to make this option work.

Lanier’s Solution:

Monetize information and create a micro/nanopayment system to directly credit the user and creator of the data. If data is used by, say, a market prediction algorithm, then a payment proportional to the degree of contribution and the value generated would be due to the person who provided the data. This alters the current paradigm by asserting that the data users create entitles them to remuneration, encouraging people to contribute more to the information economy.
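
A back-of-the-envelope sketch of that payment rule (my reading of Lanier’s proposal, with made-up names and numbers): each contributor’s share of the value an algorithm generates is proportional to the weight of their data in the result.

```python
def nanopayments(value_generated: float, contributions: dict) -> dict:
    """Split the value an algorithm generated among the people whose data
    it used, proportional to each person's degree of contribution."""
    total = sum(contributions.values())
    return {person: value_generated * weight / total
            for person, weight in contributions.items()}

# A market-prediction algorithm earns $1,000 using three people's data.
weights = {"alice": 5.0, "bob": 3.0, "carol": 2.0}  # hypothetical weights
print(nanopayments(1000.0, weights))
# {'alice': 500.0, 'bob': 300.0, 'carol': 200.0}
```

The hard part Lanier leaves open is where the contribution weights come from; two-way linking is his proposed bookkeeping substrate for them.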

Lanier seems to be trying to reach a pragmatic middle ground between the current hierarchy of the information economy and those who are calling for a more horizontal structure like that of the open source movement.

His solution consists of:

  • Two-way linking: A method to keep track of where information is coming from and to whom it is going.  Each person can remain the proprietary holder of any data that originates with them, and sell this data directly to other people and companies.

  • The role of companies and corporations shifts to focus more on endeavors that need mass coordination and mass capital.

  • Centralised Governance: “the only way to create a distribution of clout on a digital network that isn’t overly centralized, so that middle classes and a maximally competitive marketplace can exist, is to be honest about the existence of top-down dynamics from the start / We should… be thinking at least partially in a top-down way about making sure that the information that should be monetized is monetized. This might rub a lot of people the wrong way; bottom-up, self-organizing dynamics are so trendy. But while accounting can happen locally between individuals, finance relies on some rather boring agreements about conventions on a global, top-down basis. If you repudiate that way of thinking, you make it a lot harder to build up the replacements for the failing levees of the middle class”

Overall, if things keep going the way they are now, the middle class will all but disappear and the majority of wealth will be controlled by a few. He adds that his solution is just a hypothesis, and a lot of glitches will need to be ironed out as it is implemented. However, it could ultimately be the most practical solution.

Interesting points

  • Jaron’s main argument is that without the levees / tolls / taxes / hard-fought victories serving as buttresses for the middle class in America, there will be no middle class.

  • Current trajectory spells semi-near-term hyper-unemployment.

  • Tech is destroying more jobs than it creates. For example: at the height of its power, Kodak employed 140,000 people. Snapchat is like 20.

  • The current tech paradigm doesn’t treat human beings as special or sacrosanct.

  • Moving forward: free information and financial insecurity for everyone, or paid information with a stronger middle class.

  • “Ordinary people ‘share’, while elite network presences generate unprecedented fortunes”

  • “We won’t survive if we gut the country of paying positions in transportation, manufacturing, energy, office work, education, and health care” – all of which are reasonable targets for tech disruption.

  • “Digital Information is Really Just People in Disguise”

  • The improvements that happen in AI are improvements built from data that we contribute – and ought to be rewarded for.

  • The internet needn’t be purely a facilitator for copying; the idea that “information wants to be / is free” is at its core extractionary. Contributing ideas or information or content and not being compensated for it is destroying jobs.

  • Links didn’t have to go one way. While two-way connections are more difficult, they would facilitate systems that encourage recognition of sources and reasonable recompense.

  • Zero-liability is a real thing in HFT

  • p. 89: “I wish children could experience earning money online today, but that is harder to do than starting lemonade stands”

  • Teachers and academics celebrating the disruptions of other industries should watch their backs, as prices for tuition rise and “free” or cheap online alternatives continue to spring up and spread.

  • “Siren servers are narcissists; blind to where value comes from”

  • A big problem with using algorithms to sort

Evaluation

  • The book seems really US-heavy, and doesn’t begin to address how these things might play out in the rest of the world. The book still assumes some degree of agency – access to computers or whatever. It leaves a lot of people behind.

  • One example where this isn’t true is the remark that “on the world stage… no one expects twitter to help create jobs in Cairo”.

    • Jaron doesn’t seem to believe in an autonomous social network ecology in the context of social movements. I understand his fatigue with the back-patting about social networking, but I’m not sure I fully agree, since I could observe some changes come from it. (Even though they didn’t change anything in the big picture, these changes gave people some experience of making change.)

  • The book is light on data and strong on anecdote. We get it – Jaron was present and participating in some glorious and formative times, but the graphs and charts are hand-drawn. I don’t disagree with his points re: local maxima and optimal settings in complex scenarios or networks, but take this book in comparison to all the charts and graphs in Kurzweil’s singularity book, and Lanier’s hand-drawn charts and graphs feel a bit light.

  • He owns the fact that it takes a leap to imagine the places between where we are now and where he imagines us being. His proposed future of micropayments – the updated seagull-on-the-beach scenario – lacks a certain plausibility. Did we/you feel this when reading his original beach scenarios (no access to water but able to “share”, etc.)?

  • All that said, the premise is sound. The problem is legit, and micropayments are an interesting vehicle for reclaiming the wealth presently being siphoned off by “siren servers”. Are we collectively interested in exploring the gradient / the awkward and imperfect middle?

Project Proposal.

Build dogeCoin micropayments into my thesis: comicdrop.com will attempt to address Jaron’s wish – “I wish children could experience earning money online today, but that is harder to do than starting lemonade stands”

ComicDrop is a browser-based space for people, and kids, to collaborate on comics.

Users can contribute stickers, which are then free for everyone to use and collage into their panels and stories.

I’m interested in exploring methods of compensating people for their content in altCoins, specifically dogeCoin.
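
A first sketch of how ComicDrop might accrue those payments (all names, numbers, and the batching threshold are hypothetical; the actual dogeCoin transfer is stubbed out because I haven’t chosen a wallet API yet):

```python
from collections import defaultdict

TIP_PER_USE = 1.0  # hypothetical: 1 DOGE each time a sticker is used

class StickerLedger:
    """Accrue micro-tips to sticker contributors; pay out in batches so
    each individual sticker use doesn't require an on-chain transaction."""
    def __init__(self):
        self.balances = defaultdict(float)

    def record_use(self, sticker_owner: str):
        self.balances[sticker_owner] += TIP_PER_USE

    def settle(self, min_payout: float = 100.0):
        for owner, balance in list(self.balances.items()):
            if balance >= min_payout:
                # TODO: send `balance` DOGE to owner's wallet (API TBD)
                self.balances[owner] = 0.0

ledger = StickerLedger()
for _ in range(150):
    ledger.record_use("kid_artist_42")  # sticker collaged into 150 panels
ledger.settle()  # kid_artist_42 would receive 150 DOGE
```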

Real Problems

Carbon footprint is a problem. altCurrencies spring up to potentially address this. What happens to/with peerCoin?

Two-way linking is difficult to do, and potentially a tricky database problem(?). I need to talk with people who know more than I do about relational databases and how to make a thing like this really work. A first sketch is below.
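
Relational databases may actually handle this fine: a single link table indexed on both columns gives you backlinks as a cheap query. A minimal sketch (the schema and names are my guesses, not a worked-out design):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE links (src TEXT NOT NULL, dst TEXT NOT NULL)")
# One index per direction makes both "what do I link to?" and
# "who links to me?" fast; the two-way part is just the second index.
db.execute("CREATE INDEX idx_links_src ON links (src)")
db.execute("CREATE INDEX idx_links_dst ON links (dst)")

db.executemany("INSERT INTO links VALUES (?, ?)",
               [("remix.html", "song.mp3"), ("fanpage.html", "song.mp3")])

# Backlinks: everything that points at song.mp3.
rows = db.execute("SELECT src FROM links WHERE dst = ?", ("song.mp3",)).fetchall()
print([r[0] for r in rows])  # ['remix.html', 'fanpage.html']
```

The open question is scale and federation: who hosts the table when the links span the whole web?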

Drug and illicit trade stuff is a (perceptual?) problem. The Silk Road is dead, but can altCoins escape links to negative press? The Jamaican Bobsled Team is school-safe, buttttt anonymous currencies are kinda shady, no joke.

(Some) taxes are good (i.e. we need roads for lemonade stands). How does this stuff get taxed?

Deflationary currencies are (probably) bad (because people hoard shit, for sure).

Bitcoin just reproduces “traditional” currency – and its problems. From Rushkoff’s book: the market cap creates artificial scarcity.

Something that is “not capitalist money”? Rushkoff said this – Bitcoin being capitalist money – and I think we deserve to examine it more closely.

How do you make a currency that by design is encouraged to be spent and not hoarded (inflationary vs. deflationary; corn vs. gold)? Is there any value in that? dogeCoin has no fixed cap. Does this go far enough? A quick comparison follows below.
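
To see why a fixed-issuance coin is still disinflationary in percentage terms, here is a back-of-the-envelope comparison. The numbers are my approximations (dogeCoin’s announced schedule of roughly 5 billion new coins per year on a ~100 billion base; bitCoin’s 21 million cap with issuance halving about every four years), so treat them as assumptions:

```python
# Toy comparison of two supply schedules, in coins outstanding per year.
# Assumptions: dogeCoin ~100B initial supply plus ~5B/year forever;
# bitCoin issuance halves roughly every 4 years toward a 21M cap.
doge_supply, doge_issuance = 100e9, 5e9
btc_supply, btc_issuance = 12e6, 1.3e6  # rough 2014 starting points

for year in range(1, 21):
    doge_supply += doge_issuance
    btc_supply = min(btc_supply + btc_issuance, 21e6)
    if year % 4 == 0:
        btc_issuance /= 2  # halving
    if year in (1, 10, 20):
        print(f"year {year:2d}: doge inflation "
              f"{doge_issuance / doge_supply:.2%}, "
              f"btc inflation {btc_issuance / btc_supply:.2%}")
# Even with constant absolute issuance, doge's percentage inflation falls
# every year; "no cap" does not mean runaway inflation.
```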

Does the altCoin marketplace generate a degree of validity? I.e., I can transfer bitCoins into dogeCoins and vice versa – one acts as a more permanent store of value (bitCoin = gold) and one acts as a more flexible everyday unit of exchange (dogeCoin = corn).

Floor to Electronic Trading

Alexandra Coym and Karl Ward

We all have this image of trading floors with hundreds of people yelling at each other while maniacally waving little sheets of paper in the air. Though this may have been the case 40 years ago, 82% of today’s trading is done electronically. The shift to electronic trading began in the late 1980s and 1990s, first with phone trading, then with electronic trading done in so-called “upstairs” offices close to the exchange. This new style of trading really took off in 1992 with Globex, the first global electronic futures trading platform. Traditional trading floor exchanges such as the New York Stock Exchange implemented their own electronic trading systems to compete with the heavily computerized and decentralized NASDAQ exchange.

Exchanges exist to bring potential buyers and sellers of securities together (either in physical or virtual form) and, through facilitating the process, reduce the risk of investing. There are two kinds of markets on which people trade – the primary and the secondary market. On the primary market, securities are created (i.e. through an IPO), whereas on the secondary market already-created securities are traded. There are still trading floors around the world that mainly deal with primary market trading and large institutional secondary trading, whereas most secondary market trading is done digitally.

During the ascendance of electronic trading, the traditional stock market roles of broker and specialist changed dramatically, as most markets minimized the role of brokers and some eliminated the role of specialists. Specialists (also called “designated market makers”) are similar to brokers, but for a single stock only, and with the additional responsibility of reducing volatility in that stock when supply or demand becomes unbalanced. They reduce volatility by selling that stock from their own inventory when demand is very high, or buying it for inventory when supply is very high.
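
A cartoon version of that stabilizing behavior (a toy model of our own, not an actual specialist’s playbook): quote around a reference price, and lean against whichever side of the market is heavier.

```python
def specialist_quotes(reference_price: float, order_imbalance: float,
                      half_spread: float = 0.05, lean: float = 0.02):
    """Toy designated-market-maker logic: when buy pressure dominates
    (imbalance > 0), shade quotes upward but sell from inventory; when
    sell pressure dominates, shade downward and buy for inventory."""
    mid = reference_price + lean * order_imbalance
    bid, ask = mid - half_spread, mid + half_spread
    action = "sell inventory" if order_imbalance > 0 else "buy for inventory"
    return round(bid, 2), round(ask, 2), action

# Demand spike: imbalance of +3 (arbitrary units).
print(specialist_quotes(10.00, +3))  # (10.01, 10.11, 'sell inventory')
# Supply glut: imbalance of -3.
print(specialist_quotes(10.00, -3))  # (9.89, 9.99, 'buy for inventory')
```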

What works? What doesn’t?
As with many machine integrations that replace human activity, the advantages and disadvantages brought on by the change are less than clear. Proponents of electronic trading tout its role in reducing execution costs for trades via “straight-through processing” (i.e. removing the middleman). They also cite the increased liquidity it creates by bringing many more potential buyers and sellers into the market. Because electronic trading is accessible globally, trades have definitely become geographically independent, more competitive, and more anonymized. Arguably, there is also increased transparency into pricing (due to real-time quoting and trade data) and accountability for nefarious practices by brokers (due to the electronic audit trail)–though the accuracy of any of this will be discussed later. Last but not least, tighter spreads (the difference between the ask and the bid price) allow for more successful trading, because prices do not have to move far in one direction or the other.

Critics of electronic trading tend to express several systemic concerns. First, the incredible volume of electronic trades by high-frequency trading algorithms does tend to increase liquidity when markets are stable, but it exacerbates volatility to a dangerous extent when the markets encounter a destabilizing event. One typical strategy for HFT algorithms is to withdraw all bids and offers when the market becomes unpredictable, which translates to a sudden disappearance of demand and usually a precipitous price drop. Second, HFT traders flood the quote and order systems with fake trades in order to influence the price of stocks or to fool potential traders into believing the market is moving in a particular direction–more than 90% of all quotes are currently fake. Third, the HFT computers are so fast and so close to the data coming out of the exchanges’ computers that they have an asymmetric advantage over human traders. These advantages make it trivial for them to get ahead of a human trader who has put in even a modestly lucrative bid, thus beating that trader to the purchase (exposing the slower trader to what is known as adverse selection). As Dennis Dick from Bright Trading noted:

HFT’s are the new market makers without the traditional affirmative obligation of designated market makers to keep markets orderly. When uncertainty enters the picture, they cancel their orders and liquidity disappears. Without traditional market makers to step in and be the buyer of last resort, prices can fall quickly as we saw in the flash crash in May 2010.

With technological advancements, there is always the danger of unforeseen events exceeding human control. The 1987 “Black Monday” crash was the first hint of the pitfalls of electronic trading, when the Dow Jones index dropped 23% in a single day. “Program trading” (an early name for algorithmic and electronic trading) bore the brunt of the blame, and the SEC responded by establishing a system for halting trades when the market displays excessive volatility. A more recent and dramatic example of electronic trading risk is found in the 2010 “Flash Crash,” where the Dow dropped 9% within minutes. There are a lot of theories and explanations as to how this happened, but in the end they all point to computer error. One algorithm ‘decided’ to sell a large block, which caused other computers to react by panic selling and buying, or aggressively short selling. It is hard to say how the situation might have played out differently if humans had been involved in the trades, but most likely the trades would not have occurred instantaneously, and multiple brokers/specialists on the floor would have caught the error before it was posted as an offer. Computerized trading works at such high speed and volume that the severity of damage that can occur within minutes or even seconds is hard to fathom. The algorithms are set to search for certain patterns and react in certain ways, but that does not allow a lot of room for the kind of judgment calls that would have been helpful in the Flash Crash incident. Trading algorithms and proprietary trading platforms are well-kept secrets, which makes it difficult to regulate the practices appropriately.

Who gets left behind?
With so much of the trading dependent on technological solutions, it’s clear that smaller companies that could not keep up with the latest multi-million dollar tech were pushed out of the market. Even for the large players, there is a constant drive to develop new and faster technology for handling the trades, all proprietary and secret. It even goes as far as trading companies paying to have their servers in the same building as the exchanges’ computers, or even on the same floor. Computerized trading enables firms to post buy and sell prices they don’t intend to follow through on, misleading potential investors and increasing instability in the market. The same goes for traders rapidly selling and buying back and forth between two entities they control, making it appear as though there is a lot of interest in that stock.

A big problem that has arisen from electronic trading is the so-called ‘Dark Pools of Liquidity’. In essence, these are large trades that are offered anonymously, away from the public, between big financial institutions. The reason people use these dark pools is to make large trades anonymously so as not to reveal their strategy and stir up the market. The problem here is that investors who aren’t participating in those trades are disadvantaged by not seeing the trade beforehand, and therefore not participating in the price discovery and auction occurring behind closed doors among the participants in the dark pool. This brings us back to the earlier assertion that computerized trading has not necessarily made the market more transparent, but has instead created more opportunities for making it opaque.

Recommendations
Some would argue that the machines and their algorithms help remove bias and the emotions of human participation that could affect the process, yet others might say exactly the opposite: that this bias and emotion is necessary for successful trading beyond the short term. The fact that over the past years so many regulations have been set up for computerized trading demonstrates that the harmful potential is larger (and weirder) than originally anticipated. With technology advancing so quickly, it is nearly impossible to control what is being used on the market, unless you standardize the process or tightly regulate the network and information flow, in which case all competitive advantage of trading firms would disappear. On the other hand, if more and more restrictions are applied, it could happen that the restrictions themselves become loopholes that certain players can use to their advantage.

One restriction that makes sense is a ban on dark pool trading, to restore the market transparency essential to make computerized trading fair for all players.

To contain the potential damage done by errant electronic trading and HFT, regulators might enforce a limit on trading volumes, or (better) an automatic “rate limit” that slows down trading as volatility increases. The current system of halts is not fast enough to prevent incidents like the Flash Crash, which have nearly instant global market repercussions.
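
One way such a rate limit could work (a sketch of the idea only; no real exchange implements exactly this, and all parameters are invented): measure short-horizon volatility and shrink the number of orders accepted per second as volatility rises.

```python
import statistics

def orders_allowed_per_second(recent_prices: list[float],
                              base_rate: int = 1000,
                              scale: float = 100.0) -> int:
    """Throttle order acceptance as short-term volatility rises: calm
    markets get (nearly) the full base rate; as the standard deviation
    of recent returns grows, the allowance shrinks toward a floor."""
    returns = [(b - a) / a for a, b in zip(recent_prices, recent_prices[1:])]
    vol = statistics.pstdev(returns)
    return max(10, int(base_rate / (1.0 + scale * vol)))

calm = [10.00, 10.01, 10.00, 10.01, 10.00, 10.01]
crash = [10.00, 9.80, 9.90, 9.40, 9.60, 9.00]
print(orders_allowed_per_second(calm))   # ~900: near the full rate
print(orders_allowed_per_second(crash))  # ~230: sharply throttled
```

Unlike a hard halt, the throttle degrades smoothly, which might avoid the cliff-edge behavior that halts themselves can trigger.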

To deal with the volatility problems caused by fake quotes, there should be a small penalty on canceled orders, as recommended by the SEC in its report on the Flash Crash.

Book review of “Technological Revolutions and Financial Capital”

Carlota Perez argues that the economy is a structurally engineered system of collapse and reward. Every half century capitalism produces a chain of events that repeat themselves time and time again. First, an innovative, disruptive technology comes into the world that essentially causes a revolution and upends the current infrastructure/establishment. This rupture enables a financial bubble to build. Once it grows to an unsustainable, overwhelming size, it bursts and the economy collapses. Upon collapse, a fertile ground comes out of the destruction, which leads to a “golden age”. Once the excesses of the “golden age” take root, political unrest arises.

Why is the economy intentionally built as a house of cards? Tech revolutions replace one technology with another, which leads to massive change and a subsequent, explosively volatile period in markets (and potentially massive profitability). The new wealth that accumulates at one end is often more than counterbalanced by the poverty that spreads at the other end. With enough discrepancy in wealth, as noted, political unrest boils over. In theory, the practical task of setting up an adequate regulatory system / safeguards would seem essential to minimize suffering and instability. But the safeguards that exist are only present to the extent that they enable the continuation of the system they are designed to oversee.

Perez cites Schumpeter’s “Creative Destruction” theory (destroy old to forge new) as pivotal. Tech revolutions lead to paradigm shifts, which result in inclusion-exclusion mechanisms. Then, Perez writes of an “installation period” that is divided into two sections known as “irruption” and “frenzy.” How are these maintained? The first tech revolution enables the subsequent revolutions. Again, a product of design. Most of the core assets of tech revolutions already existed. Every revolution combines truly new tech with others that are simply redefined. Big bang events initiating the revolutions also bring cost-competitive or cheaper options to the surface, which leads to investment, lending, etc.

1st: Industrial Revolution, led by Britain (1771)

2nd: Age of steam and railways, led by Britain, then the USA (1829)

3rd: Age of steel, electricity and heavy engineering, led by the USA, then Germany (1875)

4th: Age of oil, the car and mass production, led by the USA (1908)

5th: Age of information and telecommunications, led by the USA (1971)

She explains that a technological revolution requires an entire network of interconnected services and infrastructures, in addition to the primary technology, for the new technology to take hold. For example, when automobiles were invented, secondary services such as gas stations and mechanics needed to be in place for automobiles to proliferate; but for those secondary services to be profitable, there first needed to be enough cars on the road. Additionally, people need to be educated in how the technology works – a social assimilation of the technology, transitioning its use into second nature. This period is painful for those who are awaiting the profits from the new technologies. The “excitement” at the beginning of a technological revolution “divides society” by “widening the gap between rich and poor” because of the frenzy of investment, and a “rift” occurs between “paper values and real values,” though she says little about how or why this happens.

The surge of a technological revolution can be divided into four main phases – Irruption, Frenzy, Synergy, and Maturity – with a turning point at the center. Irruption is when the new technology is introduced, the “techno-economic split,” with unemployment and the decline of the old industries. Frenzy is a time of “new millionaires at one end and growing exclusion at the other”; Perez mentions protests as almost a natural feature of this inequality, though they eventually fade. Other features include intense investment in the revolution and a decoupling of finance from the rest of the system, and this is when the financial bubble happens. The Turning Point is “neither an event nor a phase; it is a process of contextual change,” when regulations balance the excesses and unsustainable features, and when the institutional recomposition and the “mode of growth” are defined. Synergy is known as the Golden Age: coherent growth with increasing externalities, marked by production. The final phase, Maturity, fades into the Irruption of the next revolution, but is seen as the socio-political split, with market saturation of the last products and industries, and disappointment versus complacency. The first two phases fall within the Installation Period, while the last two are in the Deployment Period.

Governing these phases of the technological revolution are those who control Financial Capital and those who own Production Capital. Financial Capitalists possess wealth in money or other “paper assets”, acting only to increase wealth, always seeking to make their money grow; making money with money. Production Capitalists seek to create new wealth by borrowing money from Financial Capital to produce goods and services, and, by innovating and expanding, seek to reap as much wealth as possible off of the laborers. The relationship between these two sets of people changes through the phases of the revolution. During Irruption there is a love affair between Financial Capitalists and the revolution. In Frenzy, they decouple from the Production Capitalists, and they recouple after the Turning Point, in the Synergy phase. In Maturity, they begin to separate again.

The Maturity phase combines signs of exhaustion in many of the original core industries with very high growth rates in the last few new industries of the same paradigm. Companies begin reaching the limits of their own industries and products and begin to invest in alternatives to carry them through the next phase. The buildup of idle capital in successful companies means more money can be invested in technological advances, mergers and acquisitions, and foreign markets.

The companies begin to operate in a two-paradigm mode, earning profits through their core industry while investing capital elsewhere. The development and diffusion of technological revolutions tends to stimulate innovations in finance that benefit from the impulse the revolutions provide. For example, the Suez Canal made it possible for entrepreneurs to trade in smaller quantities of goods, creating smaller, shorter-term credit; similar financial innovations later made possible installment budgets for home purchases of refrigerators, vacuum cleaners, and automobiles. The enthusiasm for these new technologies and the success of these new products leads to the end of the Irruption phase and starts the Frenzy.

This reallocation of capital into risky investments leads to a destabilization of weaker markets and the redistribution of capital to those who have capital available to invest. This cycle is usually marked by the overfunding of the industries that investors are convinced will be the most profitable, often creating a period of ‘ethical softening’ as loopholes are found, and Ponzi schemes tend to be enticing in the escalation of investments. This is the boom we’re familiar with, where new companies like Yahoo, which produce virtually nothing, compete with Kodak and other old-paradigm technology companies. This is the period of ‘irrational exuberance’ and the ‘orgy of unrestrained speculation’ that occurs right before the bubble bursts and the economy is left in a recession, or a depression, depending on the height of the fall.

In Chapter 12, Perez discusses the passage to maturity through which economies go. A relevant point is her description of what many experience as the Brooklyn gentrification trap. An economy arrives at Maturity, the late phase of the deployment period, which is superficially brilliant but politically tumultuous. Then the workers organize and make demands, but the promises aren’t delivered. The artists, activists, and young rebel. As this period continues, idle capital grows (the 1%) while investment opportunities dwindle (I’d make the case that this is the lack of available resources for the 99%).

The striking “however” to this economic point that Perez discusses is that “there seems to be an underlying faith in the eventual arrival of a period… without social problems as a result of the operation of the system” (137). This poignantly describes many people’s frustration with the blanket acceptance of capitalism. The system does not right itself.

The book ends with Perez asking whether the consequences of the current economic system – with its irruption and frenzy, which end up in a bubble and collapse – can be mitigated.

The Human Use of Human Beings – Norbert Wiener

Written by Norbert Wiener in the 1950s, this book is definitely flavored by its time, and is timely in its messages.

Norbert was a child prodigy, a brilliant mathematician and philosopher. Looking at the fields of engineering, the study of the nervous system, and statistical mechanics, he coined the term “cybernetics” to characterize “control and communication in the animal and the machine”. This idea and many others have become pervasive throughout the sciences (especially computing and biology). As he sees it, “If the seventeenth and early eighteenth centuries are the age of clocks, and the later eighteenth and nineteenth centuries constitute the age of steam engines, the present time is the age of communication and control”.

For Norbert, technologies were viewed as applied social and moral philosophy, his personal philosophy being rooted in existentialism instead of the formal analytical philosophy of his day. He prided himself on being an independent and knowledgeable intellectual, not affiliating with any political, social, or philosophical group. He did not accept funds from governments, agencies, corporations, or any other groups that would or could compromise his independence and honesty.

As a lifelong obsession, Norbert wished to distinguish human from machine. He recognized the organization of patterns and functions that could be performed by either, but focused his attention and understanding on the human/machine identity/dichotomy within a humane social philosophy. The obvious questions therefore arose:

1. How is the machine affecting people’s lives?

2. Who reaps those benefits?

I commend Norbert for urging the scientists and engineers of his day to “practice ‘the imaginative forward glance’ so as to attempt assessing the impact of an innovation, even before making it known”. This is valuable for us even today when considering the environmental impacts of our creations, let alone the overall human life impacts as well.

Norbert plainly states “that society can only be understood through a study of the messages and the communication facilities which belong to it; and that in the future development of these messages and communication facilities, messages between man and machines, between machines and man, and between machine and machine, are destined to play an ever increasing part.” “To live effectively is to live with adequate information. Thus, communication and control belong to the essence of man’s inner life, even as they belong to his life in society.”

Comparing “the physical functioning of the living individual and the operation of some of the newer communication machines”, he finds that there is a “parallel in their analogous attempts to control entropy” (disorder) through feedback. Feedback is essential for both human and machine to make effective decisions and ultimately take action. “Certain kinds of machines and some living organisms – particularly the higher living organisms – can… modify their patterns of behavior on the basis of past experience so as to achieve specific antientropic ends. In these higher forms of communicative organisms the environment, considered as the past experience of the individual, can modify the pattern of behavior into one which in some sense or other will deal more effectively with the future environment.” Only in this way can we create new environments, since absolute repetition is absolutely impossible.

He warns, however, that “what many of us fail to realize is that the last four hundred years are a highly special period in the history of the world. The pace at which changes during these years have taken place is unexampled in earlier history, as is the very nature of these changes. This is partly the result of increased communication, but also of an increased mastery over nature which, on a limited planet like the earth, may prove in the long run to be an increased slavery to nature. For the more we get out of the world the less we leave, and in the long run we shall have to pay our debts at a time that may be very inconvenient for our own survival. We are the slaves of our technical improvement…We have modified our environment so radically that we must now modify ourselves in order to exist in this new environment.”

This “new” environment, or in Norbert’s words this “second industrial revolution” or “new automatic age”, in part consists of the transportation of words/messages/information, which serves to extend an individual’s power of perception, and in a sense extends one’s physical existence to the whole world. The design of the machines that would help make this “new” environment would be “transferred from the domain of the skilled shop worker to that of the research-laboratory man”. “Invention, under the stimulus of necessity and the unlimited employment of money” would be the “new blood” fueling the research behind them. The computing machines of this time would be “much faster and more accurate than the human computer”, enabling the replacement of humans at certain levels, whereby these machines can talk to each other and execute “repetitive tasks”. The benefit is that the machine “has displaced man and the beast” as a source of physical power, ideally freeing up time to pursue greater interests; but Norbert also warns that “the matter of replacing human production by other modes may well be a life-or-death matter”. Under these circumstances, it is logical to see that these new tools will “yield immediate profits, irrespective of what long-time damage they can do”, and that any automatic machine that “competes with slave labor…will produce an unemployment situation, in comparison with which the present recession and even the depression of the thirties…”. “Thus the new industrial revolution is a two-edged sword”, whereby the “machine’s danger to society is not from the machine itself but from what man makes of it”. Norbert takes solace in the fact that “the technique of building and employing these machines is still very imperfect” and that “the problems of the stability of prediction remain beyond what we can seriously dream of controlling”.

Norbert saw that during this time “invention is losing its identity as a commodity in the face of the general intellectual structure of emergent inventions”, whereby “information and entropy are not conserved, and are equally unsuited to being commodities”. It is on this last point, that information is not a good commodity, where we see that Norbert was not able to see beyond the times in which he wrote. It can be agreed that “the matter of time is essential in all estimates of the value of information”, but he was unable to anticipate the increased speed with which information could be acquired, stored, and received, let alone the exponential decrease in cost.

Norbert leaves us with a lingering thought we must all confront, namely that “what is used as an element in a machine, is in fact an element in the machine. Whether we entrust our decisions to machines of metal, or to those machines of flesh and blood which are bureaus and vast laboratories and armies and corporations, we shall never receive the right answers to our questions unless we ask the right questions.”
