2010/10/12 - Understanding Networks

  1. Networking Hardware
  2. Privacy, Data and I

1. Networking Hardware

In class we looked at five different pieces of hardware that are designed either as network connectors (port, shield, ...) or as embedded processors that can run as standalone web servers. The connectors are meant to be used as add-on components for microcontrollers, computers or, in theory, phones - that is, for devices that do not themselves have access to a data network.

The Arduino Ethernet Shield comes with a WIZnet W5100 ethernet controller which can handle up to four simultaneous socket connections at 25 Mbps (check the specs here). Additionally, the Ethernet Shield has an SD slot, which is not supported by the Arduino Ethernet library; the SD card reader could, however, be driven directly over the shield's SPI interface. Communication with the microcontroller is likewise handled over SPI, a serial interface. Priced at ~$40.
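As a rough illustration of how the shield is used in practice, the sketch below opens one of the W5100's four sockets as a simple web client. This is a minimal sketch only: the MAC, IP and server addresses are made-up placeholders for your own network, and it uses the Ethernet API of the Arduino releases current at the time of writing (later releases renamed the Client class to EthernetClient).

```cpp
// Minimal web-client sketch for the Arduino Ethernet Shield (W5100).
// All addresses below are placeholders - adjust them for your network.
#include <SPI.h>
#include <Ethernet.h>

byte mac[]    = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED };  // shield's MAC address
byte ip[]     = { 192, 168, 1, 177 };                     // static IP for the shield
byte server[] = { 192, 168, 1, 1 };                       // web server to contact

Client client(server, 80);  // claims one of the W5100's four sockets

void setup() {
  Ethernet.begin(mac, ip);  // bring up the W5100 over SPI
  Serial.begin(9600);
  delay(1000);              // give the controller a moment to initialize
  if (client.connect()) {   // open the TCP connection and send a request
    client.println("GET / HTTP/1.0");
    client.println();
  }
}

void loop() {
  if (client.available()) {
    Serial.print((char)client.read());  // echo the response to the serial monitor
  }
  if (!client.connected()) {
    client.stop();                      // free the socket when the server hangs up
  }
}
```

This mirrors the structure of the WebClient example that ships with the Arduino IDE; the same four-socket budget applies whether the shield acts as client or server.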

Lantronix offers a variety of ethernet interfaces. The XPort is based on Lantronix's own DSTni-EX chip. Whereas the Ethernet Shield strictly communicates via serial, the XPort offers other gateways such as telnet or a Windows-based configuration utility (check the specs here). Priced at ~$50.

The LF 15022 is a WSM (web-server module) used in conjunction with the STAMP 2 chip, a BASIC-based microcontroller. For a discussion of the setup, check Petruzzellis, Thomas: STAMP 2 Communications and Control Projects (McGraw-Hill Professional). No idea if this is still available.

The TINI is a board designed to be a standalone web interface. It comes with a processing unit and an ethernet connector and can handle up to 32 simultaneous TCP/IP connections thanks to two on-board accelerators. Serial communication, however, only works via an RS-232 hookup. The CPU can be programmed using C/AVR. The manufacturer, Dallas Semiconductor, offers a set of simple scripts to get started here. Priced at ~$100.

The newest product among those presented is the BeagleBoard, which is basically a small board that runs Linux. It therefore comes without any gateway, the consensus being that Linux people will have their own gear to hook up to it. Priced at ~$169.

In general, it is the application that drives the decision of what hardware to use. We considered computational power, connectivity and sensitivity as the metrics that make reasonable yardsticks when assessing a system. For a given case, you might still have to do the bit/baud and V/mA conversions to see what suits the specific task best. From a more abstract design perspective, we might add the presentation level, which can serve as the starting point for researching the other factors.

2. Privacy, Data and I

The second half of class commenced with the question: "What's up with all the data? What should we do with it? What should we do about it?" A distinction established early was that between 'leaked' and 'generated' data. Though these terms are inherently blurry, the general tendency was to assume that generated data means static and persistent data that is consciously created by a user, e.g. logins, profile pages, static websites. Leaked data is dynamic and tied to generated data in that it refers to a static entity and describes an action or active state of this entity (which in turn may be tied to generated data - it always comes full circle).

With this distinction in place, it is interesting to look at the historic development of data pipelines and how the speed of data communication continues to increase on all levels. While the participants are slowly coming to realize that the amassing of data offers new ways of going about data, they are also realizing that proper codes of conduct have yet to be established. The catch phrases, albeit key concepts, in this context are "privacy", "identity" and "policy" as well as "information overload" and "information panic".

Today, the internet has reached every continent and a lot of people - though of course there are still millions without access to the web. Technologies that developed independently, such as pattern recognition/machine learning algorithms, high-definition imaging, digital signal processing and digital archiving, have altered the general schemes of practice in how we as humans organize, construct and inform our reality. The internet of things is one aspect epitomizing these developments, as it refers to networked ... things: a car that receives automatic GPS updates, or a plant that uploads its CO2 values to a data hub or Twitter. However, in making sense of these developments, there seems to be a general cluelessness that leads to two radical and somewhat diametric positions:

  1. All personal data needs to be protected and should be strictly bound to an identity copyright or the likes.
  2. All data needs to be readily available and if there is data to hide, maybe it shouldn't exist in the first place.

So far the moderate, mediating position was one of good data "hygiene", meaning that individuals can rationally weigh the trade-offs of putting something on the web (a "me vomiting from the 15th floor of the college dormitory" picture on Facebook can earn me great social pay-off with my frat friends, but a future employer googling me might not be equally amused). However, applications and mobile web structures enable a seamless merging of social, professional, public and private networks, so the decision of what gets published is not necessarily the individual's. One remedy is to allow levels of data openness. Discriminating between groups, as one does in real life, is so far the most effective and trusted way; this is mirrored by the tendency of web platforms to fail if they enforce a 'plaza'-like structure, whereas a 'den' or 'warren' system, which allows for sophisticated privacy settings, fares better.