Author Archives: btm

Etherchannel and trunking with Cisco 3524xl and 6509

The Cisco 3524XL doesn’t support PAgP or LACP; you simply configure EtherChannel by adding ‘port group N’ to each interface. The port group takes its configuration from the first interface in the group.

! Cisco 3524XL
interface FastEthernet0/1
 description uplink to 6509
 port group 1
 switchport trunk encapsulation dot1q
 switchport mode trunk
end

The 6509 supports the dynamic negotiation protocols and will try to use them unless you specify ‘switchport nonegotiate’ on the port-channel interface, which is key. Otherwise, every time you turn on ‘channel-group 4 mode on’, the ports will go down on the 3524XL and the ports on the 6509 will go into the ‘err-disabled’ state until you ‘shut’ / ‘no shut’ them.

! Cisco 6509
interface GigabitEthernet7/7
 description sw03 - rack 3
 no ip address
 switchport
 switchport mode trunk
 switchport nonegotiate
 channel-group 4 mode on
end

interface Port-channel4
 description sw03 - rack 3
 no ip address
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport nonegotiate
end

libvirt: unknown OS type hvm

It took me a little while to narrow this down. Building a KVM guest with vmbuilder via libvirt, I was getting the error “unknown OS type hvm”. When I compared the output of ‘virsh capabilities’ on a good host against the one that wasn’t working, the latter was missing the kvm hvm entries. When I checked the init script for kvm, I realized the kernel module wasn’t loaded, and a quick check of dmesg confirmed that virtualization was disabled in the BIOS.
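A quicker check than reading the init script is to look for the hardware virtualization flags in /proc/cpuinfo. A small sketch (the helper function is mine, not part of libvirt or kvm):

```python
# Look for hardware virtualization flags in cpuinfo text: vmx is
# Intel VT-x, svm is AMD-V. If neither is present, KVM can't offer
# hvm guests -- commonly because virtualization is off in the BIOS.
import re

def virt_flags(cpuinfo_text):
    """Return the set of virtualization flags present in cpuinfo text."""
    found = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            found.update(re.findall(r"\b(vmx|svm)\b", line))
    return found

# On a host with VT-x enabled you'd see {'vmx'}:
sample = "flags\t\t: fpu vme de pse tsc msr vmx sse sse2"
print(virt_flags(sample))  # {'vmx'}
```

On the real host, feed it open('/proc/cpuinfo').read(); an empty set means the BIOS switch (or a missing kvm_intel/kvm_amd module) is the place to look.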

Why can’t sysadmins build networks?

Why can’t System Administrators get network design?

Sometime around 1997 I built my first ISP. I was doing computer repair for a man at the time. Internet access was just getting situated in my small city. This man wanted in, but showed up at my house in frustration one night because he couldn’t figure out how to get the router to work. He came sporting a $100 bill and told me it was mine if I fixed it. I suppose it was going to be much more than he had been paying me hourly, but I was more interested in the problem than the pay, and he was frustrated. He had a Livingston Portmaster 2ER, a pile of external modems, and a 56K frame relay uplink to another local ISP. This ISP was always more network gear than computers, mostly because he was “thrifty”, despite owning a computer store. There was an NT 3.51 box, a Linux box, and for a little while before it got reappropriated, a FreeBSD machine as well. As fanciness like 56k modems came out and customers grew, hardware scaled out. It remained mostly network hardware.

Ever since then, every network I’ve inherited has been a mess. There have been design ideals focused around age-old buzzwords like “security” that result in a pile of expensive security gear that’s essentially useless, because proper implementation and design simply weren’t understood. All of them have grown their L2 infrastructure out horizontally, usually with terribly cheap switches, but often with terrible not-so-cheap switches as well. Patch panels and cabling have always run amok, usually with patch cables two to three times longer than necessary stuffed into the cable ducts.

VLANs are almost always used on a single switch, then individual switches are plugged into access ports to provide a switch for every VLAN. Or worse, the switches are all broken up into multiple VLANs, with an uplink cable for each VLAN. It’s obvious that concepts like trunking and VTP are simply not understood. These features don’t add complexity or cost; they simplify what otherwise tends to be a disaster.

I find myself up early, lying in bed, thinking about the second round of ripping out erroneous unmanaged switches and migrating a live production network to a proper hierarchical design. Suddenly I realized it shouldn’t have to be this way, and I really wish more administrators had at least the knowledge of a CCNA. Small companies don’t usually get the benefit of administrators who take the time to understand technology, and usually make do with consultants who draw a direct line between something functioning and it being right, and unfortunately between something not working and it being wrong as well. The latter is almost always because they failed to understand the problem and instead blamed the vendor or technology, from then on spouting that using a SAN creates a SPOF, that domain controllers can’t be virtual machines, or that portable A/C doesn’t actually do anything.

As I trudge through my memory recalling these kinds of misguided attempts at wisdom, they all share a common denominator: not knowing the cause of the problem at hand. You have to understand the technology you’re leveraging. It’s absolutely essential that you know why your network works, not only that it does at the moment.

Displaying the time in WordPress posts with K2

K2 defaults to adding:

‘Published by btm on April 16, 2009 in Uncategorized’

to posts, which doesn’t include the time, which is sometimes contextually important. This is controlled in ‘theloop.php’ in K2, which uses date_format; you can set that under ‘Settings -> General’ in the WordPress configuration. The format is the PHP date format. Using ‘r’ works nicely, since it provides a nice RFC 2822 formatted date like:

‘Published by btm on Mon, 20 Apr 2009 09:28:48 -0700 in Uncategorized’.
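PHP’s ‘r’ is the RFC 2822 date format, and most environments can produce it directly. Purely as an illustration, Python’s standard library version:

```python
# RFC 2822 dates (what PHP's date('r') produces) from a Unix timestamp
# via the standard library; with no argument, formatdate() uses "now".
from email.utils import formatdate

print(formatdate(0))  # Thu, 01 Jan 1970 00:00:00 -0000
```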

Configuring LVM preseed on Ubuntu intrepid

It recently clicked in my head that the blades with small swap partitions were that way because their OS was installed when they had very little RAM in them. So I set out to modify the Ubuntu 8.10 preseed install to create a larger swap partition, and to configure LVM while we were at it.

This proved difficult, mostly because the better documentation of debian-installer (preseed, partman-auto) covers features that aren’t in the version in Ubuntu.

Just got this working. For reference, each expert_recipe stanza below is minimum size, priority, and maximum size (in megabytes), followed by the filesystem type:

d-i partman-auto/disk string /dev/sda
d-i partman-auto/method string lvm
d-i partman-lvm/device_remove_lvm boolean true 
d-i partman-lvm/device_remove_lvm_span boolean true
d-i partman-auto/purge_lvm_from_device boolean true
d-i partman-auto-lvm/new_vg_name string system
#d-i partman-auto/init_automatically_partition \
#  select Guided - use entire disk and set up LVM
d-i partman-auto/expert_recipe string                         \
      boot-root ::                                            \
              40 300 300 ext3                                 \
                      $primary{ }                             \
                      $bootable{ }                            \
                      method{ format } format{ }              \
                      use_filesystem{ } filesystem{ ext3 }    \
                      mountpoint{ /boot }                     \
              .                                               \
              2000 10000 1000000000 ext3                      \
                      $lvmok{ }                               \
                      method{ format } format{ }              \
                      use_filesystem{ } filesystem{ ext3 }    \
                      mountpoint{ / }                         \
              .                                               \
              8000 8000 200% linux-swap                       \
                      $lvmok{ }                               \
                      method{ swap } format{ }                \
              .

d-i partman-lvm/confirm boolean true
d-i partman/confirm_write_new_label boolean true
d-i partman/choose_partition select Finish partitioning and write changes to disk
d-i partman/confirm boolean true

Quick bridging with KVM on Ubuntu jaunty

It took me a little while to put the pieces together to figure out how to take a vmbuilder-created VM and use bridging with it instead of kvm/qemu’s user-mode networking. All the pieces are available on the internet, but some emphasis was lacking to make it all clear to me.

You’ll need to have a bridge set up on your host. Install the ‘bridge-utils’ package first. Then the relevant section of my /etc/network/interfaces file looks like:

# The primary network interface
auto eth0
iface eth0 inet manual
up ifconfig $IFACE up

auto br0
iface br0 inet static
address 10.0.0.60
netmask 255.255.255.0
gateway 10.0.0.1
bridge_ports eth0
bridge_stp off
bridge_maxwait 0
bridge_fd 0
bridge_hello 0

You could probably use ‘dhcp’ instead of a ‘static’ address on the bridge. The point is that your IPv4 address should be on the bridge, not on the actual interface.

Then create a ‘br-ifup’ script in your vm directory. This is based on /etc/qemu-ifup. The script is passed the name of the interface (e.g. tap0); it brings the interface up and then adds it to your bridge.
#!/bin/sh
#sudo -p "Password for $0:" /sbin/ifconfig $1 172.20.0.1
sudo /sbin/ifconfig $1 up
sudo /usr/sbin/brctl addif br0 $1

Then run kvm with something like this:

sudo kvm -m 128 -smp 1 -drive file=disk0.qcow2 -net nic -net tap,script=br-ifup

‘/etc/kvm-ifup: could not launch network script’ means that the script passed in ‘script=’ could not be found.

‘Could not initialize device ‘tap” means that kvm is unable to create the TAP/TUN interface. Running kvm as root via sudo is the easy solution.

‘warning: could not open /dev/net/tun: no virtual network emulation’ probably means that the ‘tun’ module isn’t loaded. You can load it with ‘sudo modprobe tun’.

The tap interface is removed from the bridge when the guest is shut down.

Using for loops in Chef

One of the great features of Chef is that you write configurations in Ruby. When I wanted to push a number of configuration files out for nagios, I initially turned to the Remote Directory resource. However, this could interfere with configuration files created and owned by the debian package, so I needed to be more specific. In the past, with Puppet, I had a remote file definition (using the file type) for each configuration file. This works fine, but gets repetitive when it doesn’t need to be. With Chef, you can combine a little Ruby with the Remote File resource like so:

for config in [ "contacts.cfg", "contactgroups.cfg" ] do
  remote_file "/etc/nagios3/#{config}" do
    source "#{config}"
    owner "root"
    group "root"
    mode 0644
    notifies :restart, resources(:service => "nagios"), :delayed
  end
end

The benefit of this approach is that it makes your configuration management cleaner and more DRY, perhaps at the cost of a little complexity, albeit to a degree that I think is easily understood by reading the code.

Beware of MAC address generation on libvirt 0.4.4

Two or three times now, libvirt (0.4.4-3ubuntu3.1, Ubuntu Intrepid 8.10) has automatically generated overlapping MAC addresses on me. I can’t find the source for this MAC address generation in 0.4.4, but in 0.6.1, which is included in Ubuntu Jaunty 9.04, it’s virGenerateMacAddr in src/util.c. This leads me to believe it’s been rewritten, and I’m hoping it’s better; the new code looks perfectly fine.
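Until then, one workaround is to stop relying on the generator and assign MAC addresses explicitly in the domain XML. A sketch of generating one (52:54:00 is the locally administered prefix conventionally used for KVM guests):

```python
# Build a random MAC under the 52:54:00 prefix conventionally used for
# KVM guests; only the last three octets are random, which still gives
# about 16.7 million addresses to collide in.
import random

def random_kvm_mac():
    octets = [0x52, 0x54, 0x00] + [random.randint(0x00, 0xFF) for _ in range(3)]
    return ":".join("%02x" % o for o in octets)

print(random_kvm_mac())
```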

Paravirtualized disks with KVM

In the midst of Jaunty testing, I decided to try out paravirtualized disks with KVM. I switched to virtio for networking a while ago with good results. I had already built the guest I’m testing with, so I wanted to modify the libvirt xml file, but didn’t see anything of note relating virtio to storage. I did, however, deduce that the trick is to change the bus attribute in the target element from ‘ide’ to ‘virtio’. Because Ubuntu mounts by UUIDs instead of device paths, the change from ‘sda’ to ‘vda’ didn’t affect startup. I was confused at first, though, as mount still showed ‘/dev/sda1’ while the dmesg output clearly lacked an ‘sd’ device but had a newly acquired ‘vd’ device.
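For reference, the resulting disk element in the domain XML looks something like this (a sketch; the file path is made up, and ‘dev’ names the guest-side device):

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```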

Bonnie++ was run on a single-cpu guest with 768MB of RAM. The guest is Ubuntu Jaunty 9.04 with all of today’s packages, using virtio for network and disk; otherwise it’s pretty standard everywhere that might matter. The host is also Jaunty, running on a Dell 1955 blade with a couple of Xeons and about 7GB of RAM. KVM is ‘1:84+dfsg-0ubuntu7’, and libvirt is ‘0.6.1-0ubuntu1’.

The numbers aren’t all that interesting; virtio was a little bit faster. I’m not that familiar with using Bonnie++, so who knows if caching or anything else negated these tests; I didn’t try to research turning it off. The performance testing was mostly an afterthought of setting it up, as I see no reason not to use virtio now.

With ide disk driver:

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
it-util01     1496M 37162  94 56108  20 39997  14 42209  89 247590  59  4706  93
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
it-util01,1496M,37162,94,56108,20,39997,14,42209,89,247590,59,4705.6,93,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++

With virtio disk driver:

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
it-util01     1496M 39831  88 56480  13 41427  14 45489  86 291109  57  7915  90
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
it-util01,1496M,39831,88,56480,13,41427,14,45489,86,291109,57,7914.5,90,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++

Comodo is shady

A few minutes ago I got a cold call on my cell phone. I almost didn’t answer; I tend not to answer calls to my cellphone from unknown numbers, but I have teams of lawyers and medical people out there looking for me sometimes, so sometimes I must.

The caller said that my SSL certificate was expiring soon with Company A (whom I forget, because it’s an old certificate for email I don’t use anymore since I switched to Google for mail) and that they’d like the chance to win me over. I paused as I added this all up in my head. After I realized it was just telemarketing, I said “No, thanks” and hung up. Then I got an email from them. Scroll down and read it, then come back.

I like the Creating Trust Online part. Is this a strong arm technique meant to scare me into purchasing from them? Are they trying to create some kind of trust in a “we know more than you, buy our stuff” way? Is this Louis character rogue or is this standard operating procedure?

Ways to get me to never buy products or services from you:
1) Call me
2) Call me, then send me an email

I almost filed the call under ‘weird’ and forgot about it; thanks for the email, which I can search for later when I’m shopping for SSL certificates so I know who not to call.

Delivered-To: btm@loftninjas.org
Received: by 10.142.215.17 with SMTP id n17cs645196wfg;
        Thu, 12 Mar 2009 10:48:23 -0700 (PDT)
Received: by 10.150.95.15 with SMTP id s15mr422861ybb.247.1236880102854;
        Thu, 12 Mar 2009 10:48:22 -0700 (PDT)
Return-Path: 
Received: from sharon.nj.office.comodo.net (mail.nj.office.comodo.net [38.104.66.254])
        by mx.google.com with ESMTP id 1si2384323gxk.79.2009.03.12.10.48.18;
        Thu, 12 Mar 2009 10:48:19 -0700 (PDT)
Received-SPF: pass (google.com: domain of louis.cicero@comodo.com designates 38.104.66.254 as permitted sender) client-ip=38.104.66.254;
Authentication-Results: mx.google.com; spf=pass (google.com: domain of louis.cicero@comodo.com designates 38.104.66.254 as permitted sender) smtp.mail=louis.cicero@comodo.com
Received: (qmail 13908 invoked by uid 1001); 12 Mar 2009 17:48:17 -0000
Received: from mmonroe.comodo.net (HELO louisc) (192.168.68.79)
    by sharon.nj.office.comodo.net (qpsmtpd/0.40) with ESMTP; Thu, 12 Mar 2009 13:48:17 -0400
From: "Louis Cicero" 
To: 
Subject: Info on compromised root key
Date: Thu, 12 Mar 2009 13:48:16 -0400
Message-ID: <00a201c9a33a$b955fa20$4f44a8c0@comodo.net>
MIME-Version: 1.0
Content-Type: multipart/alternative;
	boundary="----=_NextPart_000_00A3_01C9A319.32445A20"
X-Mailer: Microsoft Office Outlook 11
Thread-Index: AcmjOrkMPeS02oldT1mZI5bKFnL3rA==
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.3350
X-Comodo-Virus-Checked: Checked by ClamAV on sharon.nj.office.comodo.net
X-Comodo-ClamAV-Virus-Program: ClamAV 0.92.1

This is a multi-part message in MIME format.

------=_NextPart_000_00A3_01C9A319.32445A20
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit

http://www.computerworld.com/action/article.do?command=viewArticleBasic
 &articleId=9124558&intsrc=it_blogwatch

 

http://bits.blogs.nytimes.com/2008/12/30/outdated-security-software-threaten
s-web-commerce/

 

 

 

1024-bit encryption is 'compromised'

Upgrade to 2048-bit, says crypto expert

Written by James Middleton

vnunet.com  

According to a security debate sparked off by cryptography expert Lucky
Green on Bugtraq yesterday, 1,024-bit RSA encryption should be "considered
compromised".

The Financial Cryptography conference earlier this month, which largely
focused on a paper   published by
cryptographer Dan Bernstein last October detailing integer factoring
methodologies, revealed "significant practical security implications
impacting the overwhelming majority of deployed systems utilising RSA as the
public key algorithm".

Based on Bernstein's proposed architecture, a panel of experts estimated
that a 1,024-bit RSA factoring device can be built using only commercially
available technology for a price range of several hundred million to $1bn.

These costs would be significantly lowered with the use of a chip fab. As
the panel pointed out: "It is a matter of public record that the National
Security Agency [NSA] as well as the Chinese, Russian, French and many other
intelligence agencies all operate their own fabs."

And as for the prohibitively high price tag, Green warned that we should
keep in mind that the National Reconnaissance Office regularly launches
Signal Intelligence satellites costing close to $2bn each.

"Would the NSA have built a device at less than half the cost of one of its
satellites to be able to decipher the interception data obtained via many
such satellites? The NSA would have to be derelict of duty to not have done
so," he said.

The machine proposed by Bernstein would be able to break a 1,024-bit key in
seconds to minutes. But the security implications of the practical
'breakability' of such a key run far deeper.

None of the commonly deployed systems, such as HTTPS, SSH, IPSec, S/MIME and
PGP, use keys stronger than 1,024-bit, and you would be hard pushed to find
vendors offering support for any more than this.

What this means, according to Green, is that "an opponent capable of
breaking all of the above will have access to virtually any corporate or
private communications and services that are connected to the internet".

"The most sensible recommendation in response to these findings at this time
is to upgrade your security infrastructure to utilise 2,048-bit user keys at
the next convenient opportunity," he advised.

But a comment   from
well known cryptographer Bruce Schneier casts doubt on Bernstein's findings
in practical application.

"It will be years before anyone knows exactly whether, and how, this work
will affect the actual factoring of practical numbers," he said.

But Green, much to the clamour of "overreaction" from the Slashdot
community, added: "In light of the above, I reluctantly revoked all my
personal 1,024-bit PGP keys and the large web-of-trust that these keys have
acquired over time. The keys should be considered compromised."

Whatever the practical security implications, one sharp-witted Slashdot
reader pointed out: "Security is about risk management. If you have
something to protect that's worth $1bn for someone to steal, and the only
protection you have on it is 1,024-bit crypto, you deserve to have it stolen

 

 

 

Louis Cicero

Business Development Executive - Comodo 

Direct Line 1- 908- 376-0145

Main Office US: +1 888.COMODO1 (888.266.6361) ext.4062

Fax US: +1 866-405-5816

Louis.Cicero@Comodo.com 

Creating Trust Online

Comodo   Helps
Leading Cutlery eTailer Increase Individual Transactional Value By Over 250%

Generating sha512 passwords

Normally I would use ‘openssl passwd’ to generate encrypted passwords for scripts and config files, but it doesn’t appear to support SHA256 and SHA512 yet, and there doesn’t appear to be an openssl ticket for this either. Ubuntu has switched to using SHA512 by default (see ENCRYPT_METHOD in /etc/login.defs). In the course of tracking down why passwd/root-password-crypted wasn’t working in a jaunty pxe/network install (LP: 340841), I needed to generate a SHA512 password to replace the MD5 password in the d-i config file.

15:11 < cjwatson> $ echo cjwatson:foo | chpasswd -S -c SHA512
15:11 < cjwatson> cjwatson:$6$K./rc/OhIRi$ylKWgewTkGP3TyXfwj8nnKyIhph66WucLseLjGKKzRM0oRcuRzng2szcC/JZpY13dLxmlILx7eSfdfMHTruH40

Samba/winbind 3.3.1 on Ubuntu jaunty

I’ve been working on testing jaunty before it goes live. Winbind stopped working, and I initially assumed it was another configuration change; in the end, it was. The caching functionality wasn’t very straightforward, so it took me a while to get to a point where I could test configurations without the cache skewing the results. Intrepid to Jaunty is Samba 3.2.3 to 3.3.1, which, being a different major version, includes some changes. Mostly, the internet is chock full of examples that don’t specify the version of Samba they’re for, and it’s been changing a lot.

It looks like 3.0.21a added support for ‘idmap backend = ad’ for retrieving uid/gid information from Active Directory. At some point ‘idmap config’ showed up for maintaining multiple domains; I assume this was around 3.0.25, where ‘idmap domains’ appeared. Apparently with 3.3.0, ‘idmap backend’ is back, having been deprecated with the 3.0.25 changes. There is talk in the release notes of using ‘idmap uid’ and ‘idmap gid’, but I’ve seen errors about these not existing, so I just went without. Without further ado, here’s my working winbind config:

[global]
security = ADS
server string = %h server (Samba %v)
workgroup = WM
realm = CORP.WIDEMILE.COM
idmap config WM : backend = ad
idmap config WM : schema_mode = rfc2307
idmap config WM : range = 1000-20000
winbind enum users = Yes
winbind enum groups = Yes
winbind use default domain = Yes
winbind nested groups = Yes
template shell = /bin/bash
template homedir = /home/%U
allow trusted domains = No

The other interesting thing was the caching. I eventually read the code while watching the output of ‘winbindd -i -d10 -n -s /etc/samba/smb.test.conf’ and saw that ‘-n’, which is supposed to disable caching, doesn’t affect the idmap cache. The ‘winbindd_cache.tdb’ and ‘winbind_idmap.tdb’ files were not said cache; it ended up hiding in ‘/var/run/samba/gencache.tdb’, with who knows what else. You need to delete this file manually each run. I filed a bug over it too.

CouchDB and binary attachments

After a couple of the CouchDB developers commented on my earlier post about binary data and CouchDB, I took their advice in the next round of testing. I upgraded to CouchDB pre-0.9.0 from svn, then wrote read/write tests for storing data as byte[], as a base64 String, and as byte[] via attachment. The updated code is available in the same github gist. These tests were not scientific: I ran each combination of data type over n threads for 100 iterations and compared the total times, averaging the results when using more than one thread. Consider the data anecdotal, but the relationship stands. Binary attachments are fast. That’s all we wanted to know.

So fast that even I can’t get consistent numbers, and the Java libraries throw “httpMethodBase.getResponseBody – Going to buffer response body of large or unknown size. Using getResponseBodyAsStream instead is recommended.” I left the ‘attach read’ out of the graph to keep the numbers on a closer scale.

Writing binary data to CouchDB

I’m doing some performance testing with CouchDB and jcouchdb, and I wanted to know whether I should write binary data as a byte array or as a base64 encoded string. The latter is definitely the correct answer. I initially tried using couchdb4j, but found that its exception handling is flawed, or rather, doesn’t exist, so I dropped it after a day of tinkering. I’ve since been writing a performance testing tool in Java, so that once I’m satisfied with the results I can reuse some of the code in a Java product we have. You can find the source that produced these numbers on github for now. I’ve got some more tests to add, and will spend some time thinking about where to put the final tool.

I’m using couchdb 0.8.0-1 as installed out of the box on Ubuntu 8.10 from the package. The graph on the left (which I quickly made in OpenOffice, and which is terrible) is the result of four total runs. Each run was ten threads, each writing one hundred documents. The first two runs write a binary array and then a base64 encoded string of an 88k image; the second two do the same with a 9.5k image. The base64 runs include the time it took to encode the file, yet the binary array runs took three times longer to complete. Futon also hates displaying the binary array data.
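The encoding step itself is cheap. In Python terms (illustrative only, not the jcouchdb API), the document field is just:

```python
# CouchDB 0.8 documents are JSON, so raw bytes have to travel as a
# base64-encoded string field; the 0.9 attachment API avoids this.
import base64

raw = b"\x89PNG\r\n\x1a\n"   # the 8-byte PNG signature, as sample data
doc = {"name": "image.png", "data": base64.b64encode(raw).decode("ascii")}
print(doc["data"])  # iVBORw0KGgo=

# Reading it back is the reverse:
assert base64.b64decode(doc["data"]) == raw
```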

I’ll be adding another method to test reads, since that’s what we’ll be doing primarily. I want to test the concurrency on the reads, then compare those numbers to the results of running multiple couchdb nodes behind nginx to ensure the overhead is low and performance really increases. I know Tim Dysinger has been doing some testing, and that he and other folks from #couchdb on irc.freenode.net are going to test some pretty large clusters, so it will be interesting to see how our numbers compare.

The number of threads changes the results quite a bit. Tuning may make a significant difference, or none at all. One hundred iterations of the 9.5k image take:

number of threads:[base64 seconds, bytearray seconds]
1:[2,5]
5:[8, 21]
20:[34, 90]

I’ll make another post next week after more testing is done.

The Public Domain

I just finished reading The Public Domain. Before I had even finished the book, I had purchased multiple copies online, tried to arrange to get more copies in the library [and failed], and began scheming up ways to get others to read it.

I’ve always had a community oriented mindset. Having limits on copyright, patents and their ilk has always been an important issue to me. However this book frames the issue from many directions, helping you see just how much we stand to lose if the tides do not change.

Songs written by Ray Charles, who played a part in the birth of soul, may never have been released in today’s environment, where copyright extends far beyond the life of the artist.

Do you remember before Wikipedia? An excellent question: when was the last time you looked up something in a regular encyclopedia? What would the Internet be like today if we had argued about net neutrality fifteen years ago? Would you have put your faith in a world-wide band of individual software developers to change the way blue chip companies like IBM do business? Really?

The book touches on mashups in music, and how it’s nearly impossible now to do the sampling you could do a few years ago. We’re not just talking about sampling new music either; copyright has been retroactively extended beyond the life of the artist, so that the few copyrights with a viable business model can be maintained. That was never the reason for the monopoly power behind copyright; it exists to fuel innovation, not to create new business models. If we risked so many musical genres of the past (like the aforementioned soul), what are we losing out on because of the limits today?

What about all the music, books, and material that cannot be archived and digitized because of copyright? We can’t begin to fathom how immensely important this information could be to us in fifteen years. The Internet is a perfect example of an amazing source of creativity that couldn’t have been planned for in a study.

Read this book, it’s even online under the Creative Commons. Pass it on. I’ll even send you a copy if you promise to.