FCC Declares DDoS, I declare Shenanigans
On Sunday, May 7, 2017, John Oliver told his audience about Net Neutrality. During his 20-minute segment he indicated that gofccyourself.com would redirect people to the FCC page to leave comments. You can view the video clip, approximately 20 minutes long and definitely R-rated and NSFW, at https://www.youtube.com/watch?v=92vuuZt7wak
Help Secure Everyone’s Email by Encrypting
Previously I wrote about the protection I am adding to my mail by using PGP or GPG. You can find the article by clicking here. My involvement with the EFF and AVNation has also included comments about privacy: AVNation Privacy & EFF Mail Links.
Something I realized while thinking about this subject is that if one sends very few encrypted e-mails, the ones that are encrypted will stand out among the mail being sent. Now you might wonder what I am doing that requires encryption. The previous blog post explains why I am encrypting my mail.
I have an additional reason now: confusing the government and anyone else monitoring traffic. This idea is discussed in Cory Doctorow’s book Little Brother, http://craphound.com/littlebrother. The section below is used under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 license; it comes from line 1826 in the HTML version available on Mr. Doctorow’s website.
“So how come you weren’t on Xnet last night?”
I was grateful for the distraction. I explained it all to him, the Bayesian stuff and my fear that we couldn’t go on using Xnet the way we had been without getting nabbed. He listened thoughtfully.
“I see what you’re saying. The problem is that if there’s too much crypto in someone’s Internet connection, they’ll stand out as unusual. But if you don’t encrypt, you’ll make it easy for the bad guys to wiretap you.”
“Yeah,” I said. “I’ve been trying to figure it out all day. Maybe we could slow the connection down, spread it out over more peoples’ accounts –”
“Won’t work,” he said. “To get it slow enough to vanish into the noise, you’d have to basically shut down the network, which isn’t an option.”
“You’re right,” I said. “But what else can we do?”
“What if we changed the definition of normal?”
And that was why Jolu got hired to work at Pigspleen when he was 12. Give him a problem with two bad solutions and he’d figure out a third totally different solution based on throwing away all your assumptions. I nodded vigorously. “Go on, tell me.”
“What if the average San Francisco Internet user had a lot more crypto in his average day on the Internet? If we could change the split so it’s more like fifty-fifty cleartext to ciphertext, then the users that supply the Xnet would just look like normal.”
“But how do we do that? People just don’t care enough about their privacy to surf the net through an encrypted link. They don’t see why it matters if eavesdroppers know what they’re googling for.”
“Yeah, but web-pages are small amounts of traffic. If we got people to routinely download a few giant encrypted files every day, that would create as much ciphertext as thousands of web-pages.”
This action is relatively small and rather simple to do. However, changing what the traffic looks like could be helpful for others: it will prevent other PGP/GPG encrypted traffic from being such an outlier as to be noticed. As the EFF posted on Data Privacy Day, privacy is a team sport. There are additional directions for this task at https://ssd.eff.org/; hover over the tutorials section. If you want to test whether it worked, my public key identifier is C93A52C6, and you can download my public key directly from my site.
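For anyone who wants to try the round trip locally before sending real mail, the sketch below builds a throwaway keypair in a temporary keyring so nothing touches your real keys. The address demo@example.invalid is a placeholder, not my actual key; to write to me you would instead import C93A52C6 and use it as the recipient.

```shell
# Work in a throwaway keyring so this demo can't disturb your real keys
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"

# Generate a passphrase-less throwaway key (GnuPG 2.1+ syntax);
# demo@example.invalid is a placeholder identity for this sketch
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "demo@example.invalid" default default never

# Encrypt a message to that key; --armor makes it pasteable into an e-mail
echo "privacy is a team sport" | \
    gpg --batch --armor --trust-model always \
        --encrypt --recipient "demo@example.invalid" -o msg.asc

# Decrypt it again to confirm the round trip
decrypted="$(gpg --batch --pinentry-mode loopback --passphrase '' \
    --quiet --decrypt msg.asc)"
echo "$decrypted"
```

The armored msg.asc is what would travel in the e-mail body; only the holder of the matching private key can read it, and every message like it adds to the ciphertext "noise" described above.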
I will also freely admit that I am not sure it will make a difference, but it cannot hurt.
January 31, 2017
No, you can’t look in my computer…
Some of you may already be aware that the Electronic Frontier Foundation (EFF) is one of the groups I support. Privacy, security, and freedom for the individual are among my touchstones. I have written about these topics previously, both here and at AVNation.tv. (Yes, there will be overlap between this post and the one over there. My opinion hasn’t changed.)
There are proposed rule changes within the Federal Rules of Criminal Procedure that the EFF has made me aware of. I do not claim to be an expert on all the legalities and intricacies; however, from the comments the EFF has provided, I immediately felt it was important to speak up. The proposed amendment to procedural Rule 41 would allow a judge to issue a warrant allowing law enforcement to remotely enter (hack) a computer when “the district where the media or information is located has been concealed through technological means,” or when the media are on protected computers that have been “damaged without authorization and are located in five or more districts.”
The first portion of this means that if one uses any means to hide their location, for any reason, a search warrant would be allowed. At AVNation I spoke about how this applies to business environments where Virtual Private Networks (VPNs) are used to provide a secure connection between remote users and the office. A byproduct of that process is that one’s apparent location is quite often incorrect, sometimes on purpose. When I travel to China I use a VPN for personal use, and I purposely set it to connect me to a point of presence located in the US. This lets me access my e-mail as well as other sites, such as news sites like the New York Times or Los Angeles Times. I could go on about the Great Firewall of China, but these couple of links should help provide background: https://en.wikipedia.org/wiki/Great_Firewall or https://www.eff.org/search/site/china%20firewall.
I also use a VPN connection, as well as other tools, when I am using a public hotspot. In fact I am using one right now as I sit in Starbucks using their WiFi. This prevents eavesdropping on my communication. I will say that Google and Starbucks do a good job keeping things safe, but not every place is as secure. I want to keep my data encrypted as long as I can. Yes, there is Hyper Text Transfer Protocol Secure (HTTPS), and I use it as much as possible, but not every site supports it, or supports it for all traffic.
I could go on about why I use a VPN; the important thing to take away is that there are legitimate, legal reasons to use one. The fact that I use it should not change the way my data and privacy are viewed by the courts. To oversimplify, it would be like saying: you locked the door to your car, so you have given us a reason to issue a search warrant.
The second portion of the new procedure is also damaging in that it allows innocent computers to be searched if they have been remotely hacked. If a computer is an unwitting member of a botnet, that alone would qualify it for a search warrant. The infected, innocent computer could be searched even if the owner is not involved in or suspected of wrongdoing. Basically, if someone has already broken into your computer, the government can break into it again because your computer might be doing bad things.
To me there is a third reason that this issue is important – this process is being done under the guise of procedural rules. There is no debate, no review by elected officials, just a procedural change to allow more access. Yes, Congress has to vote to approve the rules, but there was very little notice of the process. Luckily groups such as EFF and others are around to alert people to the changes. There is the comment of, “Well if you aren’t doing anything wrong, you have nothing to worry about.” I agree and understand that sentiment, but I also believe that once the first domino has fallen the erosion of privacy will continue. To quote James Madison, “There are more instances of the abridgement of freedom of the people by gradual and silent encroachments by those in power than by violent and sudden usurpations.” This procedural step is a gradual and silent move to most people.
Also if there is nothing to worry about, please send me your laptop or phone without clearing the history first. I will be more than happy to inspect it for you.
Much of this information was gathered from the webpage https://www.eff.org/deeplinks/2016/06/help-us-stop-updates-rule-41.
The lock pick image is public domain from Wikimedia. More information about it at https://commons.wikimedia.org/wiki/File%3ALockpicking_Pickset.jpg.
The Independent Musicians
As some of you might know, in my previous life I was an audio technician touring with various groups – some known, some unknown. I also happen to have eclectic musical tastes. In the past few years I have stumbled upon some musicians through the Interwebs. I have supported some through Kickstarter, Bandcamp, buying direct, and most recently Patreon. I have also had some interesting conversations with artists on Twitter.
The most recent interaction got me thinking and prompted this post. But first, a recap of the conversation with Marian Call (@mariancall http://mariancall.com) and Kim Boekbinder [Impossible Girl] (@KimBoekbinder http://theimpossiblegirl.com). There were branches in the conversation, so I tried to make it as understandable as a Twitter stream from an iPhone can be.
Me: @KimBoekbinder @mariancall i am curious why go to cities where sales are strong and not go to uncharted areas to increase audience base?
Marian: @BradfordBenn @KimBoekbinder I try to alternate. You can’t eat if you play too many uncharted areas. Strong strong weak, strong strong weak.
@BradfordBenn @mariancall Oh yeah – that’s what labels pay for. Those of us without labels can only afford to go where we are wanted.
@BradfordBenn @KimBoekbinder Touring is incredibly expensive, on the order of hundreds per day. If you don’t recover that you sink.
@KimBoekbinder @BradfordBenn My exception was the 50 states tour. I carefully planned strong and weak cities for months.
@KimBoekbinder @BradfordBenn It was a great experience and made lots of new fans, but after 9 months I wound up with $0 in the bank.
@BradfordBenn What @mariancall said. Only I have so few strong cities I can’t get far enough to increase my presence.
@mariancall @BradfordBenn Not only is tour expensive – it is exhausting. So you can’t just work another job to make ends meet.
Me: @mariancall @KimBoekbinder understand the costs of touring. Thanks for clarifications, new world since i was touring as an audio tech. Marian:
@BradfordBenn @KimBoekbinder Audio techs rule. Me:
@KimBoekbinder @mariancall still support both of you & your work and would like to see you both play live. How can i help?
@BradfordBenn @mariancall So cool. I love touring, wish I could just go and go and go.
@BradfordBenn @mariancall Where is Wonderment?
@BradfordBenn @KimBoekbinder Where are you, first of all?
Me: Wonderment is a state of mind, learning and seeing things that are interesting. I travel quite a lot for work, my home is South Bend, IN but have spent time in So Cal the past 3 months. Yes, i listen to you on planes
Marian: Sometimes it takes a couple years but we get there!
@BradfordBenn Folks who get really excited about planning a concert near them, and who can bring 30-50 people, mostly get their way.
@bradfordbenn Not to pile on you! It’s a good question. It’s a funny business, far less profit and far more risk than most folks think.
Me: @mariancall didn’t think piling on. Thanks for concern. Think having good conversation. Might even become a blog post.
Marian: @BradfordBenn Being on my email list is the first best step: http://mariancall.fanbridge.com this year I won’t tour much, but I will a little.
@mariancall yup am on the list and have already bought Sketchbook. Will get CD also cause i prefer WAV to FLAC and MP3
I know much of what they were talking about from my past experience, but the scale was very different. Understanding this century’s music economy and sales process is interesting, and it differs from other businesses. When I travel for work and make sales calls, I often ask to go see potential customers who are not familiar with my company. On those trips I can interweave existing customers with new ones, because there are multiples of each in one city. For a musician that is not always possible, as there are only so many customers (fans) in each city. Yet the costs remain high for each city: hotel, transport, equipment rental, venue costs, etc.
You may ask why I am sharing this post and conversation. There are a couple of reasons.
The first is that I found it interesting, so I thought my readers would too. As someone involved in the professional audio industry, it is very good to hear from other people involved in the process.
It reminds me why it is important to purchase music and not just stream or download it. Pay or support someone for their effort. I am not saying you have to support everyone, but support the artists that you like.
Go out and try new music, search the interwebs, branch out, you might find something you like. Go to concerts that friends have recommended. I think you get the idea.
There are more music outlets than iTunes, Amazon, and Google.
A few suggestions of some of the artists I have been supporting:
Convention Caricature Caused by Production Values
As my faithful readers know, I had a less than stellar production experience while attending the Supernatural Convention. For those of you who are not familiar, Supernatural is a television series on the CW network. The lovely wife was lucky enough to win free admission to the convention. I went along to take pictures; they can be found at http://photos.bradfordbenn.com/Events/Supernatural-Convention-Nov-2013. (At the moment the images are very raw and still need some adjustments, so do not be surprised if there are some changes.)
The first thing I want to clearly indicate is that the volunteers, the people who barter their services as facilitators in exchange for tickets to various events, were great. They were all very helpful and provided information as best they had it. Much of the disappointment is about the choices made for the audio, video, and lighting equipment. I am not singling out an equipment manufacturer or brand; the problems were the result of using equipment incorrectly. Let me say that again: I am not saying that any of the equipment used was inferior, I am saying that the use of the equipment was not appropriate.
First, let’s talk about the room to set the scene. The ballroom used is over 15,000 square feet and can seat up to 1,900 people theater style. At 105 feet long x 143 feet wide x 18 feet tall, it is a large room. I do not think the room was filled to 1,900, but more likely to 1,700, given the space needed for video and backstage areas.
At the front of the room was a stage about 18 inches off the floor and probably 24 feet by 12 feet. Each side was flanked by a 12-foot-wide by 9-foot-tall rear-projection video screen. Next to each screen was a powered speaker. A third of the way back, against each of the side walls, was another speaker. Notice that the picture I took is in focus…
I did not get backstage to see the video system, but I can tell you it was standard definition at best. It was not very bright or sharp, and there was a constant ground-loop bar scrolling on the video screen. Since the speakers were out front, I could see they were 12-inch two-way powered speakers rated at 131 dB peak with a 75-degree conical coverage pattern. The brand does not matter; it was a quality product being asked to perform a task it was not designed for. There was also a powered 300 W floor monitor on the stage for the talent and a duplicate on the front of the stage as a “fill” speaker.
There were two Ellipsoidal Reflector Spotlights against the wall on each side for fill lighting. They were basically even with the heads of the talent and were not very bright; the power draw was low enough that they were simply plugged into a standard 15 A outlet. They were not effective at all, to the point that most of the time the house lights were simply left on so the stage could be seen.
I did not see a front-of-house position, so I did not see how the lights, audio, and video were controlled. However, I would not be surprised if the system was run in a set-and-forget mode, as there were often problems.
The production problems started immediately: the video was out of focus from the beginning. It was definitely standard definition, and unclear on top of that, coming from a single camera at the back of the room. I do know there was some video processing, as a few times text was overlaid on the video image. The best way I can describe it is 1990s high-school video. Also, about once a session someone could be seen walking through the projection cone, so the backstage area must not have had any clear indication of where the cone was.
The audio problems started very soon after the event itself began. I figured the system was just having some teething pains, as the show had just started. The first problem was the entire system sounding boomy and not as clear as the equipment could provide, much of it, I think, due to the system trying to cover a space that was too large.
Two and a half hours into the event, I first started questioning the system approach. There were wireless dropouts, a dead microphone, and audience/question microphones at the edges of the room. In my opinion there were three problems with the audience microphones: they were not loud enough in the talent’s foldback monitors, they were wireless when they could have been wired, and they were placed so that the talent was always looking away from the main audience.
Let me explain the looking offstage comment. By placing the audience microphones at the front of the seating area and at the outer edges of the room, the talent was often looking off stage not at the main audience. The reason for the talent looking off stage was that they were being polite and having conversation and making eye contact with the question asker. The talent was doing the proper thing. The problem is that the single camera in the back of the room simply had them in profile. It kept the audience from getting to see the complete interaction.
Four hours in, the system was not sounding any better; in fact, its deficiencies were becoming more pronounced. I believe part of it is the pile-on effect: once the first flaw has been found, it is easier to find others. The use of a compressor and/or de-esser would have greatly helped the sonic performance for the guests. The audience would have had an easier time listening, and there would not have been as many plosive sounds.
Fifteen minutes later the talent was literally walking off stage to listen to the guests directly as the monitor was not reinforcing the comments to the main stage. The audience comments were audible in the house system but not in the monitors. Of course there were also times that the audience microphones were not working at all.
The last presenter of the day had some audio sources with him. Now I am not going to say that I understand all of the voodoo that the talent was using with his ghost hunting audio devices. The approach was to literally have the talent hold the handheld battery powered speaker up to the microphone for the audience to hear.
One thing I have not mentioned is how often there was a ground hum; it was not constant, coming and going throughout the day. It got worse during the second day, when the entire house-left audio system was replaced by ground hum. Yes, no audio for the left side of the house.
That night there was a karaoke event. It was a lot of fun, but it could easily have been much better with better equipment. The same system was being used to reinforce the karaoke. There was no low end, and the system was in full clip throughout the evening. I am not sure where the clipping was occurring; it could have been the sub feed from the karaoke rig they brought in. Either way, it was audibly distorted. I am very glad I had earplugs in, especially when the feedback started. It was not momentary feedback.
The second day started with the wireless microphone failing and needing to be replaced 10 minutes into the first session. Yes, ten minutes. Then came more feedback. It got to the point where the presenters were making fun of the audio quality. Yes, from the stage, the talent was making comments about the system’s performance. It obviously was not the first time these problems had occurred.
The same issues recurred throughout the second day of the event, so rather than rehash them all, you can read the tweet stream in the previous blog post (Tweets against the audio machinery).
That night there was a concert with Louden Swain. There was no music audio system; it was the same system as the rest of the convention. Many times the stage volume overwhelmed the public address system. The talent actually adjusted the aiming of the speakers to improve the sound, and I think they did a decent job.
After the concert there was a limited-attendance event with a separate PA system that I believe was brought in by the DJ. This system was able to keep up much better; not only was the room smaller, the equipment was more suited to the use. The system was a pair of self-powered 15-inch two-way speakers with a maximum output of 132 dB. It was much better, not just for voice but for music as well.
The third day was much the same in terms of performance. However, the issues with the monitors and feedback got so bad it was comical. One panelist asked if they were going deaf, as they could not hear a single question; the audience started relaying the questions for them. During a two-person panel, the talent heard so much feedback that they started doing synchronized microphone movements, “ringing out” the monitors to try to fix the issue. At one point during a break between panels, feedback rang out with no talent or microphone on stage. It was so loud and painful that guests were screaming from fear and pain.
The reason I bring these up is that the audio and video systems actually impacted the guest experience. No one there other than the wife knew what I do, and yet conversations were going on around me about the problems with the audio and video. People were talking about how bad it was, why there were so many problems, how this convention happens multiple times, etc. The event became a caricature of poor audio and cheap conventions.
Many of the problems could have been avoided simply by selecting different equipment. The equipment was reputable, just not the right selection for the room and the use. This convention is a key example of where renting a good system for the space would have greatly improved the experience. I am not naive; I realize this event is for profit and that reducing equipment costs means more profit. But the fact that tickets ranged from US$650 to US$150 for all three days, plus additional fees for autographs and picture opportunities, makes me feel the frugality is unwarranted.
Tweets against the audio machinery
As some of you who follow my Twitter feed know, I went to a fan convention with the wife last week. I am still gathering my thoughts and writing a blog post about the experience. However, I wanted to gather all of the Tweets together in one location for those who might have missed some of the experience. So, presented in chronological order and unedited, are my Tweets about the event. Dates and times are Pacific Standard.
21 November 2013 20:12
For those of you who think my twitter stream is eclectic, brace yourself.
@GentlyMad is taking me to a fan convention for Supernatural…
21 November 2013 20:15
For those of you following along the link is http://t.co/r9p51GywY3 i am looking forward to meeting @feliciaday the rest is unknown.
21 November 2013 20:24
1st tweet of #BURCON, waited in line to register and @GentlyMad’s not available yet. Could have still been drinking, watching hockey.
22 November 2013 12:27
Instead of listening to @AVNationTV live podcast @GentlyMad has taken me to #BURCON and the video is out of focus
22 November 2013 14:39
Must resist urge to go tweak audio at this #BURCON event. I think @GentlyMad would kick me if i did. Must restrain myself….
22 November 2013 14:59
#AVtweeps how often do you change batteries at panel event? Wireless drop outs, understandable but thinking wired for question mics
22 November 2013 16:20
So at #BURCON with @GentlyMad watching video mistakes and listening to drop outs. Feel bad to be making light of other people’s problems…
22 November 2013 16:30
So this session is being brought to you without a compressor or de-esser. Must resist the urge to go fix the mix… Hope not someone i know
22 November 2013 16:48
More guest audio in the stage monitor and perhaps less level in the house to make people talk louder #BURCON
22 November 2013 16:54
Current play back method is presenter holding speaker to microphone from MP3 player. presenter had it, was planning to use. Line in please.
22 November 2013 19:29
#BurCon Day 1 is almost done, a karaoke dance party to go. I really hope they bring in an audio music system and not use the voice system.
22 November 2013 21:53
@cabbey yes #BURCON is using speech system for music/karaoke system. No subwoofer and no punch.
22 November 2013 22:01
Earplugs firmly in place. Much needed. Audio system: All CLIP all the time at #BURCON
22 November 2013 22:25
I really enjoy the 60Hz waterfall on the video as well. Man av at its finest
22 November 2013 23:09
If you can’t ride a fader to prevent feedback in the house system at #BURCON i can recommend some feedback suppressors.
22 November 2013 23:10
Hey #BURCON why have the stage lights so low? Photography is allowed why not allow the patrons to get good shots?
22 November 2013 23:11
Yes Snarky Mode is activated. @GentlyMad said it was allowed as long as i take pictures.
22 November 2013 23:20
Well clipping for hours has got to be good for drivers
22 November 2013 23:43
I know the purpose of reverb and autotune, it should be used on karaoke, unfortunately it is not being used at #BURCON. Ah ear plugs.
23 November 2013 10:27
Realy #BurCon the wireless mic died 10 minutes into the first session. Then feedback. Now people making fun of audio.
23 November 2013 10:39
You know the audio is a problem when @GentlyMad is looking at me knowing i want to fix it…..
23 November 2013 12:36
#AVtweeps just a friendly reminder, don’t skimp on audio monitors. Difficult to watch #BURCON talent comment on audio on stage.
23 November 2013 13:14
#BurCon audio hits continue. 60Hz hum is louder than talent. It just started…. Hmmmmmmmmmm @GentlyMad is amazed i haven’t clawed ears off.
23 November 2013 13:21
Now #BurCon talent needs to walk off stage to hand mic to audience questions. SPL keeps going up to point of ringing and slapback is louder.
23 November 2013 14:05
@brockmcginnis nope it is a live fan convention so it is the production staff. When talent makes fun of audio & video…. Well ……..
23 November 2013 14:27
@brockmcginnis yes i agree i should not slag people but the system they are using is showing its wrinkles and uncut rough edges.
23 November 2013 15:52
Best line at #burcon so far, something i swear @GentlyMad would say. “I just threw my microphone cozy at him.” By @dicksp8jr (windscreen)
23 November 2013 16:42
there are these things called mute buttons on audio consoles. the team at #BURCON should use them as @GentlyMad is asking me questions…
23 November 2013 17:07
Literally the house left of the PA @ #burcon was no content just ground hum. Now feedback and ringing…. Sigh
23 November 2013 17:51
More audience in the monitors please #BurCon the talent can’t hear the questions. Sigh
23 November 2013 22:29
Tonight’s #BurCon question. Will @loudonswain have a PA or just the voice system. Any guesses
23 November 2013 22:49
For those of you scoring at home, and those that are alone, there is no Music PA. Just feedback, stage volume and voice PA for vocals.
23 November 2013 22:49
But i have @GentlyMad and a camera plus some cool @BorrowLenses glass so it is still all good.
23 November 2013 22:53
@mattcohen4real is doing a good job tweaking the speakers at #BurCon @GentlyMad says i can’t help. Really on both counts
24 November 2013 00:25
PA at #BurCon after party is much better than the main system. Amazing what happens when system matches use. There is low end and headroom!
24 November 2013 14:37
No @feliciaday you are not deaf, the audio system @ #BurCon is not keeping up. I know it can be better….. Sigh
24 November 2013 19:53
#burcon really could use a high pass filter on the microphone. It is so boomy i am putting in ear plugs…. Things i do for @GentlyMad
24 November 2013 20:28
Appropriatte way to end #BurCon, dead microphone…. 2 minutes into panel
25 November 2013 18:57
@brockmcginnis @rAVeBlogSquad @stillbeingmolly i will be writing up a blog post about production at #BurCon & how it impacted event for all
Metadata is your friend
Previously I wrote about how one can store too much data. I was guilty of that personally: I have way too much data, hard to sort through easily. This collection is not just images I have taken; it also includes documents, spreadsheets, and presentations. What is often overlooked is that there are tools out there to address the issue head on, but most of us don’t use them. That is the power of metadata.
For those of you not familiar with the term, metadata is data about data. Yes, that sentence is circular, on purpose. Metadata is a way to describe data using additional data. One example is the “Tag Cloud” to the right on this blog: I manually go in and add descriptive tags to each post so that people can find them easily. Another is keywording or captioning pictures. The actual data is the image itself; the metadata describes the data contained within the image.
The key is to actually fill out and use the metadata options in your software. Doing so can make finding something much later much easier. Metadata is not limited to photographs and blog posts; even the much-maligned Microsoft Office products include the ability to add metadata to a file. Microsoft does not call metadata “metadata”; they call it “Properties”. This data can be very helpful.
Let’s say that you were writing a letter to an airline about the difficulties you had booking a flight with frequent flier miles. When you save the file, you might give it a filename such as “United July 2011”. Later, when you go looking for the file, will you be able to find it based on the filename alone? What happens for something less directly identifiable? It becomes a little harder. However, I could add a brief description that says “Correspondence about trouble booking a flight using frequent flier miles” and keywords of “United, Frequent Flier, Reward, Travel”. Both Mac and Windows operating systems provide utilities to find files using metadata; it is the search tool built into the Finder or File Explorer.
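As a rough illustration of what those search utilities do, here is a minimal sketch of keyword and description matching over per-file metadata. The filenames and keywords follow the example above; the dictionary is a made-up stand-in for the properties the operating system actually indexes.

```python
# A tiny stand-in "properties store": filename -> description + keywords.
# The entries mirror the frequent-flier-letter example in the post.
files = {
    "United July 2011.docx": {
        "description": "Correspondence about trouble booking a flight "
                       "using frequent flier miles",
        "keywords": ["United", "Frequent Flier", "Reward", "Travel"],
    },
    "Budget 2011.xlsx": {
        "description": "Household budget spreadsheet",
        "keywords": ["finance", "budget"],
    },
}

def search(term):
    """Return filenames whose description or keywords mention the term."""
    term = term.lower()
    return [
        name
        for name, meta in files.items()
        if term in meta["description"].lower()
        or any(term in kw.lower() for kw in meta["keywords"])
    ]
```

A search for “reward” finds the airline letter even though the word never appears in the filename, which is exactly the payoff of filling out those property fields.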
You might be thinking to yourself, “I do not need to do all this extra work, I can keep track of my files.” I would like to leave you with perhaps the most compelling reason to fill out your metadata – media files.
All the MP3 and other media files in iTunes are organized using metadata. Can you imagine how difficult it would be to go through 4,730 files to find one specific piece of media? How about if you have multiple versions of the same song? Without metadata, media management would be very difficult.
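To make that concrete, here is a small sketch of metadata-driven grouping: given track tags (made-up stand-ins for what ID3 tags would hold), it finds multiple versions of the same song. A real library manager reads these tags from the media files themselves.

```python
from collections import defaultdict

# Hypothetical track tags; a real library would read these from the files
tracks = [
    {"file": "001.mp3", "title": "Song A", "album": "Studio Album"},
    {"file": "002.mp3", "title": "Song A", "album": "Live Album"},
    {"file": "003.mp3", "title": "Song B", "album": "Studio Album"},
]

# Group files by the title tag
by_title = defaultdict(list)
for track in tracks:
    by_title[track["title"]].append(track["file"])

# Titles with more than one file are the multiple-version cases
duplicates = {title: names for title, names in by_title.items()
              if len(names) > 1}
```

Scaled up to thousands of files, this kind of tag-based grouping is what makes a large library navigable at all.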
Now if you will excuse me, I have to go fill in some document properties.
Technology Still Needs a Personal Touch
I was originally going to write a blog post about the conversation topic I alluded to in a few Tweets on the evening of June 29, 2011; however United Airlines changed the topic. This blog post is about the frustration when technology does not actually make things easier. It also gets more frustrating after asking for help when the technology fails.
I wanted to book an award fare to fly myself and the L&T Wife to California on United. So I went to the United website and logged in with my frequent flier number – you know, the one that has almost half a million miles from the past 11 years. I went through and looked at all the options for flights before finally picking one. I signed myself and the Wife up for it, picked our seats, continued to the payment page, and entered my credit card number. Clicked the Submit button, and nothing happened. Clicked the button again, nothing happened.
I changed browsers from Firefox to Safari and tried again all the way from the beginning, since I could not save or hold my work. Nothing happened under Safari either. I then decided to call United Rewards Reservations, which is when the frustration started. This is a basic synopsis of the conversation:
- "Hello, I am having trouble booking reward travel on the website."
- "When and where are you trying to travel to?"
- I respond with the information
- "No, there are no seats available for the dates you want."
- "But the website shows many open seats."
- "I am sorry, sir, the website is wrong."
- "Okay, so what are my options?"
- "There is a flight three days earlier for outbound and two days later for the return."
Whiskey Tango Foxtrot, I thought – I did not say it. I was polite to the agent, as they are just reporting what the screen is showing.
We went round and round and finally arrived at the exact same itinerary I had created online. I did not care whether it was a mileage saver fare or not; her system was defaulting to fares that take fewer miles. Had I been asked, I would have said that I had picked specific flights online.
Then came the time to make payment. Online it was 75,000 miles per person; via the phone it was 100,000 miles per person. I asked why the difference.
The agent had no good explanation, so I asked for a supervisor. During this time I was placed on hold, without music or other audio so I had no indication I was still connected. The supervisor could not assist me.
As we passed the thirty-minute mark the supervisor indicated I should be transferred to Web Support to assist. After a few minutes with the Web Support person I was able to book my flight.
It was extremely frustrating. I tried to do it via self-service on the web. It did not work. I tried to call for help, and that did not work for the first 40 minutes. It took approximately 45 minutes on the phone and three agents to finish a transaction I already had the details for. If the first person I communicated with had listened to my original issue, they might have thought to transfer me to the web team earlier. Instead I believe they were just going off the script, not really helping the customer.
I tweeted out my frustration and decided to wait 24 hours to see if there was a response before posting. So far I have heard nothing.
Now some people may be thinking that it is only 50K miles, ~10% of your tally. To put the value of that in context, 50K miles is a round trip somewhere in the US with the right planning. Now that this trip is booked, I will get to call again to add my dietary needs as I can’t do that from the website. I think I will wait a day or two.
For those of you that have an impact on customer interaction, think about what happens when your website doesn’t work. How will you help that person? Have you provided them with enough information to know where to go for help? Is the first point of contact going to listen and respond, or just follow a script? That one decision can turn a customer interaction from a simple phone call into frustration and wasted time for everyone involved.
Digital Audio that is Good Enough
Another airplane flight, another blog post. This one is about the “new modes” of audio delivery. As many of my readers know, I work in the audio industry. I do not often blog about it, as I am concerned about the impact my comments could have. Not that I would get in trouble with my employer (heck, I was looking for a job when I got this one), but more that people would take my comments and opinions as if I were speaking for my employer. So let my blog, my domain, and me unequivocally state that these are my personal thoughts and opinions, written in my nonworking hours.
The new mode of delivery I am thinking of is digital distribution of audio products. I purchase music in a digital format less often than most people would think. The reason is that most delivery methods are compressed. I believe that compression should be applied judiciously. Not all compression is bad; after all, I am sitting here listening to compressed music on my iPod on a plane. For this application, I decided what quality of music I am willing to accept.
That is the key: the application. I want to travel with a large selection of music. It does not have to be pristine, as the listening environment is less than pristine. I do, however, want to have music available on airplane flights and during time in hotels. I do not always know what kind of music I am going to want to listen to three days from now, so I would rather have the selection at a compression ratio that I find appropriate.
I am purposefully omitting numbers, as too often when numbers are listed it becomes a contest of numbers, such as someone saying that they will only listen to music at a 96 kHz sample rate. When I ask why, the answer is often, “Well, it is a higher number; it must be better.” I wonder if that person would be able to tell the difference between 48 kHz and 96 kHz recordings in the listening conditions I am currently in: a tin can traveling through the air at 300 mph with an internal ambient noise of 70 dB SPL, A-weighted, heard through noise-canceling earbuds. Probably not so easily. I am not going to say it is impossible; I am going to say it is improbable. I believe, and can hear, that there is a difference between sample rates in other environments.
At the same time, other listening environments that are acceptable applications for compressed audio for some people are not for me. In my car I have CDs loaded in the changer and a smaller selection of non-compressed audio files on the attached iPod. In that environment I can hear a difference between the full quality and the compressed audio. I do not often listen to satellite radio music channels in the car, as that compression annoys me and I can hear it. Other people do not find it objectionable.
The key is that I am deciding. I can control how much compression and what amount of data is important and acceptable to me. When buying audio products as digital downloads, that decision is often someone else’s, and I might not agree with it. Paying 99 cents for a compressed piece of music that is just for “fun” can make sense. Paying $15 for a compressed digital download of a CD as 11 separate songs, versus buying the CD itself for $15, is something I will not do.
Why, you may ask? I have done it, and I have regretted spending the money. The digital download has audio artifacts that the CD does not. With the CD, I can also decide whether I want to compress the audio to put it in another format. Not only that, I get to decide the compression codec, as MP3 is not always the best. If more vendors offered uncompressed delivery methods, I would buy more audio via digital distribution.
The key is to use the best test equipment we have, our ears, and make the decision for yourself. The way I approach it: your source should be as ideal as possible, and then you have the control to decide what compression tradeoffs are acceptable.
Also please remember that one answer is not the right answer for everyone. The amount of compression that I find objectionable might be perfectly acceptable to someone else. So don’t turn your nose up and ruin other people’s enjoyment just because it doesn’t meet your standards. If people are having fun, or the message is getting across, aren’t the most important parts of audio being accomplished?
And yes my photographer friends the same thing can be said about JPEG compression. I start with RAW and then I decide how to impact the image as I process it to JPEG or other formats.
Making the interface work for me
Often the controls for a piece of software are not in the friendliest locations for one-handed operation. By one-handed operation I mean one hand on the keyboard, one hand on the mouse. When working in graphics programs I find myself working that way quite often. It could be as basic as a drawing program where I need to use the Z key to initiate the zoom function and then use the mouse to decide where to zoom. Other times it is more complex, such as selecting an image, zooming into a one-pixel-to-one-pixel rendering, panning, and then marking the image as a keeper or a chucker. It could just as likely be a drawing program where I am documenting an idea. For my #AVTweeps, just think AutoCAD.
Recently I found myself sore at the end of an image review session from unnatural movements. My data management workflow is outlined in a previous blog post. Looking at the actual process, I began to find lots of hand movement. My review process is based around Adobe® Photoshop® Lightroom® (quite the mouthful, so Lightroom for short). The program itself is very powerful and does help me manage my images, pictures, and photos, but it lacks some ergonomics for the one-handed user.
The way I cull images is to go into the Library module and review the images at a resolution that fits the screen. I then quickly look at each one and decide if it is a Pick, Unmarked, or a Reject. These selections are made using the P, U, and X keys. Notice how they are laid out on the keyboard.
Not very easy to navigate with one hand. Now let’s say I want to zoom into an area: one can either use the mouse to enter a 1:1 view or press Shift and Spacebar to enter the same mode, then use the mouse to zoom to areas. I do this to see how much aberration is visible and whether the shot is in focus; once again I decide if it is a Pick, Unmarked, or Reject. Lightroom has a setting to advance to the next image after assigning a value to the current one.
That setting seems like it would save time, and it does quite often. However, if I want to assign two things to an image, I have to back up to that image. If I find an image of the same subject later in the batch that is better than one I already picked, I go back to unmark the previously picked image. So now I have a few options. I can expose the filmstrip at the bottom of the application window, click on the image with the mouse, and then press U. If it was just the previous image, I can use the arrow keys. Notice that both of these options require me to take my right hand off the mouse and place it on the right half of the keyboard. I could also use my left hand on the right side of the keyboard, but that still means changing positions.
Let’s say I want to see if a crop makes an image better. An example of a crop changing an image happened at the baseball game I photographed; since I was sitting in the stands, some of the images have the backs of people’s heads in them. Cropping the heads out made the pictures better, but some were still chuckers, not keepers. In Lightroom I enter crop mode by pressing R, which switches to the Develop module, where I use the mouse to make the crop. When I finish the crop, I want to mark the image as a keeper or chucker. I cannot do that in the Develop module; I have to be in the Library module. To return to Library mode I would either take my right hand off the mouse for the keyboard contortions or move the mouse away from the work area. Neither solution is very ergonomic.
There are keyboards available that are designed to fix some of these issues by changing the keyboard layout and having labels on the keys. However, some are more expensive than the program itself. They are also dedicated to the one program, so I would still need my regular keyboard for things such as entering text. Not really the solution I was looking for.
I started thinking about it more and more and came up with a more practical solution in my not so humble opinion. I purchased a customizable gamer keypad, a Logitech G13 Programmable Gameboard with LCD Display as it is Mac compatible – yes it is also Windows compatible. (If you decide to buy one after reading my blog, using this link will give me a little commission.) This would let me decide how the keystrokes would be used. I could lay them out to my satisfaction.
I then determined what keys I used most. They are both left and right handed, and some of them require multiple hands, such as entering Library Mode (Command + Option + 1).
These main keys were then assigned to the keypad as I found would work best for me. (Drop me a line if you would like a copy of the configuration file.)
I had 200-plus images from a business trip and figured that would be a great way to test it out. So I went through the images and did the rating, cropping, and keywording in about an hour, including uploading to a SmugMug gallery. There was another, unexpected benefit: I was able to hide all of the tool palettes in Lightroom so the images were bigger on the screen during the review (remember, bigger is better). I do not have exact times for similar tasks using the “standard” keyboard commands, but the important thing is I was not sore and it was not as tiring.
The keypad allowed the thing that I think all tools should do, get out of the way and let me work. It did just that. Other than when I had to type in keywords, I used just the keypad and the mouse. I did not have to move my hands around the keyboard and mouse.
I also learned a couple more tricks in the process. I can use the keypad in more than one program, but keep the key functions the same. By key function I mean that the same key that sends an R to enter Crop mode in Lightroom can be configured to send a K in Photoshop or Command + K in Preview to perform the crop functions. The same key press to me, sends different keystrokes to the application. Much easier than having to remember all the different commands, similar to Cut, Copy, and Paste being the same in almost every program. That is a fine example of what I was trying to accomplish; cut (Command + X) copy (Command + C) and paste (Command + V) are not great mnemonic devices at first blush but the arrangement of the keys makes it very easy to use.
Are you slowing down the Internet?
As my faithful Twitter reader knows, I have been having some issues with my computer attaching to the network at the office. It has been Outlook locking me out, Windows Domain Server locking me out, IT changing the network configuration, entire system going down… etc. Some of these issues were due to the configuration changes that IT is making, some were unforeseen, some were just plain dumb luck.
Something that surprises me, though, is that for how much we like to cast aspersions on IT, sometimes we are our own worst enemy. By we, I mean the users. Not just at my company but pretty much everywhere, IT has a love-hate relationship with the users, and the users love to hate IT. I am not saying that IT is beyond reproach, but some of the decisions we users make often make things worse for everyone.
One of the most common complaints I am hearing is about the speed of the Internet. The next most common complaint is that many IT departments limit streaming or some of the social network options. These concerns and complaints are all interrelated, and it comes down to a matter of size.
Many offices are connected with a T1 connection, which sounds “fast” but in reality is not so much. A standard T1 is 1.544 Mbps (megabits per second). The typical upper limit on residential DSL is 3 Mbps. Cable is much faster, with an upper limit of 30 Mbps. Based on that it is easy to see why people often say, “The Internet is much faster at home.” Of course the first question is, why not just bring in something other than a T1? Yes, it is possible, but most businesses are looking at uptime and guaranteed bandwidth. Most contracts for a T1 or similar service state that you will have a certain level of uptime or availability as well as guaranteed minimum speeds.
Most residential broadband services rate the speed as “up to 22Mbps” or something similar. They also typically do not have a guarantee on your uptime or availability. The Comcast Guarantee does not have a guarantee for availability or speed; the Residential Agreement also does not have a speed or availability commitment, the only credits occur after a 24 hour continuous outage. The business agreement has the same issue of lacking performance commitments.
So if I were running a business would I rely on a connection that might be non functioning for a day with no speed minimum, or would I rather have a higher availability and slower speed? I would take the one with a real service level agreement of what bandwidth and connectivity will be delivered.
The next item that impacts the speed is the number of people using that connection to the Internet. At your house, where speeds might be up to ten times faster, you will typically have no more than four people using the connection at the same time. Now compare that to a business environment; forty people sharing a connection would not be unheard of, would it? Not only is there less bandwidth, but more people are using it.
So if there are 40 people sharing a 1.544 Mbps, or 1,544 kbps, connection, let’s divide it equally: each person gets 38.6 kbps. Remember dial-up modems at 33.6 kbps? Now one user decides to stream a video; typical bandwidth options are 300 kbps, 500 kbps, or 700 kbps. If the user streams the video at 700 kbps, they have effectively used half of the entire T1 (okay, it is only 45%, but don’t forget the rest of the content on the page). So now, because of one person, everyone is experiencing delivered speed that can be slower than a dial-up modem. Remember, the bandwidth is shared by everyone.
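The back-of-the-envelope math is easy to check. A T1 is 1.544 Mbps, i.e. 1,544 kbps; the figures below just divide it up:

```python
T1_KBPS = 1544          # T1 line: 1.544 Mbps
USERS = 40

# Everyone's fair share if the pipe is split evenly:
per_user = T1_KBPS / USERS
print(f"Fair share per user: {per_user:.1f} kbps")           # 38.6 kbps

# One user streaming video at 700 kbps:
stream = 700
print(f"Streaming share of the T1: {stream / T1_KBPS:.0%}")  # 45%

# What the other 39 people now split between them:
remaining = (T1_KBPS - stream) / (USERS - 1)
print(f"Remaining per user: {remaining:.1f} kbps")           # 21.6 kbps
```

So one video stream leaves everyone else with roughly two thirds of dial-up-modem speed. That is the whole argument in three lines of arithmetic.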
Yes, the same thing happens in hotels, coffee houses, airport lounges, etc. Bandwidth is shared.
So if I were responsible for the productivity and availability of the Internet at a business, what is the first thing I would do? Turn off streaming. Why? It is a bandwidth hog, and there are typically more important things to use the bandwidth on that will directly impact staying in business.
Yes, I still think that many IT departments make decisions that are not helpful to the end-users. Yes, I think that the help desk often doesn’t. I just want to point out that we as the users are sometimes the problem. Please, before you decide to fire up Pandora or Slacker, or surf YouTube, think about whether you are slowing down others. Don’t be a bandwidth hog.
My solution? I take lunch after most people and stay later than most. Why? Since everyone has left for lunch or for home, I get better bandwidth. I also listen to music using my iPod.
Understanding the Process
As things become more and more automated, I feel that understanding of the process is being lost. I believe that tools should make my life easier and allow me to spend my time doing other things. However, there is a downside: does one always understand what the automation is accomplishing? While these tools can be great timesavers, what happens when one doesn’t work or you don’t like the results? Understanding the process that the automation is simplifying is key.
A common example is defining an IP network. Most people simply connect to a network and let a Dynamic Host Configuration Protocol (DHCP) server assign the address. This happens at the office, the home, the coffee shop, pretty much everywhere. When it doesn’t work, for whatever reason, where to start troubleshooting is a mystery to some. I use DHCP quite a bit; I also know how to do the entire process manually. I can manually – not that I want to – calculate the subnet and assign the addresses. When there is no DHCP, I am still able to get connected. If I am still unable to connect, I am able to call tech support and describe the problem effectively.
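For readers who want to see what the manual calculation involves, here is a minimal sketch using Python’s standard `ipaddress` module. The network and host addresses are made up for illustration; the point is that the netmask, broadcast address, and usable host range all fall out of the network definition that DHCP normally hides from you:

```python
import ipaddress

# A made-up static configuration, the kind DHCP would normally hand out:
network = ipaddress.ip_network("192.168.1.0/24")

print("Netmask:  ", network.netmask)            # 255.255.255.0
print("Broadcast:", network.broadcast_address)  # 192.168.1.255
print("Hosts:    ", network.num_addresses - 2)  # 254 usable addresses

# Sanity-check that a manually assigned address fits on this subnet:
host = ipaddress.ip_address("192.168.1.42")
print(host in network)  # True
```

Knowing that a /24 means 254 usable hosts, and that an address outside the subnet simply will not talk to its neighbors, is exactly the kind of understanding that makes troubleshooting possible when DHCP is not there to do it for you.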
While IP networking is a common example, it occurs with other technologies as well. I have an interest in photography and have been doing more processing on images. Some of the process I do manually; for other parts I use automation tools. An example of this process is this picture of Martin Brodeur I took.
Straight out of camera, no processing
I took the shot in shutter priority, a manual mode, and I told the camera where to focus to get Brodeur sharp and the background blurry. I could have achieved a very similar effect using the Portrait Mode preset in the camera, but I wanted to control the look of the picture. After I took the picture I did some work on it in Lightroom and Nik Software. In the process I adjusted for the lens, applied a vignette, applied noise reduction, and converted it to black and white. This process was a mix of manual and automated. I could have just clicked a few buttons and called it done. Instead I made decisions along the way, and I understood the impact of those decisions. I was able to decide the final mood of the image as a result.
Processed picture, click to see entire gallery
This result is much better because I controlled the process and got the result I wanted. Did using the automation for part of it save time? Yes it did save time. Since I had taken the time to learn about the conversion process http://www.dgrin.com/showthread.php?t=114917 I was able to understand the questions and obtain the result I wanted. Now if you will excuse me, I need to troubleshoot my network as the Wii is not connecting to the Internet.
The medium is as important as the message
Last week I participated in a Twitter chat, an #AVChat hosted by @AVWriter (Linda Seid Frembes), and I was the guest “talker”. It was an interesting experience. I am by no means a digital media expert, but I think the idea of a Twitter Chat or Tweet Chat or TwitChat is very interesting. The idea of allowing people to connect and share experience and ideas in the virtual world is a good one. Very similar to guild meetings in previous times, it allows people in the same trade to share knowledge. It also allows people to participate how and where they are available, with the option of a replay or transcript. It worked well; however, it turned into more question-and-response than conversation.
I in no way blame the #AVTweeps or Linda. I think it is the medium, Twitter. The idea that one could connect and participate as they preferred seemed interesting, but I am not sure it is appropriate for a moderated talk. Even though we were trying to make the gathering as portable as possible, there were still some vestiges of old-school technologies. I was online with five devices with various Twitter clients on them, and the best communication tool of the bunch was what I had attached to my head: I was talking on the phone with Linda. Part of this connection approach was that it was a first for both of us to have a moderated chat using Twitter.
The plan was that Linda would tell me the next question so I could prepare my answer; she would then tweet it out, I would answer it, and discussion would take place. What ended up happening was that she would tweet the question and multiple people would respond at once. It would be similar to someone asking a presenter for an image of a tree and members of the audience also presenting images of a tree. It was interesting to see everyone’s response, but I am not sure it is possible to comprehend all that information and respond all at the same time. I found myself reading on one screen and responding on another. The way I avoided missing questions was that Linda would alert me via phone that something went by that I should answer.
The medium of Twitter was such that there was too much information flowing. I am not saying that is a bad thing. As I look back and read the transcript, I can comprehend more than I did in “real time”. I liken the TwitChat to trying to hold a class in a trade show booth. People are paying attention, there are lots of conversations going on, ancillary distractions occur, and ultimately some information is missed. The information is not ignored, and people are not being disrespectful or malevolent; it is just that there is so much going on that things are missed.
The same issue occurs in photography: viewing a timed image presentation is not the same as looking at a static image at your own pace. The root cause is that the interval might not match the viewer’s speed. The timing could be too slow, so the observer gets bored; too fast, so the image is not truly viewed; or the sheer number of images could overwhelm the audience. Obviously each person is going to have their own opinion of what the proper timing is.
Given the option of watching a presentation that self-advances versus one I control, I will pick the viewer-controlled one. If I have the choice of going to a gallery and strolling through artwork at my own speed or watching a presentation of the images where I can’t choose the timing, I will pick the gallery.
Of course there are applications for all of these mediums. A digital picture frame that advances once a minute of snapshots in an office with a large print of a photograph hanging behind the desk to allow for longer viewing could be the perfect solution. (The differences between an image and a photograph.) Having timed previews on the front page of a website while still allowing visitors to browse content on following pages is the same idea. A TwitChat could be just as effective for some as having a formal online seminar with a moderator and a PowerPoint or Keynote presentation.
It is simply a matter of selecting the right medium for the message. There is no one right answer; you have to pick what works for you as a viewer. As a presenter, think about how you can let your audience determine their preferred method without ruining your message. It is not an easy process, and it is sometimes overlooked, but it is important to consider.
If you are wondering why I did not say “slideshow” for presentation, it is because to me a slide is an image on transparent media that is placed in a projector or viewer for display. It is its own medium, just as analog is different from digital in audio. I wanted to make sure the idea was clear.
Just because it is on the Internet doesn’t mean it is free
Recently I ran across this story http://thestolenscream.com/ about a picture that was taken from a photographer’s Flickr site and used around the world. He was not compensated. It is both an amazing story of how something can go around the world just by being good and of how at times people’s work is stolen. The video is 10 minutes long and well done. The back story and video link are available at http://fstoppers.com/fstoppers-original-the-stolen-scream/
Notice what I have done above, I clearly indicated where the information is located. I could have just as easily gone into YouTube and gotten an embed link to put into my blog. I also could have just as easily downloaded the video and edited out the credits. But that is an insult to the people who created it. I am basically stealing their time and effort.
I know that some of my readers are more familiar with audio video system integration than with photography. The same thing occurs there and elsewhere as well. It might not be a picture; it could be a grounding scheme or a user interface panel, just for a sample. Perhaps it is finding information on a manufacturer’s website and including it in your information package. Often manufacturers are okay with that if you are using the information to sell and use their products. However, that is not always what happens.
Last year I was very surprised when someone called me to complain about a training video I did that was on YouTube. I was not surprised that I got a complaint; rather, I was surprised that it was on YouTube. I did not upload the video there. I uploaded it to my work website. Not a huge deal, as it was information about our products, but then it started to sink in. This website had taken someone else’s work, made some edits, and was presenting it as its own. They even placed their company logo over the video.
Someone else was duplicating all of the time and effort placed into the video. I understand that anything on the Internet is capable of being copied. What annoyed me the most was that the effort put forth to collect and present the information was not being recognized; someone else was just taking it.
That seems small, no one harmed, right? That is somewhat correct. My company paid for me to make the video, and the product was still being promoted. However, what happens if it is not a sales tool but rather a picture of a landmark, a presentation about a topic, a system design, or a configuration file for a piece of equipment?
The information is being provided without compensation to the creator or even acknowledgment. Basically that person’s time, effort, and knowledge is being stolen. If it is licensed under Creative Commons terms the creator expects certain respect in the process. If it is not expressly stated that it is okay to use, it should not be used.
The best example is someone who is creating a presentation or proposal and needs a picture of a movie theater. I found a nice theater image on Wikipedia taken by Fernando de Sousa from Melbourne, Australia, licensed under the Creative Commons Attribution-Share Alike 2.0 Generic license. That license requires attribution. Mr. de Sousa is a professional photographer. He takes pictures for compensation. He shared his work, the results of his skill, equipment, experience, and knowledge. All he asks for is credit. Will you provide it?
Think about it another way. You went through the process of creating a proposal for a project. You outlined the equipment and process you are going to use. You provided information about why you chose that approach. The person you made the proposal to decides not to hire you. Instead they take your proposal package and use it to create the project themselves. Would that annoy you? Would you expect compensation? How about if all you asked for was attribution?
So I ask everyone to please respect the Intellectual Property, time, effort, and knowledge that is provided on the Internet and provide attribution at least. Don’t take credit for other people’s work.
I am off to go place watermarks on my stuff, if you would like to use an image without it, just ask.
The airplane challenge for help, software, and interfaces
Another blog post written at 32,000 feet as that is when the issue hit me. I have various electronic devices as my dedicated reader knows. I have previously talked about various data access connection challenges. This new challenge is not one of my own doing. It is a poor user experience or use case definition. This problem was illustrated by Amazon and their Kindle applications, but it does not apply to just them. This challenge happens to many applications beyond this example.
I have found a case where the advantages of electronic delivery of a book outstrip the disadvantages I previously outlined. This happened with a “for Dummies” book. At work, I am on a software implementation team rolling out a new application package. I wanted the “for Dummies” book for the application. I looked at Amazon, and the book was available both in paperback and in Kindle form. The Kindle form was substantially less expensive, but the key item was that I could get literally instant delivery. While on a conference call I was able to purchase the book, take delivery of it, and reference it during the call. It was very powerful and better than using Internet search tools, as it has a high signal-to-noise ratio and no rabbit trails.
The next day I had a business trip, so I had my analog reading material and my electronic versions. On the plane flight I started to truly read my newly purchased book. It was also the first time I had started to explore some of the Kindle application features. I saw that there were sections of the book that were underlined. Not solid underlined text, but a dashed underline. I was not sure what it was at first, but I found out that it meant that other readers had highlighted that passage. The idea of crowd-sourced highlighting was intriguing to me; it helps to know which areas one should pay attention to.
I wanted to see what other features were available. My brain needed a little break from thinking about business practices. I was going to use that time to browse through the help file and see what other features were available that I might not be using in the Kindle application. I was airborne when I wanted to do that. I had no Internet access on that flight. As a result of not being connected to the Internet the help file was not available.
That seems very counterintuitive: why would an electronic reading application not include a help file with it? Think about that for a moment. Something that is designed to read documents while disconnected from the Internet is not able to read its own help file while not connected. It is not just Kindle that has this design flaw. Cloudreader, Nook, and iBooks for iPad do not have a help file that is readily available. I am sure that I could continue to list others as well. It also occurs with applications for workstations.
Not all applications are that short sighted. Two applications on my iPad have help that is available offline. iAnnotate and DocsToGo install their help file as a document you can read from within the applications.
Makes perfect sense to me. An application that is designed to be portable should have supporting documentation that is portable. So for those of you involved in the design and creation of applications, think about the user that is not connected to the Internet. They might want to refer to the supporting documents; you should make it easy for them. The fact that I turned to the help file already means that the application is not intuitive enough. Do not compound the issue by making it difficult to find the help.
This concept also applies to those of you who are creating custom control interfaces using software created by others. On more occasions than I would care to count I have ended up troubleshooting a control system and having to guess. These guesses could range from which IP addresses to use to connect to the system, to what the control system is using for the backend, to how to get help.
For the application users, I recommend that you try out your applications before you are traveling with them or disconnected from the Internet, to make sure you understand how to use them. The help files might not always be available.
Well the fasten seatbelt sign just came on….
<note this post was recreated after a website crash, good thing I backed it up>
Why Net Neutrality Matters
Over the past few weeks there has been talk about Net Neutrality, including the FCC making rulings. I will be the first to admit that my writing about the issue is a little late, as the decisions have already been made. The decisions are not final, though, and with Joe Lieberman now wanting to be able to turn off the Internet it is time for us to get more involved with the issues.
The item I am concerned about is what happens when Internet access providers start favoring their services over the competition. Now some will say that there is the ability to change the provider of high speed Internet. That claim is not entirely true. Just as one cannot, in the United States, freely choose which cable television company to use, one cannot freely choose which high speed provider to use. The Internet providers are limited by both technological needs and government mandates. Yes, one can use satellite or wireless or other solutions, but it is not always comparing equal delivery of services. Think about the issues AT&T had with traffic saturation and the iPhone.
Currently my options for high-speed Internet access at my home are:
- Comcast Cable Modem (22 Mbps down / 6 Mbps up)
- AT&T DSL (1.5 Mbps down / 384 kbps up)
- Earthlink or other dial-up (0.0336 Mbps down / 33.6 kbps up)
- HughesNet (2 Mbps down / 300 kbps up; capped at 400 MB of data a month)
- FiOS and U-verse are not available
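To make the gap between these options concrete, here is a quick back-of-the-envelope sketch (the 700 MB file size is just an illustrative assumption, and the speeds are the advertised down rates above):

```python
# Rough time to download a 700 MB file on each connection
# (decimal units: 1 MB = 8 megabits / 1000)
options = {
    "Comcast cable": 22.0,   # Mbps down
    "AT&T DSL": 1.5,
    "HughesNet": 2.0,        # ignoring the 400 MB monthly cap
    "Dial-up": 0.0336,
}
file_megabits = 700 * 8      # 700 MB ~= 5,600 megabits

for name, mbps in options.items():
    minutes = file_megabits / mbps / 60
    print(f"{name}: about {minutes:,.0f} minutes")
```

The cable modem finishes in about 4 minutes; dial-up would take almost two days.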
So given these conditions I am pretty sure that all of us would choose Comcast. Also given the pricing structure, Comcast makes the most sense financially. Now Comcast has some programs in place to provide additional services through them for their customers’ use. Comcast offering Mozy is an example of extra services.
From the Comcast press release: “Comcast High-Speed Internet customers automatically receive 2 GB of storage included with their subscription. This amount allows for storage of up to hundreds of photos, music files, or thousands of documents. Comcast also offers a 50 GB storage plan for $4.99 monthly or $49.99 annually, and a 200 GB storage plan for $9.99 monthly or $99.99 annually.” The webpage http://security.comcast.net/backup/details/ outlines the basic examples.
I knew that I needed more than 2 GB of backup. I wanted offsite storage in addition to backup. The differences between storage and backup can be subtle, but that is another blog post. After looking at the options I decided to use JungleDisk; it is less expensive per month and has other features I want.
One can easily see how JungleDisk is competition to Mozy. They offer similar services and both require high-speed connectivity to work effectively. What happens if Comcast were to decide to put priority on the traffic to Mozy and degrade the traffic to JungleDisk?
The issue of how one selects a service becomes much more complex. If the bandwidth I am using to connect to JungleDisk is throttled back, wouldn’t that change my experience and cause me to think about another solution? All of a sudden Mozy would be much more of an option as a result of being much faster for me as a Comcast user. Having a backup take an hour instead of two hours can be a very big deal, especially if one is trying to back up data before leaving on a trip.
Now you might ask under what guise Comcast would throttle traffic like that: “network management”. I can easily see a situation where Comcast would decide that backups running at 2 AM on everyone’s computer were causing congestion. The first move for any reasonable business is to make sure its customers’ and partners’ experience is optimized, to keep the complaints to a minimum. The majority of the users might be using Mozy since it is included, and I would be in the minority using JungleDisk. So a decision to correct the problem for the majority by giving priority to Mozy would make sense from a customer satisfaction standpoint. I am glossing over the ways this management can be done; it is not just how data is transmitted to my location, it is also how the traffic is transmitted across the interconnections of the Internet itself.
Due to the partnership between Mozy and Comcast and possible bandwidth management, Mozy might gain me as a customer while JungleDisk would lose me as a customer. Beyond that I would lose as a consumer as the choice I made would be compromised. I would have to look at the ability to use the service not just the price of the service.
This issue can be applied to many other products: virus protection software, website hosting, picture hosting, voice services. Yes, Vonage and Skype can be blocked, and already have been blocked, by Internet Service Providers, the same ones that offer phone service. The FCC did require the voice services to be unblocked.
To paint with a very wide and absurd brushstroke, it would be akin to the electric company also selling light bulbs. Of course their light bulbs work better for most users. People could not tailor their light bulb choices because the power was optimized to work with the electric company’s bulb vendor. So to get effective lighting, the user is relegated to purchasing what the electric company is selling, even if it is not the best solution for them.
Let me know if you want me to talk about Comcast now having NBC/Universal content. I am sorry, why is Netflix or ABC or Fox or Hulu or …. streaming so slowly?
So when people talk about Net Neutrality, it is not just something for the technophiles. It can impact anyone who uses the Internet.
Data Backup and Access
I have found a few things out over the past few weeks that I figure I will share with you, my faithful reader. I had a logic board failure on my MacBook Pro, which meant that I was sans laptop for approximately 10 days. Then, less than 12 hours after I received it back, the cable modem at my house failed.
So between not having my personal laptop and then Internet access being a car ride away, I discovered some items along the way.
- Backing up Data is important, but one also needs access to the data
There are a few other tangential things I have found out as well, such as changes to my photography workflow, why online instructions should not be the only instruction, and how unfettered Internet access can be a key item, but those will be separate posts.
Thanks to my backup solutions none of my data was in jeopardy; using that data, however, was the challenge. I have been using JungleDisk as my incremental offsite backup solution. It works very well for me, but it came with some choices that I did not fully understand when I made them. Using a block copy approach I could reduce the amount of bandwidth and storage space I use; however, this does not come without tradeoffs. By making this choice I would be unable to browse the files online; I would have to actually restore them using the client software. At the time I did not think that was a big deal, as I figured I could always just install the client on another computer and get all the data back.
A key item here is that it is my offsite backup. Too many people think that just having a backup is sufficient. It is not, as there are other things to consider than just a hard drive or computer failure. One has to think of other ways that data can be destroyed: “Someone stole my car! There was an earthquake! A terrible flood! Locusts!!“ Having the data offsite makes it much less likely that data will be lost.
I could have just installed the client on another computer and gotten all the data back, but that still was not going to solve all my issues. As a result of not being able to browse the contents, I am going to change my approach yet again.
Some items will be backed up using block copy, other items will be backed up using file copy, and still other items will be backed up to either MobileMe’s iDisk or to my Dropbox account. You might wonder what data would go to what place and how to keep it all organized; that is actually fairly easy as long as I make the right decisions when starting. Just by putting files into different locations on my computer, they will be backed up in different ways. Items placed in the Documents directory will go to JungleDisk, items in the Dropbox folder will obviously be on Dropbox (I am still waiting for selective sync before I am 100% happy with it), and items stored in iDisk will be on MobileMe’s iDisk.
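The routing-by-folder idea can be sketched as a tiny script. The folder names mirror the ones above, but the mapping itself and the example path are hypothetical illustrations, not anything these services actually install:

```python
from pathlib import Path

# Hypothetical mapping from top-level folder to backup destination,
# mirroring the "right folder = right backup" decision described above
ROUTES = {
    "Documents": "JungleDisk",
    "Dropbox": "Dropbox",
    "iDisk": "MobileMe iDisk",
}

def backup_destination(path):
    """Return the service a file would be backed up to, based on its folders."""
    for part in Path(path).parts:
        if part in ROUTES:
            return ROUTES[part]
    return "local-only (DVD / secondary drive)"

print(backup_destination("Documents/taxes/2010.pdf"))  # JungleDisk
print(backup_destination("Desktop/scratch.txt"))       # local-only (DVD / secondary drive)
```

The point of the exercise is that the decision is made once, when the file is saved, rather than every time a backup runs.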
The key to this approach is to make sure that a file is stored in one location and only one location for live data. I have often encountered problems where two files have the same name but different time stamps, or are on different computers, so how do I know which one is current? Since all of these items are backed up to the "cloud" of the Internet, I do not have to worry greatly about the loss of data. I still do backups to DVD and secondary hard drives every so often so that I am not completely at risk. For items that I want to back up in more than one location (I have not hit any yet), my plan is to use ChronoSync to keep a "Backup" directory in sync. This will allow me to create a directory in one of the other storage locations labeled KeyJDBU (Key JungleDisk Backup items); then I can use ChronoSync to decide what to copy into it and keep in sync.
This approach of also having the key items in iDisk or Dropbox will allow those items to be browsable without having to restore all the data. It still does not solve another key issue: do I have access to the programs to use the data once restored? I found that quite often the answer was no. Most of this situation was my own fault, as I chose what format to store the data in. Once again I could reinstall and have the data back, but that would take a while, especially with the licensing headaches some companies have put in place (that means you, Adobe). I am now considering how to handle that issue.
Sometimes numbers don’t do something justice
So I use SmugMug to host my photos as they have some really cool features and people there. I also started following a few of them on Twitter, and there was a tweet that just made my head hurt, so I sat down to do the math on it. Okay, I also used Wolfram Alpha to help with it.
The Tweet from Baldy stated:
“Whoa! Vincent LaForet‘s new Canon Mark IV vid on SmugMug used over 20 terabytes of bandwidth in 300,000 views in 14 hours.”
So I started to try to figure out how many megabits per second that was, so I could compare it to typical network connectivity I am more familiar with: 100BaseT (Fast Ethernet) and Gigabit Ethernet. Well, it just became amazing.
- First I converted 20TB to megabits
- 20 TB = 20,000,000,000,000 bytes = 160,000,000,000,000 bits = 160,000,000 megabits.
Yes, that is 160 million megabits
- The next thing was to convert hours to seconds
- 14 hours = 840 minutes = 50,400 seconds
- Now to convert to megabits/second
- 160,000,000 megabits / 50,400 seconds ≈ 3,175 megabits/second ≈ 3.2 gigabits/second.
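The steps above can be checked with a few lines of Python (using decimal units, where 1 TB = 10^12 bytes):

```python
# Recreating the bandwidth math: 20 TB served over 14 hours
bits = 20 * 10**12 * 8       # 20 TB -> 160,000,000,000,000 bits
megabits = bits / 10**6      # 160,000,000 megabits
seconds = 14 * 60 * 60       # 50,400 seconds
mbps = megabits / seconds
print(f"{mbps:,.0f} Mb/s ~= {mbps / 1000:.1f} Gb/s")  # 3,175 Mb/s ~= 3.2 Gb/s
```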
So that is a pretty freaking fast rate for the data to be coming out at.
Wolfram Alpha had cool comparisons to put it in context. It is approximately equal to the text content of the Library of Congress. It is also roughly 1/8th of the estimated data content of the surface web (~170 TB).
Dang, no wonder they need 2 TB of flash memory for a server. You can see the picture, and Don MacAskill, CEO of SmugMug, here: http://bit.ly/3HlXzH
Signal to Noise
So a few days ago I posted a Tweet that said, “signal to noise is important, not just in audio but in life“. That post was an amalgam of someone’s tweet commenting on the palaver at their job and the amount of Tweets I was getting from one stream. I realize that the single stream is not an indictment of all who use Twitter (Twitterers?).
I figured I would post here what I learned from a quick study over the past week. I am following 40 streams; 32 posted something in the past week, for a total of 522 tweets, or an average of about 16 tweets per stream over the week. However, there was one person who posted 208 Tweets in one week, the vast majority of which were very repetitive and redundant. Since a picture is worth a thousand words, how much is a graph worth?
[Graph: 40% of the tweets came from one stream]
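For anyone who wants to verify the percentages behind the graph, the arithmetic is short:

```python
# Checking the numbers: 522 tweets across 32 active streams,
# with one stream posting 208 of them
total_tweets, active_streams, top_stream = 522, 32, 208

print(round(total_tweets / active_streams))    # about 16 tweets per stream
print(round(100 * top_stream / total_tweets))  # top stream is ~40% of the total
```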
In addition, the person put an identifier on their tweets so that they would trend, and much of the information was coming from AlertDeck. So that person is no longer being followed. The disappointing part is that they actually have something valuable to say; they have just started adding too much noise in trying to market themselves.
So my warning is that marketing via Twitter can be done, but if there is no content everything gets turned off. Stay tuned… I might decide to reveal who the offender is.
Oh yeah, I have also decided that Apple’s iWork Numbers ’08 is not very powerful when it comes to collating data, as I still had to do much of it manually instead of just using a Pivot Table in Excel. I also still can’t activate half my applications…
Interesting E Book Occurrence
So in a previous post I commented on how I was not sure that electronic books were perfect. Well, Amazon did something that I had not thought of: they took back a purchase. Basically they deleted George Orwell’s books 1984 and Animal Farm from users’ Kindles due to rights issues. The complete stories are listed below…
Oh and I could not find anything on Amazon’s site about it…
Back to recovering from the Rally and commute back home