Drupal, the open source Content Management System (CMS) used to power everything from personal sites to the White House's Web site, is legendary for its flexibility and power. But Drupal has also been known for its labyrinthine administrative interface.
Drupal 7 represents a conscious effort to make Drupal easier to use, but the results are mixed.
Drupal 7's installation routine is relatively simple — create a database, modify the sample configuration file, and walk through a very short Web-based install. The experience is comparable to setting up a WordPress blog. It's fast, easy, and you'll have no problem setting up a Drupal site if you have any business at all administering a CMS.
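If you prefer to script the pre-install steps, they amount to creating the database and copying the sample configuration file into place. Here is a minimal Python sketch, assuming a MySQL backend; the database name, credentials, and site path are hypothetical placeholders, so substitute your own:

    import shutil
    import subprocess

    # Hypothetical values -- replace with your own database name,
    # credentials, and Drupal document root.
    DB_NAME = "drupal7"
    DB_USER = "drupal"
    DB_PASS = "secret"
    SITE_DIR = "/var/www/drupal/sites/default"

    # Step 1: create the database and grant access (MySQL 5.x GRANT
    # syntax; prompts for the MySQL root password and requires the
    # mysql command-line client).
    subprocess.run(
        ["mysql", "-u", "root", "-p", "-e",
         f"CREATE DATABASE {DB_NAME}; "
         f"GRANT ALL ON {DB_NAME}.* TO '{DB_USER}'@'localhost' "
         f"IDENTIFIED BY '{DB_PASS}';"],
        check=True,
    )

    # Step 2: copy the sample configuration file; the Web-based
    # installer fills in the connection details from here.
    shutil.copy(f"{SITE_DIR}/default.settings.php",
                f"{SITE_DIR}/settings.php")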
After the installation, Drupal hands you off to the administrator dashboard. This is the "now what?" moment. In its just-installed state, Drupal is sort of the Ikea kit of content management — some assembly is required. Specifically, you need to start adding and configuring modules to extend the functionality of Drupal, and theme the site to achieve the look you'd like.
When I say the user-experience efforts for Drupal 7 had mixed results, I should clarify that I find Drupal 7 to be more intuitive and usable than Drupal 6. The user interface isn't just more attractive (it is) but also includes some crucial enhancements.
Managing modules, for example, is much easier in Drupal 7. Now you can install a module just by specifying the URL for the zip file or tarball, though you still have to use the Drupal community site to search for modules. The WordPress Dashboard allows searching for, installing, and managing plugins without leaving the Dashboard.
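Under the hood, that URL-based install just fetches the archive and unpacks it into the site's modules directory. A rough Python equivalent of what Drupal does for you, with a hypothetical module URL and site path (only unpack archives from sources you trust):

    import tarfile
    import urllib.request

    # Hypothetical example values -- point these at a real module
    # release and your own site layout.
    MODULE_URL = "https://ftp.drupal.org/files/projects/views-7.x-3.0.tar.gz"
    MODULES_DIR = "/var/www/drupal/sites/all/modules"

    # Download the tarball to a temporary file, then unpack it into
    # the contributed-modules directory.
    archive, _ = urllib.request.urlretrieve(MODULE_URL)
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(MODULES_DIR)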
But Drupal 7 still has a way to go before I'd consider it entirely intuitive. You'll need to spend some time with the Drupal docs to become productive with the platform, even if you're just using default modules.
One example — you can enable tracking and get an extremely detailed log of activity on your Drupal site. But it's a multi-step process, and one could forgive an admin for being frustrated that nodes include a "Track" tab whether tracking is enabled or not, with no indication of how to turn data gathering on.
The flip side is that the new user interface is much easier to navigate than Drupal 6's. One major feature that I haven't seen elsewhere is the "Shortcut" concept. If there's an administrative function you use frequently, you can add it to your shortcuts so that it's just one click away anywhere in the administrative interface. For example, to moderate comments, you have to navigate to the Content menu, then the Comments tab, and then click the "unapproved comments" button. If your site doesn't allow comments, this location makes sense; if you're running a site that receives a lot of comments, it's a hassle. A shortcut puts the unapproved comments queue just one click away.
In short, you will find yourself spending quite a lot of time poring over the Drupal documentation while putting your site together. Helpfully, the project requires that modules ship not just a README but also inline documentation covering how to use the module from both the admin and user perspectives. Other projects should take a cue from the Drupal folks here.
Monday, May 30, 2011
Sunday, May 29, 2011
Taking Good Care of Your Laptop Keyboard
The keyboard is one of a laptop's most frequently used components, and also one of the easiest places for dust to accumulate; just look under any keycap. Excessive dust accelerates corrosion of the keyboard's conductive rubber and can damage the printed circuit beneath it.
Tips for Using the Keyboard
Although most manufacturers take keyboard durability into account and optimize the structure accordingly, problems appear after long use: a key stops responding, or the printed letters wear off. Keep the following tips in mind:
Firstly, do not take your anger out on the keyboard. The biggest killer of a laptop keyboard is often the user: after a crash wipes out unsaved work, many people cannot help pounding the keys. This damages the flexible rubber domes that support and cushion them.
Secondly, liquid is the number one killer of a laptop keyboard, and indeed of the whole laptop. Avoid eating, smoking, or drinking around the notebook to keep the keyboard clean. If too much liquid flows into the keyboard, it is likely to penetrate deep into the system and cause short circuits and hardware damage, and such damage is usually outside the warranty, so you will pay for the repair yourself. Develop the habit of eating and drinking away from the computer; it will minimize the chance of problems and protect the laptop.
The Right Way to Clean the Keyboard
If the keyboard needs cleaning, use a vacuum cleaner fitted with its smallest soft nozzle. A truly thorough cleaning requires removing the keycaps one by one, which is tedious. A cheaper preventive option is a protective film for the laptop keyboard: a soft, molded cover that fits over the keys and guards against both water and dust. When picking one, make sure it matches your keyboard's layout and feels soft; a stiff film makes typing uncomfortable. Regular dusting of the keyboard is also well worth the effort.
Saturday, May 28, 2011
New Gmail Widget Tells You All About the People You’re Emailing
Google has introduced the people widget, a new Gmail feature that adds contextual information about those with whom you exchange email.
During the next two weeks, Google will roll out the new feature. It will appear on the right side of the Gmail interface and show a picture of the person whose email you're reading, a short paragraph summarizing the last email you received from that person, subject lines from the past month's emails, an abbreviated calendar showing availability, and documents that person has recently created.
If you’re engaged in a group email conversation or a Google chat, all those participating will be listed along the right side, and above them will be icons leading you to a variety of ways to communicate with them, including chat, email, calendar events and phone calls.
This feature hasn’t arrived on our Gmail interface yet, but we’re eager to try it. It looks like a quick and efficient way to get your bearings when you receive an email.
Friday, May 27, 2011
Thermal Imagers for Swine Flu Detection
Just as X-rays have become a common feature in baggage screening, a number of airports around the globe have now installed thermal imaging units to screen passengers suspected of being infected with the H1N1 virus. Infrared cameras with thermal imaging capabilities were originally installed at major Asian airports during the SARS outbreak of 2002 and at Beijing International Airport during the 2008 Olympics. Since the most common symptom of the swine flu virus is a rise in body temperature, scanners with infrared cameras are an apt way to identify infected travelers.
These scanners work on a simple principle. The attached thermal imagers estimate the temperature of a body passing through based on the amount of infrared radiation it emits: the higher the temperature, the greater the infrared emission. Using this principle, passengers with unusually high temperatures can be quickly identified and escorted for further testing.
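In physical terms, that rule is the Stefan-Boltzmann law: radiated power grows with the fourth power of absolute temperature. A quick back-of-the-envelope calculation in Python shows how small the fever signal actually is, and why the imagers need to be so sensitive (the skin emissivity value is a commonly cited approximation):

    SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

    def radiated_power(temp_c, emissivity=0.98):
        """Power radiated per square meter of skin at temp_c degrees C."""
        t_kelvin = temp_c + 273.15
        return emissivity * SIGMA * t_kelvin ** 4

    normal, fever = radiated_power(37.0), radiated_power(38.0)
    print(f"A 38 C fever radiates {(fever / normal - 1) * 100:.1f}% "
          "more power than a normal 37 C reading")
    # Prints roughly 1.3% -- a one-degree fever changes the emitted
    # infrared power by only about one percent.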
Typical thermal imaging scanners see in grayscale. Integrating the infrared cameras with software and setting an isotherm lets you assign different color palettes to different temperature ranges, mapping temperatures across a human body. The machines can be set up so that when a traveler with a fever (typically above 38 degrees Celsius) passes through the scanner, the hot regions are highlighted in a pre-assigned color. These imagers are extremely sensitive and can resolve temperature differences of less than 0.14 degrees Celsius.
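Conceptually, the isotherm display is just a per-pixel threshold over the camera's temperature map. Here is a minimal numpy sketch of the idea; the fabricated random frame stands in for real imager output, and the 38-degree threshold comes from the screening practice described above:

    import numpy as np

    FEVER_C = 38.0  # typical screening threshold

    # Fake a 240x320 frame of per-pixel temperatures (deg C); a real
    # calibrated imager supplies this array.
    temps = np.random.normal(loc=33.0, scale=1.5, size=(240, 320))

    # Render the frame in grayscale (mapping 20-40 C onto 0-1), then
    # paint every pixel at or above the fever threshold in red.
    gray = np.clip((temps - 20.0) / 20.0, 0.0, 1.0)
    rgb = np.stack([gray, gray, gray], axis=-1)
    rgb[temps >= FEVER_C] = [1.0, 0.0, 0.0]

    if (temps >= FEVER_C).any():
        print("Possible fever: flag traveler for secondary screening")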
Since elevated temperatures are indicative of several other illnesses as well, airports also use infrared cameras as an efficient tool to screen thousands of travelers for dengue and bird flu.
Today, thermal imagers come with a number of high-end features such as hot-spot tracking, alarms, and relay systems. Deploying such equipment at airports helps contain potential outbreaks before they escalate to epidemic proportions.
Having said this, it is vital to remember that readings from infrared cameras cannot be the last word, simply because illnesses such as swine flu have an incubation period of about two days before body temperature starts to rise. Thermal imaging scanners do not detect the virus itself, only abnormal temperatures, yet they still play a significant role in spotting possibly infected travelers.
Thursday, May 26, 2011
Hotmail Exploit Silently Snooped & Microsoft Audio CAPTCHA Easily Defeated
Bad news for Microsoft again, as security researchers prove two very different fails: Hotmail exploited to silently "steal" email, and Microsoft's audio CAPTCHAs defeated. The same audio-CAPTCHA technique also easily broke Digg, Yahoo, eBay, and Authorize.net.
The first fail is not leveled solely against Microsoft, but Stanford University researchers found a way to break popular audio CAPTCHA technology used by Microsoft's Live.com, Yahoo, Authorize.net, eBay, and Digg. In "The Failure of Noise-Based Non-Continuous Audio Captchas" [PDF], the researchers built a program called Decaptcha that can listen to and decipher audio CAPTCHAs. The study called most CAPTCHA methods "inherently insecure." By using Decaptcha, the "per-captcha precision of Decaptcha is 89% for Authorize, 41% for Digg, 82% for eBay, 48.9% for Microsoft, 45.45% for Yahoo and, 1.5% for Recaptcha. We improve our previous work's result on eBay from 75% up to 82%." They concluded that Decaptcha's accuracy for commercially available audio CAPTCHAs rivals crowd-sourced attacks. To exploit the vulnerability with Decaptcha's system would require no specialized knowledge or hardware. "Its simple two-phase design makes it fast and easy to train on a desktop computer."
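The "two-phase design" the paper mentions is easy to picture: first segment the audio into candidate regions, then classify each region. The toy Python sketch below illustrates that shape only; it is not the researchers' Decaptcha code, and the energy-based segmentation and spectral features are illustrative assumptions:

    import numpy as np

    def energy_segments(signal, frame=1024, thresh=0.02):
        """Phase 1: return (start, end) sample ranges where RMS energy
        exceeds thresh -- crude voice-activity segmentation."""
        usable = signal[: len(signal) // frame * frame]
        rms = np.sqrt((usable.reshape(-1, frame) ** 2).mean(axis=1))
        segments, start = [], None
        for i, active in enumerate(rms > thresh):
            if active and start is None:
                start = i * frame
            elif not active and start is not None:
                segments.append((start, i * frame))
                start = None
        if start is not None:
            segments.append((start, len(usable)))
        return segments

    def spectral_features(segment, n=64):
        """Phase 2 input: a fixed-length spectral summary per segment."""
        spectrum = np.abs(np.fft.rfft(segment))
        xs = np.linspace(0, len(spectrum) - 1, n)
        return np.interp(xs, np.arange(len(spectrum)), spectrum)

    # Phase 2 proper would train any off-the-shelf classifier on
    # labeled segments, e.g. with scikit-learn:
    #   clf = sklearn.svm.SVC().fit(features_of_labeled_segments, labels)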
Although the researchers generated 4.2 million audio CAPTCHAs, which are offered as an option for visually impaired users, not all the technologies proved equally vulnerable. Google's reCAPTCHA, used on sites like Facebook, YouTube, Twitter, 4chan, StumbleUpon and Ticketmaster, is far less open to attack because its audio scheme has semantic noise built in, such as conversations in the background.
This is not the first proven flaw, or real-world attack against CAPTCHAs. Last year, Webroot showed how a Pushu variant Trojan was bypassing Microsoft's Hotmail and Live CAPTCHAs to spam users. Now there's more bad news for Hotmail users. For at least two or three weeks, a Hotmail exploit made it possible to trigger an attack to silently snoop on targeted victims' emails and contacts as well as add email forwarding rules to users' accounts.
According to Trend Micro, the attack could be carried out by simply opening or previewing a maliciously crafted email. It required no clicking on a link; instead, if the tainted email was opened, embedded commands would upload the victim's contacts and emails to servers the attackers controlled. It also enabled email forwarding on the targeted Hotmail account, so that attackers could continue to "steal" correspondence and snoop on the victim's email in the future. The vulnerability took "advantage of a script or a CSS filtering mechanism bug in Hotmail" and therefore executed automatically, downloading a script from a remote URL.
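The exploit's details weren't published, but the description points at a familiar class of webmail bug: an HTML filter that strips obvious script yet misses script smuggled in through CSS. The Python toy below illustrates the general class, not the actual Hotmail flaw; the expression() vector shown was an Internet Explorer-era way to run script from a style attribute:

    import re

    # A deliberately naive filter of the kind such bugs slip past: it
    # removes <script> blocks but never inspects CSS.
    NAIVE_SCRIPT_RE = re.compile(r"<script.*?>.*?</script>", re.I | re.S)

    def naive_filter(html):
        return NAIVE_SCRIPT_RE.sub("", html)

    # Script hidden in a style attribute sails straight through, which
    # is why such a payload can run as soon as a message is previewed.
    payload = '<div style="width: expression(alert(document.cookie))">hi</div>'
    print(naive_filter(payload))  # printed unchanged: the vector survives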
Trend Micro said Microsoft has patched the Hotmail bug, but it was being used in in-the-wild attacks. The researchers discovered it after a colleague in Taiwan opened an email purporting to be a security warning from Facebook. After first finding the bug, Trend Micro wrote, "The email message seems to have been specially crafted per recipient, as it uses each user's Hotmail ID in the malicious script that it embeds." They also pointed out an "often ignored" danger to businesses that allow employees to check personal email accounts at work: if an employee is targeted and victimized, it "gives the attacker access to sensitive information that may be related to their company, including contacts and confidential messages."
At the time of posting, Microsoft had not said how many Hotmail users may have been compromised, or whether the targeted accounts, or any Hotmail users, had been notified.
Update: Microsoft Senior Response Communication Manager Bryan Nairn replied, “On Friday May 20, we updated our Windows Live Hotmail service to address a targeted security issue that could allow information disclosure if a customer was affected. No action was required for customers, as they were automatically protected by the update.”
Tuesday, May 24, 2011
Steve Molkentin, Microsoft Career Factor Idol Winner – The Job Seeker
This is the next blog in the continuing series of interviews with top-echelon and renowned professionals. In this blog, I interview Steve Molkentin, Microsoft Career Factor Idol Winner – The Job Seeker.
Enjoy!
Stephen Ibaraki
Steve Molkentin is a passionate IT professional who firmly believes that Information Technology is nothing without a grounding in excellent customer service.
As a career Microsoft IT professional, Steve has been involved in designing and delivering deployments of all the key server and OS products into the enterprise since NT4 and Windows 98, and through the Career Factor program he is seeking to grow his skills and certify as a Microsoft Certified IT Professional on Windows Server 2008 R2 and Windows 7. He's also looking forward to sharing his experiences through social media and speaking engagements during the program and afterwards. A motivated and dynamic leader, Steve is excited about developing a skill set that will benefit a future employer, offering significant value through his knowledge of best practices and their application in the enterprise.
Living in Brisbane, Australia, Steve enjoys spending time with his family (gardening & watching movies & TV with his wife; reading books, playing Lego, & chasing his young son and daughter) and encouraging the IT community within Australia and New Zealand as a core team member of www.autechheads.com – the largest online user group of IT Pros in the region.
He enjoys playing guitar, Xbox/Kinect, reading & generally being a proud #geek evangelist. He often talks too much and enjoys television far too much.
DISCUSSION:
Q:
Steve, thank you for coming in today to share your insights and experiences with the audience.
A: "No problem – honoured to chat with you."
Q:
What initially drove your passion for technology?
A: "I've been interested in technology for a long time. The first indications of future geekdom spawn from Year 10 in High School (circa 1988), where I worked at the local Fruit Shop all summer to save enough money to purchase a Commodore 64 with a 5 & 1/4" disk drive. It was awesome. Beyond that, it was the opportunity to get my hands on the latest gear and help solve problems for people in an area I showed aptitude."
Q:
Can you profile your history up to your selection by Career Factor?
A: "I lost the full-time job I was in just prior to the global financial crisis, and bounced through a few contracts which helped keep the money coming in but didn't allow me to establish myself anywhere, and I felt my technical skill set suffered because of it (in that I didn't progress it as cash was tight and there was little opportunity to get my hands on the latest technology during that time)."
Q:
What was the catalyst in applying for Career Factor?
A: "I saw a friend of mine (Farhan Sattar) tweet about the fact he was applying, and I asked him what it was. I looked into it to discover the applications closed that night Australian time, and left it at that. Another good friend (AuTechHeads Group Lead Matt Marlor) was chatting with me on Twitter with an hour to go to the deadline, and really encouraged me to do it. I filled in the application and uploaded the video with minutes to spare. Ultimately, I saw it as an opportunity to upskill in preparation for hunting for a (full time) job again in 2011."
Q:
Can you describe your personal experiences for the challenges and process leading up, winning, and after winning Career Factor?
A: "Leading into Career Factor I was excited about the opportunity it presented – upskill, be mentored, grow as a technical professional. Hearing I'd been selected in the program was massive...especially as the only person in the southern hemisphere (which presents its own challenges, but such is life)."
Q:
How does social media accentuate what you are doing in Career Factor?
A: "It's critical. To be able to connect with like-minded technology professionals, as well as direct access to product owners and teams within Microsoft really helps when you're working through a section of new technology you don't understand or are having trouble with, they're just a tweet away. It's also good to keep track with my CF buddies and how they are doing, and what's happening within tech trends globally."
Q:
From your experiences with Career Factor, what tips would you provide to job seekers to help them in their journey?
A: "Stay in touch with technology trends; have a great support network of friends & tech professionals to talk to and talk through how you're feeling (being unemployed can really affect how you feel); set solid and achievable goals as far as plans for study and exams."
Q:
What are the most exciting opportunities you are working on with Career Factor?
A: "Studying for certification to update/upgrade my old MCSE to an MCITP cert. It's a lot of work, but entirely worth it. I won't make an MCITP before TechEd like I was planning, but working hard."
Q:
What are your future career aspirations? What are you most passionate about?
A: "Establish myself in an organization where I can have real influence and offer direction that will help the company save money yet utilize the latest Microsoft technologies to help their business run better. More than anything, I want the teams I run to deliver a superior level of customer service for the business and its customers."
Q:
What drives your passion for Microsoft and Microsoft technology solutions?
A: "Microsoft works hard to develop a complete suite of enterprise-grade and market-leading products that exist to help a business compete in a global marketplace. My experience with them has always been that they deliver on what the products set out to do, and in doing that you know that the next version will (generally) add significant features that benefit their customers."
Q:
What are your tips, lessons, and best resources for those wanting a career in computing?
A: "Talk to people in the industry. Network every chance you get. There are lots of good online resources to start learning, quite often including 90-day evaluation versions of products that are the full versions that you can use to gain experience and help with certification. Getting certified helps too, but (at least in Australia) it's not critical. Have a great attitude and be prepared to start on a HelpDesk answering the same question 50 times in an hour. And get involved with a user group – invaluable skills and information can be gained from working with and listening to your peers."
Q:
Why does certification fit into your career plan?
A: "It's proof of what I know: a validation of the effort put in and an acknowledgement of the skills I've worked hard to gain. Microsoft certifications are globally recognized, so I know the skills I have are transferable all over the world."
Q:
Can you describe your experiences with IT communities and then provide your recommendations?
A: "Being involved with a user group has been a complete win for me – the Microsoft user community is alive and well online and offline, and I've found a heap of support, encouragement, conversation and thought that has challenged the way I think about technology. It's critical to any IT professional to be involved… to both learn and give back. I entirely recommend getting involved. I'm connected with a great offline one (Brisbane Infrastructure Group), and a spectacular online one (http://www.autechheads.com) which has challenged the way I think about the application of technology as much as confirmed the way I understand it should be used."
Q:
In all that you do, what are the biggest challenges, and their solutions?
A: "Convincing the organization to spend coin on technology in smarter ways. If the decision has been made to licence a specific way, it can be tough to help the company see the investment by altering that even though significant benefits exist when you do. Also, sometimes convincing the organization OR the tech professional to see the benefit in certification. One sees it as a threat that their employee becomes more valuable to others; the other sees it as a burden (sometimes)."
Q:
Provide your predictions of future IT trends and their implications/opportunities?
A: "Cloud services will feature prominently (but that doesn't do away with the need for supremely skilled IT Professionals). More and more options will be available in the cloud, offering flexibility to the organization that will challenge old "we need metal in the server room" mindsets. Also the locked in SOE on specific hardware will become less commonplace, and more and more organizations will lean on a "BYO" client approach. Services will need to suit this model, and IT teams will need to be prepared and supportive of these changes. Good customer service ALWAYS wins out."
Q:
Please share some stories (something surprising, unexpected, amazing, or humorous) from your studies, work, or time with Career Factor?
A: "The camaraderie within the CF team has surprised me pleasantly, especially as I feel so generally disconnected from everyone being all the way over here in Australia. It shouldn't have, as all the other participants are wonderful people. I'm really looking forward to meeting them at TechEd in Atlanta. Also the support from the IT pro community of my journey has been excellent and very encouraging."
Q:
If you were doing this interview, what 3 questions would you ask and then what would be your answers?
A:
1. What has been the best resource you've found to help you on your Career Factor experience?
A1: The practice exams, no question, along with the Microsoft Press books. Detailed, functional and spot on.
2. How have you found juggling life and study through your time in Career Factor?
A2: Really difficult at some points. Finding the motivation or beating off the demons of anxiety and the feelings of being depressed due to being unemployed has been hard. The support from the wider community has been invaluable.
3. What are you looking forward to most about TechEd North America?
A3: Seeing a much larger community come together and the conversations and challenges that represents. TechEd in Australia is about 3,500 people – North America is (I'm told) at least 3 times that! Can. Not. Wait.
Q:
What three lessons have you learned from your life experiences?
A:
1. Trusting people doesn't cost anything. Believing in them does, but it's so worth the expense.
2. Smile when you talk to people, especially when you answer the HelpDesk phone. Usually people are cranky, and an understanding and positive voice on the end of the phone can usually fend off all sorts of tirades that were previously locked and loaded.
3. Don't be afraid to say no, but don't have it as your default setting.
Q:
Steve, we will continue to follow your contributions with Career Factor and more broadly. We thank you for sharing your time, wisdom, and accumulated deep insights with our audience.
A: "Thanks so much!"
Monday, May 23, 2011
Planning For Social Media Integration
Social media can connect you with a bigger audience quickly, helping to increase your followers and prospects and improve brand credibility. A social media presence often has unexpected benefits, but to leverage them you may need to be opportunistic and up to date on current events and trends. Most brands already use social media, but it needs to be used efficiently to benefit your business. Strategists and practitioners learn that failing to plan is planning to fail; with a little planning, social media integration can work to your benefit.
Social media is essentially a channel, service or network used for intelligence, communication and visibility. If transparency and authenticity were the prevailing maxims of past years, accountability, metrics and outcomes will serve as the foundation of social media success in the years ahead. It takes more than just a Facebook or Twitter presence; effective social media integration needs to address business dynamics. Social media is a real-time communications tool that can be integrated into any aspect of your business, yet many companies treat it as its own entity, a separate department of the company.
Evaluate your objectives for a flawless integration
It is important to ascertain the main goals and objectives of your business. If the objective of the company is to increase sales, it is necessary to understand the current strategies for increasing sales. Before social media integration it may have been your strategy to increase the number of customers by increasing brand awareness. This would have been possible by advertising in magazines or putting up billboards. With social media integration, the strategy remains the same, only the medium changes. Keeping that in mind, you can then advertise on your Facebook page or give out a promotional code to your Twitter followers thereby allowing you to track your results.
Social media allows easy integration into the other aspects of your business. While it is one thing to be involved in social networks like Facebook and Twitter, it is a whole other thing to integrate these networks and communities into your marketing strategies. A social media marketing strategy entails taking the time to step back, assess your organizational aspirations, and align them to your social media goals to engage your customers and donors and enhance your online brand and reputation.
Social media integration for effective results
An effective social media solution gives you goals and metrics to ensure that your efforts have a lasting impact. Once you have spent time listening to and engaging with your customers and constituents, you can develop a trusting relationship. Your followers become contributors and advocates, spreading your message and in turn increasing traffic and revenue.
Sunday, May 22, 2011
Six rising threats from cybercriminals
Hackers never sleep, it seems. Just when you think you've battened down the hatches and fully protected yourself or your business from electronic security risks, along comes a new exploit to keep you up at night. It might be an SMS text message with a malevolent payload or a stalker who dogs your every step online. Or maybe it's an emerging technology like in-car Wi-Fi that suddenly creates a whole new attack vector.
Whether you're an IT manager protecting employees and corporate systems or you're simply trying to keep your own personal data safe, these threats -- some rapidly growing, others still emerging -- pose a potential risk. Fortunately, there are some security procedures and tools available to help you win the fight against the bad guys.
1. Text-message malware
While smartphone viruses are still fairly rare, text-messaging attacks are becoming more common, according to Rodney Joffe, senior vice president and senior technologist at mobile messaging company Neustar and director of the Conficker Working Group coalition of security researchers. PCs are now fairly well protected, he says, so some hackers have moved on to mobile devices. Their incentive is mostly financial; text messaging provides a way for them to break in and make money.
Khoi Nguyen, group product manager for mobile security at Symantec, confirmed that text-message attacks aimed at smartphone operating systems are becoming more common as people rely more on mobile devices. It's not just consumers who are at risk from these attacks, he adds. Any employee who falls for a text-message ruse using a company smartphone can jeopardize the business's network and data, and perhaps cause a compliance violation.
"This is a similar type of attack as [is used on] a computer -- an SMS or MMS message that includes an attachment, disguised as a funny or sexy picture, which asks the user to open it," Nguyen explains. "Once they download the picture, it will install malware on the device. Once loaded, it would acquire access privileges, and it spreads through contacts on the phone, [who] would then get a message from that user."
In this way, says Joffe, hackers create botnets for sending text-message spam with links to a product the hacker is selling, usually charging you per message. In some cases, he adds, the malware even starts buying ring tones that are charged on your wireless bill, lining the pocketbook of the hacker selling the ring tones.
Another ruse, says Nguyen, is a text-message link to download an app that supposedly allows free Internet access but is actually a Trojan that sends hundreds of thousands of SMS messages (usually at "premium SMS" rates of $2 each) from the phone.
Wireless carriers say they do try to stave off the attacks. For instance, Verizon spokeswoman Brenda Raney says the company scans for known malware attacks and isolates them on the cellular network, and even engages with federal crime units to block attacks.
Still, as Joffe notes jokingly, there is "no defense against being stupid" or against employee errors. For example, he recounts that he and other security professionals training corporate employees one-on-one about cell phone dangers would send them messages with a fake worm. And right after the training session, he says, many employees would still click the link.
Saturday, May 21, 2011
Google plans expanded 24/7 phone support … but not just yet
In many cases, customers seem eager to switch from costly and complicated Windows environments, Girouard says. "Since we've been doing Google Apps, people say, 'That's wonderful, thank you, but when can you help me with the desktop?' I've heard that a couple thousand times in the last few years."
Microsoft still argues that Google Docs isn't capable of importing all Microsoft Office documents without losing formatting, a problem for people who have to share documents with users of the nearly ubiquitous Microsoft Word.
Google has steadily improved compatibility with Microsoft Office formats, but Girouard admits that it's not perfect and criticizes Microsoft for using the Office install base as a club against Google.
"Ownership of a proprietary format is the famous final stand," Girouard says. "At some point maybe we'll be saying, 'Oh my goodness, Microsoft Word is not very good at importing a Google Doc.' But that's the luxury they have of leaning on their format compatibility, having a proprietary format that everybody uses. But inevitably, in my mind, it's a terrible way to promote yourself to say, 'We have a proprietary format, other people can't handle it as well as we can, so you should use our products.' It's kind of an anti-innovation message that I think doesn't do the market justice and it doesn't do them justice. We'll get better very quickly at importing Microsoft documents because we have to. But if that's their angle to tell people you should stick with Microsoft, I think it's a dead end."
In addition to improving compatibility with Microsoft Word when importing documents into Google Docs, Google offers "Cloud Connect" to let users continue to use installed versions of Microsoft Word while syncing the documents online and collaborating on edits.
Although Google espouses a "100% Web" work environment, it will provide offline access to Gmail and Google Docs in the Chrome browser later this year and is partnering with Citrix to stream Windows applications to Chromebooks. Chrome computers aren't just for Gmail and Google Docs customers, either, Girouard insists.
"I feel like they should legitimately stand on their own," he says. "Of course, they work together really well. But I don't see us as the company that needs to get people to be 100% on Google and that's all they use. It's just not how we design products."
Google claims its own surveys show 75% of workers could be moved from Windows PCs to Chromebooks, but even Google isn't claiming that many people will actually make the switch. Enterprise IT moves slowly by nature because of inertia and hardware refresh cycles, and customers want to see others use a product successfully for a time before making the leap themselves, Girouard notes.
"We're providing bridges because we don't think Microsoft is disappearing from these companies entirely," he says.
Perhaps Google's biggest success in the enterprise is Android. When consumers started ditching their BlackBerries and "dumb" phones for iPhones and Androids, IT shops adapted relatively quickly, allowing access from new types of smartphones to existing corporate email systems. (See also: "The complicated new face of personal computing")
Android as a business tool "is happening with us or without us, frankly," Girouard says. "One of the things we just figured out recently is over 90% of Google Apps businesses are using Android phones. Not necessarily exclusively, but they're using some Android phones. Which, I think, is pretty cool."
Google will continue to improve Android phones, and now tablets, for business use. A Google Docs app was recently released for Android, although without offline access. That's likely to come later. Girouard isn't saying when, but he notes that the same HTML5 caching capability that lets iPad and iPhone users view Gmail offline is essentially what will be used to provide offline Google Docs access in PC Web browsers.
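To make that offline-caching approach concrete, here is a minimal sketch of the HTML5 application cache at work. The manifest name and file list are hypothetical examples, not Google's actual code:

    // offline.appcache -- a hypothetical manifest, served as text/cache-manifest
    // and referenced from the page as <html manifest="offline.appcache">:
    //
    //   CACHE MANIFEST
    //   index.html
    //   app.js
    //   styles.css

    // Script reacting to the cache lifecycle; typed loosely because the
    // applicationCache API is not present in every TypeScript DOM definition.
    const appCache = (window as any).applicationCache;

    appCache.addEventListener("updateready", () => {
      // The browser fetched a fresh copy of the manifest's resources.
      appCache.swapCache();
      if (confirm("A new offline version is available. Reload?")) {
        location.reload();
      }
    });

    appCache.addEventListener("error", () => {
      // Typically fired when the device is offline; the app keeps
      // running from the locally cached resources.
      console.log("Manifest fetch failed; serving cached copies.");
    });

Once the manifest's resources are cached, the browser can load the app with no network connection at all, which is essentially what offline Gmail, and eventually offline Google Docs, rely on.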
Google is providing a Web service for IT shops to centrally manage Android devices. But one thing customers probably won't see is business-level support for Android phones. Given the involvement of carriers and device manufacturers, who often install their own software on Android phones, it's unlikely Google would provide anything close to 24/7 phone support for Android, even for businesses that have deployed thousands of them. That kind of support will be reserved for Google Apps and Chromebooks customers.
"That's not a situation where Google is in good position to be the only support of choice," Girouard says. "We are providing a cloud service to manage Android devices, and Chromebooks as well. But the Android devices themselves, if you have a bad SD card, I don't think that's something we're going to get in the middle of."
Thursday, May 19, 2011
Mobile Applications: The Technology That May Replace Computers
With technological advances happening day by day and the world gearing up for a mobile revolution, there has been a sudden surge of companies into the mobile technology field. Nowadays, mobile users are hooked on phones that offer them many applications and benefits, and they purchase new handsets based on factors like features, the applications a phone can offer, and the platform it runs on.
This rapidly increasing demand for better functionality in mobile phones has given birth to a range of innovative technologies for customized mobile application development. Mobile applications today occupy the place televisions held around thirty years ago. Phones now run on various operating platforms, and customers can choose among Java, iPhone, BlackBerry, Symbian, Palm, Maemo, MeeGo, BREW, Bada, and the latest, Android. This choice of platforms has set off intense competition to develop the applications best suited to each one, and the benefit is passed on to end consumers, who get their hands on better and better mobile applications.
Today, third-party developers build applications with numerous features that have helped shift the mobile phone from a simple calling device into an essential business tool. Thanks to this application software, mobile devices now offer innumerable multitasking functions, giving consumers Internet access, video calling, live TV streaming, email and more, apart from ordinary communication. Windows Mobile devices are one example: the operating system offered multitasking features such as registry access and full file-system access, let users install applications from cab files, and even allowed the entire user interface to be swapped for another.
Windows Mobile devices were considered a replacement for Windows desktop computers, as they came with personal-information synchronization and a complete office suite. Unfortunately, Microsoft's inability to market the phone to the masses and to attract application development for the platform led to Windows Mobile's downfall. Microsoft filled the vacuum by launching the brand-new Windows Phone 7, which has an interface like none other available at present, and it is also working to create an environment that encourages developers to build applications for Windows Phone 7.
Tuesday, May 17, 2011
Microsoft Exchange 2010 balanced by Kemp
Hoping to simplify the ornery task of load balancing Microsoft Exchange servers, Kemp Technologies has released a virtual machine-based version of its Exchange 2010 application delivery controller, called Virtual LoadMaster Exchange (VLM-Exchange).
VLM-Exchange could be valuable to organizations that are finding their needs growing beyond what a single Exchange Server can handle, said Peter Melerud, Kemp vice president of product management. It would allow the organization to set up a second server to help handle the traffic.
VLM-Exchange could also play a role in increasing the availability of the Exchange service by allowing an organization to set up redundant servers that can handle traffic should the primary servers fail. And this virtual appliance addresses a configuration problem faced by administrators upgrading to the latest version of Exchange, Exchange 2010, which doesn't allow Microsoft's internal load balancer and the company's clustering technology to be used on the same server.
Kemp's technology was first released as an appliance a month ago, as LoadMaster-Exchange for Microsoft Exchange 2010. LoadMaster-Exchange is a version of Kemp's general-purpose load balancer, the LoadMaster 2200, that has been configured for Exchange.
The package is pre-configured for a number of virtual services: one handling client access based on RPC (Remote Procedure Call), one for setting up a hub and edge configuration for SMTP (Simple Mail Transfer Protocol), and one for running Web-based services such as Outlook Web Access, Outlook Anywhere and ActiveSync.
VLM-Exchange runs in a Hyper-V virtual machine, which should be ideal for organizations that already deploy Exchange or other software within Hyper-V virtualized environments.
"Exchange works very well in Hyper-V hypervisors, so there will be a significant number of [our] customers deploying Exchange on top of Hyper-V," Melerud said. "Companies can just create another virtual machine within their Hyper-V environments."
This version also addresses a new problem that administrators may face when upgrading to Exchange 2010, one encountered if they run multiple Exchange servers as a single instance, with some sort of load balancing evenly distributing work to each of the machines. For the first time with Exchange, administrators cannot run Windows Server's own internal load-balancing feature, called Network Load Balancing (NLB), and Microsoft's clustering technology on the same server. This change is problematic for organizations that need the clustering technology to manage multiple Exchange servers as a single service.
"In earlier versions of Exchange, you didn't have to have an external load balancer. You could have used NLB. In Exchange 2010, you must have an external load balancing appliance," Melerud said.
Microsoft, which verified Melerud's description of how Exchange 2010 works, suggests either running each Exchange service on a separate server, or running an external load balancer, such as Kemp's. Melerud argues for using a virtual machine-based load balancer as it eliminates the cost for paying for separate server and associated software licenses. It also cuts the labor costs of configuration.
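To see what an external load balancer actually does in front of a pool of Exchange Client Access servers, consider this toy round-robin reverse proxy in TypeScript on Node.js. It is a sketch of the concept only -- the hostnames are hypothetical, and it omits the health checks, session persistence, and Exchange-specific tuning that an appliance like Kemp's provides:

    // Toy round-robin reverse proxy: each incoming request is relayed to the
    // next backend in the pool. Hostnames are hypothetical examples.
    import * as http from "http";

    const backends = [
      { host: "cas1.example.local", port: 80 },
      { host: "cas2.example.local", port: 80 },
    ];
    let next = 0;

    http.createServer((clientReq, clientRes) => {
      const backend = backends[next];
      next = (next + 1) % backends.length; // rotate through the pool

      const proxyReq = http.request(
        {
          host: backend.host,
          port: backend.port,
          path: clientReq.url,
          method: clientReq.method,
          headers: clientReq.headers, // a real balancer would rewrite Host, add X-Forwarded-For, etc.
        },
        (proxyRes) => {
          clientRes.writeHead(proxyRes.statusCode || 502, proxyRes.headers);
          proxyRes.pipe(clientRes);
        }
      );
      proxyReq.on("error", () => {
        // A real balancer would mark this backend down and retry another.
        clientRes.writeHead(502);
        clientRes.end("Bad gateway");
      });
      clientReq.pipe(proxyReq); // stream the request body through
    }).listen(8080);

The point of the sketch is simply that every client talks to one front-end address while the work is spread over several servers behind it; if one backend fails, the balancer can route around it, which is what NLB used to do and what Exchange 2010 now requires an external device to handle.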
Monday, May 16, 2011
Shakers and movers of yesteryear: Where are they now?
Thousands of people have helped shape the networked world in the past 25 years. Here we catch up with a few of them to see what they are up to and how they've changed – or not.
Marc Andreessen. Andreessen co-founded Netscape Communications Corp. with Jim Clark in 1994 to market Andreessen's creation, the Netscape web browser. The public went wild for the Web, and Netscape Navigator. Overnight, the young Andreessen was a tech superstar. An annoyed Bill Gates made it clear that giant Microsoft wasn't going to just stand by and watch Netscape take over the desktop. Redmond fought back with a free browser for Windows, Internet Explorer. The era of the browser wars, as many call it, had begun. It was a David vs. Goliath struggle — and Goliath in this case won. Still, AOL purchased Netscape in 1999 for $4.2 billion. Andreessen went on to find yet more entrepreneurial success, including with Opsware, which he sold to HP for $1.6 billion in 2007, and is today a co-founder of Ning and sits on the boards of Facebook, eBay, and HP.
Eric Benhamou. He co-founded LAN-networking company Bridge Communications in 1981, serving as vice president of engineering until it merged with 3Com (the company co-founded by Robert Metcalfe, Howard Charney, Bruce Borden and Greg Shaw) in 1987. He served as 3Com's CEO from 1990 to 2000. There, he battled it out in a rivalry with Cisco. Typical of the era, 3Com acquired several firms, such as US Robotics (which itself had acquired Palm in 1995), at a fast clip. Not all mergers and acquisitions went smoothly (Kerbango, the Internet radio maker 3Com bought in 2000 for $80 million, was maybe a little ahead of its time). 3Com made the Palm subsidiary an independent company in 2000, and HP purchased Palm last year for $1.2 billion. Benhamou was chairman of the board at 3Com until its sale to HP in April 2010 for about $2.7 billion. He was also CEO of Palm from 2001 to 2003. Today, he is chairman of the board of Cypress Semiconductor and chairman and CEO of Benhamou Global Ventures, a venture-capital firm he founded, as well as teaching at a number of business schools.
Whitfield Diffie. A pioneer in cryptography with his ground-breaking research into public-key crypto (and he coined the phrase "public key" in 1975), Diffie's work helped lay the foundation for new ways to secure and validate shared data. After nearly two decades at Sun, Diffie is now vice president of information security and cryptography at the Internet Corp. for Assigned Names and Numbers (ICANN).
Lou Gerstner. He was chairman of the board at IBM from 1993 until his retirement in 2002. He had been CEO at RJR Nabisco before he joined IBM as CEO to confront a bleak period in IBM's history, when Big Blue was struggling for a new direction after the peak of the mainframe era. Gerstner is credited with the turnaround strategy that pointed IBM in the direction of IT services, packaged solutions and the Internet. Now retired from IBM, Gerstner serves as a senior advisor at The Carlyle Group and to Sony, as well as a director of the National Committee on U.S.-China Relations.
Sunday, May 15, 2011
SIM Only Deals: Avail SIM Cards at Extremely Low Prices With Added Benefits
A SIM (Subscriber Identity Module) card is like the heart of a handset. Without a SIM card a handset is little more than a showpiece, so a SIM is essential to make a mobile phone operable. Technology has advanced so much that today we have handsets that work without SIM cards, but if you want to enjoy services from various network providers, along with the added benefits, you should choose a SIM-enabled handset.
In the current scenario, mobile phones have become a basic need for many people throughout the world. Many subscribers carry more than one mobile phone, so they need more than one SIM card, and nothing beats SIM only deals for picking them up at very low prices. Under these deals, SIM cards are available on contract or with no contract at all. The SIM only contract period generally extends to just one month, so there is no long-term restriction, unlike contract phones with their tied-in SIM cards. You can use a network's services for a month at most and then switch to another network.
SIM only deals work best with SIM free phones, which are usually compatible with global SIM cards and hence easily accept any SIM. In the United Kingdom, Orange, O2, Three, T-Mobile, Vodafone, Talkmobile and Virgin are the widely used network providers. All of these service providers, as well as many private vendors, sell SIM only deals at very nominal rates. Like mobile phones, these deals also come with offers such as "buy one get one free" and free monthly incentives like texts and calling minutes.
In fact, with the increasing trend toward SIM free phones, many service providers now offer expensive and exciting free gifts with SIM only deals. An LCD TV, coffee machine, digital camera, tablet PC or laptop can be taken home free with a very cheap SIM card. In simple words, a SIM card bought at a very low price can get you many pricey and useful products for nothing.
Don't you feel it's a great chance to grab such nice products with a very cheap SIM card? If you think it's worth it, go for it right away.
Saturday, May 14, 2011
Kinect for Windows SDK official
Developers' Beta coming on May 16th
Microsoft has officially announced the Kinect for Windows SDK, opening up a world of possibilities for developers to create new apps, tools and games for PCs.
The new SDK, set for Beta release on May 16th, will give third parties access to the bare-bones tech that saw the motion-sensing peripheral become the fastest-selling consumer device of all time.
The headline feature of the SDK seems to be access to something called "robust skeletal tracking" which allows the tracking of one or two persons within the Kinect sensor's field of vision.
Developers will also get full access to Kinect's advanced audio capabilities, which Microsoft says will include "four-element microphone array with sophisticated acoustic noise and echo cancellation for great audio."
There's also integration with the Windows speech recognition API, as well as "beam formation" to track where sound is coming from, along with full access to Kinect's camera tech.
Since its launch last November, Kinect has seen some inspired software hacks, so it'll be fascinating to see how developers fare now that they've officially been given the green light to create new tools.
Microsoft also announced that it'd be giving each of the developers at the MIX11 forum in Las Vegas a Kinect box to go away and create stuff with.
Developers can sign up to be notified of the release of the Kinect for Windows SDK here.
Friday, May 13, 2011
Top Benefits of 3rd Party Logistics
Third-party logistics (3PL) is popularly defined as a supply chain process wherein one or more logistics functions are outsourced to a logistics solutions provider. Given the immense advantages 3PL offers, it is no surprise that enterprises are looking at these services with increased vigor. 3PL offers diverse solutions that include enhanced cargo management, delivery, warehousing and distribution, and cargo insurance and claims processing, apart from assisting with documentation and forwarding.
In the course of this article we shall discuss the top benefits of third-party logistics, which include:
Reduced capital expenditure means less capital risk: 3PL ensures that such expenditure, and the risks that come with it, do not trouble the enterprise; rather, it is the outsourced logistics partner that shoulders those issues.
Enhanced coordination: Optimized third-party logistics solutions give enterprises strong coordination capabilities, so the flow of goods can be carried out efficiently.
Net value rises as costs fall: 3PL brings economies of scale and of scope, which open up ways to reduce costs and in turn raise the value of the company.
Enhanced competitive edge: 3PL enables enterprises to negotiate better terms, so they can engage services at competitive rates, saving costs while choosing solutions providers that do all the work.
Savings on administration: Since 3PL means outsourcing your logistical headache, the administrative side of logistics is effectively outsourced too; enterprises need not employ separate labor to book and track containers, because even that is looked after by the 3PL provider.
Shipment control rests with the 3PL provider: Owing to their enhanced information services, coordination of supply chain processes rests with the 3PL provider, who can ensure tight shipment control and strong P.O. monitoring.
Accountability for delivery rests with the 3PL provider: Since the enterprise has outsourced all elements of its supply chain, including supply chain quality, it is relieved of the burden of ensuring everything happens on time and delivery goes to plan; the onus rests squarely on the 3PL provider.
With such benefits on offer, enterprises would do well to opt for an optimized 3PL solutions provider to ensure best supply chain quality and functioning.
Wednesday, May 11, 2011
Internal emails show Google's tight control over Android
Google may have become heavy-handed in pressuring its Android device manufacturers to follow certain guidelines, recently released internal documents show. The documents came out as part of an ongoing lawsuit between Google and Skyhook Wireless over Google's insistence that Motorola use Google's own location services.
Skyhook had originally won a contract to replace Google's location services with its own in all Motorola phones. The move apparently bothered the Mountain View, Calif.-based company, and it allegedly pressured Motorola into dropping the agreement. Skyhook then sued Google, alleging anti-competitive behavior.
In one of the emails, from May 2010, Android group manager Dan Morrill makes reference to a "compatibility standard." While such a set of guidelines shouldn't be all that surprising, the way he described it is: Morrill wrote that it was obvious "we are using compatibility as a club to make them do things we want," according to the New York Times.
Such terminology seems to suggest that Google's oft-repeated boast about Android being "open" may not be true. Indeed, carriers have increasingly clamped down on what they will allow phones to do, and now it appears Google is ready to make sure phone manufacturers do what it wants as well.
There could be a valid reason for this, however: unlike Apple, Google must deal with a multitude of devices and ensure that Android works properly on every one. It is the same type of problem Microsoft has with Windows, one that likewise required the Redmond company to set standards for what it would support.
In any case, Google seems to be treading a fine line between acting in the best interest of the entire ecosystem and outright anticompetitive behavior: Morrill's off-color comments certainly give critics fodder that Google is practicing the latter.
Betanews is looking for its readers' opinions on Google and Android. Do you feel that the Mountain View company is heading down the same monopolistic path as Microsoft did more than a decade ago? Sound off in the comments.
We'll run your opinions in a future story.
Tuesday, May 10, 2011
ViewSonic ViewPad 10 tablet: Windows plus Android doesn't add up
As is the case for Android 2.2 on smartphones, the built-in Email application supports only unsecured Exchange accounts, in addition to POP and IMAP. The ViewPad's version of Android doesn't support passwords -- meaning you can't secure access to the tablet's Android partition even at a basic level. There's a VPN feature in its Settings app, but the VPN capability doesn't work; ViewSonic says it plans to fix that issue in a future update.
As in the Windows OS, there's a smattering of apps preinstalled, including Email (but not Gmail), Messaging, Music, Calculator, App Store (which goes to a private app store, not the Android Market), a couple of games, and -- oddly -- the ConnectBot SSH client. I say "oddly" because ViewSonic told me it expects ViewPad 10 users to run Windows 7 for work and employ the Android OS for personal entertainment such as playing music. Never mind that Windows 7 has a perfectly good music player app; perhaps ViewSonic assumes companies will lock that down so that users can't use it.
ViewSonic didn't bother; neither should you
It's Microsoft's fault that Windows 7 isn't really designed to work on tablets, but ViewSonic is to blame for putting Microsoft's OS on a device that's not powerful enough to run it. ViewSonic is also to blame for using a nontablet version of Android on its tablet and for making that OS so awkward to use. It's ViewSonic's fault that its boot loader and Android interfaces don't match the physical button on its case.
I could go on about the ViewPad's heavy weight (1.93 pounds), overly thick case (nearly twice as thick as an iPad 2), high-glare screen, and lack of rear camera. I could note it comes with Wi-Fi only and that its 10-inch widescreen is a very awkward ratio. I could even say how many ports it has. But who cares? If the hardware were better, this tablet would still be unusable.
ViewSonic didn't bother to design its product so that all the components worked together. Instead, it took whatever body parts it could scrounge up and created a Frankentablet. Leave the monsters in the movies, and get a real tablet instead: an Apple iPad 2 or a Motorola Mobility Xoom or a Windows 7 ultralight laptop.
Sunday, May 8, 2011
Microsoft Office 2010 takes on all comers
OpenOffice.org, LibreOffice, IBM Lotus Symphony, SoftMaker Office, Corel WordPerfect, and Google Docs challenge the Microsoft juggernaut
Ask most people to name a productivity suite and chances are they'll say Microsoft Office, but they might also name one of the numerous competitors that have sprung up. None have completely displaced the Microsoft monolith, but they've made inroads.
Most of the competition has positioned itself as being better by being cheaper. SoftMaker Office has demonstrated you don't always need to pay Microsoft's prices to get some of the same quality, while OpenOffice.org proved you might not need to pay anything at all. Meanwhile, services like Google Docs are available for anyone with an Internet connection.
[ Also on InfoWorld: "10 great free desktop productivity tools that aren't OpenOffice.org" | "Great Office 2010 features for business" | Follow the latest Windows developments in InfoWorld's Technology: Microsoft newsletter. ]
Microsoft's response has been to issue the newest version of Office (2010) in three retail editions with slightly less ornery pricing than before, as well as a free, ad-supported version (Microsoft Office Starter Edition) that comes preloaded on new PCs. Despite the budget-friendly competition, Office continues to sell, with Microsoft claiming back in January that one copy of Office 2010 is sold somewhere in the world every second. (Full disclosure: The author of this review recently bought a copy for his own use.)
How well do the alternatives shape up? And how practical is it to switch to them when you have an existing array of documents created in Microsoft Office? Those are the questions I had in mind when I sat down with both the new version of Microsoft Office and several other programs (and one cloud service) that have been positioned as low- or no-cost replacements.
Microsoft Office 2010
Despite all efforts to dethrone it, Microsoft Office remains the de facto standard for word processing, spreadsheets, presentations, and to a high degree, corporate email. Other programs may have individual features that are better implemented, but Microsoft has made the whole package work together, both across the different programs in the suite and in Windows itself, with increasing care and attention in each revision.
Test Center Scorecard (category weights: 20%, 20%, 20%, 15%, 15%, 10%)

Microsoft Office 2010          10  10  10   8   9   9    9.5  Excellent
OpenOffice.org 3.3.0            7   7   7   7   7   7    7.0  Good
LibreOffice 3.3.1               7   7   7   7   7   7    7.0  Good
IBM Lotus Symphony 3.0          7   7   7   7   8   8    7.3  Good
SoftMaker Office 2010           9   9   9   7   7   9    8.4  Very Good
Corel WordPerfect Office X5     6   6   6   5   6   6    5.9  Poor
Google Docs                     7   7   7   7   7   7    7.0  Good
If you avoided Office 2007 because of the radical changes to the interface -- namely, the ribbon that replaced the conventional icon toolbars -- three years' time might change your mind. First, the ribbon's no longer confined to Office only; it shows up in many other programs and isn't as alien as before. Second, Microsoft addressed one major complaint about the ribbon -- that it wasn't customizable -- and made it possible in Office 2010 for end-users to organize the ribbon as freely as they did their legacy toolbars. I'm irked Microsoft didn't make this possible with the ribbon from the start, but at least it's there now.
Finally, the ribbon is now implemented consistently in Office 2010. Whereas Outlook 2007 displayed the ribbon only when editing messages, Outlook 2010 uses the ribbon throughout. (The rest of Outlook has also been streamlined a great deal; the thicket of settings and submenus has been pruned down a bit and made easier to traverse.) One feature that would be hugely useful is a type-to-find function for the ribbon; there is an add-in that accomplishes this, but having it as a native feature would be great.
Aside from the interface changes, Office 2007's other biggest alteration was a new XML-based document format. Office 2010 keeps the new format but expands backward- and cross-compatibility, as well as native handling of OpenDocument Format (ODF) documents -- the .odt, .ods, and .odp formats used by OpenOffice.org. When you open a legacy Word .doc or .rtf file, for instance, the legend "[Compatibility Mode]" appears in the window title. This means any functions not native to that document format are disabled, so the edited document can be reopened without problems in earlier versions of Office.
Note that ODF documents don't trigger compatibility mode, since Office 2010 claims a high degree of compatibility between the two formats. The problem is "high degree" doesn't always mean perfect compatibility. If you highlight a passage in an ODF document while in Word 2010, OpenOffice.org and LibreOffice recognize the highlighting. But if you highlight in OpenOffice.org or LibreOffice, Word 2010 interprets the highlighting as merely a background color assignment for the selected text.
Exporting to HTML is, sadly, still messy; Word has never been good at exporting simple HTML that preserves only basic markup. Also, exporting to PDF is available natively, but the range of options in Word's PDF export module is very narrow compared to that of OpenOffice.org.
Many other little changes throughout Office 2010 ease daily work. I particularly like the way the "find" function works in Word now, where all the results in a given document are shown in a navigation pane. This makes it far easier to find that one occurrence of a phrase you're looking for. Excel has some nifty new ways to represent and manipulate data: Sparklines, little in-cell charts that usefully display at-a-glance visualizations of data; and data slicers, multiple-choice selectors that help widen or narrow the scope of the data you're looking at. PowerPoint lets you broadcast a presentation across the Web (via Microsoft's PowerPoint Broadcast Service, the use of which comes free with a PowerPoint license) or save a presentation as a video.
One last feature is worth mentioning as a possible future direction for all products in this vein. Office users who also have a SharePoint server can now collaborate in real time on Word, PowerPoint, or Excel documents. Unfortunately, SharePoint is way out of the reach of most casual users. But given how many professional-level features in software generally have percolated down to the end-user level, I wouldn't be surprised if Microsoft eventually adds real-time collaboration, perhaps through Windows Live Mesh, as a standard feature.
Among the many new touches in Office 2010 is a much more useful document-search function, which shows results in a separate pane.
Ask most people to name a productivity suite and chances are they'll say Microsoft Office, but they might also name one of the numerous competitors that have sprung up. None have completely displaced the Microsoft monolith, but they've made inroads.
Most of the competition has positioned itself as being better by being cheaper. SoftMaker Office has demonstrated you don't always need to pay Microsoft's prices to get some of the same quality, while OpenOffice.org proved you might not need to pay anything at all. Meanwhile, services like Google Docs are available for anyone with an Internet connection.
[ Also on InfoWorld: "10 great free desktop productivity tools that aren't OpenOffice.org" | "Great Office 2010 features for business" | Follow the latest Windows developments in InfoWorld's Technology: Microsoft newsletter. ]
Microsoft's response has been to issue the newest version of Office (2010) in three retail editions with slightly less ornery pricing than before, as well as a free, ad-supported version (Microsoft Office Starter Edition) that comes preloaded on new PCs. Despite the budget-friendly competition, Office continues to sell, with Microsoft claiming back in January that one copy of Office 2010 is sold somewhere in the world every second. (Full disclosure: The author of this review recently bought a copy for his own use.)
How well do the alternatives shape up? And how practical is it to switch to them when you have an existing array of documents created in Microsoft Office? Those are the questions I had in mind when I sat down with both the new version of Microsoft Office and several other programs (and one cloud service) that have been positioned as low- or no-cost replacements.
Microsoft Office 2010
Despite all efforts to dethrone it, Microsoft Office remains the de facto standard for word processing, spreadsheets, presentations, and to a high degree, corporate email. Other programs may have individual features that are better implemented, but Microsoft has made the whole package work together, both across the different programs in the suite and in Windows itself, with increasing care and attention in each revision.
Test Center Scorecard
20% 20% 20% 15% 15% 10%
Microsoft Office 2010 10 10 10 8 9 9
9.5
Excellent
20% 20% 20% 15% 15% 10%
OpenOffice.org 3.3.0 7 7 7 7 7 7
7.0
Good
20% 20% 20% 15% 15% 10%
LibreOffice 3.3.1 7 7 7 7 7 7
7.0
Good
20% 20% 20% 15% 15% 10%
IBM Lotus Symphony 3.0 7 7 7 7 8 8
7.3
Good
20% 20% 20% 15% 15% 10%
SoftMaker Office 2010 9 9 9 7 7 9
8.4
Very Good
20% 20% 20% 15% 15% 10%
Corel WordPerfect Office X5 6 6 6 5 6 6
5.9
Poor
20% 20% 20% 15% 15% 10%
Google Docs 7 7 7 7 7 7
7.0
Good
If you avoided Office 2007 because of the radical changes to the interface -- namely, the ribbon that replaced the conventional icon toolbars -- three years' time might change your mind. First, the ribbon's no longer confined to Office only; it shows up in many other programs and isn't as alien as before. Second, Microsoft addressed one major complaint about the ribbon -- that it wasn't customizable -- and made it possible in Office 2010 for end-users to organize the ribbon as freely as they did their legacy toolbars. I'm irked Microsoft didn't make this possible with the ribbon from the start, but at least it's there now.
Finally, the ribbon is now implemented consistently in Office 2010. Whereas Outlook 2007 displayed the ribbon only when editing messages, Outlook 2010 uses the ribbon throughout. (The rest of Outlook has also been streamlined a great deal; the thicket of settings and submenus has been pruned down a bit and made easier to traverse.) One feature that would be hugely useful is a type-to-find function for the ribbon; there is an add-in that accomplishes this, but having it as a native feature would be great.
Aside from the interface changes, Office 2007's other biggest alteration was a new XML-based document format. Office 2010 keeps the new format but expands backward- and cross-compatibility, as well as native handling of OpenDocument Format (ODF) documents -- the .odt, .ods, and .odp formats used by OpenOffice.org. When you open a legacy Word .doc or .rtf file, for instance, the legend "[Compatibility Mode]" appears in the window title. This means any functions not native to that document format are disabled, so edits to the document can be reopened without problems in earlier versions of Office.
Note that ODF documents don't trigger compatibility mode, since Office 2010 claims to have a high degree of compatibility between the two. The problem is "high degree" doesn't always mean perfect compatibility. If you highlight a passage in an ODF document while in Word 2010, OpenOffice.org and LibreOffice recognize the highlighting. But if you highlight in OpenOffice.org or LibreOffice, Word 2010 interprets the highlighting as merely a background color assignment for the selected text.
Exporting to HTML is, sadly, still messy; Word has never been good at exporting simple HTML that preserves only basic markup. Also, exporting to PDF is available natively, but the range of options in Word's PDF export module is very narrow compared to that of OpenOffice.org.
Many other little changes throughout Office 2010 ease daily work. I particularly like the way the "find" function works in Word now, where all the results in a given document are shown in a navigation pane. This makes it far easier to find that one occurrence of a phrase you're looking for. Excel has some nifty new ways to represent and manipulate data: Sparklines, little in-cell charts that usefully display at-a-glance visualizations of data; and data slicers, multiple-choice selectors that help widen or narrow the scope of the data you're looking at. PowerPoint lets you broadcast a presentation across the Web (via Microsoft's PowerPoint Broadcast Service, the use of which comes free with a PowerPoint license) or save a presentation as a video.
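Of those Excel additions, sparklines are the easiest to picture. The toy Python sketch below renders a row of numbers as a one-line chart; it's purely an illustration of the concept, not anything Excel actually runs:

```python
# A toy illustration of the sparkline idea: a row of numbers rendered as a
# tiny inline chart, the way Excel 2010 draws one inside a single cell.
BARS = "▁▂▃▄▅▆▇█"

def sparkline(values):
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid dividing by zero on flat data
    return "".join(BARS[int((v - lo) / span * (len(BARS) - 1))] for v in values)

print(sparkline([3, 5, 9, 6, 12, 7, 14]))  # prints something like ▁▂▄▂▆▃█
```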
One last feature is worth mentioning as a possible future direction for all products in this vein. Office users who also have a SharePoint server can now collaborate in real time on Word, PowerPoint, or Excel documents. Unfortunately, SharePoint is way out of the reach of most casual users. But given how many professional-level features in software generally have percolated down to the end-user level, I wouldn't be surprised if Microsoft eventually adds real-time collaboration, perhaps through Windows Live Mesh, as a standard feature.
Among the many new touches in Office 2010 is a much more useful document-search function, which shows results in a separate pane.
Saturday, May 7, 2011
13 features that make each Web browser unique
FireFTP, for instance, is one of the deeper extensions that's hard to build from the classic three languages alone: HTML, CSS, and JavaScript. It takes advantage of Firefox's access to the file system and low-level access to the TCP/IP stack. Some people may feel the thinner APIs of the other browsers act like a better sandbox and thus offer more security -- and they're right. But many of the most sophisticated extensions for Firefox require the flexibility of dipping into native code and interfacing directly with the operating system.
Internet Explorer 9: Emphasis on energy efficiency
Everyone may be talking about JavaScript compilation engines and hardware integration, but the idea of measuring browser energy consumption is a new one. Here, Microsoft is leading the way, claiming that IE9 is the most energy-efficient browser.
Of course, there's no easy way to test this assertion, even with an electrical meter, because the computer could be burning electricity on some background task. However, the idea is meaningful, in large part because handheld devices need to be very careful with power consumption. While no one really notices if the video card in a gaming machine requires a separate pipeline from the Middle East to keep it running, everyone squawks when the phone dies halfway through the afternoon.
IE9 does not yet run on phones, but it may affect laptop energy conservation. Furthermore, simply paying attention to browser energy consumption may put Microsoft ahead of what could soon become a very important game.
Chrome: A separate process for each tab
For the past few years, interest in multiprocess architectures has been growing among browser developers. Here, Google has taken the lead, splitting the work of Chrome tabs into different processes. This approach relies on the operating system to isolate crashes, thereby making the browser more stable. In other words, if one plug-in or Web page goes south, the OS isolates the danger, usually ensuring that the other tabs sail on unaware.
Of course, all browser makers are rolling out multiprocess technology in different ways and at different speeds. Open your PC's process display window and start cracking apart the tabs -- you'll see that the browsers spawn a few processes, but only Google Chrome keeps opening them up. Chrome is the browser most committed to separating the workload and letting the operating system act as a referee.
Some argue that this belts-and-suspenders approach is overkill and not worth the overhead, claiming that the browser makers should not fall back on the operating system for support. Others suggest the browser experience can end up being slower if related windows are split into different processes. To combat this, Chrome sometimes puts pages from the same domain in the same process, but you can expect arguments over the best way to handle multiprocessing to continue for the foreseeable future.
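The crash-isolation argument is easy to demonstrate in miniature. This Python sketch -- illustrative names only, obviously not Chrome internals -- runs each "tab" in its own OS process and shows that one dying doesn't take the rest down:

```python
# Process-per-tab in miniature: each "tab" is its own OS process, so a
# crash in one leaves the others running and reporting a clean exit.
import multiprocessing as mp
import os

def render_tab(url):
    if "badplugin" in url:
        os._exit(1)  # simulate a hard crash in this tab's process
    print(f"{url} rendered in pid {os.getpid()}")

if __name__ == "__main__":
    tabs = [mp.Process(target=render_tab, args=(u,), name=u)
            for u in ("news.example", "badplugin.example", "mail.example")]
    for t in tabs:
        t.start()
    for t in tabs:
        t.join()
        print(t.name, "exit code:", t.exitcode)  # 1 only for the crashed tab
```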
Internet Explorer 9: Jump lists and site pinning
Jump lists began as little menus attached to icons in Windows 7. Right-click an application's icon and you'll find shortcuts to app-specific tasks and recently accessed files as determined by the app's developer. Now these jump lists are part of IE9, and every Web designer can specify a quick list of important pages for users to access quickly with a right-click. IE9 takes the jump-list concept one step further by allowing you to "pin" websites to the bar at the top of each window where they can be easier to reach. The jump list adds a pull-down menu for these pinned websites. It's a good solution for common destinations, like email or shopping sites.
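For Web designers, the hook is a handful of meta elements in the page head that IE9 reads when a user pins the site. The small Python sketch below just emits that markup; the site name and URLs are invented for illustration:

```python
# Emit the <meta> elements IE9 reads to build a pinned site's jump list.
# The meta names (application-name, msapplication-task) are IE9's own;
# the example site and URLs are made up.
def jump_list_meta(app_name, tasks):
    lines = [f'<meta name="application-name" content="{app_name}" />']
    for name, uri, icon in tasks:
        lines.append(
            f'<meta name="msapplication-task" '
            f'content="name={name};action-uri={uri};icon-uri={icon}" />'
        )
    return "\n".join(lines)

print(jump_list_meta("Example Mail", [
    ("Inbox", "http://mail.example.com/inbox", "http://mail.example.com/inbox.ico"),
    ("Compose", "http://mail.example.com/new", "http://mail.example.com/new.ico"),
]))
```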
Friday, May 6, 2011
Why Switching to Digital TV Was Necessary
Many people were surprised when broadcasters abruptly shifted from analog to digital TV technology. A lot of consumers complained that the shift would only mean more expenses for them, with little chance to enjoy the benefits.
Digital TV is an advanced technology now being used by broadcasters. Since the shift in 2009, more channels can be broadcast in the same spectrum the old analog signals occupied, and watching TV has improved markedly: pictures and sound are better, and there are more programming choices (multicasting) as well as interactive capabilities.
Many broadcasters welcomed the shift to digital TV because the technology freed space in the broadcast spectrum. That reclaimed spectrum made room for public-safety communications, which were becoming overcrowded; with digital TV, police, rescue, and fire department radio systems finally had room to grow.
Home viewers also benefited greatly: there are now more stations to choose from, better picture and sound, and more efficient broadcasting. A broadcaster can now offer consumers a single High Definition digital program or several Standard Definition digital programs in the same channel. This ability to offer more than one program stream at once is called multicasting.
Multicasting lets broadcasters carry several digital programs simultaneously because digital signals take up less space: where one analog signal filled an entire channel, the same slice of spectrum holds multiple digital streams (see the back-of-the-envelope math below). That means more choices at viewers' fingertips, and digital TV also provides interactivity that analog service never could.
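Here's that back-of-the-envelope math in Python. The 19.4Mbps channel payload is the commonly cited ATSC figure; the per-stream bitrates are rough MPEG-2 assumptions, not broadcast mandates:

```python
# How many programs fit in one digital channel, roughly. The channel
# payload is the commonly cited ATSC figure; stream bitrates are assumed.
CHANNEL_MBPS = 19.4
HD_MBPS, SD_MBPS = 13.0, 4.0   # rough MPEG-2 figures (assumptions)

print("HD streams per channel:", int(CHANNEL_MBPS // HD_MBPS))  # 1
print("SD streams per channel:", int(CHANNEL_MBPS // SD_MBPS))  # 4
```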
Digital TV is still a relatively young technology, and it has real disadvantages. But drawbacks alone don't make it a bad idea, and the same digital transition has opened the door to wireless technologies including wireless TV, the Internet, and mobile phones.
Thursday, May 5, 2011
Microsoft adapts product support lifecycle -- 'to the cloud!'
I've always thought that one of the keys to Microsoft's success in business computing is its support lifecycle policy. When you buy a Microsoft product for your business you can count on a long period of support and bug fixes and an even longer period of security updates. Now Microsoft is adapting its support lifecycle policy to the cloud.
Click here to read Microsoft's main page on its support lifecycle. I'm running Windows 7 64-bit on a ThinkPad. The OS shipped October 22, 2009 and "mainstream support" ends January 15, 2015. After that (for business products) there are 5 years of "extended support" in which free (well, no such thing, let's say included with the software price) Microsoft support ends (other than security updates), and you can't request feature changes anymore. But you can at least buy all other support options. After 10 years, usually the "in the wilderness" phase of support starts, but at least Microsoft keeps support info on its web site. This is the phase into which, for example, Windows 2000 recently entered.
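To make the phases concrete, here's a quick Python sketch of that timeline using the Windows 7 dates above; the five-year extended window is the business-product policy just described:

```python
# The standard business-product timeline: mainstream support, then five
# years of extended support, then the "wilderness" phase. Dates are the
# Windows 7 figures quoted in this post.
from datetime import date

ship = date(2009, 10, 22)                  # Windows 7 general availability
mainstream_end = date(2015, 1, 15)         # per Microsoft's lifecycle page
extended_end = mainstream_end.replace(year=mainstream_end.year + 5)

print("mainstream support:", ship, "->", mainstream_end)
print("extended support:  ", mainstream_end, "->", extended_end)
print("total supported life:", (extended_end - ship).days // 365, "years")  # 10
```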
In fact, if you ask me, in some cases the support lifecycle has gone too far. From the standpoint of wanting to improve the security of "the Windows ecosystem," Microsoft's decision to extend support for Windows XP to 2014 was counterproductive. As I said, typically this level of support runs out after about 10 years. Companies should be moving away from XP with all due speed. Not to digress too far on this; the real point I'm trying to make is that Microsoft has always been liberal about support periods, and this has helped it.
Imagine, by contrast, that you're considering buying Macs for your business. Apple provides support for the current and previous OS X versions. Support for 10.5 (Leopard), which shipped October 26, 2007, will end with the release of 10.7 (Lion), which will ship later this year. It's probably fair to say that generational upgrades of OS X aren't the life-changing event that moving from XP to Vista or Windows 7 is, but it's not nothing. IT has to test apps and configurations and develop a plan for rollout. You can't take your time upgrading the way Windows shops do.
But as Microsoft says in their ads, "To The Cloud!"
I'm a Google Apps customer myself and I've experienced the scary/exciting moment of cloud computing from the customer standpoint: You start up your apps one morning and things are different. Wasn't that button over there before? Where'd the View menu go? And the cosmetic changes are the small stuff. Who knows what's changed in the internal behavior?
The big new concept in the online support lifecycle is "disruptive change." Certain changes in software will be labeled as disruptive changes and will trigger a set of rules, including a minimum of 12 months of prior notice before implementation.
What is a disruptive change? "Disruptive change broadly refers to changes that require significant action whether in the form of administrator intervention, substantial changes to the user experience, data migration or required updates to client software." An example Microsoft provides is a required update to Outlook in order for it to work properly with Exchange Hosted Services.
Speaking of Outlook and other non-cloud apps, such applications sometimes communicate -- one might actually say integrate -- with cloud apps. The pairing of Outlook with hosted Exchange is the most common and obvious example, but the number of potential pairings is large and the potential complexity great. Microsoft is clear that even if PC software changes are mandated by cloud changes, they don't affect the standard mainstream/extended support scheme for "on-premises software."
And the new policies don't apply to security updates. Such updates need to be implemented quickly and, since Microsoft owns the implementation, will be.
There are two other cloud lifecycle policies Microsoft announced: The company will provide a minimum of 12 months' prior notification before ending an online service for business and developer customers. Also, Microsoft will retain customer data for a minimum of 30 days to facilitate customer migrations, renewal activities, or the deprovisioning of the online service.
I imagine that private clouds are a different matter. I'll have to check into that.
Wednesday, May 4, 2011
New Wi-Fi gear aims to wipe out Ethernet edge switches
A third new service is a patent-pending technology called Orthogonal Array Beam Forming (OABF). Over the past two years, WLAN vendors have been adding support for various optional parts of the 11n standard (see "Major Wi-Fi changes ahead," from May 2010), including transmit beam forming (sometimes written as "beamforming"). The same waveform is sent over 11n's multiple antennas, with the magnitude and phase adjusted at each transmitter to focus the beam toward a particular receiver. This increases the signal's gain, making it more stable, and lets the beam be "steered around" interferers, making it more reliable.
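To see why adjusting phase at each transmitter matters, consider a toy phasor sum in Python -- just the underlying math, not Meru's or anyone's shipping code:

```python
# Two equal-strength transmit paths only add constructively when their
# phases line up; misalign them and the signals cancel.
import cmath

def combined_power(phase_offset_deg):
    a = 1.0                                                # reference path
    b = cmath.exp(1j * cmath.pi * phase_offset_deg / 180)  # phase-shifted path
    return abs(a + b) ** 2

print(combined_power(0))    # 4.0 -> +6 dB over a single path
print(combined_power(180))  # ~0.0 -> the paths cancel
```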
[Ruckus Wireless in 2009 was the first to introduce beam forming for 11n products, exploiting its unique multi-component antenna design. Wireless blogger Craig Mathias used that introduction to explore the topic.]
Meru has created what it says is a more fine-grained alternative. Each Wi-Fi signal is made up of about 60 sub-carriers over a wide swath of spectrum, says Graham Melville, Meru's director of product management. Meru's code can optimize each of the sub-carriers and the result, he says, is an improvement in gain, or sensitivity, on the order of 8-10 dB.
The result of the improved gain is a higher signal quality and higher data rates: where Meru saw 36Mbps before applying its beamforming technology, it saw 54Mbps after, for example. "It stays at the high data rates because the signal is stronger, and better quality," Melville says.
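For reference, the dB arithmetic works out like this (the 8-10 dB figure is Meru's claim; the conversion itself is standard):

```python
# What an 8-10 dB sensitivity gain means in linear terms.
def db_to_linear(db):
    return 10 ** (db / 10)

for db in (8, 10):
    print(f"{db} dB = {db_to_linear(db):.1f}x signal power")
# 8 dB ≈ 6.3x, 10 dB = 10.0x
```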
The new access points also can use the optional Meru Proactive Spectrum Analysis as part of another service, called Air Traffic Services. One of the AP400 radios can be assigned the job of continually monitoring the Wi-Fi radio frequencies for unauthorized radios, analyzing the spectrum usage and interference, and running Meru's integrated wireless intrusion prevention system.
Another network service is called Mobile Application Segregation: administrators can create a dedicated channel for individual applications or groups of applications, such as high-definition video or wireless VoIP.
John Cox covers wireless networking and mobile computing for "Network World."
Tuesday, May 3, 2011
Connectix Prepares Virtual Server Beta
Utilizing virtual machine technology developed for its Virtual PC client product, Connectix has unveiled new server consolidation software that will soon enter beta testing. Dubbed Virtual Server, the product enables enterprises with large server clusters to reduce the number of physical machines by running multiple virtual servers on each one.
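The consolidation pitch is simple arithmetic. Here's a hedged sketch of it in Python -- the utilization figures are illustrative assumptions, not Connectix numbers:

```python
# Back-of-the-envelope server consolidation: lightly loaded physical
# servers folded onto fewer virtualization hosts. All figures assumed.
import math

PHYSICAL = 25        # servers in the existing cluster (the beta's minimum)
AVG_UTIL = 0.10      # assumed average load per physical server
TARGET_UTIL = 0.60   # assumed comfortable load per consolidation host

hosts = math.ceil(PHYSICAL * AVG_UTIL / TARGET_UTIL)
print(f"{PHYSICAL} lightly loaded servers -> roughly {hosts} hosts")  # 5
```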
Connectix is seeking administrators who manage at least 25 Intel-based servers and hold an MCSE certification to test Virtual Server. Beta testers must be willing to run the Virtual Server 1.0 beta on an ongoing basis and provide feedback to Connectix.
"Beta Testers will be able to enroll in several optional tele-seminars we are planning with Connectix architects, select Virtual Server partners, and third party industry analysts, regarding applications and emerging best practices for server consolidation," according to the company. Testers will also receive early access to Connectix product releases.
To apply for the Virtual Server beta, review the test requirements and fill out the beta program application. Connectix will select a limited number of applicants to participate.
Monday, May 2, 2011
Wi-Fi Direct aims to be the 'Bluetooth Killer'
Imagine a wireless home network where devices communicate directly with one another instead of through the wireless router -- a sort of mesh network without the need to switch to ad hoc mode. Today the Wi-Fi Alliance announced it has almost completed the standard that could make this a reality: Wi-Fi Direct.
Wi-Fi Direct was previously known as "Wi-Fi Peer-to-Peer," and it has repeatedly been referred to in IEEE meetings as a possible "Bluetooth killer." Under the standard, direct connections between computers, phones, cameras, printers, keyboards, and future classes of devices are established over Wi-Fi instead of over another wireless technology governed by a separate standard.
Even though the 2.4 GHz and 5 GHz bands are often dreadfully overcrowded in home networks, the appeal of such a standard is twofold: Any certified Wi-Fi Direct device will be able to communicate directly with any legacy Wi-Fi devices without the need for any new software on the legacy end, and transfer rates will be the same as infrastructure connections, thoroughly destroying Bluetooth. The theoretical maximum useful data transfer for Bluetooth 2.0 is 2.1 Mbps, while 802.11g has a theoretical maximum throughput of 54 Mbps.
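Those theoretical maxima translate into very different transfer times. A quick Python comparison, using a hypothetical 100MB file and the best-case rates quoted above:

```python
# Best-case transfer time for a 100MB file at each technology's
# theoretical maximum; real-world rates are lower for both.
FILE_MB = 100  # hypothetical file size
for name, mbps in (("Bluetooth 2.0", 2.1), ("802.11g", 54)):
    seconds = FILE_MB * 8 / mbps   # megabytes -> megabits, then divide by rate
    print(f"{name}: {seconds:.0f} s")
# Bluetooth 2.0: ~381 s; 802.11g: ~15 s
```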
"Wi-Fi Direct represents a leap forward for our industry. Wi-Fi users worldwide will benefit from a single-technology solution to transfer content and share applications quickly and easily among devices, even when a Wi-Fi access point isn't available. The impact is that Wi-Fi will become even more pervasive and useful for consumers and across the enterprise," Wi-Fi Alliance executive director Edgar Figueroa said in a statement today.
Sunday, May 1, 2011
Kindle users get Amazon offer for returned deleted books, gift certificates
While the distributor of several e-books was wrong to assume that the "classic" nature of certain titles allowed them to be sold as public-domain works, there's been considerable concern over Amazon's right to "undo" the sale of those titles through its electronic Kindle Store. Last July, Amazon CEO Jeff Bezos issued a mea culpa, saying the unannounced deletion of various titles including George Orwell's 1984 was "stupid, thoughtless, and painfully out of line with our principles."
This morning, as first noted by Gizmodo's Rosa Golijan, individuals affected by Amazon's unannounced deletions began receiving e-mails that appear to be from Amazon, offering customers the chance to have the company deliver legitimate copies of their books free of charge, or alternately to receive $30 gift certificates or refund checks from Amazon.
The e-mail as quoted there is curious in that it mentions only 1984, which was not the only deleted title. Last June, the retailer deleted illegitimate copies of Ayn Rand novels, including Atlas Shrugged, The Fountainhead, and The Virtue of Selfishness, one month prior to the deletions of Orwell's novels, which also included Animal Farm. Amazon has yet to confirm the legitimacy of the e-mails now circulating around the Web, nor is there evidence of similar e-mails for deleted titles other than the one that generated the most controversy because of its irony.
Many blogs and a few YouTube videos poked fun at the irony of, as they put it, a distributor "burning" books about book burning from a device called Kindle. Though some were confusing the title in question with Ray Bradbury's classic Fahrenheit 451, others accurately invoked Orwell's metaphorical "memory hole," which in his novel was a depository for all modern literature deemed irrelevant to the maintenance of the state.
From a technical and legal standpoint, however, Amazon may have been within its rights to do what it did, although it certainly turned out to be politically inconvenient for the retailer. Some distributors have been operating under the mistaken belief that since book distribution contracts historically have pertained only to printed material, the rights to distribute works electronically are up in the air, "jump balls" -- this was part of Google's original defense of its Google Books scanning project.
But the electronic version of a book is software. On the one hand, that qualifies it for copyright protection as one of "any and all forms" of publication under book publishers' contracts; on the other, it gives book publishers the right to determine how, or if, they will distribute a copyrighted work as software. So if someone does that job for them and Amazon facilitates the sale, Amazon could be liable for copyright infringement -- an exposure that certainly made retracting the book urgent.
How Amazon went about that task in this case was perhaps ill-advised, especially since owners of Kindles and other e-book readers think of their electronic libraries as being every bit as sacrosanct as their printed ones. The notion that they are purchasing software -- essentially, the limited right to use media in electronic form, as prescribed by the distributor -- may conflict with their sense of books as possessions, and their moral equation of e-books with printed books.
Writing last month on behalf of the Free Software Foundation, Harvard University Law Professor John Palfrey argued that even though e-books are software, they hold the same sacred place in readers' hearts and should be protected as such: "The level of control Amazon has over their e-books conflicts with basic freedoms that we take for granted. In a future where books are sold with digital restrictions, it will be impossible for libraries to guarantee free access to human knowledge."
But that's for the reader of classic novels. One of the most lucrative platforms for e-book publishing in recent years has been technology books, far more so in some cases than for classic literature. And as it turns out, in a recent survey of 2,000 e-book customers, as O'Reilly publisher Joe Wikert reported last week, 81% of respondents use laptop computers to read their O'Reilly downloads, versus 29% on the iPhone, 14% for the Amazon Kindle, and 11% for the Sony Reader.