Category Archives: Technology

Digital VP Beena Ammanath

Ammanath also serves on the Cal Poly Computer Engineering Program Industrial Advisory Board, helping to shape the future generation of computer scientists with her expertise. She recently was named one of the top female analytics experts in the Fortune 500 by Forbes contributor Meta S. Brown.

In this exclusive interview, Ammanath speaks to TechNewsWorld about AI, analytics, and diversity in tech.

TechNewsWorld: You are one of the thought leaders on artificial intelligence. How do you think AI will impact businesses and jobs?

Beena Ammanath: I have worked in a number of industries — e-commerce, financial, marketing, telecom, retail, software products and industrial — over the past two decades. I have seen how the growth of data from OLTP systems to data warehouses to big data and data science has impacted businesses.

I believe we are just at the tip of the iceberg with AI today. AI is not by itself an industry — more of a technology that is positioned to transform businesses across a number of sectors. AI will be so intertwined and pervasive within business operations in the future that it may be impossible to do business without AI. Fundamental business models of today are going to change, as AI evolves.

Tesla’s driverless car is still in its early AI stage, but it won’t be that long before drivers put their cars completely on autopilot. A few years from now, Uber may not need drivers at all; it will just need available cars. But even more broadly, the whole transportation ecosystem is going to change.

The Palm Jumeirah Monorail in Dubai is a fully automatic driverless train that can shuttle up to 6,000 passengers an hour. The locomotive industry is poised for a revolution — not only passenger trains, but also long-haul goods transportation.

There will be an impact on jobs, but I see it more as job roles changing and not necessarily as job reduction. The jobs most at risk are those that are routine-intensive and are strictly defined with limited tasks. If you think of the transportation example, in a few years we may not need as many drivers, but we will need more programmers and support personnel.

The Reason Linux Desktop Does Better

First and foremost, Linux is literally free. Neither the operating system nor any of the programs you run will cost you a dime. Beyond the obvious financial benefit of getting software for free, Linux allows users to be free by affording access to the basic tools of modern computer use — such as word processing and photo editing — which otherwise might be unavailable due to the cost barrier.

Microsoft Office, which sets the de facto standard formats for documents of nearly every kind, demands a US$70 per year subscription. However, you can run LibreOffice for free while still handling documents in all the same formats with ease.

Free software also gives you the chance to try new programs, and with them new ways of pursuing business and leisure, without their prospective costs forcing you to make a commitment.

Instead of painstakingly weighing the merits of Mac or Windows and then taking a leap of faith, you can consider a vast spectrum of choices offered by hundreds of distributions — basically, different flavors of Linux — by trying each in turn until you find the one that’s right for you.

Linux can even save money on hardware, as some manufacturers — notably Dell — offer a discount for buying a computer with Linux preinstalled. They can charge less because they don’t have to pass on the cost of licensing Windows from Microsoft.

 

You Can Make It Your Own

There is practically nothing in Linux that can’t be customized. Among the projects central to the Linux ecosystem are desktop environments — that is, collections of basic user programs and visual elements, like status bars and launchers, that make up the user interface.

Some Linux distributions come bundled with a desktop environment. Ubuntu is paired with the Unity desktop, for example. Others, such as Debian, give you a choice at installation. In either case, users are free to change to any one they like.
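On an Ubuntu- or Debian-based system, for instance, trying a second desktop environment usually takes only a couple of commands. Treat the following as a rough sketch: the package names shown are common ones but are assumptions that can vary by release.

  # Install an additional desktop environment alongside the current one
  # (assumes an Ubuntu/Debian-based distro; package names can vary by release)
  sudo apt update
  sudo apt install xfce4                 # lightweight Xfce desktop
  sudo apt install kde-plasma-desktop    # KDE Plasma desktop

  # Log out, then pick the new session from the session menu on the
  # login screen. The old desktop stays installed and selectable.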

Most distributions officially support (i.e., vouch for compatibility with) dozens of the most popular desktops, which makes finding the one you like best that much simpler. Within the pantheon of desktops, you can find anything from glossy modern interfaces like KDE Plasma or Gnome to simple and lightweight ones like Xfce and MATE. Within each of these, you can personalize your setup further by changing the themes, system trays and menus, drawing on galleries of other users’ screens for inspiration.

The customization possibilities go well beyond aesthetics. If you prize system stability, you can run a distribution like Mint, which offers dependable hardware support and ensures smooth updates.

On the other hand, if you want to live on the cutting edge, you can install an OS like Arch Linux, which gives you the latest update to each program as soon as developers release it.

Troll-Thwarting API

A new tool is available to check the persistent harassment of online trolls. Google’s Jigsaw think tank last week launched Perspective, an early stage technology that uses machine learning to help neutralize trolls.

Perspective reviews comments and scores them based on their similarity to comments people have labeled as toxic, or that are likely to result in someone leaving a conversation.
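To make that concrete, here is a rough sketch of how a publisher might ask a Perspective-style endpoint to score a single comment. The endpoint path, request fields, and API-key handling shown here are assumptions for illustration, not details taken from Jigsaw's documentation.

  # Hedged sketch: request a toxicity score for one comment
  # (endpoint, request fields and key handling are assumptions)
  API_KEY="your-api-key-here"

  curl -s -H "Content-Type: application/json" \
    -d '{"comment": {"text": "example comment text"},
         "requestedAttributes": {"TOXICITY": {}}}' \
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key=${API_KEY}"

  # The JSON response carries a score between 0 and 1; a publisher could
  # flag anything above a chosen threshold for human moderation.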

Publishers can select what they want to do with the information Perspective provides to them. Their options include the following:

  • Flagging comments for their own moderators to review;
  • Providing tools to help users understand the potential toxicity of comments as they write them; and
  • Letting readers sort comments based on their likely toxicity.

Forty-seven percent of 3,000 Americans aged 15 or older reported experiencing online harassment or abuse, according to a survey Data & Society conducted last year. More than 70 percent said they had witnessed online harassment or abuse.

Perspective got its training through an examination of hundreds of thousands of comments labeled by human reviewers who were asked to rate online comments on a scale from “very toxic” to “very healthy.”

Like all machine learning applications, Perspective improves as it’s used.

Partners and Future Plans

A number of partners have signed on to work with Jigsaw in this endeavor:

  • The Wikimedia Foundation is researching ways to detect personal attacks against volunteer editors on Wikipedia;
  • The New York Times is building an open source moderation tool to expand community discussion;
  • The Economist is reworking its comments platform; and
  • The Guardian is researching how best to moderate comment forums, and host online discussions between readers and journalists.

Jigsaw has been testing a version of this technology with The New York Times, which has a team sifting through and moderating 11,000 comments daily before they are posted.

Jigsaw is working to train models that let moderators sort through comments more quickly.

The company is looking for more partners. It wants to deliver models that work in languages other than English, as well as models that can identify other characteristics, such as when comments are unsubstantial or off-topic.

USB-C, OLED Display Top Latest iPhone Rumor List

Apple poked a hornet’s nest when it removed the standard headphone jack from the iPhone 7. It may do it again by replacing the Lightning port with USB-C in the next iPhone.

The Lightning port, introduced in 2012, is used to charge and connect accessories to the iPhone, but Apple plans to swap it for USB-C, which the company has been introducing into its computer lines, The Wall Street Journal reported Tuesday.

“It would be a bold step for Apple, because it would mean Apple would be dependent on the advance of the USB-C standard for any innovations they may want to make around physical connectors,” said IHS Markit Senior Director Ian Fogg.

In the past, Apple chose to use its own home-brewed connectors for the iPhone — first its dock connector, then Lightning.

“Both of them allowed Apple to innovate more quickly than the industry because they weren’t dependent on standards,” Fogg told TechNewsWorld, “and it enabled them to have a business model around accessories through third-party companies, where Apple could ensure quality and collect a license fee.”

USB-C: Good and Bad

It’s not likely that Apple will scrap the Lightning connector, said David McQueen, a research director at ABI Research.

“They’d only put USB-C in if it allows them to make the phone thinner,” he told TechNewsWorld.

“A standard connector would be better, because you could share the cables for it with the new MacBook and with other devices,” noted Kevin Krewell, a principal analyst at Tirias Research.

“That’s a good thing,” he said.

“The bad thing is you have to buy another cable,” Krewell told TechNewsWorld.

Apple will unveil three new iPhones in September, based on reports corroborated by the WSJ. The expected models are an iPhone 7s, a 7s Plus, and a 10th anniversary edition called “iPhone 8” or “X,” which could have a curved 5.8-inch OLED display.

“Switching from a Lightning connector to USB-C is a minor thing. It’s not going to make large numbers of people buy an iPhone,” said IHS Markit’s Fogg.

“On the other hand, innovating with the display, having a wide-aspect ratio display that fills the face of the phone without increasing the volume of the phone, is good for consumers and good for the experience of using the phone,” he observed.

 

OLED Offers VR Opportunity

Having an OLED in the next iPhone is a definite possibility, Tirias’ Krewell said.

“It’s just a matter of getting the right supply chain in place,” he pointed out.

“Apple’s wanted to switch to OLED, but getting the supply chain behind it to support their quality and standards and display resolution has been a challenge,” added Krewell.

OLED screens not only offer a more vibrant display with richer colors and deeper blacks, but also have lower persistence than other types of displays, which reduces motion blur.

“That makes OLEDs much more suited for things like virtual reality,” IHS Markit’s Fogg said.

“Apple has resisted the temptation so far to make any play in that area,” he continued, “but a shift to an OLED, which we are expecting, would be an enabler for them to make a move to a VR experience if they want to.”

A large, edge-to-edge display also could make the iPhone more competitive in the market, maintained Patrick Moorhead, principal analyst at Moor Insights and Strategy.

“It would be exceptional and could bring them to parity with Samsung,” he told TechNewsWorld.

More Women in Tech

Adriana Gascoigne is the founder and CEO of Girls in Tech, a global nonprofit organization whose mission is to “engage, educate, and empower girls who are passionate about technology.”

Girls in Tech CEO Adriana Gascoigne

Founded in 2007, Girls in Tech claims 60 chapters with upwards of 50,000 members worldwide. The organization’s focus is not just on women in professional roles. It also offers support to anyone with an interest in technology, providing women with a platform for growth in the field.

In this exclusive interview, Gascoigne speaks to TechNewsWorld about the organization’s purpose, its accomplishments thus far, and its future hopes and plans.

 

Adriana Gascoigne: I was working at a startup and was one of very few women there. I’d look around the room every day and see that there was a huge problem of representation. I knew we needed to change the culture of the company to recruit more women and benefit more women, but we also needed diversity in product development.

If you have a diverse team, your product is going to be more successful. I think having a diverse group of people helps you to make a better product in the end, and I was striving to create a more diverse team so our customers could benefit from the end product.

The mission of Girls in Tech is still the same. Our tenets are empowerment, engagement, and education of women in STEM and tech. We focus on providing skills and a network so that women can succeed in whatever they want to do.

We want to serve as a support network, and provide advanced skills and a learning environment, so women can be exposed to different opportunities throughout their careers.

A woman’s career trajectory takes many different paths. We want to make sure that we have the resources, educational platforms and network to support women at many different stages of their career, and that they have the mentors and role models to follow.

Increasing the Safety of Users

Twitter on Wednesday announced that over the next few months it will roll out changes designed to increase the safety of users:

  • Its algorithms will help identify accounts as they engage in abusive behavior, so the burden no longer will be on victims to report it;
  • Users will be able to limit certain account functionality, such as letting only followers see their tweets, for a set amount of time;
  • New filtering options will give users more control over what they see from certain types of accounts — such as those without profile pictures, or with unverified email addresses or phone numbers; and
  • New mute functionality will let users mute tweets from within their home timelines, and decide how long the content will be muted.

Twitter also will be more transparent about actions it takes in response to reports of harassment from users.

“These updates are part of the ongoing safety work we announced in January, and follow our changes announced on February 7,” a Twitter spokesperson said in a statement provided to TechNewsWorld by Liz Kelley of the company’s communications department.

A Fine Balance

“We’re giving people the choice to filter notifications in a variety of ways, including accounts who haven’t selected a profile photo or verified their phone number or email address,” the spokesperson noted.

The feature is not turned on by default but provided as an option.

Still, suggesting special handling for accounts without a profile picture — known as “eggs” because of the ovoid shape of the space left for the picture — and those without an email address or phone number could pose a privacy dilemma.

Twitter “is walking a fine line here between censorship and useful communication,” observed Michael Jude, a program manager at Stratecast/Frost & Sullivan.

 

Making the Internet Safe for Tweeters

Twitter’s ongoing efforts to curb abuse show that the company is “aware they have a serious problem, and what they’ve done so far is less than adequate,” remarked Rob Enderle, principal analyst at the Enderle Group.

Previous attempts “were pretty pathetic, really, and Twitter needed to do something more substantive,” he told TechNewsWorld. “This seems to be far more substantive.”

Still, the new measures “don’t address the cause of the behavior — and until someone does, they will only be an increasingly ineffective Band-Aid,” Enderle cautioned.

 

No Place for the Timid

The latest tools may be successful at first, but “people will find ways around them,” Frost’s Jude told TechNewsWorld.

Twitter’s approach “is purely defensive,” he said. “It ought to just open up its space with the appropriate disclaimers; that would be easier and cheaper, and people who are easily offended would be put on notice that Twitter isn’t a safe space.”

Quantum Leap Could Redefine Computing

No, I’m not talking about that Quantum Leap. IBM just made a really interesting announcement: it is enhancing its online quantum computer systems with a new API and improving its simulator so it can handle 20 qubits.

While listening to the prebriefing was a bit like pretending I was Penny trying to understand Sheldon Cooper on Big Bang Theory, I think this move does showcase yet another huge approaching computing wave that could eclipse the one we currently are trying desperately, but largely failing, to ride.

I’ll share some thoughts on quantum computing and close with my product of the week: the Arlo Security Camera system from Netgear, which has to be the best comprehensive home security system on the market.

It is easy to get lost in the terminology surrounding quantum computing and glaze over. Basically, quantum computing is a revolutionary, not evolutionary, system that is pretty much indistinguishable from magic.

Let me give you an example. With a regular computing system at a machine language level you have 1s and 0s — an element is one or the other. With quantum computing, an element is both at the same time. This is like someone asking if your new car is black or white, and you can answer “yes” and be completely accurate.
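In textbook notation (my gloss here, not the column’s), that “both at the same time” is a superposition: the qubit’s state is a weighted mix of 0 and 1, and only a measurement pins it down to one value.

  \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1

Measuring the qubit returns 0 with probability \lvert\alpha\rvert^{2} and 1 with probability \lvert\beta\rvert^{2}, and a register of n qubits carries 2^n such amplitudes at once, which is where the “everything at once” intuition comes from.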

In the world we think we live in, two opposites aren’t the same thing. In the quantum world, they sort of are. The most sick — or fun — explanation for this is Schrödinger’s cat (here’s a TED video about it), which is about how a cat sealed in a closed box exists as both alive and dead until the box is opened. Schrödinger supposedly was so disturbed by his analysis that he decided to abandon quantum physics and take up biology. I’m guessing talking smack about cats forced a career change.

When we currently talk about parallel computing, we talk about taking a single program, breaking it up into parts, and then executing those parts simultaneously to get around the limitations of Moore’s law and avoid the need for a processor in our computer running hotter than the core of the sun. That gives you speed without heat.

With quantum computing, things happen pretty much at the same time. Because elements can be both things at once, things basically can happen instantly — not sequentially — so the potential speed of solving a problem approaches instant.

The example of a practical application I was given years ago was decrypting the most secure data file. Traditional computing might take years, but true quantum computing would need only seconds (and those seconds would be spent interpreting the results, not producing them). Effectively, it should blow away any concept we have of speed.

The damn things even look weird, more like a cross between a traditional computer and something from the steampunk dimension.

It’s not just that it would be hard to understand a quantum computer — think what a nightmare it would be to program one or interface with the result.

Black Lab Linux Is a Rare Treat

The latest release of Black Lab Linux, an Ubuntu 16.04-based distribution, adds a Unity desktop option. You will not find Unity offered by any other major — or nearly any minor — Linux distributor outside of Ubuntu.

Black Lab Linux 8.0, the consumer version of PC/OpenSystems’ flagship distro, also updates several other prominent desktop options.

Black Lab Linux is a general purpose community distribution for home users and small-to-mid-sized businesses. PC/OpenSystems also offers Black Lab Enterprise Linux, a commercial counterpart for businesses that want support services.

Black Lab Linux is an outgrowth of OS4 OpenLinux, a distro the same developers released in 2008. Both the community and the commercial releases could be a great alternative for personal and business users who want to avoid the UEFI (Unified Extensible Firmware Interface) horrors of installing Linux in a computer bought off the shelf with Microsoft Windows preinstalled.

Black Lab offers its flagship releases with a choice of self-support or full support, and both come at a price at launch. However, you can wait 45 days and get the same release with the self-support option for free. Black Lab Linux 8.0 became available for free late last year.

Black Lab 8.0 with Unity gave me a few problems depending on the hardware I tested. It sometimes was slow to load various applications. It more than occasionally locked up. However, its performance usually was trouble-free on more resource-rich computers.

Its core set of specs is nice but nothing that outclasses other fully free Linux OS options. Here is a quick rundown on the updated packages. Remember that version 8.0 is based on Ubuntu 16.04, which is a solid starting point.

Google Hands E2EMail Encryption to Open Source Devs

Google last week released its E2EMail encryption code to open source as a way of pushing development of the technology.

“Google has been criticized over the amount of time and seeming lack of progress it has made in E2EMail encryption, so open sourcing the code could help the project proceed more quickly,” said Charles King, principal analyst at Pund-IT.

That will not stop critics, as reactions to the decision have shown, he told LinuxInsider.

However, it should enable the company to focus its attention and resources on issues it believes are more pressing, King added.

Google started the E2EMail project more than a year ago, as a way to give users a Chrome app that would allow the simple exchange of private emails.

The project integrates OpenPGP into Gmail via a Chrome extension. It brings improved usability and keeps all cleartext of the message body exclusively on the client.

E2EMail is built on a proven, open source JavaScript crypto library developed at Google, noted KB Sriram, Eduardo Vela Nava and Stephan Somogyi, members of Google’s Security and Privacy Engineering team, in an online post.

The early versions of E2EMail are text-only and support only PGP/MIME messages. It now uses its own keyserver.

The encryption application eventually will rely on Google’s recent Key Transparency initiative for cryptographic key lookups. Google earlier this year released the project to open source with the aim of simplifying public key lookups at Internet scale.

The Key Transparency effort addresses a usability challenge hampering mainstream adoption of OpenPGP.

During installation, E2EMail generates an OpenPGP key and uploads the public key to the keyserver. The private key is always stored on the local machine.
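For readers who want to see what that flow looks like by hand, the commands below use GnuPG rather than E2EMail itself; they are a rough analog of the key generation and public-key upload the extension automates, and the keyserver address is only an illustrative choice.

  # Rough GnuPG analog of E2EMail's setup steps (not E2EMail's own code)
  gpg --full-generate-key                      # create an OpenPGP key pair locally
  gpg --keyserver hkps://keys.openpgp.org \
      --send-keys YOUR_KEY_ID                  # publish only the public key
  gpg --list-secret-keys                       # the private key stays on this machine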

E2EMail uses a bare-bones central keyserver for testing. Google’s Key Transparency announcement is crucial to its further evolution.

 

Google Partially Benefits

Secure messaging systems could benefit from the open sourcing of the system. When building apps, developers could use a directory to find the public keys associated with an account, along with a public audit log of any key changes.

Encryption key discovery and distribution lie at the heart of the usability challenges that OpenPGP implementations have faced, suggested Sriram, Nava and Somogyi in their joint post.

Key Transparency delivers a solid, scalable and practical solution. It replaces the problematic web-of-trust model traditionally used with PGP, they pointed out.

Linux Begins

Once you have a sense of the vast potential of Linux, you may be eager to experience it for yourself. Considering the complexity of modern operating systems, though, it can be hard to know where to start.

As with many things, computers can be better understood through a breakdown of their evolution and operation. The terminal is not only where computers began, but also where their real power still resides. I’ll provide here a brief introduction to the terminal, how it works, and how you can explore further on your own.

Although “terminal,” “command line,” and “shell” are often used interchangeably, it helps to learn the general distinctions between these terms. The word “terminal” comes from the old days of Unix — the architecture on which Linux is based — when university campuses and research facilities had a room-sized computer, and users interacted with it by accessing keyboard-and-screen terminals scattered around the campus and connected to the central hub with long cables.

Today, most of us don’t deal with true terminals like those. Instead, we access emulators — interfaces on Unix-like systems that mimic the terminal’s control mechanism. The kind of terminal emulator you’re most likely to see is called a “pseudo-terminal.”

Also called a “terminal window,” a pseudo-terminal is an operating system application on your normal graphical desktop session. It opens a window allowing interaction with the shell. An example of this is the Gnome Terminal or KDE Konsole. For the purpose of this guide, I’ll use “terminal” to refer exclusively to terminal emulators.

The “command line” is simply the type of control interface that one utilizes on the terminal, named for the fact that you write lines of text which are interpreted as commands.

The “shell” is the program the command line uses to understand and execute your commands. The common default shell on Linux is Bash, but there are others, such as Zsh and the traditional Unix C shell.

 

File Organization

The last thing you need to know before diving in is how files are organized. In Unix-like systems, directories are arranged in an upside-down tree, with the root filesystem (notated as “/” and different from the “/root” directory) as the starting point.

The root filesystem contains a number of directories within it, which have their own respective directories and files, and so on, eventually extending to encompass every file your computer can access. The directories directly within the root filesystem, in directory notation, are given right after the “/”.

For example, the “bin” directory contained right inside the root would be addressed as “/bin”. All directories at subsequent levels down are separated with a “/”, so the “bin” directory within the “usr” directory in the root filesystem would be denoted as “/usr/bin”. Furthermore, a file called “bash” (the shell), which is in “bin” in “usr” would be listed as “/usr/bin/bash”.

So how do you find these directories and files and do stuff with them? By using commands to navigate.

To figure out where you are, you can run “pwd” (“print working directory”) and you will get the full path to the directory you’re currently in.

To see where you can go, run “ls” to list directory contents. When run by itself, it returns the contents of the current directory, but if you put a space after it and then a path to a directory, it will print the contents of the directory at the end of the path.

Using “ls” can tell you more than that, though. If you insert “-l” between the command and the path with a single space on either side, you will get the “long” listing specifying the file owner, size and more.
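Put together, a first session might look something like this; the directory names and listing details are only illustrative.

  $ pwd
  /home/alice
  $ ls
  Documents  Downloads  Music  Pictures
  $ ls /usr
  bin  include  lib  local  share
  $ ls -l /usr/bin/bash
  -rwxr-xr-x 1 root root 1396520 Jan 16  2017 /usr/bin/bash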

 

Commands, Options, Arguments

This is a good time to explain the distinction between commands, options and arguments. The command, which is the program being run, goes first.

After that you can alter the functionality of the command by adding options, which are either one dash and one letter (“-a”) or two dashes and a word (“--all”).

The argument — the thing the command operates on — takes the form of a path. Many commands do not need arguments to provide basic information, but some lend far greater functionality with them, or outright require them.
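As a concrete reading of that structure, here is one command split into its three parts; the path is just an example.

  $ ls -l --all /usr/bin
  # "ls"       -> the command: the program being run (list directory contents)
  # "-l"       -> a short option (one dash, one letter): long listing
  # "--all"    -> a long option (two dashes, a word): include hidden entries
  # "/usr/bin" -> the argument: the path the command operates on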