Ebook Security: A Growing Concern


These days, the main concern of ebook authors is the security of their online publications. Some people buy an ebook and then resell it for their own profit. Sometimes copies are even distributed through email or shared over the Internet by customers.

This causes heavy losses for the author of the ebook. Fortunately, Internet technology has produced several security products that help authors control the distribution of their ebooks. Such control is also essential for protecting the author's rights and preventing pirated copies.

Security programs for ebooks are built on IP-tracking technology. When a customer buys a copy of an ebook, he or she enters a username and password or a license code to access the product for the first time. During this process, the customer's IP address is recorded in an online control panel.

If the buyer shares the product with someone else, the author is alerted, because the control panel will show two or more users with different IP addresses. If the author finds the ebook being shared illegally, the control panel lets the author deactivate the copies associated with those IP addresses.
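The activation-and-alert flow described above can be sketched in a few lines. This is a hypothetical illustration of the general technique, not the API of any of the products discussed below; the function names and the in-memory license store are invented for the example:

```python
# Minimal sketch of an IP-tracking license check (hypothetical design).
activations = {}  # license_code -> set of IP addresses seen so far

def activate(license_code, client_ip, max_ips=1):
    """Record an activation; flag the license if too many distinct IPs use it."""
    ips = activations.setdefault(license_code, set())
    ips.add(client_ip)
    if len(ips) > max_ips:
        return "flagged"  # the control panel would alert the author here
    return "ok"

def deactivate(license_code):
    """Author-side action: revoke a license that is being shared."""
    activations.pop(license_code, None)

print(activate("ABC-123", "203.0.113.7"))   # first activation -> ok
print(activate("ABC-123", "198.51.100.9"))  # second IP -> flagged
```

A real product would keep the store server-side and handle the noted caveat that several machines can legitimately share one public IP address.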

Three programs are especially popular for IP-tracking ebook security.

ClickLocker:

ClickLocker can be integrated into any ebook software. When buyers purchase an ebook, the program directs them to a page where they are prompted to enter a name and email address to register. Buyers then receive a license code that they use to access the product for the first time.

If the author finds any illegal use of a license code, the program allows the author to terminate that buyer's license. However, the author has to pay a monthly fee to keep the program active; once the service is cancelled, all distributed ebooks are immediately unlocked for readers.

Virtual Vault:

Some features of Virtual Vault are similar to ClickLocker's. However, this program also lets the author log in to the control panel and block access for a buyer who has not paid, even after a security code has been issued. The violator then cannot access the product even if the link or the security code is emailed to someone else. Virtual Vault secures the ebook even if the author is not checking the control panel for multiple users.

Ebook Pro:

Ebook Pro is an ebook compiler with a built-in security system that lets the author track licenses, much like ClickLocker. The buyer receives a username and password to access the ebook, and the author can configure the system so that buyers can access the product only a specified number of times. The author can log in to the control panel to set, disable, or reactivate licenses.

Using an IP-tracking system saves the time you would otherwise spend policing your ebook, leaving you free for administrative work. Of course, the system has downsides: it cannot distinguish between multiple computers that share the same IP address behind a single Internet connection.

Shortcomings aside, IP tracking is a useful tool for controlling the distribution of an ebook. Though it can be expensive, it can spare the author the frustration of seeing a publication shared illegally. Before selecting an IP-tracking system, however, the author should be practical and weigh the pros and cons of each option.





The Myth Of Network Latency


This article is also available as a podcast in "The ROOT Cause" podcast series on iTunes.

There is a great deal of confusion surrounding the concept of latency. This is not surprising, as it is really many different concepts discussed as if they were one. Latency affects all areas of the enterprise, including networks, servers, disk systems, applications, databases, and browsers. This article describes the different areas in which latency occurs and how to differentiate between them. Such differentiation will improve the accuracy of all testing and troubleshooting, whether manual or automated.

The importance of measuring latency is becoming increasingly apparent to the IT industry. We see new products coming to market that claim to monitor latency in various forms. Maybe they can, and maybe they only partly can. With all the variables and distributed components involved in modern enterprise networks, it is far too easy to combine completely different issues into one metric. This drastically reduces the value of the metrics or, worse, sends you off on a wild goose chase. Tools are only tools; as in any other situation, they are only as good as the professional using them. Their output needs to be analyzed with an eye on the big picture, and their implementation needs to be well thought out and correct.

Many methods for measuring and calculating these metrics exist and are topics that will be covered in future articles. Here we focus only on breaking out the different areas and types of latency that affect performance.

NETWORK LATENCY: Everyone loves to blame the network, especially with regard to latency. Bad LAN or WAN design can cause all sorts of issues. However, at the time of this writing in 2008, those issues tend to be "go/no-go" problems. Network designs will block or allow communication, true, but they seldom slow it down anymore (although there are exceptions). If communication is too slow, distance is usually the cause. Don't blame a 300 millisecond ping time between Europe and Asia on a bad WAN. Distance matters. "You can't change the laws of physics, Captain."
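As a sanity check on "distance matters," you can estimate the physics floor for any link: light in fiber travels at roughly two-thirds the speed of light in a vacuum, so a round trip can never beat 2 × distance / propagation speed. The 9,000 km figure below is an illustrative assumption for a Europe-Asia path, not a measured route:

```python
# Rough lower bound on round-trip time imposed by physics over a fiber path.
SPEED_IN_FIBER = 2.0e8  # metres/second, roughly two-thirds of c
distance_km = 9000      # assumed straight-line Europe-Asia distance

min_rtt_ms = 2 * distance_km * 1000 / SPEED_IN_FIBER * 1000
print(f"Physics floor for RTT: {min_rtt_ms:.0f} ms")  # 90 ms
```

Real routes are longer than the straight line and add queuing and routing delay, which is why an observed 300 ms ping on such a path is not automatically evidence of a bad WAN design.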

The first step is to break out the various areas that can be a part of Network Latency.

Round Trip Response Time (RTT), also known as Network Latency, is determined by TCP at the beginning of connection setup (the three-way handshake). Since there is minimal overhead at this point, the time this takes should represent the true transport time. However, some common designs change this.
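One way to approximate this handshake time from a script is simply to time a TCP connect. A minimal sketch using only the Python standard library; the demo target is a local listener so the snippet is self-contained, but you would point it at a real server to measure an actual network path:

```python
import socket
import time

def tcp_connect_rtt(host, port, timeout=5.0):
    """Approximate network RTT as the time taken by the TCP three-way handshake."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        return (time.perf_counter() - start) * 1000  # milliseconds

# Demo against a local ephemeral listener; the kernel completes the
# handshake even before accept() is called, so connect() returns quickly.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
rtt = tcp_connect_rtt("127.0.0.1", listener.getsockname()[1])
listener.close()
print(f"Handshake RTT: {rtt:.2f} ms")
```

Note that, per the proxy caveat below, this measures the handshake to whatever device terminates the TCP connection, which is not always the far-end server.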

Proxy Servers: Do you have a proxy server or WAN optimizer near your client? If so, the RTT for that TCP connection is not end-to-end to the server side; it is only end-to-end to that proxy or WAN optimizer. You can account for this in your calculations, but only if you also have the RTT from that proxy to the server side. It is very doable, but it requires planning.

Multi-Tier Design: What about the network latency between tier one and tier two, or tier three? If they are all in the same data center, there may be no significant latency, assuming the design in that data center is correct. (Of course, if you are troubleshooting, that isn't a wise assumption.) However, you will see tiers in different locations, and in such cases this aspect of latency is important.

SERVER LATENCY: Memory, disk system, CPU, design, and usage all significantly affect how quickly the servers themselves can process requests, or make them. This metric must be separated from the other latency metrics to properly diagnose bottlenecks. In my article series titled "Baselining-StressTesting-PerformanceTesting-OhMy!" I introduced the topic of BESTlining (as opposed to baselining). In a nutshell, it is a way to measure aspects of server latency by removing as much network infrastructure from the picture as possible.

APPLICATION LATENCY: Application latency is different from server latency, but it is often bundled into server metrics because it is difficult to drill down deep enough to separate the two. To better understand the difference, picture a WebSphere JVM's performance compared to that of the operating system of the physical server hosting it. Looking at response time alone doesn't separate those issues. If you are just gathering performance metrics, this may not matter; if you are troubleshooting a problem, it matters a great deal. Application latency is the source of problems more often than many organizations realize.
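One practical way to keep the layers from being lumped together is to give each its own timer rather than relying on a single end-to-end response time. A minimal sketch of the idea; the layer names and the `time.sleep` stand-ins for real work are invented for illustration:

```python
import time
from contextlib import contextmanager

timings = {}  # layer name -> accumulated wall-clock seconds

@contextmanager
def timed(layer):
    """Accumulate elapsed time per layer so each latency is reported separately."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[layer] = timings.get(layer, 0.0) + (time.perf_counter() - start)

def handle_request():
    with timed("application"):
        time.sleep(0.01)           # stand-in for business logic
        with timed("database"):
            time.sleep(0.02)       # stand-in for a query

handle_request()
for layer, seconds in timings.items():
    print(f"{layer:12s} {seconds * 1000:.1f} ms")
```

Note that the "application" bucket here includes the nested "database" time; when reporting exclusive time per layer, subtract the inner timers from the outer ones.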

Another factor that can cloud this metric is database latency, which should not be lumped in with application latency. Database optimization is critical, but if an application is sending out sloppy calls, it will still slow down the works.

Protocol usage is another area to explore when measuring or troubleshooting application latency. I will cover this in more detail in an upcoming article on network utilization. While it is often considered a network utilization issue, to a large degree it is the application that controls how the protocols are used. For example, when an application uses many TCP connections and small packet sizes, that is usually a result of how the code was written. It may have been a non-issue when the components were near each other, but over a WAN link of 70 milliseconds or more it can bring an application to its knees. To make matters worse, WAN optimizers are far less successful in resolving this particular type of problem.
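The cost of a chatty protocol is easy to estimate: serial round trips multiply the link RTT. The number of round trips below is an assumption chosen purely for illustration:

```python
# Back-of-the-envelope cost of a chatty application protocol.
round_trips = 400   # assumed number of serial request/response pairs
lan_rtt_ms = 0.5    # typical same-data-center round trip
wan_rtt_ms = 70.0   # the WAN link from the example above

print(f"On the LAN:   {round_trips * lan_rtt_ms / 1000:.1f} s")  # 0.2 s
print(f"Over the WAN: {round_trips * wan_rtt_ms / 1000:.1f} s")  # 28.0 s
```

The same exchange that is invisible in a data center becomes a half-minute wait over the WAN, which is why this class of problem is a code issue rather than a bandwidth issue.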

DATABASE LATENCY: Database latency is a frequent cause of trouble. Fragmentation, inadequate indexing, and many other database design factors can slow down response time. Again, this is often lumped into either server or application latency, but it should not be; it is a separate variable.

BROWSER and WORKSTATION LATENCY: If you are receiving many rows of tabular data, many images, large Java applets, or anything else of this type, you are taxing your personal workstation and its browser. This is frequently the main culprit and should be examined early when different locations, or different users within the same location, experience trouble that others do not. Additional factors include:

--Spyware running on the PC (particularly with laptops that travel).
--Disk Fragmentation
--Older workstations
--Browser settings
--Operating system versions and patch levels

SUMMARY: Latency is not monolithic, although it is often treated that way. Time invested in accurately measuring these various aspects of latency will save you hours, days, weeks, or even months of work. Last but not least, remember that ALL of this CAN be measured and brought together into an accurate picture. It isn't hard; it just requires the correct set of skills and some open-source software such as Wireshark.





Defrag Myths Everyone Should Know


Fragmentation is one of the most significant, yet least recognized, problems that plague computer systems and networks. It accounts for billions in annual economic losses and is a leading cause of a wide range of computer problems and system failures. Why don’t users and IT departments take action? The following defrag myths may explain why.

Myth: My system or network doesn’t have fragmentation.

Wrong. It is estimated that there are over 700 million computers actively in use in the world today, and every single one of them has fragmentation issues to one degree or another.

Myth: Defrag software already comes preinstalled with my operating system.

Not really. There IS a kind of defragmentation software that comes preinstalled with many operating systems, but it is a technological dinosaur compared to modern defrag software.

Myth: Defragmenting my network during work hours will cause disruptions in performance.

This is a major and legitimate concern among IT professionals, but choosing a high-performance defragmenter specifically designed for networks allows a systems administrator to defrag the system without affecting productivity. Companies such as Diskeeper make highly advanced network defragmentation software that is completely transparent when running in the background.

Myth: I have to replace my computer due to slow performance.

Not necessarily. Fragmentation is the scattering of data and files across the hard drive. As fragmentation builds up, pieces of data become increasingly scattered, and the read/write head takes longer and longer to write and retrieve data. This shows up as sluggish performance and, eventually, freeze-ups and failures. It is quite possible that defragmenting the system with a high-quality defragmenter will return your ailing system or network to maximum performance.
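A back-of-the-envelope calculation shows why that extra head movement adds up: every additional fragment can cost an extra seek. The figures below are illustrative assumptions, not measurements of any particular drive:

```python
# Rough estimate of the seek overhead introduced by fragmentation.
avg_seek_ms = 9.0   # assumed average seek time for a desktop hard drive
files_read = 2000   # assumed files read during a typical task
extra_frags = 15    # assumed average extra fragments per file

overhead_s = files_read * extra_frags * avg_seek_ms / 1000
print(f"Extra seek overhead: {overhead_s:.0f} s")  # 270 s of pure head movement
```

Several minutes of pure mechanical delay per task is the kind of slowdown users mistake for a worn-out computer.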

Myth: Using defrag software is a hassle.

This is another misconception stemming from use of the preinstalled version. Comparing the two is a lot like comparing a horse-and-buggy to a race car: modern defragmentation software is lightning fast, and the best packages are so completely automatic that you can forget about them once installed.

Myth: I don’t have time to defrag every computer in my network.

Believe it or not, there are now defragmentation programs that will intelligently and automatically defragment every workstation and server in the network with virtually no supervision, regardless of the network's size. Installation, deployment, and control can all be done from one central administrative terminal. In effect, the company and network management save time, money, and effort because the system as a whole is kept at peak performance, allowing IT personnel to put their attention elsewhere.

Myth: Defrag software is expensive.

Actually, most initial downloads are free. The best companies offer a full line-up of defragmentation products to choose from, covering the home user, small businesses, and the largest networks. The long-range cost savings are usually enormous for companies and network administrators, thanks to greatly reduced wear and tear on hardware and drives and lower maintenance costs.

Don’t let a myth prevent you from finding out whether your system is running at peak performance. The best recommendation is to download a copy of defrag software and see for yourself whether or not it makes a difference.