Popular Posts

Wednesday, March 16, 2011

Managing the mobile end-user experience with mobile monitoring

Measuring performance is not a new concept in mobile, either. Until recently, however, mobile performance has been synonymous with voice quality. When network coverage and dropped calls were the only metrics that mattered, it was not surprising that mobile performance monitoring was mostly a concern for mobile network operators. Data performance mattered less when mobile content, applications, and services took a backseat to voice. With the mass consumer popularity of advanced smartphones and SMS, however, the accurate delivery of content, applications, and services over mobile data networks has become increasingly important to content owners.

Companies with a more mature Web presence are now embracing mobile channels and making them a key part of their growth strategy. However, for companies making the move from Web to mobile, it is important to understand that performance-monitoring solutions that worked on the Web cannot simply be applied to mobile. Managing the mobile end-user experience requires an understanding of how it differs from the Web end-user experience.

Read More on Mobile Monitoring

Tuesday, March 8, 2011

User Expectations and Mobile Performance

Performance is critical for this mobile portal. As a company executive explained, “End users are accustomed to high-speed downloads when they access our Web site from their personal computers. However, mobile connections have long offered slower response times. While newer 3G networks offer larger pipelines that enable quicker response times, many end users still use older devices and are unable to download mobile sites quickly. This can leave them frustrated with our service, and can ultimately harm our brand.”

Because of the disconnect between user expectations and mobile performance, it is critical for the company to monitor the performance of its mobile site and fix problems quickly to ensure that it offers the best performance possible.

Read More

Tuesday, March 1, 2011

Improve Performance Of Third-Party Components


A slick-looking Web site is ultimately of limited effectiveness if its bells and whistles are an impediment to system performance. Monitoring system performance at the end-user/UI level is therefore extremely important to ensuring a consistently excellent user experience. By using the proper website performance tools in a targeted manner, both business managers and developers can effectively monitor the impact of third-party components, both at the individual component level and in the aggregate.

The more complex your Web site is, the more likely it is that its performance, and possibly your profit margin, is in the hands of third parties and their components running on your site. Ideally, we want to improve the performance of these third-party components (a.k.a. “widgets”) so that a page loads just as fast with the widgets as without them.

If you own the Web site, there are at least two ways to improve overall site performance. One is to optimize the Web site itself for each widget so that the widgets running on the page become more efficient, a method that is rarely cost-efficient. The other, more cost-effective option is continuous and focused website performance monitoring: breaking down performance by time and by component category allows you to pinpoint the components that adversely impact Web site performance.
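The idea of breaking performance down by component category can be sketched in a few lines. This is a minimal illustration, not a real monitoring tool: the resource timings, URLs, and category labels below are invented for the example, standing in for the data a website performance monitoring service would report per page load.

```python
from collections import defaultdict

# Hypothetical per-resource load times (seconds) for one page load.
# URLs and categories are illustrative assumptions, not real measurements.
RESOURCE_TIMINGS = [
    {"url": "https://www.example.com/index.html",    "category": "first-party", "seconds": 0.42},
    {"url": "https://cdn.adnetwork.example/ad.js",   "category": "ads",         "seconds": 1.85},
    {"url": "https://social.example/like-button.js", "category": "social",      "seconds": 0.95},
    {"url": "https://analytics.example/tag.js",      "category": "analytics",   "seconds": 0.30},
]

def total_by_category(timings):
    """Aggregate load time per component category, slowest category first."""
    totals = defaultdict(float)
    for t in timings:
        totals[t["category"]] += t["seconds"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for category, seconds in total_by_category(RESOURCE_TIMINGS):
    print(f"{category:12s} {seconds:5.2f}s")
```

Sorting the aggregate slowest-first makes the worst-offending category (here, the hypothetical ad widget) visible at a glance, which is exactly the pinpointing the paragraph above describes.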

Tuesday, February 22, 2011

Evaluation of Web Site User Experience


Users’ attitudes are continually shaped throughout their interaction with a Web site. You want to collect as much attitude data as you can, for several reasons: it helps interpret ambiguous behavior, such as a long dwell time that can indicate either interest or confusion; it lets you monitor general attitude trends over time that result from marketing efforts and repeated exposure to the site; and it gathers user feedback and suggestions that can trigger direct action or further research.

The process of capturing such data might interrupt the user experience and alter the user’s attitude, which could corrupt the data and reduce its value. So, the method used to collect attitude data must be non-invasive and be incorporated into the user’s experience at the site.

Automating user experience testing of any Web site is a difficult challenge at best, especially in light of the need to balance data quality and quantity: gathering rich data is essential to deriving meaning and understanding, and a sufficient quantity of data is essential to making findings valid and statistically significant.
  

Monday, February 14, 2011

Web Application Performance Measurement and Tuning


To combat the growing problem of poor web application performance and to safeguard the rising amount of business revenue earned through online channels, load testing strategies, tools, and services have undergone a transformation in both awareness and adoption.

Business requirements for web application load testing and application performance testing as a means for ongoing performance measurement and tuning have become more rigorous over the past several years.

Measurements derived with load testing tools should provide a clear understanding of where performance bottlenecks reside and aid in infrastructure and capacity planning of computing resources. When derived from meaningful load tests, results serve as a guide to helping IT staff make informed decisions about the performance of their applications and infrastructures.

A solid load testing strategy must complement performance monitoring and analysis in a production environment and, in turn, production monitoring and analysis should be leveraged to improve the accuracy of load tests.
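As a toy illustration of the measurements described above, the sketch below runs a tiny load test: several worker threads hammer a throwaway local HTTP server (standing in for the application under test; a real test would target a staging environment) and report median and 95th-percentile latency, the kind of numbers used to locate bottlenecks and plan capacity. The thread counts and request counts are arbitrary assumptions.

```python
import statistics
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class Handler(BaseHTTPRequestHandler):
    """Trivial stand-in for the application under test."""
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

def worker(samples, requests=20):
    """Issue sequential requests, recording each response time."""
    for _ in range(requests):
        start = time.perf_counter()
        urlopen(url).read()
        samples.append(time.perf_counter() - start)

latencies = []
threads = [threading.Thread(target=worker, args=(latencies,)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
server.shutdown()

latencies.sort()
print(f"requests: {len(latencies)}")
print(f"median:   {statistics.median(latencies) * 1000:.1f} ms")
print(f"p95:      {latencies[int(0.95 * len(latencies))] * 1000:.1f} ms")
```

The gap between median and p95 is often the interesting signal: a healthy median with a long tail points at intermittent contention rather than uniform slowness.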



Tuesday, February 8, 2011

Challenges of creating and managing robust applications


Application availability is the first quality encountered by customers, but the last one determined by the development process, because it depends on the quality of everything else that happens during the design, development, and testing of web applications. A chain is only as strong as its weakest link, and developing rich Internet applications (RIAs) will uncover any weaknesses in an enterprise’s development and SLM processes.

RIAs inevitably involve running complex code on the client and use more complex, application-specific strategies to manage the communication between browser and server. Making such complex code efficient and bug-free is not a task to be taken lightly.

Better tools are needed to help us meet the challenges of creating and managing robust applications. Today, while the Adobe suite of Flash tools is a little more mature, Ajax development and application performance testing tools are still in their infancy; the Open Ajax initiative may help to address this issue. Deploying an RIA successfully will therefore demand more resources (skills, tools, process, time, and of course money) than deploying a more traditional Web-based application in which all application logic is executed on the server. Enterprises that decide to employ the technology must be prepared to invest more in design, development, testing, and management to make it successful.


Tuesday, February 1, 2011

Challenges And Surprises With Web Load Testing


Web load testing is one of the most effective ways to gauge a Web site’s capacity and scalability, but the load tests need to simulate real scenarios. The huge number and range of variables involved in Web site load testing will always present challenges and surprises.

A load testing scenario can be made significantly more realistic by simulating the behavior of both tolerant and intolerant users. Familiarity is a major factor in how quickly a simulated user navigates from one page to the next. As with latency tolerance, different people behave in different ways: users who are very familiar with the Web site move more rapidly (therefore creating more load per unit of time) than users who are visiting the Web site for the first time and need to read and understand how the site is organized before moving from one page to the next.

You can also run simple in-house experiments using employees and their friends and family to determine, for example, the page viewing time differences between new and returning users.