
View Full Version : Computational Complexity and Solving Problems Efficiently



Neo
27th January 2011, 11:19 AM
Complexity theory in computer science is the study of the amount of computer time and other resources required to solve problems. Finding a method to solve a problem is not necessarily the end of the job - even if a certain method works, it may not provide an answer in a reasonable amount of time. In computer terms, a problem is a question to be resolved or decided by some kind of computation. One of the most critical frontiers of computer science involves determining which problems can be solved in a reasonable amount of time. Consider the following example.

Suppose that Steve is a sales representative for a newly founded company. To increase the number of orders, he decides to make a tour through several cities and visit potential customers. But as an employee of a new company, which typically has limited resources, he also needs to be economical. The sales representative wants to visit the cities on his schedule in an order that minimizes the length of the trip. In other words, Steve wants to leave home and visit city A first, B next, and so on until he arrives back home, in an order that covers the least amount of distance. If he is driving, this arrangement would be the least expensive in general because it would require less gasoline than any other route. The shortest route is usually not at all obvious if the cities are spread out over a large area.

There may be other factors to consider, such as the weather and the condition of the roads, and if the sales representative flies or rides a train instead of driving, he will concentrate on finding the cheapest tickets. Practical problems involving some type of minimization or optimization arise often in business, engineering, and science. The problem described above is called the traveling salesperson problem (TSP).

Any given instance of this problem can be solved by calculating the length of every possible route and then choosing the shortest one. Computer scientists sometimes call this the "brute force" approach because it tackles the problem in a direct and unsophisticated manner. Such an approach will always succeed if the problem solver has sufficient time. For example, a TSP with four cities, A, B, C, and D, has 24 possible routes: A - B - C - D, B - C - A - D, and 22 others. But half of these routes cover identical distances because each is the reverse of another - for instance, D - C - B - A covers the same distance as the first route listed above, since the distance from A to B is the same as that from B to A, and so on. If Steve calculates the 12 distinct distances, he can find the minimum.
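
To make the brute force idea concrete, here is a minimal sketch in Python. The distance table is invented purely for illustration, and the names (dist, d, brute_force_tsp) are my own, not anything standard; the sketch just enumerates every ordering of the cities, skips the reverse of each route it has already measured, and keeps the shortest round trip from home.

from itertools import permutations

# Hypothetical symmetric distances between home and four cities (made up).
dist = {
    ("Home", "A"): 10, ("Home", "B"): 15, ("Home", "C"): 20, ("Home", "D"): 25,
    ("A", "B"): 35, ("A", "C"): 30, ("A", "D"): 12,
    ("B", "C"): 18, ("B", "D"): 22,
    ("C", "D"): 16,
}

def d(x, y):
    # Distances are symmetric: A to B is the same as B to A.
    return dist[(x, y)] if (x, y) in dist else dist[(y, x)]

def brute_force_tsp(cities, home="Home"):
    best_route, best_length = None, float("inf")
    for route in permutations(cities):
        # Skip the reverse of a route that has already been measured.
        if route[0] > route[-1]:
            continue
        # Total length: home -> first city -> ... -> last city -> home.
        stops = (home,) + route + (home,)
        length = sum(d(a, b) for a, b in zip(stops, stops[1:]))
        if length < best_length:
            best_route, best_length = route, length
    return best_route, best_length

print(brute_force_tsp(["A", "B", "C", "D"]))

For four cities the loop runs over 24 orderings and measures only 12 of them; the trouble begins when the number of cities grows.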

The difficulty with this approach becomes apparent as the number of cities increases. If Steve must visit 10 cities, the number of routes is 3,628,800 (1,814,400 distinct distances) - quite a lot of calculating! With this many calculations, finding the minimum distance by brute force is already extremely time-consuming. For 60 cities, the number of routes is comparable to the estimated number of particles in the entire universe (about 10^80). The rise in the number of calculations is so steep that even with the increasing speed of computers and the use of advanced technology such as neural networks, people still cannot solve these problems by brute force in a reasonable amount of time.
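
The steep rise described above is simply the factorial function at work. A few lines of Python (again only a sketch) reproduce the counts quoted here:

import math

# n cities can be visited in n! orders, and half of those orders are
# reverses of the other half, so n!/2 distinct distances must be compared.
for n in (4, 10, 60):
    routes = math.factorial(n)
    print(f"{n} cities: {routes:.3e} routes, {routes // 2:.3e} distinct distances")

For 10 cities this prints about 3.629e+06 routes and 1.814e+06 distances; for 60 cities the count is already around 8e+81, in the same territory as the 10^80 figure mentioned above.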

A more sophisticated approach is needed for problems that are computationally complex. Research on computational complexity impacts everyone, from sales representatives planning trips to consumers who make a purchase over the Internet with a credit card, and is one of the most active frontiers of computer science. But the topic involves some difficult concepts, and can be challenging for researchers as well as students.