Computers are thought to calculate every possible move. However, there are about 20 possible first moves and roughly 20 possible responses to each, so by this logic the number of positions after n moves is about 20^(2n), which becomes a huge number very fast. So I suspect that the computer may actually be using a huge precomputed opening database that includes all possible moves and responses to a great depth, maybe around 30 moves or so. Its programming may also include a mechanism to look up transpositions, so that it can transition from an unfamiliar opening to a familiar one. If it is indeed using a huge precomputed opening database, can this not be considered cheating? It's like bringing a chess book to a tournament.
Engines do use opening and endgame databases to help their play, but those are only a small part of their strength. The strength of an engine comes from what it does when there is no database move available, and that strength comes basically from two factors: the evaluator and the search.
The evaluator assigns a value to a position so that the search knows which positions to favor. Most engines nowadays evaluate on the order of a million positions per second, so the evaluator needs to be fast. It may take into account, for example, material, pawn structure, king safety, and mobility, just to name a few features. In state-of-the-art engines, the parameters of the evaluation function (i.e. how much each feature affects the total evaluation of the position) are tuned using thousands and thousands of CPU hours of testing.
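As a rough illustration, here is a minimal sketch of what such an evaluator might look like. The centipawn piece values and the mobility weight are made-up assumptions for the example; real engines use far more features and carefully tuned parameters:

```python
# Toy static evaluator: material plus a small mobility bonus.
# The piece values (in centipawns) and the 5 cp/move mobility weight
# are illustrative assumptions, not taken from any real engine.

PIECE_VALUES = {"P": 100, "N": 320, "B": 330, "R": 500, "Q": 900}

def evaluate(white_pieces, black_pieces, white_moves, black_moves):
    """Score a position; positive favors White, negative favors Black."""
    material = (sum(PIECE_VALUES[p] for p in white_pieces)
                - sum(PIECE_VALUES[p] for p in black_pieces))
    mobility = 5 * (white_moves - black_moves)
    return material + mobility

# White is up a knight but has slightly less mobility:
print(evaluate("PPN", "PP", white_moves=20, black_moves=22))  # → 310
```

The search then simply prefers positions with higher scores for the side it is playing.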
As you mention, the number of possible variations grows very rapidly. The average number of legal moves in a position is around 35, so the number of positions after only 6 half-moves is already more than one billion, which makes exhaustive search to any useful depth clearly infeasible.
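The arithmetic is easy to verify (35 is the usual ballpark figure for the average branching factor):

```python
# Number of positions in a full game tree with branching factor 35.
BRANCHING = 35

for half_moves in range(1, 7):
    print(f"{half_moves} half-moves: {BRANCHING ** half_moves:,} positions")

# The last line printed is "6 half-moves: 1,838,265,625 positions",
# already well over a billion.
```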
A simple improvement is alpha-beta search, which is based on the following observation: if you have calculated that move 34.A wins a pawn, and you know that after move 34.B Black can reply 34...C so that White does not win a pawn, then you don't have to check any other responses to 34.B, because you already know that 34.A is better than 34.B. With a good move-ordering heuristic, so that the best moves are usually searched first, this reduces the number of positions required to reach depth d from 35^d to roughly sqrt(35^d) = 35^(d/2), which is a huge improvement. Transposition tables reduce the search tree even further.
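The idea fits in a few lines of Python. This is a bare-bones negamax-style alpha-beta over a hand-made toy tree, not engine code; the leaf numbers are invented scores:

```python
import math

# Bare-bones alpha-beta in negamax form over an explicit toy tree.
# A node is either a number (leaf score, from the viewpoint of the side
# to move at that leaf) or a list of child nodes. The tree is made up.

visited = []  # leaves actually evaluated, to show the pruning

def alphabeta(node, alpha=-math.inf, beta=math.inf):
    if isinstance(node, (int, float)):
        visited.append(node)
        return node
    best = -math.inf
    for child in node:
        # Negate: the child's score is from the opponent's viewpoint.
        score = -alphabeta(child, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:  # cutoff: the opponent can already avoid this line
            break
    return best

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree))                           # → 3
print(f"{len(visited)} of 9 leaves evaluated")   # → 7 of 9 leaves evaluated
```

Even on this tiny tree, two of the nine leaves are never looked at; with good move ordering the savings compound at every level, which is where the 35^(d/2) figure comes from.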
But this is not even close to what top chess engines can really achieve! Using ever smarter tricks to decide which nodes (= positions encountered during the search) are important and which are not, they search some parts of the tree deeper and cut others off early, which often lets them reach depth d for the important variations in fewer than 2^d nodes. A quite technical list of possible ideas for selectivity is given in the Chess Programming Wiki.
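The orders of magnitude are easy to put side by side; the depth of 12 here is an arbitrary example, and the three formulas are only the rough estimates discussed above:

```python
# Rough node counts for reaching depth d = 12 with branching factor 35.
d = 12

brute_force = 35 ** d         # plain exhaustive minimax
alpha_beta = 35 ** (d // 2)   # alpha-beta with near-perfect move ordering
selective = 2 ** d            # a highly selective modern search

print(f"brute force: {brute_force:.1e}")
print(f"alpha-beta:  {alpha_beta:.1e}")
print(f"selective:   {selective}")   # → 4096
```

The gap between the first and last number is why selectivity, not raw speed, is what makes modern engines so strong.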