In general, the kernel's memory allocator does not like to fail. So, when kernel code requests memory, the memory management code will work hard to satisfy the request. If this work involves pushing other pages out to swap or removing data from the page cache, so be it. A big exception happens, though, when an atomic allocation (using the GFP_ATOMIC flag) is requested. Code requesting atomic allocations is generally not in a position where it can wait around for a lot of memory housecleaning work; in particular, such code cannot sleep. So if the memory manager is unable to satisfy an atomic allocation with the memory it has in hand, it has no choice but to fail the request.
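Interrupt handlers are the classic case of code that cannot sleep. A minimal sketch of how such code must behave (the device event structure and handler are hypothetical; kmalloc(), GFP_ATOMIC, and the NULL return on failure are the real kernel interfaces):

```c
#include <linux/slab.h>
#include <linux/interrupt.h>

/* Hypothetical interrupt handler: it runs in atomic context, so it
 * must pass GFP_ATOMIC and be prepared for kmalloc() to return NULL
 * rather than waiting for memory to be reclaimed. */
static irqreturn_t my_dev_interrupt(int irq, void *dev_id)
{
	struct my_event *ev = kmalloc(sizeof(*ev), GFP_ATOMIC);

	if (!ev) {
		/* No reserve memory available: drop the event,
		 * ideally accounting for the loss in a statistic. */
		return IRQ_HANDLED;
	}
	/* ... fill in ev and queue it for later processing ... */
	return IRQ_HANDLED;
}
```

The important point is the error path: retrying or sleeping here is not an option, so the only correct responses are to fail gracefully or to fall back on a preallocated reserve.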
Such failures are quite rare, especially when single pages are requested. The kernel works to keep some spare pages around at all times, so memory stress must be severe before a single-page allocation will fail. Multi-page allocations are harder to satisfy, though; normal use of memory tends to fragment it, leaving free pages scattered and making groups of physically-contiguous pages hard to find. In particular, if the system is under enough pressure that little free memory is available at all, the chances of successfully allocating two (or more) contiguous pages drop considerably.
Multi-page allocations are not often used in the kernel; they are avoided whenever possible. There are situations where they are necessary, though. One example is network drivers which (1) support the transmission and reception of packets too large to fit into a single page, and which (2) drive hardware that cannot perform scatter/gather I/O on a single packet. In this situation, the DMA buffers used for packets must be larger than one page, and they must be physically contiguous. This problem will become less pressing over time; scatter/gather capability is increasingly common in hardware, and drivers are being rewritten to take advantage of it. With sufficiently capable hardware, the need for multi-page allocations drops considerably.
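For a driver in that situation, the receive buffer must come from the page allocator as a higher-order block. A sketch of what that looks like (the function and the 6KB frame size are hypothetical; __get_free_pages() and free_pages() are the real interfaces, and on a 4KB-page system a 6KB frame forces an order-1, two-page request):

```c
#include <linux/gfp.h>

/* Hypothetical receive-buffer setup: a 6KB jumbo frame does not fit
 * in one 4KB page, so an order-1 block (two physically-contiguous
 * pages, 8KB) is requested.  GFP_ATOMIC because this may be called
 * from the receive interrupt path. */
static unsigned long alloc_rx_buffer(void)
{
	unsigned long buf = __get_free_pages(GFP_ATOMIC, 1);

	if (!buf)
		return 0;  /* no contiguous pair; caller must drop the packet */
	return buf;
}
```

The buffer would later be returned with free_pages(buf, 1), passing the same order used at allocation time. It is exactly this kind of order-1 atomic request that becomes fragile when memory is fragmented.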