Hyper-Threading is Intel's implementation of Simultaneous Multithreading (SMT).
Hyper-Threading allows the execution of multiple (logical) threads to be overlapped, which can hide pipeline stalls during expensive operations such as cache misses and memory accesses. However, there are caveats. All execution resources except the register state are shared between the two hyperthreads. In particular, the caches are shared, which effectively halves the cache capacity available to each thread when hyperthreading is enabled. Each time a thread is switched in, the cache will be cold, hurting performance.
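As a practical aside, on Linux you can see which logical CPUs are hyperthread siblings of the same physical core through the standard sysfs topology interface. A minimal sketch (the sysfs path is Linux-specific; on other systems the function simply finds nothing):

```python
# Sketch: list groups of logical CPUs that are hyperthread siblings,
# i.e. share one physical core, using the Linux sysfs topology files.
# On a non-Linux system (or without sysfs) this returns an empty list.
from pathlib import Path

def sibling_groups(root="/sys/devices/system/cpu"):
    groups = []
    for cpu in sorted(Path(root).glob("cpu[0-9]*")):
        sib = cpu / "topology" / "thread_siblings_list"
        if sib.is_file():
            text = sib.read_text().strip()   # e.g. "0,4" or "0-1"
            if text not in groups:
                groups.append(text)
    return groups

if __name__ == "__main__":
    for g in sibling_groups():
        print("logical CPUs sharing one core:", g)
```

Each printed group shares the execution resources and caches described above, so pinning two cache-hungry threads onto sibling CPUs is exactly the situation where the caveats apply.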
Because of these shared resources, hyperthreading is beneficial only in limited situations. Essentially, the cache hit rate of the single-threaded version of the code must be low enough that, when hyperthreads are introduced, they can still maintain a hit rate good enough to match the original execution time. That is, the two hyperthreads must each have an execution time of no more than half of the single-threaded version. In general, the lower the cache hit rate of the single-threaded version, the more likely it is that introducing hyperthreads will be beneficial.
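The break-even reasoning above can be sketched with a simple cost model. The hit/miss latencies and hit rates below are assumed, illustrative round numbers, not measurements of any real CPU:

```python
# Numeric sketch of the break-even argument: every memory access
# either hits the cache or misses to memory.  All latencies and hit
# rates are assumed, illustrative numbers.
T_HIT, T_MISS = 4, 200       # hypothetical cycles per cache hit / miss
N = 1_000_000                # memory accesses in the whole workload

def exec_time(accesses, hit_pct):
    """Cost model: each access costs T_HIT on a hit, T_MISS on a miss."""
    return accesses * (hit_pct * T_HIT + (100 - hit_pct) * T_MISS) // 100

single = exec_time(N, 95)    # single-threaded baseline at a 95% hit rate
budget = single // 2         # each hyperthread must finish within this

# The shared cache lowers each hyperthread's hit rate; each thread
# handles half of the accesses.  Because T_MISS is 50x T_HIT here,
# even a small hit-rate drop blows the per-thread budget.
for hit in (95, 90, 85, 80):
    t = exec_time(N // 2, hit)
    verdict = "within budget" if t <= budget else "too slow"
    print(f"hit rate {hit}%: per-thread {t:>12,} cycles ({verdict})")
```

With these assumed numbers, a drop from 95% to 90% already makes each hyperthread slower than the whole single-threaded run, illustrating how narrow the window for a win is when the baseline hit rate is high.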
Another case in which hyperthreading might be beneficial is when the working sets of the two hyperthreads (assuming they are disjoint) are small enough to still fit comfortably within the capacity of the caches. Unless the threads are tightly coupled through sharing, this is unlikely at L1 sizes.
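The capacity check is simple arithmetic; a tiny sketch with assumed sizes (the 32 KiB L1d and 256 KiB L2 figures are illustrative, not a specific CPU's):

```python
# Do two disjoint working sets still fit in the shared caches?
# Cache sizes and working-set sizes are assumed, illustrative values.
L1D_SIZE, L2_SIZE = 32 * 1024, 256 * 1024
ws_a, ws_b = 24 * 1024, 20 * 1024   # hypothetical per-thread working sets

combined = ws_a + ws_b
print(f"combined working set: {combined // 1024} KiB")
print("fits in L1d:", combined <= L1D_SIZE)   # False here: 44 KiB > 32 KiB
print("fits in L2 :", combined <= L2_SIZE)
```

As the text notes, the combined set typically overflows L1 but may still sit comfortably in a larger shared cache level.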