Thread algebra (TA) provides a theory of concurrency that has been designed to support the understanding of explicit concurrency in programming and programming notations, as well as of concurrent execution on a chip. The simplest form of thread algebra, BTA (basic thread algebra), does not involve concurrency in the form of multi-threading; it was developed as polarized process algebra in order to provide a semantic domain for program algebra. TA extends BTA with so-called strategic interleaving, where the simplest conceivable interleaving strategy applies a cyclic permutation to the thread vector after each basic action.
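The cyclic-permutation strategy mentioned above can be illustrated operationally. The sketch below is a hypothetical illustration, not TA notation: threads are modeled simply as lists of named basic actions, and the strategy executes one action of the head thread and then rotates the thread vector.

```python
from collections import deque

def cyclic_interleave(threads):
    """Execute one basic action of the head thread, then apply a
    cyclic permutation to the thread vector; repeat until all
    threads are exhausted. Returns the resulting action trace."""
    vector = deque(list(t) for t in threads)
    trace = []
    while vector:
        head = vector.popleft()
        if head:
            trace.append(head.pop(0))  # perform one basic action
            vector.append(head)        # rotate: head moves to the back
        # a finished (empty) thread is simply dropped from the vector
    return trace

# Two threads with named basic actions:
print(cyclic_interleave([["a1", "a2"], ["b1", "b2", "b3"]]))
# → ['a1', 'b1', 'a2', 'b2', 'b3']
```

Note that, unlike arbitrary interleaving, this strategy determines exactly one trace for a given thread vector.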
TA is technically less elegant than, for instance, ACP as a process algebra. Process algebras operate at a level of abstraction where it is reasonable, if not imperative, to assume that parallel composition is commutative and associative. At the same time this makes the mathematical properties of the theory harder to understand and analyze, and it creates a bigger distance to programmers' intuitions. More importantly, process algebras introduce non-determinism at a point where this may also be a disadvantage for programmer intuitions. Multi-threaded programs are executed on virtual machines that make use of scheduling. Although a machine user may not be aware of the particular schedule used, quite often a deterministic schedule (henceforth called a strategy) is used for the execution. In process algebra one intends to develop universal abstractions that deal with the absence of more specific information in a very general way. In thread algebra, conversely, one will always need to specify a particular strategy in order to analyze multi-threaded behavior. By considering a significant family of different strategies, information can be obtained about what might happen with an 'arbitrary strategy'. The thread algebra approach to concurrency is called strategic interleaving, in contrast to the arbitrary interleaving underlying process algebras.
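The contrast between the two views can be made concrete: under arbitrary interleaving a pair of threads admits a whole set of possible traces, of which a deterministic strategy selects exactly one. The following sketch (a hypothetical illustration, with threads again modeled as plain action lists) enumerates that set.

```python
def all_interleavings(xs, ys):
    """All traces that arbitrary (process-algebra style) interleaving
    admits for two action sequences. A deterministic strategy, by
    contrast, would pick exactly one element of this set."""
    if not xs:
        return [list(ys)]
    if not ys:
        return [list(xs)]
    # either thread may contribute the next action
    return ([[xs[0]] + t for t in all_interleavings(xs[1:], ys)] +
            [[ys[0]] + t for t in all_interleavings(xs, ys[1:])])

traces = all_interleavings(["a1", "a2"], ["b1"])
print(len(traces))  # → 3 possible traces under arbitrary interleaving
```

For sequences of lengths m and n the number of traces is the binomial coefficient C(m+n, m), which grows quickly; this combinatorial blow-up is one reason fixing a single strategy can simplify analysis.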
TA permits an understanding of practical phenomena, such as a deadlock in Java, much more convincingly than process algebra based formalizations can. The concept of an arbitrary strategy is not a basic one for TA, however. In fact, TA gets no further than listing strategies in what seems to be an open-ended multitude of options. I claim that TA captures programmer intuitions when writing multi-threaded programs quite directly. Multi-threaded programming is still a difficult topic. This is illustrated by the time it has taken the designers of Java and similar program notations to obtain a stable treatment of the related family of features. For that reason it seems entirely justified to design a theory like TA, which concentrates on this specific form of parallelism, in spite of the fact that more general and sophisticated theories have been designed already. TA is fraught with asymmetries, just like programmer intuitions. When things get complex these asymmetries become so cumbersome that a translation to process algebra is advantageous. Of course such translations are possible because process algebra is the more general and fundamental theory. But the complexity of these translations is such that the TA level of abstraction is definitely helpful as an intermediate level of description. It is comparable to translating floating point arithmetic to boolean functions: entirely feasible in principle, but not so useful for a human reader (in most cases).
Both in grid computing and on micro-chips, multi-threading is the dominant intuition of parallel execution or operation. This is an additional justification for a focus on a theory designed specifically for the formalization of multi-threading. At the same time, TA seems to be a promising approach for studying security related phenomena involving concurrent systems. In particular, it is clearly helpful in further analyzing the theory of computer viruses founded by Fred Cohen in 1984.