Scalar, vector, and matrix clocks are extensively used in distributed computations. This paper asks the question, "How does the clock abstraction generalize?" and casts the question in terms of concurrent knowledge. To this end, logical clocks of arbitrary dimensions are defined. The paper examines the relationship between the size of logical clocks and the levels of concurrent knowledge that can be attained in asynchronous distributed message-passing systems. For a global monotonic fact \phi about the system state, the following is shown: (E^C)^k(\phi) can be attained by the processes on a continuous basis, and without any control messages, iff a logical clock system of dimension k+1 is used. This establishes a lower bound of \Omega(n^{k+1}) integers on the space complexity, where n is the number of processes. The paper then presents an algorithm for the following problem: "Given a (k+1)-dimensional timestamp, what is the maximum past computation about which (E^C)^k(\phi) knowledge, for some monotonic fact \phi, can be declared in the current state?" The time complexity of this algorithm is O(k \cdot (n^2 + n)).
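As a concrete anchor for the dimension hierarchy, the following is a minimal sketch (not taken from the paper) of the base case k = 1, the classic vector clock; a dimension-(k+1) clock nests this structure, e.g. a matrix clock stores one such vector per process. The function names and two-process scenario are illustrative assumptions.

```python
# Illustrative sketch: a dimension-1 logical clock (vector clock).
# Higher-dimensional clocks generalize this: a matrix clock keeps
# one such vector per process, i.e. n^2 integers for n processes.

def local_event(clock, i):
    """Process i ticks its own component on an internal or send event."""
    clock = list(clock)
    clock[i] += 1
    return clock

def receive(clock, msg_clock, i):
    """On receipt, take the componentwise max, then tick own component."""
    merged = [max(a, b) for a, b in zip(clock, msg_clock)]
    merged[i] += 1
    return merged

# Two processes (n = 2):
p0 = [0, 0]
p1 = [0, 0]
p0 = local_event(p0, 0)   # p0 performs a send event: [1, 0]
p1 = receive(p1, p0, 1)   # p1 receives p0's message: [1, 1]
```

The componentwise-max merge is what lets a timestamp encode the causal past; the paper's question is how many levels of such nesting are needed to sustain k levels of concurrent knowledge.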