Bit is short for Binary Digit. Each binary digit, or bit, has one of exactly two states, or values. Trit is short for Ternary Digit. Each trit has one of exactly three states, or values.
A base-B digit has exactly B states, or values. An unsigned (nonnegative) integer value consisting of N digits d1, d2, ..., dN of base B has value v,

$$v = \sum_{i=1}^{N} d_i B^{i-1}$$
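That positional-value formula is easy to verify directly; a minimal sketch in Python (the function name is mine):

```python
def digits_to_value(digits, base):
    """Convert digits d1..dN (least significant first) to the
    unsigned integer value sum(d_i * base**(i-1))."""
    value = 0
    for i, d in enumerate(digits):  # i = 0 here corresponds to i - 1 in the formula
        value += d * base ** i
    return value

# Binary 1101, least significant digit first:
print(digits_to_value([1, 0, 1, 1], 2))  # 13
# Ternary 120, least significant digit first:
print(digits_to_value([0, 2, 1], 3))     # 0*1 + 2*3 + 1*9 = 15
```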
Two is the smallest number of states you can use to represent unsigned integer values like this. If you had only one possible state, the only value a sequence of digits could represent would be its length (i.e., the number of one-state elements you have); pretty useless.
Two is also the number of states you need to implement Boolean logic and Boolean algebra. (Boolean algebra is the field of mathematics that, among other things, tells you how to transform one set of operations into another, often much easier and simpler one, when using Boolean or binary logic.)
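A classic example of such a transformation is De Morgan's law, which turns a NOT of an AND into an OR of NOTs. A quick exhaustive check in Python (the check itself is mine, the law is standard):

```python
# De Morgan: not (a and b) is equivalent to (not a) or (not b).
# With only two states, we can simply try every input combination.
for a in (False, True):
    for b in (False, True):
        assert (not (a and b)) == ((not a) or (not b))
print("De Morgan's law holds for all inputs")
```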
In digital logic, there are several ways each binary digit can be represented. The most common is by voltage, with one voltage range representing zero and another representing one. Voltages outside those two ranges are undefined, i.e. they may be interpreted as either zero or one.
(Another extremely common method of representing the states is a voltage or current pulse, with the duration of the pulse describing the state: a short pulse for one state, and a long pulse for the other. This is used to transfer a sequence or stream of digit states rather than in computational logic, though, because describing each state takes an agreed-upon amount of time.)
It turns out that if you use voltage levels to describe the two states, you can use transistors to implement all the operators needed in Boolean logic. These constructs are called logic gates.
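For example, the NAND gate alone is functionally complete: NOT, AND, and OR can all be built out of it, which is one reason it is so common in hardware. A minimal sketch in Python (the helper names are mine):

```python
def nand(a, b):
    """The NAND gate: false only when both inputs are true."""
    return not (a and b)

# All other Boolean operators built from NAND alone:
def not_(a):    return nand(a, a)
def and_(a, b): return nand(nand(a, b), nand(a, b))
def or_(a, b):  return nand(nand(a, a), nand(b, b))

# Verify against Python's built-in operators over every input.
for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
print("NOT, AND, OR all built from NAND alone")
```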
It is interesting to note that Leibniz realized as early as 1705 that using binary, one can combine logic and arithmetic (although the logic side took another 150 years to be formalized into Boolean logic by Boole in 1854). If you look above in this message, you'll see how multiple binary digits can be organized to describe nonnegative numbers (signed numbers and rational numbers are a simple extension of that), and digital logic gates suffice to implement all mathematical operations on them; that's the beauty here.
Setun, mentioned by Canis Dirus Leidy, was a ternary computer originally developed in 1958 at Moscow University. It used balanced ternary for the numerical operations, and ternary logic instead of Boolean logic. (A ternary-emulating programming language, DSSP, with Forth-like syntax, still exists today, but I know nothing about it.)
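Balanced ternary uses the digit values -1, 0, +1 instead of 0, 1, 2, which makes negation trivial: just flip the sign of every digit. A conversion sketch in Python (the function names are mine):

```python
def to_balanced_ternary(n):
    """Return the balanced-ternary digits of integer n, least
    significant first, using digit values -1, 0, +1."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:          # represent 2 as digit -1 plus a carry of +1
            r = -1
        n = (n - r) // 3
        digits.append(r)
    return digits

def from_balanced_ternary(digits):
    return sum(d * 3 ** i for i, d in enumerate(digits))

print(to_balanced_ternary(5))        # [-1, -1, 1], i.e. -1 - 3 + 9 = 5
# Negation is just a digit-wise sign flip:
neg = [-d for d in to_balanced_ternary(5)]
print(from_balanced_ternary(neg))    # -5
```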
In some ways, ternary is better than binary. The main downside of ternary logic compared to binary/Boolean logic is added complexity. In binary/Boolean logic, there are only 4 unary operators (functions that operate on a single binary value) and 16 distinct binary operators (functions that operate on two binary values); in ternary logic, there are 27 unary operators and 19683 distinct binary operators. We know the logic gates needed for binary/Boolean logic well, but a large "space" of the possible ternary logic gates has not been studied much yet (and a lot of existing research has not been translated from Russian). It is quite possible there is some undiscovered beauty there that allows a significant leap forward in terms of engineering and efficiency, but we really don't know yet; binary has at least a full century more of mathematical and logical research behind it, and currently, interest in ternary logic is quite low.
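Those operator counts follow directly from counting functions: a k-ary operator over B states maps each of the B**k possible inputs to one of B outputs, giving B**(B**k) distinct operators. A quick check in Python (the function name is mine):

```python
def op_count(states, arity):
    """Number of distinct operators: one output choice (out of
    `states`) for each of the states**arity possible inputs."""
    return states ** (states ** arity)

print(op_count(2, 1))  # 4     unary Boolean operators
print(op_count(2, 2))  # 16    binary Boolean operators
print(op_count(3, 1))  # 27    unary ternary operators
print(op_count(3, 2))  # 19683 binary ternary operators
```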
Personally, I haven't used ternary (except when describing some combinatoric solutions with three-valued components), but I suspect the inherent complexity of ternary logic has made it the "quaternions" of logic: disliked or rejected for its perceived complexity. (If you have ever used Euler angles, or Tait-Bryan or Cardan angles, in three dimensions, you've almost certainly done work that you could have avoided by using unit quaternions, also known as versors.)