1.
Your second alternative is the outer product, which gives an NxN matrix rather than a scalar (the dot product). So that's simply out. Not that outer products are useless; it just doesn't represent the quantity you're after.
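Written out for 3-element column vectors, the two products are:
\[\vec{a} \cdot \vec{b} = \vec{a}^T \vec{b} = a_1 b_1 + a_2 b_2 + a_3 b_3 \qquad \textrm{(scalar)}\]
\[\vec{a}\,\vec{b}^T = \begin{bmatrix} a_1 b_1 & a_1 b_2 & a_1 b_3 \\ a_2 b_1 & a_2 b_2 & a_2 b_3 \\ a_3 b_1 & a_3 b_2 & a_3 b_3 \end{bmatrix} \qquad \textrm{(3x3 matrix)}\]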
The last one is possible, but differs from convention. Namely, we conjugate the current when taking the product, to get the correctly phased result (leading/lagging reactive power). If we take opposite phase angles for leading and lagging, we can swap the conjugates, but then it looks weird because no one else does it that way. (There may be a more fundamental reason as well, I forget?)
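For reference, the usual convention with RMS phasors is:
\[S = V I^* = P + jQ\]
so a lagging (inductive) current gives positive Q and a leading (capacitive) current gives negative Q; conjugating the voltage instead just flips the sign of Q.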
2. Again, order matters; you can't multiply, say, a 3x1 column vector by a 3x3 matrix (vector on the left), because the vector's width (one column) doesn't match the matrix's height (three rows). But you can multiply a 3x3 matrix by a 3x1 column vector (matrix on the left), because the matrix's width matches the vector's height.
Write out what the matrix-vector product actually means -- it's a system of simultaneous equations.
\[\mathbf{A} \vec{x} = \vec{b}\]
\[\begin{array}{rcl}
a_{11} x_1 + a_{12} x_2 + a_{13} x_3 & = & b_1 \\
a_{21} x_1 + a_{22} x_2 + a_{23} x_3 & = & b_2 \\
a_{31} x_1 + a_{32} x_2 + a_{33} x_3 & = & b_3
\end{array}\]
Multiplication has consistent rules, so if you put A and x in this order, you get this; the other way around it's nonsense, because you'd have one column of x components to spread across three rows of A, which doesn't make any sense. However, you can transpose x, put it on the left as a row vector, and get the same components out -- provided A gets transposed as well (it's the latter of the two options).
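Writing that out: transposing the product transposes both factors and reverses their order,
\[(\mathbf{A}\vec{x})^T = \vec{x}^T \mathbf{A}^T = \vec{b}^T\]
so the row vector \$\vec{x}^T\$ has to multiply \$\mathbf{A}^T\$ (not \$\mathbf{A}\$) to give the same components back, now as a row.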
Which is why \$a b^t\$ and \$a^t b\$ are different, and \$b a^t\$ and \$b^t a\$ in general as well -- matrix* multiplication is not always commutative.
*Well, these are vectors, but a vector is just an Nx1 (or 1xN) matrix, and a scalar is a 1x1 matrix. It's why they call it linear algebra: it's all the normal algebraic rules and symbols you know and love, just... bigger.

And with that size you get a few different rules, but it's mostly alright.
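If a concrete example of that non-commutativity helps (2x2, values picked arbitrarily):
\[\begin{bmatrix}1 & 2\\ 0 & 1\end{bmatrix}\begin{bmatrix}1 & 0\\ 3 & 1\end{bmatrix} = \begin{bmatrix}7 & 2\\ 3 & 1\end{bmatrix}, \qquad \begin{bmatrix}1 & 0\\ 3 & 1\end{bmatrix}\begin{bmatrix}1 & 2\\ 0 & 1\end{bmatrix} = \begin{bmatrix}1 & 2\\ 3 & 7\end{bmatrix}\]
Same two matrices, different products, depending on the order.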
Anyway -- for network purposes, transposing the matrix swaps which port is the excitation and which is the response (which is why a reciprocal network has a symmetric matrix), while inverting it swaps the roles of the port variables: inverting the Z matrix gives the Y (admittance) matrix, and for a scattering matrix the variables are incident and reflected waves. In this case we're talking about the Z matrix, which relates port voltages to port currents.
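For a 2-port, written out (standard definitions, not specific to any particular network):
\[\begin{bmatrix} V_1 \\ V_2 \end{bmatrix} = \begin{bmatrix} Z_{11} & Z_{12} \\ Z_{21} & Z_{22} \end{bmatrix} \begin{bmatrix} I_1 \\ I_2 \end{bmatrix}\]
Inverting gives the admittance form, \$\vec{I} = \mathbf{Y}\vec{V}\$ with \$\mathbf{Y} = \mathbf{Z}^{-1}\$; and a reciprocal network has \$Z_{12} = Z_{21}\$, i.e. \$\mathbf{Z} = \mathbf{Z}^T\$.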
3.
Obviously, none. You've defined the other currents to be zero. Whatever current flows, flows in one terminal of the driven port and back out its other terminal.
Remember, a port is an ideal isolated connection (two terminals), where current flows in and out in perfect balance. (This doesn't always happen in practice -- real ports have common-mode impedances associated with them. Well, we can just model those as additional ports, so the representation isn't losing any generality -- it just gets annoying and bloated when we're careless about stuff like that...)
The matrix representation is a general form, so there might be impedances to all the remaining ports, or none at all, or everything could be shorted through. You might have voltages at those ports, which is a meaningful question: with arbitrary terminations the answer depends in general on all the elements of Z, but under the open-circuit condition here (all the other currents zero), the port voltages are set simply by the driven port's column of the matrix, and the driven port's own input impedance is the diagonal element of that column.
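In symbols, driving only port \$j\$ and leaving the others open (all other currents zero):
\[Z_{kj} = \left.\frac{V_k}{I_j}\right|_{I_i = 0,\ i \neq j} \qquad \Rightarrow \qquad \vec{V} = I_j \begin{bmatrix} Z_{1j} \\ Z_{2j} \\ \vdots \\ Z_{Nj} \end{bmatrix}\]
so every port voltage comes from the \$j\$-th column of \$\mathbf{Z}\$, and the driven port's input impedance (with the others open) is just \$Z_{jj}\$.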
Tim