I have found that multiplication on my 84+ is faster when the number with fewer digits of precision is on the right. I have tested this with real variables and lists, finance variables, and immediate values on each side of the multiplication.

My timing program (rounds down; times include 3s loop overhead):
startTmr→T:Repeat checkTmr(Ans:End
For(n,1,5ᴇ3
[code]
End
checkTmr(T+1

For example, when N=cos(1) (many digits of precision):
A single evaluation of 2N takes 4.2 ms, while N2 takes 2.8 ms. Results are similar when 2 is replaced with 1, .2, 7ᴇ80, or a variable equal to 2, or when N is replaced with a real variable, π, etc.
N{1,2,3} takes 7.2 ms; {1,2,3}N takes 11.2 ms. The same holds when the list is stored in L₁ rather than written out, and the results are reversed if N has few digits of precision and the list elements have many.
N.12345678901234 and .12345678901234N take 3.0 and 2.8 ms respectively; not much difference here.

When there are an intermediate number of digits (say, five), the timings are in between those above; it is again slightly faster to have the number with more digits of precision on the left.

Using this optimization on a very simple SQUFOF algorithm I had lying around on my calculator resulted in a ~2% overall speedup. Am I correct in concluding that the number of digits of precision is what affects speed, or is there something I'm overlooking?
Hello lirtosiast, and welcome to Cemetech! I do believe that this may be the case, but I would need to do a little bit more research.

From what I understand, the TI-BASIC interpreter works by stepping through the program byte by byte, reading each token. When it encounters a literal like "1.2345", it takes multiple operations to parse and store that value, whereas N can simply be recalled in one step. Nice find, though; I never really thought about that too much!
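As a toy analogy in Python (this is not the actual TI-OS routine; the parsing details here are assumptions for illustration only): re-evaluating a numeric literal means accumulating it digit by digit on every pass, while a variable recall is a single fixed-cost lookup.

```python
# Toy analogy only: not the real TI-OS tokenizer. It illustrates why a
# literal like 1.2345 costs work on every evaluation, while a variable
# recall is one fixed-size lookup.

def parse_literal(tokens):
    """Accumulate a decimal literal one digit token at a time."""
    value, scale, in_fraction = 0.0, 1.0, False
    for t in tokens:
        if t == ".":
            in_fraction = True
        else:
            digit = ord(t) - ord("0")
            if in_fraction:
                scale /= 10
                value += digit * scale
            else:
                value = value * 10 + digit
    return value

variables = {"N": 0.5403023058681398}  # recall is just a lookup

print(parse_literal("1.2345"))  # re-parsed every time it's evaluated
print(variables["N"])           # recalled in one step
```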
A very astute observation, and you are indeed correct. Multiplication does generally take longer the more digits that are in the second operand. To be more specific, the time taken by the multiplication algorithm is roughly linear with respect to the sum of the base-10 digits of the second operand. This ranges from about 500 cycles (1/30,000th of a second on an 84+) for 0 to 50,000 cycles (1/300th of a second on an 84+) for 99999999999999.

Using your example numbers, multiplying by 2 takes about 3,000 cycles and multiplying by cos(1) takes about 25,000 cycles. However, we need to put these times into perspective: the BASIC interpreter has a lot of overhead.
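To make the digit-sum claim concrete, here's a small Python sketch. The cycle counts in the comments are the rough figures quoted above, not measurements of mine, and the 14-digit mantissa is how the 84+ stores reals:

```python
# Digit-sum illustration for the claim above. Cycle figures in the
# comments are the rough numbers quoted in this thread.

def digit_sum(mantissa: str) -> int:
    """Sum of the base-10 digits, ignoring sign and decimal point."""
    return sum(int(c) for c in mantissa if c.isdigit())

print(digit_sum("0"))                  # ~500 cycles quoted
print(digit_sum("2"))                  # ~3,000 cycles quoted
print(digit_sum("0.54030230586814"))   # cos(1) to 14 digits; ~25,000 cycles
print(digit_sum("99999999999999"))     # worst case; ~50,000 cycles quoted
```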

I timed the loop in this very realistic program (and this program with the multiplication operands switched):

Code:
2→M
cos(1→N
For(A,1,100
MN
End

As expected, there was about a (25,000-3,000)*100 = 2.2m cycle difference between the execution times. More importantly, the total times were 11.3m and 9.1m cycles, which means putting the 2 second made the loop take about 80% as long as with cos(1) second. So in extremely multiplication-heavy sections of code, employing this operand-ordering tactic might be useful. But ultimately, you're getting killed by the interpreter overhead anyway.
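A quick Python sanity check of that arithmetic, using only the figures quoted above (the 15 MHz clock is implied by the earlier "500 cycles = 1/30,000th of a second" figure):

```python
# Sanity-checking the arithmetic in the post above, using only the
# quoted cycle counts and loop totals.

CLOCK_HZ = 15_000_000  # implied by "500 cycles = 1/30,000 s" on an 84+

per_mult_diff = 25_000 - 3_000   # cycles saved per multiplication
loop_diff = per_mult_diff * 100  # 100 loop iterations
print(loop_diff)                 # the "2.2m" cycle figure

total_slow, total_fast = 11_300_000, 9_100_000
print(total_slow - total_fast)            # measured difference
print(round(total_fast / total_slow, 1))  # ~0.8, the "about 80%"
print(loop_diff / CLOCK_HZ)               # seconds saved over the loop
```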
Hmm, that is indeed odd... I wonder what the actual root cause of it is. To have 2.2m clock cycles wandering about is pretty odd... Is it storing a variable temporarily or something like that, I wonder?

EDIT: Nevermind... I see you already explained it. Silly me, I forgot multiplication is kind of repeated addition.
It's too bad that there's so much overhead. Even with a 100-element list, it's still ~0.8 ms/element, and those ~12,000 cycles seem to be too much just for copying the next element of the list and storing the result of the multiplication. It's not from storing to Ans either: with L₁ set to 100 repetitions of cos(1), L₁*1 takes 0.9 ms/element, and L₁*1*1*1*1*1*1*1*1*1*1*1 takes 10.1 ms/element while still storing to Ans only once.
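Reading those timings as per-element figures, the per-multiplication cost can be backed out like this (the 15 MHz clock comes from earlier in the thread; everything else is the quoted data):

```python
# Backing out the per-element, per-multiplication cost from the two
# timings above: 0.9 ms/element for one multiplication and
# 10.1 ms/element for twelve chained ones (11 extra *1's).

CLOCK_HZ = 15_000_000  # TI-84+ clock, from earlier in the thread

one_mult_ms = 0.9
twelve_mult_ms = 10.1
extra_per_mult_ms = (twelve_mult_ms - one_mult_ms) / 11
print(extra_per_mult_ms)                    # ms per extra multiplication
print(extra_per_mult_ms * CLOCK_HZ / 1000)  # cycles per multiplication
```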

When multiplying by 0, order doesn't seem to affect speed—it's fast either way.

X³ seems to be no faster than X^3, and much slower than X²X, for complex numbers—they take 38, 38, and 19 ms respectively with X=e^i.

A/B is still faster than both B⁻¹A and AB⁻¹ by a significant margin, regardless of how many digits are in A or B.
Relevant here is the algorithm prod( uses: it starts with a 1, then goes from left to right, multiplying the running product by each element of the list. The evidence is these timings for prod(L₁ (1.74 ms has been subtracted from each timing; I'm using OS 2.55MP on jsTIfied):

Code:
{1                     1.38
{.999999999999         3.84
{.999999999999,1       4.58
{1,.999999999999       4.59

I would guess that sum( works the same way, except starting with a 0 and adding.
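A Python sketch of that inferred behavior. The key assumption (inferred from the timings above, not confirmed) is that each list element appears as the second operand, which is why {.999999999999,1} and {1,.999999999999} time the same:

```python
# Sketch of the inferred prod( algorithm: start from 1 and fold the
# list in from left to right, so each element is the *second* operand
# of a multiplication. Under the digit-sum cost model, the total cost
# then depends only on each element's own digits, matching the
# near-identical timings for the two orderings above.

def ti_prod(lst):
    acc = 1
    for x in lst:
        acc = acc * x  # acc is the first operand, x the second
    return acc

def ti_sum(lst):
    # Guessed analogue for sum(: start from 0 and add left to right.
    acc = 0
    for x in lst:
        acc = acc + x
    return acc

print(ti_prod([1, 2, 3]))
print(ti_sum([1, 2, 3]))
```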
