It's a sequence number for packets that must be reassembled in a specific order at the receiver, like pieces of a webpage for Gossamer or pieces of a file for Sandpaper. For the still-large string, I've been tracing through the DCSBLibs code trying to figure out what has gone wrong. Please try inserting the following code before every Cn2Get call:
Code: det(20,"AF322399C9")
Edit: It looks like we might need another workaround for how Geekboy's code works. If you still get a wacky result with that (which you will), please also try modifying your server to add the little-endian two-byte size of the string (which should be 0xNN, 0x00) as a prefix to the string data.
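To illustrate the suggested server-side workaround, here is a minimal sketch in Python; the variable names are mine, and I'm assuming the tokenized bytes of the test string "Pause X+3" for demonstration:

```python
import struct

# Tokenized "Pause X+3" (Pause = $D8, X = $58, + = $70, 3 = $33)
tokenized = b"\xd8\x58\x70\x33"

# Prepend the little-endian two-byte size; for short strings this is
# 0xNN, 0x00 as described above.
framed = struct.pack("<H", len(tokenized)) + tokenized
print(framed.hex())  # prints "0400d8587033"
```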
Ok, I have the actual issue now. Here is a proper description of the frame format (at least for strings). My apologies again for not documenting it.
Below is the entire Cn2 frame; then I will pull out the actual Cn2Basic part.
This is using your test string "Pause X+3"
Quote:
00 00 00 00 00 07 80 04 04 00 d8 58 70 33
That's the entire CALCnet frame, including the sender ID and such. The last seven bytes (04 04 00 d8 58 70 33) are the important part. We already know what the other fields are, so let's look at that data.
There are two parts to it.
Quote:
04 04 00 d8 58 70 33
The identifier and the data.
The identifier (the leading 04) is obvious; we figured this out earlier this week.
Now let's look at the data. This is where the confusion lies.
The data is the literal data that is pointed to by the VAT.
Meaning:
Quote:
04 00 d8 58 70 33
The first two bytes (04 00) are the size of the tokenized string data (little-endian).
The remaining bytes (d8 58 70 33) are the actual tokenized string data.
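Parsing the quoted data bytes can be sketched like this (variable names are my own):

```python
import struct

# The data portion of the frame for the test string "Pause X+3":
# two-byte little-endian size, then the tokenized TI-BASIC string.
data = bytes([0x04, 0x00, 0xD8, 0x58, 0x70, 0x33])

length = struct.unpack_from("<H", data, 0)[0]  # little-endian size prefix
tokens = data[2:2 + length]                    # the tokenized string itself

print(length)        # prints 4
print(tokens.hex())  # prints "d8587033"
```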
That being said, the reason your string is 55307 bytes is because $d800 (the first token plus the zero the old code included), plus the VAT entry, comes to that many bytes!
After Kerm's fix it becomes 22755 bytes: that zero is removed, as is proper, so the string length is now read as $d858, and when you add the VAT data (as that menu does) it adds up to 22755 bytes.
What that means is you just need to prepend your string data with the length in bytes of the tokenized string.
TL;DR, frame format:
ID + length of tokenized string (little-endian) + tokenized string
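The TL;DR format above can be sketched as a small builder function; the function and parameter names are mine, not from the Cn2 BASIC libs:

```python
import struct

def build_string_payload(var_id: int, tokenized: bytes) -> bytes:
    """ID + little-endian two-byte length + tokenized string data."""
    return bytes([var_id]) + struct.pack("<H", len(tokenized)) + tokenized

# Reproduces the quoted bytes for the test string "Pause X+3":
payload = build_string_payload(0x04, bytes([0xD8, 0x58, 0x70, 0x33]))
print(payload.hex())  # prints "040400d8587033"
```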
[UPDATE #12 25/8/14] Tokenizer Is Complete! + OOP Overhaul
From reading previous posts in the thread, you might see that there has been an issue with tokenization that proved something of a stumbling block. I'm pleased to say that the tokenizer now works, and indeed did in the first place! With help from Kerm and geekboy, we found that the error was in the Cn frame format and managed to sort out a workaround.
After sorting the tokenizer out, I refactored the code in titokens (which has now been renamed to gcntools, since there are a fair few packages out there that are called titokens), using object oriented code. This should also make it easier for other programmers to use in their code (since I plan to distribute this soon).
I'll post some screenshots soon.
Many thanks to Kerm and geekboy for their invaluable help!
I don't suppose you'd consider putting your code for gcntools up on GitHub or Bitbucket so that we can collaborate on a little bit of code review? I'm glad you'll be distributing it, and I hope it'll find its way into the Cemetech Archives. In other news, I'm thrilled that WAti is rolling along again at last, and I can't wait to see how the core program progresses.
KermMartian wrote:
I don't suppose you'd consider putting your code for gcntools up on GitHub or Bitbucket so that we can collaborate on a little bit of code review?
Sure, I'll set up a private repo on BitBucket. When you've reviewed it and I've chopped and changed stuff, I'll make it public. I do think, however, that for greater exposure it will probably be better to use GitHub.
KermMartian wrote:
I'm glad you'll be distributing it, and I hope it'll find its way into the Cemetech Archives.
It makes sense really. My hope is that if we can release a nice set of easy to use tools (Cn 2 BASIC libs, gcnskel and gcntools) lots of programmers can get involved and we can have a whole ecosystem of applications that utilise globalCALCnet in some way.
BUMP!
This isn't really worthy of a development update, since it's not really polished, but today, I managed to get the calculator to communicate with W|A through the bridge, using the API.
I now need to ensure that the data W|A spits out is consistent and clean, and sort out the calc-side code.
That's very exciting news. Were you also able to run the communication through the tokenizer and detokenizer? I assume you at least have the detokenizer in the loop if you were able to submit successful requests to Wolfram|Alpha. I think we touched briefly on this before, but do you have any plans to try to support any pretty-printed or image output from W|A?
Yep. I have indeed been able to run the communication through both the tokenizer and detokenizer.
As for images, I'm not sure yet, but I'll definitely consider it. It would certainly be doable and I can pinch a lot of code from TImage in order to support it.
You might also want to consider looking into some of the xLIB and PicArc functions for on-calculator display, since your users are locked into having Doors CS anyway by the CALCnet routines. Good work as always, and I look forward to your next update.
I will definitely consider that. I'm using DCSB Libs' GUI functions to create the interface, anyway.
Oh, then you already get free image-display functions there. Perfect.
Bumpity bump. Any progress on this project? Combined with KermM's Spark bridge, this could be really awesome and portable!
Ivoah wrote:
Bumpity bump. Any progress on this project? Combined with KermM's Spark bridge, this could be really awesome and portable!
I second this emotion. I pinged EGeek about the project when he visited SAX recently, but it sounded like he might be a bit busy with other projects. I'm certainly willing to give this a go myself if he decides he no longer wants to pursue this project.
Ivoah wrote:
Bumpity bump. Any progress on this project? Combined with KermM's Spark bridge, this could be really awesome and portable!
I'd love to say yes to that, but I'm afraid the answer is no. It's been a busy year with school work and it's going to be a busy few months, what with my final exams coming soon.
As it stands, I have a working tokenizer and detokenizer. Some months back, I was able to send data from calc to computer and vice versa. I imagine it would be fairly trivial to slot the W|A communication code in.
Alas, I haven't the time to work on WAti at the moment, but tl;dr, I don't think there's too much more to do. I would definitely like to see this project through, though.
Thanks for your interest, though, Ivoah.
Perhaps you should consider open-sourcing the project? That way others can continue your work?
elfprince13 wrote:
Perhaps you should consider open-sourcing the project? That way others can continue your work?
I second that.
*largenecrobump*
Has any progress been made on this? This is something that I would love to see come out.