Use of Java, JavaScript or HTML5

I have downloaded the Sony SDK and a sample Java app of some 200 lines that calls a “live stream slicer” utility of 300 lines, which takes the embedded JPEGs and displays them seamlessly in real time. I am not familiar with Java, though I can roughly make sense of it. I believe Java can be converted to JavaScript or HTML5, though how successfully I don’t know. If I pursue this, what is the best way to achieve it in AI2?


Is this what you want to use, Dave: [Overview - Camera Remote API beta SDK - Sony Developer …](https://developer.sony.com/develop/cameras/)?

There is a note that “You can now call the APIs by HTTP POST with JSON”

This older forum post suggests you might be able to do something using a Web component and a Clock for some access (though not necessarily with the Sony SDK) via an HTTP POST:
https://groups.google.com/forum/#!msg/mitappinventortest/HGTcI97K-9c/LA5H3S1jAgAJ;context-place=categories/mitappinventortest
It might be a starting point.

Regarding converting Java to JavaScript:

https://www.google.com/search?q=convert+java+code+to+javascript&oq=convert+java+code+to+javascript&aqs=chrome..69i57.9967j0j7&sourceid=chrome&ie=UTF-8

Best way? Unless someone provides specific advice: experiment. What will happen depends on what needs to be done, doesn’t it? This might not be possible for what you want to do. To get an answer, someone savvy would need to see the Java code.

Achieve this with AI2? These links might help:

[WebView Javascript Processor for App Inventor](https://appinventor.mit.edu/explore/ai2/webview-javascript) might be the path once you convert to JavaScript, or perhaps [Two New Tutorials on Using Javascript in App Inventor](https://appinventor.mit.edu/explore/blogs/karen/2017/10/two.html).

Good luck.

Cheers Steve and thanks for replying,

You are correct that it will work with a WebViewer making API calls using HTTP POST with JSON; that is what I do to take a photo and zoom, and even to start and stop the live stream, which returns a URL for the stream. But pointing the WebViewer at that URL just attempts an infinite download.
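For reference, the API calls are plain JSON-RPC over HTTP POST. In JavaScript terms, a minimal sketch of what I do with the Web component (the camera address below is just an illustration, yours may differ):

```javascript
// Sketch: one Camera Remote API call over HTTP POST with JSON.
// The endpoint address is an assumption -- the camera advertises its
// real one via device discovery (mine happens to be 192.168.122.1:8080).
fetch('http://192.168.122.1:8080/sony/camera', {
  method: 'POST',
  body: JSON.stringify({
    method: 'startLiveview', // likewise 'actTakePicture', 'actZoom', ...
    params: [],
    id: 1,
    version: '1.0'
  })
})
  .then(function (res) { return res.json(); })
  .then(function (json) {
    // For startLiveview, result[0] is the URL of the live stream.
    console.log(json.result[0]);
  });
```

It is that returned stream URL that the WebViewer then tries to download forever.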

I was hoping that someone with experience of using external code might be able to tell me whether AI2 works best with Java, JavaScript or HTML5, or at all. The SDK can be downloaded from https://developer.sony.com/develop/cameras/ but I wasn’t expecting anyone to go to that trouble.

The articles mention Java for advanced MIT developers, and I am very much a beginner. They go on to give JavaScript examples, also new to me, so I will take a look at those. Thanks for your time.

If there’s a Java library, you might want to cross-post to the extensions category to see if anyone would be willing to write an extension for you. Otherwise, you can use JavaScript and HTML5 in a WebViewer, assuming you have a newer version of Android (5.1+) and the latest version of Google Chrome installed on your device.

Thank you very much, that is exactly the advice I was looking for. I would like to learn to do it myself but didn’t want to waste my efforts going down the wrong path. I will try a converter on the Java code and see what I get. My tablet is on Android 8.1, so that shouldn’t be an issue.
Many many thanks


Have I overlooked the obvious: could I process the data coming into the WebViewer and display it using code in blocks?

Yes, that may be a possibility, depending on how sophisticated the camera is. I’m not familiar with this model, but some people have had success with simple Arduino-based solutions using the WebViewer component.

https://groups.google.com/forum/#!searchin/mitappinventortest/camera$20webviewer|sort:date/mitappinventortest/CiN6KfXRRz8/AliXve-PAwAJ
https://groups.google.com/forum/#!searchin/mitappinventortest/camera$20webviewer|sort:date/mitappinventortest/9Tih_KHmQ9Q/a5M6gTS6EAAJ
https://groups.google.com/forum/#!searchin/mitappinventortest/camera$20webviewer|sort:date/mitappinventortest/Xdo7o94hsno/TWlK4sFhBwAJ
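If you go that route, the standard bridge between JavaScript running in the WebViewer and your blocks is the WebViewString (it is what the tutorials linked earlier use). A minimal sketch:

```javascript
// Inside a page loaded by the WebViewer, App Inventor injects a
// window.AppInventor object for passing strings to and from blocks.
var fromBlocks = window.AppInventor.getWebViewString(); // set via WebViewer.WebViewString
// ... process the incoming data here ...
window.AppInventor.setWebViewString('processed: ' + fromBlocks);
```

On the blocks side you read WebViewer.WebViewString back, e.g. polled with a Clock timer as suggested earlier in this thread.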


I have followed up all of the options and researched more. The second link looked promising and about as simple as it gets, using `img src="http://172.217.28.1:81/stream"` in an HTML doc, but unfortunately it didn’t display an image, just the broken-image icon, so I think the Sony data format cannot be decoded natively.

I read that .java can be used in extensions, but there isn’t one and it’s of limited use, so that isn’t looking likely. I think my options are to learn JavaScript, or to stick to running my AI2 app for controlling the motors and a third-party app for the camera.

Thanks for your input, and I have learnt on the way; every day’s a school day :slight_smile:

It might also be the case that the stream is a video format rather than an image format, in which case a `<video>` tag would be more appropriate than `<img>`.
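For example (a sketch; the URL is a placeholder for whatever startLiveview returns on your camera):

```html
<!-- Sketch: try the stream URL both ways; which one works depends on
     whether the camera serves something the WebView can decode natively. -->
<img src="http://192.168.122.1:8080/liveview/liveviewstream">
<video src="http://192.168.122.1:8080/liveview/liveviewstream" autoplay></video>
```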

I tried both, but `<img>` is what worked for the person in the link you referenced.
I have located a JavaScript project on GitHub for time-lapse recording using the Sony API which includes the live view streaming, so once I work out how to access it I may be in luck.
You’ve been a great help.

Okay, so I have read all the suggestions and articles, researched the whole of the internet :joy:, and decided that using HTML with JavaScript is my best option. I have found a JavaScript project on GitHub that uses the Sony Camera Remote API and has some of the functionality I need.

My question is: can the JavaScript in the HTML file use an extensive JavaScript library included as assets?

There is a collection on this board.
Use the board search for ‘FAQ javascript-stunts’

yes, however the project limit is 10 MB...
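If the library is uploaded as an asset alongside your HTML file, a relative reference should resolve in both the compiled app and the companion (a sketch; `library.js` is a placeholder name):

```html
<!-- Sketch: page.html and library.js both uploaded as project assets. -->
<script src="library.js"></script>
<script>
  // the library's globals are now available to your own JavaScript
</script>
```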
Taifun


Many thanks, I thought you might know; I have seen many of your articles and comments. I am using your WiFi extension.


Many thanks for your response. Yes, I had seen that, but hadn’t noticed any with libraries. I have the answer now.

I feel frustrated asking for more help, but I have spent the last couple of days trying to work out how to debug JavaScript and to decipher the JavaScript app that I found on GitHub. I have learnt a bit, but I’m not familiar with JavaScript/web programming, so I’m struggling. The project has a large library behind it which I don’t need, and it also doesn’t have some of the functionality I want, which I have managed to achieve with blocks in AI2 that use JSON HTTP POSTs. The only thing I can’t achieve is ‘slicing’ the video padded-JPEG stream. I can start the livestream and set the WebViewer URL to receive the input, but it can’t decode it. The GitHub app does it all in functions and takes the URL, and the calling example app uses `http.listen(3000, function () { console.log('listening on *:3000'); });`
I can’t piece together this part of it. Here is the code that does the actual decoding of the stream. Can you tell me how I could use the core of this in the WebViewer or Web components, I assume as JavaScript in an HTML doc, please?
I have already done the equivalent of `this.call('startLiveview', null, function (err, output)` with blocks:

  // From the GitHub project. Node.js context: `http`, `url` and `Buffer`
  // are required/available at module level. It opens the liveview URL and
  // slices the byte stream into individual JPEG frames.
  SonyCamera.prototype.startViewfinder = function (req, res) {
    var self = this;
    this.call('startLiveview', null, function (err, output) {
      var liveviewUrl = url.parse(output[0]); // stream URL returned by the camera
      console.log(liveviewUrl);

      // Wire format per frame: an 8-byte common header, a 128-byte payload
      // header (3-byte JPEG size at offset 4, 1-byte padding size at
      // offset 7), the JPEG data itself, then padding.
      var COMMON_HEADER_SIZE = 8;
      var PAYLOAD_HEADER_SIZE = 128;
      var JPEG_SIZE_POSITION = 4;
      var PADDING_SIZE_POSITION = 7;

      var jpegSize = 0;    // bytes of the current JPEG still expected
      var paddingSize = 0;
      var bufferIndex = 0; // write position within imageBuffer

      var liveviewReq = http.request(liveviewUrl, function (liveviewRes) {
        var imageBuffer;

        // Buffer.alloc is missing on old Node versions, hence the fallback.
        var buffer = Buffer.alloc ? Buffer.alloc(0) : new Buffer(0);

        liveviewRes.on('data', function (chunk) {
          if (jpegSize === 0) {
            // Between frames: accumulate until both headers have arrived.
            buffer = Buffer.concat([buffer, chunk]);

            if (buffer.length >= (COMMON_HEADER_SIZE + PAYLOAD_HEADER_SIZE)) {
              // The JPEG size is a 3-byte big-endian integer.
              jpegSize =
                buffer.readUInt8(COMMON_HEADER_SIZE + JPEG_SIZE_POSITION) * 65536 +
                buffer.readUInt16BE(COMMON_HEADER_SIZE + JPEG_SIZE_POSITION + 1);

              imageBuffer = Buffer.alloc ? Buffer.alloc(jpegSize) : new Buffer(jpegSize);

              paddingSize = buffer.readUInt8(COMMON_HEADER_SIZE + PADDING_SIZE_POSITION);

              // Drop the headers; anything left is the start of the JPEG.
              buffer = buffer.slice(COMMON_HEADER_SIZE + PAYLOAD_HEADER_SIZE);
              if (buffer.length > 0) {
                buffer.copy(imageBuffer, bufferIndex, 0, buffer.length);
                bufferIndex += buffer.length;
              }
            }
          } else {
            // Mid-frame: keep copying until the expected size is reached.
            chunk.copy(imageBuffer, bufferIndex, 0, chunk.length);
            bufferIndex += chunk.length;

            if (chunk.length < jpegSize) {
              jpegSize -= chunk.length; // more of this frame still to come
            } else {
              // Frame complete: emit the JPEG, skip the padding bytes, and
              // reset for the next frame.
              self.emit('liveviewJpeg', imageBuffer);
              buffer = chunk.slice(jpegSize + paddingSize);
              jpegSize = 0;
              bufferIndex = 0;
            }
          }
        });

        liveviewRes.on('end', function () {
          console.log('End');
        });

        liveviewRes.on('close', function () {
          console.log('Close');
        });
      });

      liveviewReq.on('error', function (e) {
        console.error('Error: ', e);
      });

      liveviewReq.end(); // fire the request
    });
  };
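My rough, untested idea of what a browser-side equivalent might look like, in case it helps anyone steer me (it assumes fetch() streaming works inside the WebViewer and that the camera doesn’t block the request via CORS; the 'viewfinder' element id is my own invention):

```javascript
// Sketch: slice the liveview stream in the browser and paint each JPEG
// into an <img id="viewfinder"> via Blob URLs.
async function startViewfinder(liveviewUrl) {
  const COMMON_HEADER_SIZE = 8;
  const PAYLOAD_HEADER_SIZE = 128;
  const JPEG_SIZE_POSITION = 4;
  const PADDING_SIZE_POSITION = 7;

  const img = document.getElementById('viewfinder');
  const reader = (await fetch(liveviewUrl)).body.getReader();

  let buffer = new Uint8Array(0);
  const append = (a, b) => {
    const out = new Uint8Array(a.length + b.length);
    out.set(a, 0);
    out.set(b, a.length);
    return out;
  };

  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer = append(buffer, value);

    // Slice out as many complete frames as the buffer currently holds.
    while (buffer.length >= COMMON_HEADER_SIZE + PAYLOAD_HEADER_SIZE) {
      const o = COMMON_HEADER_SIZE + JPEG_SIZE_POSITION;
      const jpegSize = buffer[o] * 65536 + buffer[o + 1] * 256 + buffer[o + 2];
      const paddingSize = buffer[COMMON_HEADER_SIZE + PADDING_SIZE_POSITION];
      const frameStart = COMMON_HEADER_SIZE + PAYLOAD_HEADER_SIZE;
      const frameEnd = frameStart + jpegSize + paddingSize;
      if (buffer.length < frameEnd) break; // wait for more data

      const blob = new Blob([buffer.slice(frameStart, frameStart + jpegSize)],
                            { type: 'image/jpeg' });
      const previous = img.src;
      img.src = URL.createObjectURL(blob);
      if (previous) URL.revokeObjectURL(previous); // don't leak blob URLs
      buffer = buffer.slice(frameEnd);
    }
  }
}
```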

I appreciate this may not be straightforward, but then again it may well be for someone with more knowledge.

Thanks for reformatting my post, I hadn’t realised. Any chance of a steer on this? I can’t find a way to debug it, as I can’t run the Companion while the device is using a Wi-Fi Direct connection to the camera, not that I’m sure I could anyway. I’m fairly familiar with VB desktop Windows apps in Visual Studio, which is a quite different environment. The only other way I can think of is to run some of the code in a Windows environment, debug it in Chrome, and then hope it will work in AI2.

what about connecting the device via USB to your desktop computer?
Taifun


Indeed, that would do it, thanks

But the WiFi extension won’t run in the companion app?