@marioepsley commented on @marioepsley's Question, “texture mapping a .ply”

I managed to re-export it out of MeshLab as a .ply after cleaning it, and it seems to work (not sure why), if anyone else wants to play.

@marioepsley posted a new Question, “texture mapping a .ply”
@Kewl commented on @Kewl's Feature Request, “Merge WXYZ Lists”

While we're at it, would it be possible to eventually implement points and lists (and their corresponding nodes) with more than four dimensions?

@useful design commented on @alexmitchellmus's Feature Request, “Data only cable type”

I think port/socket shape variations could add to readability. I'm certain line types keyed to the cable's object type would (event-only, single-object data+event, lists, and one day lambdas!). They've been standard in complex engineering, mapping, and architecture drawings for a very long time because they work.

I was never a fan of the protruding ports, and even though I understand the reasoning behind the decision (making cable-port connections more obvious than in QC), I think it's debatable whether that logic holds water. To me it can still be hard to see where a cable connects when the angle is close to perpendicular, and that could be rectified by making the cable arcs enter ports at a horizontal angle.

If ports sat inside the node edge (open and on the edge) rather than protruding outside it, they could take various shapes: triangular (normal), square (data), circular (lists and lists of lists), or hexagonal, showing four sides of the hexagon (events), or any other permutation. (I don't like the aesthetic of it so far, though.)

Colouring wires on selected nodes (either randomly or by type) could improve readability, as could dotted lines for events and dash-dot for lists. Although if screen notes or expanded sub-graphs (a window-in-window feature that doesn't exist yet) had coloured backgrounds, random wire colours could become problematic for readability.


Questions about Kinect v2 (XBox One) support


Hi, I've started playing with Kinect v2 and have some questions on how it can operate with Vuo.

Let's start with the data I'm able to read from NiMate. On the left side is the "Depth" image, encoded as grayscale (0-255) colour; on the right is the "Encoded Depth" image using the REP118 protocol, with the R and G channels encoding depth (0-65025).

(image: kinect views) I'm able to receive the simpler "Depth" image via Syphon and then displace an object with it. It works fine, but the amount of detail is really low.
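The extra detail in the "Encoded Depth" image comes from packing a larger depth value into two 8-bit channels. As a minimal sketch, assuming a packing of depth = R × 255 + G (chosen here only to match the 0-65025 range mentioned above; NiMate's actual byte order and scale may differ):

```python
# Sketch of unpacking a two-channel "Encoded Depth" pixel into one depth
# value. The formula (depth = R * 255 + G) is an assumption chosen to match
# the 0-65025 range quoted above, not a confirmed NiMate/REP118 spec.

def decode_depth(r, g):
    """Combine the red (coarse) and green (fine) 8-bit channel values
    (each 0-255) into a single higher-precision depth value."""
    return r * 255 + g

# For a whole image, the same formula would be applied per pixel, e.g.
# with NumPy: depth = img[..., 0].astype(int) * 255 + img[..., 1]
```

This gives roughly 255 times the depth resolution of the plain grayscale "Depth" image, which is why the displacement looks so much coarser when only the 0-255 image is used.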


