So what is the VDI driver we are working on? We call it KandyLib, but it was previously called Distant, which is a better name in that it refers to controlling a remote application. Put simply, it handles cases where we need an application that is actually two applications running on different systems, which is increasingly necessary as computing allows for virtual desktop environments that are more performant and sophisticated. In this case we have international banks with strict security measures that don't allow their employees to take work systems off the premises, so the employees use a remote virtual desktop solution, such as Citrix, to maintain access to their work environments.

Our company's business model consists mostly of SaaS (software as a service), which allows companies to integrate comprehensive communication solutions with their specific needs. In the case of our banking client, they were using Citrix XenApp or Citrix Workspace on the user's home, or personal, machine, and a Citrix Workstation Server hosting their work user environment. We needed a way to use WebRTC in their work environment without suffering the poor experience of having the media displayed through their remote desktop session. So what we had to do was create some way of running one application that does all the signalling and all the rendering on the local environment while maintaining coherent state with the application on the work system.

To do this, we had to write a driver that integrates with the Citrix ecosystem. This included a communication protocol, session lifecycle management, and the ability to load applications as processes launched by our driver. Specifically, these were to be web applications so that the client's front-end JavaScript developers could leverage their application development skills and also make use of WebRTC for communication. To solve this we used CEF, the Chromium Embedded Framework, which allows us to create browsers, load applications in Chromium's V8 engine, and make use of the APIs built into Chromium (a minimal bootstrap sketch appears below). We wanted to make a generic solution that can be reapplied to any remote application that needs to run in a browser.

The first solution was a bit of a rush job: a proof of concept to satisfy the needs of our immediate client. They were happy with it and, after first deploying it at one of their banks in Hong Kong, decided to use it throughout their organization, amounting to about 250,000 users. After that successful demonstration of the technology, we started combing through it with performance testing and mapping its limitations, such as the ability to run multiple sessions from one user environment. We also wanted a solution that could work with other virtual desktop technologies, such as the one offered by VMware, which other prospective clients had been asking us about. So we decided to refactor the project as a multiprocess solution for better redundancy, where we have complete control over how many connections there are and have the ability to relaunch components should they go stale or become unreliable, and so on.
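To make the CEF piece concrete, here is a minimal bootstrap sketch of the kind of browser-host entry point described above. This is not our actual code: the client callbacks and window setup are elided, the URL is a placeholder, and CreateBrowser's exact signature varies across CEF versions.

```cpp
// Minimal CEF bootstrap sketch (illustrative; signatures vary by CEF version).
#include "include/cef_app.h"
#include "include/cef_browser.h"
#include "include/cef_client.h"

// A do-nothing client; real code implements display and life-span handlers.
class BrowserClient : public CefClient {
 private:
  IMPLEMENT_REFCOUNTING(BrowserClient);
};

int main(int argc, char* argv[]) {
  CefMainArgs main_args(argc, argv);  // on Windows: CefMainArgs(hInstance)

  // CEF re-launches this binary for its helper processes; hand those off.
  int exit_code = CefExecuteProcess(main_args, nullptr, nullptr);
  if (exit_code >= 0) return exit_code;

  CefSettings settings;
  CefInitialize(main_args, settings, nullptr, nullptr);

  // Load the remote application bundle from its (placeholder) URL.
  CefWindowInfo window_info;        // platform-specific setup elided
  CefBrowserSettings browser_settings;
  CefBrowserHost::CreateBrowser(window_info, new BrowserClient(),
                                "https://example.invalid/remote-app",
                                browser_settings, nullptr, nullptr);

  CefRunMessageLoop();
  CefShutdown();
  return 0;
}
```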
I was not the chief designer of this refactor, though I was the chief developer for its implementation and, at a certain point early in the development phase, our designer left for another position. At that point, I was naturally the individual most suitable to take over as lead designer, partly because I had taken so much interest in the project since its inception, and furthermore because I had written most of the code. From that point on we aimed to maintain our high degree of test-driven development, and this went on until we were releasing for our first platform, Windows, at which point our focus shifted a bit to taking care of the immediate needs of the client, and we began to compromise a little on the degree to which every aspect of the codebase had to have unit tests. Some components were difficult, or at least time-consuming, to mock for little additional verifiability. We plan to go back and ensure that everything is completely unit tested.

This project allows the user to install a Citrix virtual driver module that integrates with Citrix Workspace. Upon creating a Citrix session, Citrix Workspace goes through a list of configured modules and attempts to load them using a set of C functions. This gives us our opportunity to instantiate our communication link and launch a broker. (The broker was initially conceived of as a system-level service that would always be running, but we decided against that: most users don't need this software except when making a call, so it seemed wasteful to use resources the rest of the time. That said, the broker's resource utilization is fairly modest, and because of that we've contemplated simply re-adding it as a service, which is easier to do on some operating systems than others: easy on Linux, a bit annoying on Windows.)

With the broker running, heartbeating begins over the communication link (sketched below), and any break in communication can result in the closure and reopening of applications as necessary. When the communication link receives a request from the client's work environment, or the client application, as I prefer to refer to it, a session is sought and, if nonexistent, created. This leads to a series of operations wherein a browser host process is launched, which initializes CEF, launches actual browser processes that are separate from the browser host, and obviously separate from the driver process (which is combined with the communication link, at least as far as our Citrix solution is concerned), and loads the remote application. Upon success of all of these steps, we send an event back to the client application with confirmation of state and any necessary details, such as the response code from loading the remote application, which is itself a code bundle hosted at some private URL. At this point, the client application can start sending session messages with opaque data for use between the client and remote application. What these data payloads are used for is beyond our scope and we don't have to worry about it; it's just application state.
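The heartbeat logic is simple in principle. Here's a hedged sketch of what the supervision loop amounts to; the names (Link, Ping, RelaunchComponents) are hypothetical stand-ins, not our actual interfaces, and the placeholder bodies mark where the real virtual-channel calls would go.

```cpp
// Hypothetical sketch of the broker's heartbeat/supervision loop.
#include <chrono>
#include <thread>

struct Link {
  // Placeholder: a real implementation does a request-reply over the
  // communication link and reports whether the pong arrived in time.
  bool Ping(std::chrono::milliseconds /*timeout*/) { return true; }
};

struct Supervisor {
  // Placeholder: close and relaunch the browser host and any stale sessions.
  void RelaunchComponents() {}
};

void HeartbeatLoop(Link& link, Supervisor& supervisor) {
  using namespace std::chrono_literals;
  int missed = 0;
  const int kMaxMissed = 3;  // tolerate transient hiccups before acting

  while (true) {
    if (link.Ping(500ms)) {
      missed = 0;
    } else if (++missed >= kMaxMissed) {
      // Communication is considered broken: tear down and relaunch
      // whatever components have gone stale or unreliable.
      supervisor.RelaunchComponents();
      missed = 0;
    }
    std::this_thread::sleep_for(1s);
  }
}
```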
We have created our own example applications that make use of this same pattern; they are mostly our application for testing the platform: creating video calls, performing mid-call operations, multi-party calling, SMS messaging, text messaging, user presence, and user directory services, which I believe covers the bulk of what a communication platform offers.

Another issue is that we need to ensure that video calling is seamless, in the sense that any video window we create is kept in the appropriate position relative to the application, which is itself relative to the location of the Citrix window, which may or may not be full screen. These windows can also be displayed across multiple monitors, and those monitors might have different dimensions and different DPI settings, so all of these things need to be taken into account when tracking and modifying the location of our video window.

Lastly, we need this to work on all platforms, so we have platform-specific window host implementations that work quite differently on each. On Windows, we obviously have the Win32 API to make use of. On Linux, it was actually pretty easy initially because Citrix offered software to create child windows and manage their location relative to Citrix's viewer window; we have since implemented our own using X11 directly, as we wanted one interface for all platforms, and this was the only way to do so elegantly. Lastly, on macOS, we had to implement code written in Objective-C, and we needed a way to bridge from C++ to Objective-C without writing the browser host process, which makes use of the window, in Objective-C, which would have been a painful undertaking for a team with no Objective-C programmers. I took that on and worked out the different options for calling on macOS APIs and making those calls viable from C++. We settled on writing a simple C++ class with methods that call upon function pointers made available in an extern-declared C struct. The struct's function pointers call C functions that interface with Objective-C code, and these instantiate, manage the lifecycle of, and call the methods of an Objective-C UI object that leverages the macOS APIs, NSWindow in particular, to procure a window and show, hide, and move it as required for our intelligent tracking. A sketch of that bridge follows.
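Here is a minimal sketch of that bridge, with hypothetical names; the Objective-C++ side (a .mm file that wraps an NSWindow and fills in the struct) is summarized in comments rather than shown.

```cpp
// window_bridge.h -- shared between the C++ browser host and the
// Objective-C++ (.mm) translation unit. Names here are illustrative.
extern "C" {
typedef struct WindowHostOps {
  void* (*create)(void);                         // returns an opaque handle
  void  (*show)(void* handle);
  void  (*hide)(void* handle);
  void  (*move)(void* handle, int x, int y, int w, int h);
  void  (*destroy)(void* handle);
} WindowHostOps;

// Defined in the .mm file: each function wraps an NSWindow, with the opaque
// handle produced by a __bridge_retained cast of the window object.
const WindowHostOps* GetWindowHostOps(void);
}

// Thin C++ wrapper; the browser host never sees an Objective-C type.
class MacWindowHost {
 public:
  MacWindowHost() : ops_(GetWindowHostOps()), handle_(ops_->create()) {}
  ~MacWindowHost() { ops_->destroy(handle_); }
  MacWindowHost(const MacWindowHost&) = delete;
  MacWindowHost& operator=(const MacWindowHost&) = delete;

  void Show() { ops_->show(handle_); }
  void Hide() { ops_->hide(handle_); }
  void Move(int x, int y, int w, int h) { ops_->move(handle_, x, y, w, h); }

 private:
  const WindowHostOps* ops_;
  void* handle_;  // opaque pointer to the Objective-C window object
};
```

The design choice here is that only the .mm translation unit ever includes Cocoa headers, so the rest of the codebase compiles as plain C++ on every platform.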
What of KIQ? KIQ was something I put together under the theme of my wife's business, an online business that helps people learn Korean. This began with her teaching many years ago; actually, she was teaching English to people in Korea, and after coming to Canada she began finding students for Korean and developed an entire arsenal and repertoire of teaching material, which she has refined over time. This included group outings and a book that she mostly wrote but never published, though she has used it to draw on in a manner where she can utilize consistent concepts. The KIQ concept was born from the Korean aspect and the fact that her business is called KStyleYo. It is the business intelligence of KStyleYo, or a business intelligence modelled after the needs of a social-media-aware presence that needs to maintain itself and act upon events in the social media sphere. This began as a service application that takes requests from a client application and then sends back events. I took this as an opportunity to improve my C and C++ skills.

So I began from the ground up with a simple socket_listener class that managed one socket, and then I expanded on it by creating a thread pool of workers that each manage a socket and clean up on disconnect. The KServer was born as a class that implements the socket listener and overrides onMessageReceived, which gets a payload as well as the unique file descriptor for the socket managed by whichever worker of the thread pool is receiving data. This allows us to manage multiple concurrent connections for different clients and, in essence, has mostly been tested by the two of us using the server at the same time.

This was initially an over-engineered solution to her desire for an application with scheduled posting on Instagram, since she didn't want to do this manually all the time and didn't want a subscription to something like Hootsuite, which is expensive and somewhat inflexible. I didn't want to merely build that application, though; I wanted an excuse to start a project I had already been thinking of for some time, where we could observe all the operations and manage the concerns of her business through one interface and have it report to us as needed.

This began the process executor aspect of the KServer, where it uses a database to store information about the processes that can be executed and the manner in which they are to be executed, that is, parameterized arguments that differ by name and whose values can be set and updated accordingly. The executor forks processes, polls them, and then returns the process output or the error output for parsing and reaction by the KServer (sketched below). This feeds an event system, and the parsing of the data can lead to follow-up actions. I've tried to genericize it as much as possible, considering that this was developed on the fly for our ongoing business needs; for example, we can assign triggers that react to particular applications based on which named parameters were run and what their values were. This allows us to configure and schedule additional tasks, which can themselves be acted upon in the same way. Among the things we've been able to do with this is write analytics software that keeps us up to date on what business events are occurring, what social media has been posted, and how the reactions to that social media went; we generate reports from this and schedule them to be emailed to us daily.
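To ground the executor described above, here's a minimal POSIX sketch of the fork/exec/capture cycle. The function name and the ProcessResult type are invented for the example; the real executor resolves its argv from the database's named parameters and hands the result to the event system.

```cpp
// Hypothetical sketch of the executor: fork, exec the stored command, capture
// stdout/stderr through pipes, and return both for parsing by the KServer.
#include <sys/wait.h>
#include <unistd.h>
#include <string>
#include <vector>

struct ProcessResult { int status; std::string out; std::string err; };

ProcessResult Execute(const std::vector<std::string>& argv) {
  int out_pipe[2], err_pipe[2];
  pipe(out_pipe);
  pipe(err_pipe);

  pid_t pid = fork();
  if (pid == 0) {  // child: wire the pipes to stdout/stderr and exec
    dup2(out_pipe[1], STDOUT_FILENO);
    dup2(err_pipe[1], STDERR_FILENO);
    close(out_pipe[0]); close(err_pipe[0]);
    std::vector<char*> args;
    for (const auto& a : argv) args.push_back(const_cast<char*>(a.c_str()));
    args.push_back(nullptr);
    execvp(args[0], args.data());
    _exit(127);  // exec failed
  }

  // parent: close the write ends, drain the output, then reap the child.
  // Draining sequentially is a simplification; a real implementation would
  // poll() both pipes, since a full stderr buffer could stall the child.
  close(out_pipe[1]); close(err_pipe[1]);
  auto drain = [](int fd) {
    std::string s;
    char buf[4096];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0) s.append(buf, n);
    close(fd);
    return s;
  };
  ProcessResult result{0, drain(out_pipe[0]), drain(err_pipe[0])};
  waitpid(pid, &result.status, 0);
  return result;
}
```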
Furthermore, I wanted some application behaviour to facilitate and improve the user experience of our livestream events, which used to be held every day when the pandemic began. To manage this, I wrote a bot that uses the YouTube Data API: it can look up a channel by ID, determine whether livestream events are going on, then query those livestreams and get all the messages and information about the participants who are watching. We can then engage with those participants, using a bit of tokenization and some modest machine learning to determine the topics being discussed, and begin to track an ongoing conversation with anyone we engage with. These conversations can expand, and we hold information about each conversation, such as objective and subjective context, so that we can make more intelligent decisions about what to say or what actions to take, such as a research action, depending on the depth and subject matter of the conversation.

This turned into a bot application that we could simply leave running all the time, as it has a small footprint, and have it recurringly check channels of interest to see if livestreams are active.

At a certain point, because we wanted to repost our social media content from one platform onto other platforms, but in an original way where we could introduce changes, such as an announcement that "This is from a sister channel", "our affiliate", or "this is from our other social media platform", we came up with ways of tracking each platform, with multiple users per platform differentiated by user type: the primary official user, the personal user of the CEO, an affiliate user with an arbitrary specifier, and so forth. This gave me an opportunity to leverage my IPC knowledge by adding an IPC manager to the KServer that can manage multiple IPC clients, one for each application we happen to know about. We register these applications and assign each a port; if they're active, the clients connect to them and are queried by the IPC manager every so often to see if there are new messages (sketched below). I utilized that same request-reply pattern here to make sure I was refining my logical fitness and upgrading my ability to be intuitive about the design. These applications can take events based on an IPC protocol, or a set of message schemas that fit a particular IPC protocol, and these can be parsed intelligently on each side and responded to.

A common occurrence of how this is all utilized would be:
1. Instagram posts are scheduled and are executed at their appropriate time.
2. Another task runs every so often to grab whatever new posts a tracked user has made, because they might have also made posts directly from their phone, and we save these as our own generic "platform post".
3. We can assign reposting to other platforms, and to the same platform under other users. We then generate those posts, with changes as necessary for the category of post it happens to be (affiliate post, simple repost, post on a different platform, etc.), and send them as requests to the bot broker, which already has bots running all the time for each of those platforms. They handle the requests accordingly and send back their IPC messages with events.

This allowed us to develop the IPC protocol a little, because I want to eventually have everything going through this system and using the same event system, so that we can track chronology and make improvements as necessary. We are pretty much at that place now.
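As an illustration of that request-reply poll, here is a hedged sketch of what one round against a registered application might look like. The message text and the function name are invented for the example and are not our actual schema; the framing and error handling are simplified.

```cpp
// Hypothetical sketch of the poll the IPC manager performs against each
// registered application on its assigned port.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>
#include <optional>
#include <string>

std::optional<std::string> PollApp(const std::string& host, uint16_t port) {
  int fd = socket(AF_INET, SOCK_STREAM, 0);
  if (fd < 0) return std::nullopt;

  sockaddr_in addr{};
  addr.sin_family = AF_INET;
  addr.sin_port = htons(port);
  inet_pton(AF_INET, host.c_str(), &addr.sin_addr);

  if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof addr) != 0) {
    close(fd);
    return std::nullopt;  // app not running; the manager retries later
  }

  const char request[] = "IPC:POLL\n";  // request text is invented here
  send(fd, request, sizeof request - 1, 0);

  char buf[4096];
  ssize_t n = recv(fd, buf, sizeof buf, 0);  // reply carries pending events
  close(fd);
  if (n <= 0) return std::nullopt;
  return std::string(buf, static_cast<size_t>(n));
}
```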