// FULL INTERFACE – Aug 14 2020 – https://app.wevu.video/guest/intg/index.php?anon=true&cid=65&gid=0&co=false&evid=sFDDElAWi4LdGfu0oafhIA%3D%3D&forEmbedding=1
Button only embed – Waltz for one – Can only play once then must wait 30s to play again
Unlike a normal embed code, the tiny embed code in CLAS has the “display:inline-block” style by default, so you can put it in the middle of a line of text or next to another web element. To put it on a separate line, simply add a br tag.
“audio only” embed – CLAS native video (HTML5)
So CLAS is a project under active development at Arts ISIT (University of British Columbia), a production service used by thousands of students at multiple institutions around the world, and also a research tool. How can the CLAS service get so many upgrades every few months without affecting the students and instructors using it? You have probably heard mentions of how one or all instances of CLAS can be remotely updated in seconds, completely transparently to users… So how transparent are we talking about here? Have a look!
I recently got an email from the university soliciting ideas for efficiency improvements, so it is a good time to document the “kaizen” (continuous improvement) culture in my unit. For context, Arts ISIT has a small software development team, which I belong to. Our mandate is to support the T&L, research, and administration needs of our faculty by adapting or developing technologies. The requests are diverse, yet ISIT is just a unit within the Faculty of Arts, so our funding is quite limited. That means we need to be efficient. I have never used the term kaizen at work, as we are a long way from being a Toyota story, but I believe that if a small team continuously self-reflects on process efficiency and leverages the vast intellectual capital in a university, it can provide top-notch service and create world-class innovations.
A successful case study at Arts ISIT is the CLAS project. CLAS is a video management and interaction system that arose out of the need to apply videos in education in a pedagogy-rooted manner, simplify the user experience so that using videos in a course is not a burden, and give students deeper and more varied ways of interacting with videos. Spreading mainly by word of mouth, CLAS now serves 2500 students each year in 3 faculties and satisfies many complex requirements that cannot be met by YouTube or off-the-shelf video systems. CLAS also supports research in Psychology, Education, and Applied Science, and incorporates some of the findings back into the product. The kaizen spirit that is now spreading in my unit can be said to have begun with this project.
The service model of CLAS represents a win-win-win collaboration between researchers, educators, and staff. Researchers see their latest work implemented professionally within a year or two of an initial idea, and limited-scope results from singular studies are aggregated into a complete product suite that may gain critical mass as a whole. Educators benefit from a video system informed by up-to-date research. As staff, I have the opportunity to apply my M.Sc. degree in HCI, received at UBC, to the fullest. Our support staff also enjoy the boons of in-house product control. Because the development is very close to users, when an instructor has a novel use-case related to videos, our support team can confidently say “Yes, or if not yet, we can discuss how to make it happen.” Thus, the support work for CLAS becomes more intellectually rewarding, with activities like consulting educators on the technology and learning about their pedagogical needs, as opposed to firefighting problems that cannot be resolved internally. Finally, this project and others at ISIT instill in our staff and our users a sense of pride, as we are all contributing to UBC products that could be called “world-class.” After all, various external parties have asked us whether CLAS is commercially available yet (including universities in Canada, the US, Japan, the UK, Australia, and New Zealand, and most notably, the Ministry of Education of Singapore).
In other words, the dev team at Arts ISIT could be seen as the start of something similar to MIT’s HyperStudio, the Media Lab, and comparable research & development groups at other top universities.
A good model still needs good execution, so what are the day-to-day secret ingredients of CLAS?
- Keep team structure flat and task division fluid: In the early stages of CLAS as a production service, I handled nearly all support tasks and communication, despite my official role of programmer analyst. This was not out of an obsession over a technology or a lack of organizational discipline, but entirely out of necessity. Like in a startup, the person most knowledgeable about a topic at each stage of an endeavour handles whatever tasks are necessary to move the endeavour forward to the next stage until others can take over. As the project matured, the division of labour and communication evolved fluidly. Our support team began to take over, and now completely owns, the support website and user documentation. My manager took over all the project management aspects the moment the abstract visions became clear enough that a scope could be defined and split into milestones and tasks. Our unit has 25 people total, but most importantly, 19 of them are technical support, instructional support, and engineering staff. This makes for a rather “flat” management structure, with the most resources being spent on the productivity drivers. So how do we accomplish the requirement analysis and the evaluation of alternative solutions? We leverage the intellectual capital of the university, which is our next best practice…
- Collaborate! Communicate! My academic chair, my manager, myself, and our learning support team routinely reach out to our faculty members, using whichever channel is most convenient for them to share their thoughts: walk-in consultations, focus groups, contact forms, and a support email address published in multiple locations. We are flexible and eager to listen, because providing service to faculty members and students is our first priority, not procuring technology for technology’s sake. Our analysis and evaluation function is thus distributed among my manager (finance & resource planning), myself (matching technical constraints and alternatives with functional requirements and usability), and the faculty members themselves (contributing their observations from the trenches, research results, and a first-hand account of their needs). The direct communication and flat team structure make this process remarkably fast and effective compared to the big corporate cultures that I used to work in. The startup attitude also inspires a sense of ownership in faculty members and partners, even those outside of Arts. Three other Faculties have independently solicited grants to add functionality to CLAS, and the Faculty of Education in particular even volunteered to create professional orientation videos for the system.
- Intellectuals lead and management manages: As the technical project lead, I have the clearest sense of what is possible now, what is not yet possible, and what can become possible given each architectural choice. I discuss these options openly with my manager. My manager creates the project management scaffolding around my work, while keeping to the abstract requirements instead of dictating the “how”. My manager also lends a set of strategic eyes and ears for risks and funding opportunities, and discusses them openly with me. Management gathers the fuel to keep the ship going and looks out for rocks and shallows, while intellectuals navigate the map and steer. This healthy partnership has built mutual trust and respect.
- A technology project needs a lead with deep working ability in both the technology and user experience, since this role requires bridging the engineering gap between stakeholders’ needs, expressed in human language, and all the abstraction levels of the technology. More generally, you could say that a project about a domain X needs a lead who is an expert in both domain X and user experience.
- Our support team is also our Quality Assurance team, allowing support staff to become very familiar with each new set of capabilities before those capabilities are released into the production service. In addition, front-line support staff have an intimate knowledge of how instructors and students think, which informs their testing. Finally, work study students who support our technology learn a variety of useful skills: the QA testing process, drafting test cases, user consultation, and articulating technical concepts.
- Test! Verify! Assume nothing! I test my projects ceaselessly with each incremental change, so that by the time a release reaches QA, it is already virtually problem-free. When integrating an external technology, I verify all advertised features, keeping in dialog with our instructional design and support staff or directly with our users, to make sure that use-cases are actually being met *at an adequate quality of user experience*. I insist on a very generous allotment for testing in project time estimates, because an exhaustive verification regimen is crucial to high service quality and also results in a net cost saving. It is much easier to fix a bug in code you just wrote, or an external product you just integrated, than to deal with a trouble ticket about a hasty decision made six months earlier. More importantly, a single IT problem or subpar user experience can waste hours of the time of 20, 50, or 100 faculty members at once, negating any hypothetical benefits of short-term cost cutting.
- Give time for engineering: The CLAS project is characterized by a paradoxical combination of blistering productivity and high quality. A huge number of enhancements are released every term (this is a typical 3-month update), while the yearly trouble ticket count stays below 5. This is due to my insistence, and my manager’s support, on spending time creating and improving our engineering framework. A few of our pearls:
- There should be a multi-tenancy architecture for every project, big or small: each tenant (virtual instance) should be able to stand on its own physical server or a shared server, yet be completely separated from the others in terms of database, code base, configuration files, database users and passwords, and encryption keys. These instances should be so isolated from each other that you could compromise and destroy one instance without any harm to all the others.
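To make that concrete, here is a minimal sketch of what per-tenant isolation can look like on disk. The "clas_home" base path, the config keys, and the provisioning steps are all invented for illustration; they are not the real CLAS layout.

```shell
#!/bin/sh
# Hypothetical tenant provisioning sketch; "clas_home" and the config
# keys are illustrative names, not the real CLAS layout.
TENANT="demo"
BASE="./clas_home"
mkdir -p "$BASE/$TENANT/code" "$BASE/$TENANT/config"

# Every tenant gets its own database name, database user, password, and
# encryption key, so compromising one instance reveals nothing about the others.
DB_PASS=$(head -c 24 /dev/urandom | base64 | tr -d '/+=')
ENC_KEY=$(head -c 32 /dev/urandom | base64 | tr -d '/+=')

cat > "$BASE/$TENANT/config/instance.conf" <<EOF
db_name=clas_$TENANT
db_user=clas_$TENANT
db_pass=$DB_PASS
enc_key=$ENC_KEY
EOF
chmod 600 "$BASE/$TENANT/config/instance.conf"
echo "provisioned tenant $TENANT"
```

A real provisioner would also create the matching database and database user with those credentials; the point is that nothing, not even a password or key, is shared between tenants.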
- Every project should be version controlled: The repository structure for each project should contain separate areas for core deliverables (code, configuration files, database schema, etc.), implementer’s documentation (technical notes for me and others in the dev team), stakeholders’ reports (drafted by me after each milestone, improved upon by the support team), and user documentation (by the support team).
- Every project should have an automatic, transparent upgrade system in the back-end: this kind of system pays for itself many times over in increased productivity and reduced human error. So how transparent are we talking about here? Watch the demo!
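As a toy illustration of such an upgrade loop (the real CLAS updater is not public, so the folder layout and the version-file “migration” here are stand-ins):

```shell
#!/bin/sh
# Toy sketch of a transparent per-tenant upgrade loop; the real CLAS
# updater is not public, so this layout and "migration" are stand-ins.
BASE=./clas_demo
# demo setup: two throwaway tenant instances, each with a version file
mkdir -p "$BASE/tenant_a" "$BASE/tenant_b"
echo "41" > "$BASE/tenant_a/VERSION"
echo "41" > "$BASE/tenant_b/VERSION"

for tenant in "$BASE"/*/; do
  # snapshot first, so a failed upgrade rolls back this tenant only
  cp "${tenant}VERSION" "${tenant}VERSION.bak"
  # "upgrade" = bump the version; a real updater would run code and
  # schema migrations against this tenant's own database here
  if echo "42" > "${tenant}VERSION"; then
    rm "${tenant}VERSION.bak"
    echo "upgraded $(basename "$tenant") to $(cat "${tenant}VERSION")"
  else
    mv "${tenant}VERSION.bak" "${tenant}VERSION"
  fi
done
```

The per-tenant snapshot and rollback are what make the upgrade transparent to users: each tenant either moves cleanly to the new version or stays exactly where it was.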
Last week I integrated the “Interactive Media for Education” (IME, aka CLAS) video app with our school’s student information system (SIS) so that the app can update viewer lists daily for media collections that are linked to courses. The API is a simple REST service, so at first glance I thought all I needed to do was send a request, get the data, and follow the specs correctly to parse the data and update my database, with a healthy dose of unit testing to smooth out the edge cases. Even so, I just couldn’t start coding right away, even though this project is still a one-person show and I’m strapped for time. That habitual need to take long walks and imagine all the usage situations that a new feature may go through just refused to let me go, and so I walked, and I prototyped. After half a week of tinkering and thinking, I realized that even automated systems have a usability angle that one must consider.
Who are the “actors” in an automatic enrollment list update system? The data source I talk to is one, since it represents the designers and programmers creating that system and also the organizational culture and policy that they work in. Understanding those involved with my data source allows me to consider edge cases unwritten in the documentation. For example, I remembered hearing in passing from support staff that “courses numbered with a letter at the end sometimes have strange enrollment lists”. Sure enough, some, though not all, test courses yielded zero students. I contacted the SIS team to hash this bug out, and realized that this was not a programming bug but an organizational issue: a naming rule inconsistency between the departments and the centralized body. Not a problem I can solve now, but I got the information needed to create a workaround before the service went live. Continuing this line of thought, I imagined how the data source would behave over time, and what it “needs”. During the course registration period before each term, the back-end database of my source would be hammered, so I should be polite and stop asking it for data nightly.
Another actor is the student using the IME app. Students never see the auto-provisioning service, but they are affected by it. In the first month of term, when the enrollment list is in flux, tension is high, and a student who just registered late for a course from a waitlist will feel that a one-day wait for a video collection to open to them is as long as two, so I force the student list to update twice a day, slowdown or not. Also, I realized that I cannot anticipate all bugs, and all the human errors that either I or the many other staff involved with enrollment can make, so during this period the service runs in a “kindness” mode, where new students added to courses are updated, but students dropped from courses are not removed immediately, in case a student actually lost access because of a bug or human error. Would this then jeopardize academic integrity? To address this, I make sure that the “kindness” mode is only applied to courses where videos are shared with everyone in the course. A “one to many” content distribution relationship implies that this is just a lecture video, while “one to one”, “many to many”, or “many to one” would imply coaching, group work, or assignment submissions: situations where integrity is required and “kindness” must not be misplaced.
Naturally, at this point I wished for a service where updates are PUSHED to my video app, instead of REQUESTED by my video app. My university actually does have such a service. However, integrating with this push service would take more time, because it was purpose-built for the centrally-managed Learning Management System (LMS) and was not meant to be simply hooked into another app. Getting permission and arranging the details of the testing environment and the move-to-production process also takes more time. With this delay, the feature might not become useful to students for at least one term. More importantly, as the number of users increases and we develop a more enterprise-scale support model, scalable support for user provisioning is also a crucial feature to the service managers who are evaluating the app from an operations viewpoint right now. I decided to go with the request method to create an immediate impact for users and stakeholders, while keeping a conservative development schedule to ensure production quality. Meanwhile, I communicated with the LMS team to express my interest in the next iteration of their enrollment list push system, and they have committed verbally to providing an external-facing API in the future.
Finally, the needs of the support team must also be considered. An automatic enrollment service needs a mechanism to register and deregister courses from automatic user provisioning, and the support team are the ones who will do this. This process should be seamless instead of adding to their already busy workflow of first setting up a course and then supporting it on requests from instructors; otherwise it will create another source of human error, forgetfulness under stress, and delay, especially during that hectic first month of a term when students need robustness the most. Thus, I decided that there would be no extra UI for registering and deregistering a course for auto-provisioning. These actions are integrated into the normal support workflow that the team has always been used to. When a new course is created by an instructor, if that instructor specifies that the course is linked to SIS, it is immediately registered for auto-provisioning. If the support team receive a request to bulk import enrollment from SIS, that action registers the course for auto-provisioning. If the enrollment list is then changed manually for any reason, using any method in the admin interface, such as the Add / Drop tool or the CSV import tool, the course is deregistered, so that the automatic source will not overwrite the manual changes. Support staff also often need to use test accounts, so I included a reserved account list in the design and excluded these accounts from being changed by the automatic updates; adding and dropping these accounts from courses manually also does not remove the auto-provisioning status.
Last but not least, to minimize the risk caused by having such a short development time frame, every automatic back-end feature of the IME, this auto user provisioning system included, has extensive logic to check for corrupted data, and for data that is technically spec-valid but seems unusual (too many students to be dropped, too many blank entries, etc.). The cumulative time to update all courses, the time for each course, and the time and error detection status for each part of the process for each course are collected, evaluated for unusual patterns, and dumped into an email report that the developer and support team can monitor for the first few months of the feature. All this monitoring and error detection may seem overbuilt for something so simple, but this extra development time is traded for a much bigger time saving. It allows the team to treat the first few months of live service as an extended QA testing phase, by being able to notice when things fail and exactly which course and which step they fail on, even which users’ enrollment status was being modified at the time, so that the team can manually recover from failures quickly and at the precise point, before users feel any pain.
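As a rough sketch of that kind of defensive reporting, with a stubbed sync step standing in for the real SIS calls; the course codes, thresholds, and report path are all invented:

```shell
#!/bin/sh
# Rough sketch of the nightly "assume nothing" report; the sync step is
# stubbed, and the course codes, thresholds, and report path are invented.
REPORT=./provision_report.txt
: > "$REPORT"
MAX_DROP=50   # more drops than this in one course looks suspicious

for course in COGS200 PSYC101 ARTS999; do
  start=$(date +%s)
  # stub: a real run would call the SIS API here and count adds/drops
  case "$course" in
    ARTS999) adds=0; drops=120 ;;   # simulated "too many drops" anomaly
    *)       adds=5; drops=2 ;;
  esac
  elapsed=$(( $(date +%s) - start ))
  if [ "$drops" -gt "$MAX_DROP" ]; then
    # suspicious update: skip the change and flag it for a human to review
    echo "WARN $course: $drops drops (limit $MAX_DROP), skipped, ${elapsed}s" >> "$REPORT"
  else
    echo "OK $course: +$adds/-$drops students, ${elapsed}s" >> "$REPORT"
  fi
done
cat "$REPORT"
```

In production, a report like this would be emailed nightly, so support can spot a suspicious course and recover at the precise failing step before students notice anything.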
So I was politely accosted by a psychology student running an experiment while I was going for lunch. The task was to solve brainteasers (argh! my only weakness!) in 2 minutes. Each right answer increased my chance of winning a $200 draw. I immediately felt a sense of certain doom. I was stuck right on question 2! Right then, a lost-looking student asked me for directions to a library, and so I stopped thinking for a few seconds to point him along. He confirmed the directions again, and asked if the library had a water fountain; the whole conversation took about 20 seconds.
I got back to my task and, strangely, saw the answer to Q2 right away, just as the time was up. This was when I suspected that the student who asked me for directions was a partner in the experiment, and that the point of the experiment was to see how people react differently in social situations when stressed for time. It turned out that my guess was spot on. The hypothesis was specifically whether you would be less willing to give time when you feel a perceived lack of time, and a perceived value placed on your time.
The student didn’t count on the fact that I have chronic migraines and work in a “0.6 programmer, 0.4 everything else” position. Being starved for time is a main theme of my life. But there was a surprise ending to this, and it surprised me too…
I was the only respondent in his study so far who had managed to answer more than one question. The brainteasers were deliberately difficult, to induce the necessary stress. In fact, most respondents didn’t manage to answer even one of them.
And I am terrible at brainteasers… I hate them with a fiery passion! What I think happened was that the act of giving time to help a stranger allowed me to overcome a mental block. Or perhaps it was because I felt a bit better after helping someone, getting a jolt of dopamine to block out the bad feeling of being stuck at “questions that may be linked to my IQ.”
The bottom line is, state of mind matters a great deal, and we certainly can control it to make our life and work better. Often, when you give time, you get more back.
Case in point, this little game a close friend and I made for a development challenge: produce a real-time strategy (RTS) game in a week. Being eternally time-starved, I decided that I might as well learn Git, Maven, and the IntelliJ IDE while I was at it, instead of using something I was used to, like SVN and Eclipse. We also had nearly zero idea of how to make a real-time strategy game. Recipe for disaster, right?
- Day 1 to 3: fumbling with Git, IntelliJ, and Maven. Argued with each other about what the game would even be about. We finally decided that it would have something to do with defending a wall, and that you would control just one character, a human veteran, who runs around shouting orders to a bunch of orcs who are so peaceful that they would not stop tilling the fields even when an army of monsters bears down on them. Great concept! I think. But we had absolutely nothing to show for implementation, after half of the allowed time! Our moods were certainly not great then, but we decided to have some faith.
- Day 4 to 5: fumbling with the actual graphics / audio engine that we used to build the game (you think we actually knew this deep stuff beforehand, when I didn’t even know how to build a Java project with Maven before we started?). At the end of day 5, we got the first demo to build: a guy standing on an empty grass field, in complete silence. He walks left, right, up, down, but there is nothing and no one… unless we manually code in some scenery tiles, like mountains that he cannot walk through.
- Mid of day 6: got some enemy sprites to randomly appear on screen, and got a multi-layer map tiler data structure in place so that we could put down a few tree and mountain sprites. The enemies still don’t move, but touch them and you will die after a few seconds, mysteriously, in complete silence and with a complete lack of visual feedback.
- End of day 6: got enemies to move, got allies to spawn, and the map system now reads a text file for each layer, so I can make a bunch of text files with “t” for a tree sprite, “w” for a wall sprite, “m” for a mountain sprite, etc. Got a single sound file to play for the first time.
- Start of day 7: within 2 hours I had the idea of making a new “map tile” layer and filling it with semi-transparent sprites, creating a limited range-of-vision effect. In 2 more hours, I put together a system for switching music tracks smoothly based on how many enemies are on the map and how much damage the wall has taken, adjusting the mood of the music accordingly, from suspenseful to frantic to despair. During the same period, my friend put together a rudimentary game AI. Enemies now slowly come down the map in a threatening, mock-intelligent manner. This is all smoke and mirrors behind the scenes, of course, but unless I told you, you would swear blind that the monster army knows how to flank and ambush.
- Mid of day 7: I frantically scoured the web for Creative Commons graphics, sound effects, and music to put into the game, and my friend drew what I couldn’t find quickly enough. The thing now looked like an actual game! I added minor effects, so that when an attack happens, it’s not just stats being subtracted in the back-end: there is a sword slash animation, a claw slash, and a sound of metal hitting metal. My friend coded win / lose conditions, key targets to destroy and to protect, and put together a very basic UI: a screen for the start menu, one for winning, and one for losing. All of those screens reuse the code from the very first demo, just a single background image, in this case with text baked on. I started to playtest the game and tune the parameters of enemies so that the game is challenging but you won’t lose too quickly.
- End of day 7: I recorded and mastered voice acting for the game. My friend spent the time optimizing the code, because the game, by this point, throws out enemies and allies by the hundreds, whose movements and health must be tracked. I continued to playtest the game endlessly to the very last seconds. We were utterly exhausted by this point because, I forgot to mention from the beginning, we had been working on this game demo after going to our full-time jobs. So when I said “start of day”, I meant 5pm; “mid-day” meant 9pm; and “end of day” meant 1am. We did this for fun and love, mind you.
But at the end, we had something that looks like an RTS, and may even be called “fun” (your mileage may vary, unless you grew up considering things like Battletoads fun), after starting with almost none of the skills needed to make such a game.
Looking at the timeline, you may notice that productivity seemed to grow like a bacterial colony: negligibly tiny, then excruciatingly slow, then somewhat fast, then really fast. I believe that all projects are like this. Given enough ramp-up time, and if you trust your dev team to tell you when it is still too early to give up, that magical exponential growth will eventually kick in.
Anyway, enough ranting, time to lead some peaceful orcs to defend their homes, yes?
Selenium (seleniumhq.org) is a browser controller, but I’d like to imagine that it’s a nice, big robot using a web browser with its somewhat clumsy robot paws. And it can be quite good for automating web-based office processes.
Web automation is not a very new thing to techies, especially DevOps. Even a 0.6 programmer like me managed to rig up an automated load test for CLAS with JMeter a year ago, after a weekend learning the tool. Well, while JMeter is very good at jackhammering a web app with what looks like an army of users logging in at the same time, I don’t like it for office automation. JMeter works at the HTTP protocol/request level, so simulating an army doing simple logins and page jumps is easy, but coming up with the kind of long, precise, and timing-heavy sequence of actions that mimics a real person is difficult and error-prone… and, most of all, not easily repeatable. What you record in JMeter is the communication between the client and server, not what you actually did in the browser, so in order to fix bugs, you need to reverse engineer how the site works. This is why I had hesitated to reassure my boss that I could automate the tedious parts of his job: I couldn’t tell for sure how long that would take.
The difference between Selenium and JMeter is WYSIWYG. What you record in Selenium are browser actions, which you can confidently debug by looking at what the site shows you. A Selenium script starts up a browser, searches for DOM elements on a page, and sends events to those elements. It operates at a level closer to the actual human user than the HTTP protocol level. At the user interface level, you can be more certain that what your automated script does is equivalent to what a human being does, because you are, for example, entering usernames and passwords into actual input fields instead of trying to match session cookies on requests. Selenium is slower and heavier, because it invokes and drives the browser, but it can be very precise and human-like.
Moreover, its final runnable test suite can be easily wrapped inside a shell script, a Windows batch script, or an Apple .app created with Apple’s Automator. Wrapping the actual test runner behind an executable breaks the dependency between the developer and the automation end user with regard to scheduling. Now whoever needs to use the automation script can schedule it to run via an iCal alert or the Windows Task Scheduler, both far more friendly than editing crontabs. If the action sequence needs to be changed, then the sequence file (the Selenium test suite) can be changed without affecting the final script or its scheduling.
Here’s a quick way to get started with Selenium:
- Download the Selenium IDE, a Firefox plug-in, try recording some browser action sequences, and save them as an .html file.
- Run the whole test suite you just recorded. You will notice that it fails quickly, because the pages cannot keep up with the script. You will have to add “pause (target milliseconds)” commands between actions. I recommend using a time-based pause rather than an element-detection command like “waitFor”, because element ids and their order of appearance change much more quickly in a website than whether an element representing a core site function is present at all.
- Refine the timing some more. Some pauses may need to be many minutes long if, for example, a button triggers an AJAX event that fetches a massive amount of data.
- Debug mysterious “ineffectual clicks”, where an actual click with a mouse on an element does something, but the click event on the DOM does nothing. Chances are, some elements on certain websites are keyed to related events like mouseUp and mouseDown instead of click. Watch out for these types of events that are similar in meaning.
- Refine the element search logic. The Selenium recorder is only smart enough to know that you clicked on “the third LI inside the second DIV”, not “the LI containing a label that contains the text ‘Faculty of Arts’.” You need to find the real target of those entries and replace positional identifiers like "//div[@id='sectionForm:faculty_panel']/div/ul/li" with semantic identifiers (in XPath syntax), like "//div[@id='sectionForm:faculty_panel']/div/ul/li[contains(text(), 'ARTS')]" or "//div[@id='sectionForm:subjects_panel']/div/ul/li[label[contains(text(), 'COGS')]]"
Note: you may also want to replace any command target that searches for elements by id, because ids like ‘id_12345’ are often randomly generated to prevent just the kind of automation that you are trying to do (since spammers do it too).
- Download the Selenium standalone server (a Java jar), and write a shell script to run your test suite without having to open the browser and press buttons in the IDE test runner.
- Example script:
# free up port 4445 in case a previous run left a server behind
lsof -t -i tcp:4445 | xargs kill
# run the recorded suite in Firefox against the target site, writing results to a file
java -jar "selenium-server-standalone-2.44.0.jar" -port 4445 -ensureCleanSession -timeout 9999 -htmlSuite "*firefox /Applications/Firefox.app/Contents/MacOS/firefox-bin" "https://somedomain.com/" "./officeProcess.html" $officeProcess_resultFname
Some caveats: you may need Apple’s own Java on Yosemite; Oracle’s sometimes installs correctly but the “java” command cannot be found in the terminal. Your user needs to schedule the task to run when the computer is not sleeping, so probably during the work day, or, if after hours, the computer must be set not to sleep at that time. This limitation has nothing to do with Selenium, of course; it is just a limitation of task schedulers in general.
I work as a “programmer analyst” at the Faculty of Arts, University of British Columbia. I try to take my job title as literally as possible, so I function as a “0.6 programmer” and “0.4 of everything else: idea driver, usability analyst, and architect.” I want to help shape and promote products that are useful and attractive, regardless of whether I am the programmer or not. I did an M.Sc. in Human Computer Interaction to learn the basics of usability, and I continuously adapt the theories and wisdom learned at school to the realities of my work environment and the projects that I’m in.
My current position allows me nearly complete ownership and freedom in a (few) “experimental” educational technology projects, an advantage of a smaller organization. The challenges of that smaller size are many: limited funding, convincing various levels of management and early adopters that certain ideas are worthwhile, growing a user base quickly (enough not to be defunded), and implementing the actual technology to a high enough level of quality. Overcoming these challenges has yielded many, hopefully entertaining, stories to tell.
As this blog can be informal and rambling at times, please visit ca.linkedin.com/in/thomasdang for my condensed professional profile.
# On your own computer (client machine), go to the .ssh folder of your home and generate the key
ssh-keygen -b 2048 -t rsa => enter a passphrase (nice to have)
# Copy the new public key to the server
# many systems, especially Macs, don’t have ssh-copy-id,
# and scp may overwrite the previous content of authorized_keys on the server
cat ./id_rsa.pub | ssh email@example.com "cat >> ~/.ssh/authorized_keys"
# Go onto the server; you will still need to enter a password this time
# Make sure that the key and the .ssh folder on the server are secure enough
# Many SSH clients will not let you through if the permissions are too open
chmod 700 .ssh
=> the result of "ls -latrh | grep .ssh" should be "drwx------. 2 you you 4.0K May 29 16:21 .ssh"
chmod 600 authorized_keys
=> the result of "ls -latrh | grep authorized_keys" should be "-rw-------. 1 you you 400 May 29 16:31 authorized_keys"
# Logout, you are done! The next time you log in again you will not need a password.