


I took this shot about a year ago when I had a very different editing style. A ton of faded blacks and, believe it or not, a subtle green tint (unknowingly inherited from the preset I was using at the time). Re-editing it now, I’m happy with the way my style has evolved, though I can already sense that I’m on the brink of evolving it again. And I’m okay with that. (at London, United Kingdom)







This trip solidified my commitment to learning photography. A lot has happened since this shot was taken.
Can you pinpoint the moment you decided to pursue photography? (at Toronto, Ontario)







A lot to look forward to in 2017. How did 2016 treat you? (at San Francisco, California)







Four days from now I’ll be boarding a one-way flight to San Francisco to take on the next evolution of my role at @shopify. Leaving the city that I’ve called home my entire life and the people who have defined everything I am was one of the most uncomfortable decisions I’ve ever had to make. But this wouldn’t be the first time I’ve chased discomfort in my career.
.
I wrote about my ongoing pursuit of discomfort this morning in hopes of inspiring others to do the things that scare and challenge them this year. You can find the link in my profile.
.
Happy 2017!
.
Photo: @jonasll (at San Francisco, California)





Facebook Live Streaming and Audio/Video Hosting connected to Auphonic

Facebook is not only a social media giant; the company also provides valuable tools for broadcasting. Today we release a connection to Facebook, which allows you to use the Facebook tools for video/audio production and publishing within Auphonic and our connected services.

The following workflows are possible with Facebook and Auphonic:
  • Use Facebook for live streaming, then import, process and distribute the audio/video with Auphonic.
  • Post your Auphonic audio or video productions directly to the news feed of your Facebook Page or User.
  • Use Facebook as a general media hosting service and share the link or embed the audio/video on any webpage (also visible to non-Facebook users).

Connect to Facebook

First you have to connect to a Facebook account at our External Services Page: click on the "Facebook" button.

Select whether you want to connect to your personal Facebook User or to a Facebook Page:

It is always possible to remove or edit the connection in your Facebook Settings (Business Integrations tab).

Import (Live) Videos from Facebook to Auphonic

Facebook Live is an easy (and free) way to stream live videos:

We implemented an interface to use Facebook as an Incoming External Service. Please select a (live or non-live) video from your Facebook Page/User as the source of a production and then process it with Auphonic:

This workflow allows you to use Facebook for live streaming, import and process the audio/video with Auphonic, then publish a podcast and video version of your live video to any of our connected services.

Export from Auphonic to Facebook

Similar to YouTube, it is possible to use Facebook for media file hosting.
Please add your Facebook Page/User as an External Service in your Productions or Presets to upload the Auphonic results directly to Facebook:

Options for the Facebook export:
  • Distribution Settings
    • Post to News Feed: The exported video is posted directly to your news feed / timeline.
    • Exclude from News Feed: The exported video is visible in the videos tab of your Facebook Page/User (see for example Auphonic's video tab), but it is not posted to your news feed (you can do that later if you want).
    • Secret: Only you can see the exported video, it is not shown in the Facebook video tab and it is not posted to your news feed (you can do that later if you want).
  • Embeddable
    Choose if the exported video should be embeddable in third-party websites.

It is always possible to change the distribution/privacy and embeddable options later directly on Facebook. For example, you can export a video to Facebook as Secret and publish it to your news feed whenever you want.


If your production is audio-only, we automatically generate a video track from the Cover Image and (if present) Chapter Images.
Alternatively you can select an Audiogram Output File, if you want to add an Audiogram (audio waveform visualization) to your Facebook video - for details please see Auphonic Audiogram Generator.

Auphonic Title and Description metadata fields are exported to Facebook as well.
If you add Speech Recognition to your production, we create an SRT file with the speech recognition results and add it to your Facebook video as captions.
See the example below.
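
To illustrate the SRT format itself, here is a rough Python sketch (not Auphonic's actual implementation) of how word-level speech recognition results could be grouped into SRT caption blocks; the (start, end, text) tuple format is just an assumption for the example:

def srt_timestamp(seconds):
    # SRT timestamps use the format HH:MM:SS,mmm
    ms = int(round((seconds - int(seconds)) * 1000))
    s = int(seconds)
    return f"{s // 3600:02d}:{s % 3600 // 60:02d}:{s % 60:02d},{ms:03d}"

def words_to_srt(words, words_per_caption=10):
    # words: list of (start_sec, end_sec, text) tuples from speech recognition
    blocks = []
    for i in range(0, len(words), words_per_caption):
        chunk = words[i:i + words_per_caption]
        start, end = chunk[0][0], chunk[-1][1]
        text = " ".join(w[2] for w in chunk)
        blocks.append(f"{len(blocks) + 1}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n")
    return "\n".join(blocks)

print(words_to_srt([(0.0, 0.4, "Hello"), (0.5, 1.1, "world!")]))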

Facebook Video Hosting Example with Audiogram and Automatic Captions

Facebook can be used as a general video hosting service: even if you export videos as Secret, you will get a direct link to the video which can be shared or embedded in any third-party websites. Users without a Facebook account are also able to view these videos.

In the example below, we automatically generate an Audiogram Video for an audio-only production, use our integrated Speech Recognition system to create captions and export the video as Secret to Facebook.
Afterwards it can be embedded directly into this blog post (enable Captions if they don't show up by default) - for details please see How to embed a video:

It is also possible to just use the generated result URL from Auphonic to share the link to your video (also visible to non-Facebook users):
https://www.facebook.com/auphonic/videos/1687244844638091/

Important Note:
Facebook needs some time to process an exported video (up to a few minutes), and the direct video link won't work before the processing is finished - please try again a bit later!
On Facebook Pages, you can see the processing progress in your Video Library.

Conclusion

Facebook has many broadcasting tools to offer and is a perfect addition to Auphonic.
Both systems and our other external services can be used to create automated processing and publishing workflows. Furthermore, export and import to/from Facebook are also fully supported in the Auphonic API.
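
As a small illustration of the API side, a production that exports to Facebook could be created with the simple API roughly like this (a Python sketch; it assumes a preset whose outgoing services already include your connected Facebook Page/User, and the UUID, credentials and file URL are placeholders):

import requests

# Create and start a production based on a preset that already
# contains the Facebook export as an outgoing service.
response = requests.post(
    "https://auphonic.com/api/simple/productions.json",
    auth=("your_username", "your_password"),
    data={
        "preset": "YOUR_PRESET_UUID",
        "title": "My Live Video, Processed",
        "input_file": "https://example.com/recording.mp4",
        "action": "start",  # start processing immediately
    },
)
print(response.json())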

Please contact us if you have any questions or further ideas!





Auphonic Audio Inspector Release

At the Subscribe 9 Conference, we presented the first version of our new Audio Inspector:
The Auphonic Audio Inspector is shown on the status page of a finished production and displays details about what our algorithms are changing in audio files.

A screenshot of the Auphonic Audio Inspector on the status page of a finished Multitrack Production.
Please click on the screenshot to see it in full resolution!

It is possible to zoom and scroll within audio waveforms, and the Audio Inspector can be used to manually check production results and input files.

In this blog post, we will discuss the usage and all current visualizations of the Inspector.
If you just want to try the Auphonic Audio Inspector yourself, take a look at this Multitrack Audio Inspector Example.

Inspector Usage

Control bar of the Audio Inspector with scrollbar, play button, current playback position and length, button to show input audio file(s), zoom in/out, toggle legend and a button to switch to fullscreen mode.

Seek in Audio Files
Click or tap inside the waveform to seek in files. The red playhead will show the current audio position.
Zoom In/Out
Use the zoom buttons ([+] and [-]), the mouse wheel or zoom gestures on touch devices to zoom in/out the audio waveform.
Scroll Waveforms
If zoomed in, use the scrollbar or drag the audio waveform directly (with your mouse or on touch devices).
Show Legend
Click the [?] button to show or hide the Legend, which describes details about the visualizations of the audio waveform.
Show Stats
Use the Show Stats link to display Audio Processing Statistics of a production.
Show Input Track(s)
Click Show Input to show or hide input track(s) of a production: now you can see and listen to input and output files for a detailed comparison. Please click directly on the waveform to switch/unmute a track - muted tracks are grayed out slightly:

Showing four input tracks and the Auphonic output of a multitrack production.

Please click on the fullscreen button (bottom right) to switch to fullscreen mode.
Now the audio tracks use all available screen space to see all waveform details:

A multitrack production with output and all input tracks in fullscreen mode.
Please click on the screenshot to see it in full resolution.

In fullscreen mode, it’s also possible to control playback and zooming with keyboard shortcuts:
Press [Space] to start/pause playback, use [+] to zoom in and [-] to zoom out.

Singletrack Algorithms Inspector

First, we discuss the analysis data of our Singletrack Post Production Algorithms.

The audio levels of output and input files, measured according to the ITU-R BS.1770 specification, are displayed directly as the audio waveform. Click on Show Input to see the input and output file. Only one file is played at a time; click directly on the Input or Output track to unmute a file for playback:

Singletrack Production with opened input file.
See the first Leveler Audio Example to try the audio inspector yourself.
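
As a side note, the ITU-R BS.1770 measurement behind these level displays can be reproduced with open source tools. Here is a minimal Python sketch using the pyloudnorm library (the file name is a placeholder):

import soundfile as sf
import pyloudnorm as pyln

# Measure integrated loudness (in LUFS) according to ITU-R BS.1770.
data, rate = sf.read("episode.wav")
meter = pyln.Meter(rate)  # BS.1770 meter with K-weighting
loudness = meter.integrated_loudness(data)
print(f"Integrated loudness: {loudness:.1f} LUFS")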

Waveform Segments: Music and Speech (gold, blue)
Music/Speech segments are displayed directly in the audio waveform: Music segments are plotted in gold/yellow, speech segments in blue (or light/dark blue).
Waveform Segments: Leveler High/No Amplification (dark, light blue)
Speech segments can be displayed in normal, dark or light blue: dark blue means that the input signal was very quiet and contains speech, so the Adaptive Leveler has to use a high amplification value in this segment.
In light blue regions, the input signal was very quiet as well, but our classifiers decided that the signal should not be amplified (breathing, noise, background sounds, etc.).

Yellow/orange background segments display leveler fades.

Background Segments: Leveler Fade Up/Down (yellow, orange)
If the volume of an input file changes quickly, the Adaptive Leveler volume curve will increase/decrease very fast as well (= fade); such fades should be placed in speech pauses. Otherwise, if fades are too slow or happen during active speech, one will hear pumping speech artifacts.
Exact fade regions are plotted as yellow (fade up, volume increase) and orange (fade down, volume decrease) background segments in the audio inspector.

Horizontal red lines display noise and hum reduction profiles.

Horizontal Lines: Noise and Hum Reduction Profiles (red)
Our Noise and Hiss Reduction and Hum Reduction algorithms segment the audio file into regions with different background noise characteristics, which are displayed as red horizontal lines in the audio inspector (top lines for noise reduction, bottom lines for hum reduction).
Then a noise print is extracted in each region, and a classifier decides if and how much noise reduction is necessary - this is plotted as a value in dB below the top red line.
The hum base frequency (50Hz or 60Hz) and the strength of all its partials are also classified in each region; the value in Hz above the bottom red line indicates the base frequency, and a missing red line means no hum reduction is necessary.

You can try the singletrack audio inspector yourself with our Leveler, Noise Reduction and Hum Reduction audio examples.

Multitrack Algorithms Inspector

If our Multitrack Post Production Algorithms are used, additional analysis data is shown in the audio inspector.

The audio levels of the output and all input tracks are measured according to the ITU-R BS.1770 specification and are displayed directly as the audio waveform. Click on Show Input to see all the input files with track labels and the output file. Only one file is played at a time; click directly into the track to unmute a file for playback:

Input Tracks: Waveform Segments, Background Segments and Horizontal Lines
Input tracks are displayed below the output file including their track names. The same data as in our Singletrack Algorithms Inspector is calculated and plotted separately in each input track:
Output Waveform Segments: Multiple Speakers and Music
Each speaker is plotted in a separate, blue-like color - in the example above we have 3 speakers (normal, light and dark blue) and you can see directly in the waveform when and which speaker is active.
Audio from music input tracks is always plotted in gold/yellow in the output waveform; please try not to mix music and speech parts in music tracks (see also Multitrack Best Practice)!

You can try the multitrack audio inspector yourself with our Multitrack Audio Inspector Example or our general Multitrack Audio Examples.

Ducking, Background and Foreground Segments

Music tracks can be set to Ducking, Foreground, Background or Auto - for more details please see Automatic Ducking, Foreground and Background Tracks.

Ducking Segments (light, dark orange)
In Ducking, the level of a music track is reduced if one of the speakers is active, which is plotted as a dark orange background segment in the output track.
Foreground music parts, where no speaker is active and the music track volume is not reduced, are displayed as light orange background segments in the output track.
Background Music Segments (dark orange background)
Here the whole music track is set to Background and won’t be amplified when speakers are inactive.
Background music parts are plotted as dark orange background segments in the output track.
Foreground Music Segments (light orange background)
Here the whole music track is set to Foreground and its level won’t be reduced when speakers are active.
Foreground music parts are plotted as light orange background segments in the output track.

You can try the ducking/background/foreground audio inspector yourself: Fore/Background/Ducking Audio Examples.

Audio Search, Chapters Marks and Video

Audio Search and Transcriptions
If our Automatic Speech Recognition Integration is used, a time-aligned transcription text will be shown above the waveform. You can use the search field to search and seek directly in the audio file.
See our Speech Recognition Audio Examples to try it yourself.
Chapters Marks
Chapter Mark start times are displayed in the audio waveform as black vertical lines.
The current chapter title is written above the waveform - see “This is Chapter 2” in the screenshot above.

A video production with output waveform, input waveform and transcriptions in fullscreen mode.
Please click on the screenshot to see it in full resolution.

Video Display
If you add a Video Format or Audiogram Output File to your production, the audio inspector will also show a separate video track in addition to the audio output and input tracks. The video playback will be synced to the audio of output and input tracks.

Supported Audio Formats

We use the native HTML5 audio element for playback and the aurora.js JavaScript audio decoders to support all common audio formats:

WAV, MP3, AAC/M4A and Opus
These formats are supported in all major browsers: Firefox, Chrome, Safari, Edge, iOS Safari and Chrome for Android.
FLAC
FLAC is supported in Firefox, Chrome, Edge and Chrome for Android - see FLAC audio format.
In Safari and iOS Safari, we use aurora.js to decode FLAC files directly in JavaScript, which works but uses much more CPU compared to native decoding!
ALAC
ALAC is not supported by any browser so far, therefore we use aurora.js to decode ALAC files directly in JavaScript. This works but uses much more CPU compared to native decoding!
Ogg Vorbis
Only supported by Firefox, Chrome and Chrome for Android - for details please see Ogg Vorbis audio format.

We suggest using a recent Firefox or Chrome browser for best performance.
Decoding FLAC and ALAC files also works in Safari and iOS with the help of aurora.js, but JavaScript decoders need a lot of CPU and sometimes have problems with exact scrolling and seeking.

Please see our blog post Audio File Formats and Bitrates for Podcasts for more details about audio formats.

Mobile Audio Inspector

Multiple responsive layouts were created to optimize the screen space usage on Android and iOS devices, so that the audio inspector is fully usable on mobile devices as well: tap into the waveform to set the playhead location, scroll horizontally to scroll waveforms, scroll vertically to scroll between tracks, use zoom gestures to zoom in/out, etc.

Unfortunately the fullscreen mode is not available on iOS devices (thanks to Apple), but it works on Android and is a really great way to inspect everything using all the available screen space:

Audio inspector in horizontal fullscreen mode on Android.

Conclusion

Try the Auphonic Audio Inspector yourself: take a look at our Audio Example Page or play with the Multitrack Audio Inspector Example.

The Audio Inspector will be shown in all productions which are created in our Web Service.
It can be used to manually check production results and input files and to send us detailed feedback about audio processing results.

Please let us know if you have any feedback or questions - more visualizations will be added in the future!








New Auphonic Transcript Editor and Improved Speech Recognition Services

Back in late 2016, we introduced Speech Recognition at Auphonic. This allows our users to create transcripts of their recordings and, more usefully, means podcasts become searchable.
Now we have integrated two more speech recognition engines: Amazon Transcribe and Speechmatics. Whilst integrating these services, we also took the opportunity to develop a completely new Transcript Editor:

Screenshot of our Transcript Editor with word confidence highlighting and the edit bar.
Try out the Transcript Editor Examples yourself!


The new Auphonic Transcript Editor is included directly in our HTML transcript output file. It displays word confidence values so you can instantly see which sections should be checked manually, supports direct audio playback and HTML/PDF/WebVTT export, and allows you to share the editor with someone else for further editing.

The new services, Amazon Transcribe and Speechmatics, offer transcription quality improvements compared to our other integrated speech recognition services.
They also return word confidence values, timestamps and some punctuation, which is exported to our output files.

The Auphonic Transcript Editor

With the integration of the two new services offering improved recognition quality and word timestamps alongside confidence scores, we realized that we could leverage these improvements to give our users easy-to-use transcription editing.
Therefore we developed a new, open source transcript editor, which is embedded directly in our HTML output file and has been designed to make checking and editing transcripts as easy as possible.

Main features of our transcript editor:
  • Edit the transcription directly in the HTML document.
  • Show/hide word confidence, to instantly see which sections should be checked manually (if you use Amazon Transcribe or Speechmatics as speech recognition engine).
  • Listen to audio playback of specific words directly in the HTML editor.
  • Share the transcript editor with others: as the editor is embedded directly in the HTML file (no external dependencies), you can just send the HTML file to someone else to manually check the automatically generated transcription.
  • Export the edited transcript to HTML, PDF or WebVTT.
  • Completely usable on all mobile devices and desktop browsers.

Examples: Try Out the Transcript Editor

Here are two examples of the new transcript editor, taken from our speech recognition audio examples page:

1. Singletrack Transcript Editor Example
Singletrack speech recognition example from the first 10 minutes of Common Sense 309 by Dan Carlin. Speechmatics was used as the speech recognition engine, without any keywords or further manual editing.
2. Multitrack Transcript Editor Example
A multitrack automatic speech recognition transcript example from the first 20 minutes of TV Eye on Marvel - Luke Cage S1E1. Amazon Transcribe was used as the speech recognition engine, without any further manual editing.
As this is a multitrack production, the transcript includes exact speaker names as well (try to edit them!).

Transcript Editing

When you click the Edit Transcript button, a dashed box appears around the text. This indicates that the text is now freely editable on this page. Your changes can be saved by using one of the export options (see below).
If you make a mistake whilst editing, you can simply use the undo/redo function of the browser to undo or redo your changes.


When working with multitrack productions, another helpful feature is the ability to change all speaker names at once throughout the whole transcript just by editing one speaker. Simply click on an instance of a speaker title and change it to the appropriate name, and the new name will then appear throughout the whole transcript.

Word Confidence Highlighting

Word confidence values are shown visually in the transcript editor, highlighted in shades of red (see screenshot above). The shade of red is dependent on the actual word confidence value: The darker the red, the lower the confidence value. This means you can instantly see which sections you should check/re-work manually to increase the accuracy.

Once you have edited the highlighted text, it will be set to white again, so it’s easy to see which sections still require editing.
Use the button Add/Remove Highlighting to disable/enable word confidence highlighting.

NOTE: Word confidence values are only available from Amazon Transcribe or Speechmatics, not from our other integrated speech recognition services!

Audio Playback

The button Activate/Stop Play-on-click allows you to hear the audio playback of the section you click on (by clicking directly on the word in the transcript editor).
This is helpful in allowing you to check the accuracy of certain words by being able to listen to them directly whilst editing, without having to go back and try to find that section within your audio file.

If you use an External Service in your production to export the resulting audio file, we will automatically use the exported file in the transcript editor.
Otherwise we will use the output file generated by Auphonic. Please note that this file is password protected for the current Auphonic user and will be deleted after 21 days.

If no audio file is available in the transcript editor, or it cannot be played because of the password protection, you will see the button Add Audio File to add a new audio file for playback.

Export Formats, Save/Share Transcript Editor

Click on the button Export... to see all export and saving/sharing options:

Save/Share Editor
The Save Editor button stores the whole transcript editor with all its current changes into a new HTML file. Use this button to save your changes for further editing or if you want to share your transcript with someone else for manual corrections (as the editor is embedded directly in the HTML file without any external dependencies).
Export HTML / Export PDF / Export WebVTT
Use one of these buttons to export the edited transcript to HTML (for WordPress, Word, etc.), to PDF (via the browser print function) or to WebVTT (so that the edited transcript can be used as subtitles or imported in web audio players of the Podlove Publisher or Podigee).
Every export format is rendered directly in the browser, no server needed.
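
For reference, the WebVTT format produced by such an export is quite simple. Here is a minimal Python sketch of how timed cues map to WebVTT (the cue tuples are assumptions for the example; the editor itself renders the export in the browser):

def vtt_timestamp(seconds):
    # WebVTT timestamps use HH:MM:SS.mmm (a dot, unlike SRT's comma)
    h, rem = divmod(seconds, 3600)
    m, s = divmod(rem, 60)
    return f"{int(h):02d}:{int(m):02d}:{s:06.3f}"

def cues_to_webvtt(cues):
    # cues: list of (start_sec, end_sec, text) tuples
    blocks = ["WEBVTT", ""]
    for start, end, text in cues:
        blocks.append(f"{vtt_timestamp(start)} --> {vtt_timestamp(end)}\n{text}\n")
    return "\n".join(blocks)

print(cues_to_webvtt([(0.0, 2.5, "Welcome to the show."),
                      (2.5, 5.0, "Today: the new transcript editor.")]))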

Amazon Transcribe

The first of the two new services, Amazon Transcribe, offers accurate transcriptions in English and Spanish at low cost, including keywords, word confidence, timestamps, and punctuation.

UPDATE 2019:
Amazon Transcribe offers more languages now - please see Amazon Transcribe Features!

Pricing
The free tier offers 60 minutes of free usage a month for 12 months. After that, it is billed monthly at a rate of $0.0004 per second ($1.44/h).
More information is available at Amazon Transcribe Pricing.
Custom Vocabulary (Keywords) Support
Custom Vocabulary (called Keywords in Auphonic) gives you the ability to expand and customize the speech recognition vocabulary, specific to your use case (e.g. product names, domain-specific terminology, or names of individuals).
The same feature is also available in the Google Cloud Speech API.
Timestamps, Word Confidence, and Punctuation
Amazon Transcribe returns a timestamp and confidence value for each word so that you can easily locate the audio in the original recording by searching for the text.
It also adds some punctuation, which is combined with our own punctuation and formatting automatically.

The high quality (especially in combination with keywords) and low cost of Amazon Transcribe make it attractive, despite it currently supporting only two languages.
However, the processing time of Amazon Transcribe is much slower compared to all our other integrated services!

Try it yourself:
Connect your Auphonic account with Amazon Transcribe at our External Services Page.

Speechmatics

Speechmatics offers accurate transcriptions in many languages including word confidence values, timestamps, and punctuation.

Many Languages
Speechmatics’ clear advantage is the sheer number of languages it supports (all major European and some Asian languages).
It also has a Global English feature, which supports different English accents during transcription.
Timestamps, Word Confidence, and Punctuation
Like Amazon, Speechmatics creates timestamps, word confidence values, and punctuation.
Pricing
Speechmatics is the most expensive speech recognition service at Auphonic.
Pricing starts at £0.06 per minute of audio, purchasable in blocks of £10 or £100. This equates to a starting rate of about $4.78/h. A reduced rate of £0.05 per minute (about $3.98/h) is available when purchasing £1,000 blocks.
They offer significant discounts for users requiring higher volumes. At this further reduced price point, the cost is similar to (or lower than) the Google Speech API. If you process a lot of content, you should contact them directly at sales@speechmatics.com and say that you wish to use it with Auphonic.
More information is available at Speechmatics Pricing.

Speechmatics offers high-quality transcripts in many languages. But these features do come at a price: it is the most expensive speech recognition service at Auphonic.

Unfortunately, their existing Custom Dictionary (keywords) feature, which would further improve the results, is not available in the Speechmatics API yet.

Try it yourself:
Connect your Auphonic account with Speechmatics at our External Services Page.

What do you think?

Any feedback about the new speech recognition services, especially about the recognition quality in various languages, is highly appreciated.

We would also like to hear any comments you have on the transcript editor in particular - is there anything missing, or anything that could be implemented better?
Please let us know!







Resumable File Uploads to Auphonic

Large file uploads in a web browser are problematic, even in 2018. When working with a poor network connection, uploads can fail and have to be retried from the start.

At Auphonic, our users have to upload large audio and video files, or multiple media files when creating a multitrack production. To minimize any potential issues, we integrated various external services which are specialized for large file transfers, like FTP, SFTP, Dropbox, Google Drive, S3, etc.

To further minimize issues, as of today we have also released resumable and chunked direct file uploads in the web browser to auphonic.com.

If you are not interested in the technical details, please just go to the section Resumable Uploads in Auphonic below.

The Problem with Large File Uploads in the Browser

On mobile networks (which remain fragile) or unstable WiFi connections, file uploads are often interrupted and fail. There are also many areas of the world where connections are quite poor, which makes uploading big files frustrating.

After an interrupted file upload, the web browser must restart the whole upload from the start, which is a problem when it happens in the middle of a 4GB video file upload on a slow connection.
Furthermore, the longer an upload takes, the more likely it is that a network glitch will interrupt it, which means the upload has to be retried from the start.

The Solution: Chunked, Resumable Uploads

To avoid user frustration, we need to be able to detect network errors and potentially resume an upload without having to restart it from the beginning.

To achieve this, we have to split a file upload into smaller chunks directly within the web browser, so that these chunks can then be sent to the server afterwards.
If an upload fails or the user wants to pause, it is possible to resume it later and only send those chunks that have not already been uploaded.
If there is a network interruption or change, the upload will be retried automatically.

Companies like Dropbox, Google and Amazon AWS all have their own protocols and APIs for chunked uploads, but there are also some open source implementations available which offer resumable uploads:

resumable.js [link]:
"A JavaScript library providing multiple simultaneous, stable and resumable uploads via the HTML5 File API"
This solution is a JavaScript library only and requires that the protocol be implemented on the server as well.
tus.io [link]:
"Open Protocol for Resumable File Uploads"
Tus.io offers a simple, cheap and reusable stack for clients and servers (in many languages). They have a blog with further information about resumable uploads, see tus blog.
plupload [link]:
A JavaScript library, similar to resumable.js, which requires a separate server implementation.

We chose to use resumable.js and developed our own server implementation.
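
Our server implementation is not public, but to sketch the idea: resumable.js sends each chunk with identifying parameters (resumableIdentifier, resumableChunkNumber, etc.) and first asks, via GET, whether a chunk already exists, so only missing chunks are re-sent. A minimal, hypothetical server for this protocol could look like this in Python/Flask (storage location and route are placeholders; reassembling the finished file once all chunks have arrived is omitted):

import os
from flask import Flask, request

app = Flask(__name__)
CHUNK_DIR = "/tmp/upload_chunks"  # placeholder storage location

def chunk_path(identifier, chunk_number):
    # one file per chunk, keyed by resumable.js' unique file identifier
    return os.path.join(CHUNK_DIR, f"{identifier}.{chunk_number}")

# resumable.js probes each chunk with a GET request, so a resumed
# upload only re-sends the chunks that are still missing.
@app.route("/upload", methods=["GET"])
def check_chunk():
    path = chunk_path(request.args["resumableIdentifier"],
                      request.args["resumableChunkNumber"])
    return ("", 200) if os.path.exists(path) else ("", 404)

# Each chunk is then POSTed as multipart form data.
@app.route("/upload", methods=["POST"])
def save_chunk():
    os.makedirs(CHUNK_DIR, exist_ok=True)
    path = chunk_path(request.form["resumableIdentifier"],
                      request.form["resumableChunkNumber"])
    request.files["file"].save(path)
    return ("", 200)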

Resumable Uploads in Auphonic

If you upload files to a singletrack or multitrack production, you will see the upload progress bar and a pause button, which is one way to pause and resume an upload:

It is also possible to close the browser completely or shut down your computer during the upload, then edit the production and upload the file again later. This will just resume the file upload from the position where it stopped.
(Previously uploaded chunks are saved for 24h on our servers; after that, you have to start the whole upload again.)

In case of a network problem or if you switch to a different connection, we will resume the upload automatically.
This should solve many problems which were reported by some users in the past!

You can of course also use any of our external services for stable incoming and outgoing file transfers!

Do you still have Uploading Issues?

We hope that uploads to Auphonic are much more reliable now, even on poor connections.

If you still experience any problems, please let us know.
We are very happy about any bug reports and will do our best to fix them!








Auphonic Adaptive Leveler Customization (Beta Update)

In late August, we launched the private beta program of our advanced audio algorithm parameters. After feedback from our users and many new experiments, we are proud to release a complete rework of the Adaptive Leveler parameters:

In the previous version, we based our Adaptive Leveler parameters on the Loudness Range descriptor (LRA), which is included in the EBU R128 specification.
Although it worked, it turned out to be very difficult to set a loudness range target for diverse audio content that includes speech, background sounds, music parts, etc. The results were not predictable, and it was hard to find good target values.
Therefore we developed our own algorithm to measure the dynamic range of audio signals, which works similarly for speech, music and other audio content.

The following advanced parameters for our Adaptive Leveler allow you to customize which parts of the audio should be leveled (foreground, all, speech, music, etc.), how much they should be leveled (dynamic range), and how much micro-dynamics compression should be applied.

To try out the new algorithms, please join our private beta program and let us know your feedback!

Leveler Preset

The Leveler Preset defines which parts of the audio should be adjusted by our Adaptive Leveler:

  • Default Leveler:
    Our classic, default leveling algorithm as demonstrated in the Leveler Audio Examples. Use it if you are unsure.
  • Foreground Only Leveler:
    This preset reacts slower and levels foreground parts only. Use it if you have background speech or background music, which should not be amplified.
  • Fast Leveler:
    A preset which reacts much faster. It is built for recordings with fast and extreme loudness differences, for example, to amplify very quiet questions from the audience in a lecture recording, to balance fast-changing soft and loud voices within one audio track, etc.
  • Amplify Everything:
    Amplify as much as possible. Similar to the Fast Leveler, but also amplifies non-speech background sounds like noise.

Leveler Dynamic Range

Our default Leveler tries to normalize all speakers to a similar loudness so that a consumer in a car or subway doesn't feel the need to reach for the volume control.
However, in other environments (living room, cinema, etc.) or in dynamic recordings, you might want more level differences (Dynamic Range, Loudness Range / LRA) between speakers and within music segments.

The parameter Dynamic Range controls how much leveling is applied: Higher values result in more dynamic output audio files (less leveling). If you want to increase the dynamic range by 3dB (or LU), just increase the Dynamic Range parameter by 3dB.
We also like to call this the Loudness Comfort Zone: between a minimum and a maximum possible level (the comfort zone), no leveling is applied. So if your input file already has a small dynamic range (is within the comfort zone), our leveler will simply be bypassed.

Example Use Cases:
Higher dynamic range values should be used if you want to keep more loudness differences in dynamic narration or dynamic music recordings (live concert/classical).
It is also possible to utilize this parameter to generate automatic mixdowns with different loudness range (LRA) values for different target environments (very compressed ones like mobile devices or Alexa, very dynamic ones like home cinema, etc.).

Compressor

Controls Micro-Dynamics Compression:
The compressor reduces the volume of short and loud spikes like "p", "t" or laughter (short-term dynamics) and also shapes the sound of your voice (it will sound more or less "processed").
The Leveler, on the other hand, adjusts mid-term level differences, as a sound engineer would do with the faders of an audio mixer, so that a listener doesn't have to adjust the playback volume all the time.
For more details please see Loudness Normalization and Compression of Podcasts and Speech Audio.

Possible values are:
  • Auto:
    The compressor setting depends on the selected Leveler Preset. Medium compression is used in Foreground Only and Default Leveler presets, Hard compression in our Fast Leveler and Amplify Everything presets.
  • Soft:
    Uses less compression.
  • Medium:
    Our default setting.
  • Hard:
    More compression, especially tries to compress short and extreme level overshoots. Use this preset if you want your voice to sound very processed, or if you have extreme and fast-changing level differences.
  • Off:
    No short-term dynamics compression is used at all, only mid-term leveling. Switch off the compressor if you just want to adjust the loudness range without any additional micro-dynamics compression.

Separate Music/Speech Parameters

Use the switch Separate Music/Speech Parameters (top right) to see separate Adaptive Leveler parameters for music and speech segments, so you can control all leveling details separately for speech and music parts:

For dialog intelligibility improvements in films and TV, it is important that the speech/dialog level and loudness range are not too soft compared to the overall programme level and loudness range. This parameter allows you to use more leveling in speech parts while keeping music and FX elements less processed.
Note: Speech, music and overall loudness and loudness range of your production are also displayed in our Audio Processing Statistics!

Example Use Case:
Music live recordings or dynamic music mixes, where you want to amplify all speakers (speech dynamic range should be small) but keep the dynamic range within and between music segments (music dynamic range should be high).
Dialog intelligibility improvements for films and TV, without affecting music and FX elements.

Other Advanced Audio Algorithm Parameters

We also offer advanced audio parameters for our Noise Reduction, Hum Reduction and Global Loudness Normalization algorithms:

For more details, please see the Advanced Audio Algorithms Documentation.
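
To give a feel for how such settings could travel with a production, here is a rough Python sketch against the Auphonic API. Note that the advanced parameter names below are hypothetical placeholders (the real names are listed in the Advanced Audio Algorithms Documentation), while "leveler" and "loudnesstarget" are regular algorithm fields:

import requests

response = requests.post(
    "https://auphonic.com/api/productions.json",
    auth=("your_username", "your_password"),
    json={
        "metadata": {"title": "Leveler customization test"},
        "algorithms": {
            "leveler": True,
            "loudnesstarget": -16,
            "leveler_preset": "foreground_only",  # hypothetical key/value
            "dynamic_range": 6,                   # hypothetical, in LU
            "compressor": "medium",               # hypothetical
        },
    },
)
print(response.json())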

Want to know more?

If you want to know more details about our advanced algorithm parameters (especially the leveler parameters), please listen to the following podcast interview with Chris Curran (Podcast Engineering School):
Auphonic’s New Advanced Features, with Georg Holzmann – PES 108

Advanced Parameters Private Beta and Feedback

At the moment the advanced algorithm parameters are for beta users only. This is to allow us to get user feedback, so we can change the parameters to suit user needs.
Please let us know your case studies, if you need any other algorithm parameters or if you have any questions!

Here are some private beta invitation codes:

jbwCVpLYrl 6zmLqq8o3z RXYIUbC6al QDmIZLuPKa JIrnGRZBgl SWQOWeZOBD ISeBCA9gTy w5FdsyhZVI qWAvANQ5mC twOjdHrit3
KwnL2Le6jB 63SE2V54KK G32AULFyaM 3H0CLYAwLU mp1GFNVZHr swzvEBRCVa rLcNJHUNZT CGGbL0O4q1 5o5dUjruJ9 hAggWBpGvj
ykJ57cFQSe 0OHAD2u1Dx RG4wSYTLbf UcsSYI78Md Xedr3NPCgK mI8gd7eDvO 0Au4gpUDJB mYLkvKYz1C ukrKoW5hoy S34sraR0BU
J2tlV0yNwX QwNdnStYD3 Zho9oZR2e9 jHdjgUq420 51zLbV09p4 c0cth0abCf 3iVBKHVKXU BK4kTbDQzt uTBEkMnSPv tg6cJtsMrZ
BdB8gFyhRg wBsLHg90GG EYwxVUZJGp HLQ72b65uH NNd415ktFS JIm2eTkxMX EV2C5RAUXI a3iwbxWjKj X1AT7DCD7V y0AFIrWo5l
We are happy to send further invitation codes to all interested users - please do not hesitate to contact us!

If you have an invitation code, you can enter it here to activate the advanced audio algorithm parameters:
Auphonic Algorithm Parameters Private Beta Activation








Horizontal and/or Vertical Format in Kayak Photography

Like most paddlers, I have a tendency to shoot pictures in a horizontal (landscape) format. It is trickier to shoot in a vertical format from my tippy kayaks, especially when I have to use a paddle to stabilize my camera.





Winter Stand Up Paddling on Horsetooth Reservoir

I love paddling on the Horsetooth Reservoir in the cold season. Boat ramps are closed, there is no power boat traffic, and it is usually quiet and calm. Snow and ice can enhance the scenery. It is a great time to paddle, train, relax or photograph. The Horsetooth stays […]





How to Foster Real-Time Client Engagement During Moderated Research

When we conduct moderated research, like user interviews or usability tests, for our clients, we encourage them to observe as many sessions as possible. We find that when clients see us interview their users and hear responses firsthand, they learn about their users’ needs in real time and become more active participants in the process. One way we help clients feel engaged during remote sessions is to establish a real-time communication backchannel that empowers them to flag responses they’d like to dig into further and to share their ideas for follow-up questions.

There are several benefits to establishing a communication backchannel for moderated sessions:

  • Everyone on the team, including both internal and client team members, can be actively involved throughout the data collection process rather than waiting to passively consume findings.
  • Team members can identify follow-up questions in real time, which allows the moderator to incorporate those questions during the current session, rather than just considering them for future sessions.
  • Subject matter experts can identify more detailed and specific follow-up questions that the moderator may not think to ask.
  • Even though the whole team is engaged, a single moderator still maintains control over the conversation which creates a consistent experience for the participant.

If you’re interested in creating your own backchannel, here are some tips to make the process work smoothly:

  • Use the chat tool that is already being used on the project. In most cases, we use a joint Slack workspace for the session backchannel but we’ve also used Microsoft Teams.
  • Create a dedicated channel like #moderated-sessions. Conversation in this channel should be limited to backchannel discussions during sessions. This keeps the communication consolidated and makes it easier for the moderator to stay focused during the session.
  • Keep communication limited. Channel participants should ask basic questions that are easy to consume quickly. Supplemental commentary and analysis should not take place in the dedicated channel.
  • Use emoji responses. The moderator can add a quick thumbs up to indicate that they’ve seen a question.

Introducing backchannels for communication during remote moderated sessions has been a beneficial change to our research process. It not only provides an easy way for clients to stay engaged during the data collection process but also increases the moderator’s ability to focus on the most important topics and to ask the most useful follow-up questions.





Markdown Comes Alive! Part 1, Basic Editor

In my last post, I covered what LiveView is at a high level. In this series, we’re going to dive deeper and implement a LiveView powered Markdown editor called Frampton. This series assumes you have some familiarity with Phoenix and Elixir, including having them set up locally. Check out Elizabeth’s three-part series on getting started with Phoenix for a refresher.

This series has a companion repository published on GitHub. Get started by cloning it down and switching to the starter branch. You can see the completed application on master. Our goal today is to make a Markdown editor, which allows a user to enter Markdown text on a page and see it rendered as HTML next to it in real-time. We’ll make use of LiveView for the interaction and the Earmark package for rendering Markdown. The starter branch provides some styles and installs LiveView.

Rendering Markdown

Let’s set aside the LiveView portion and start with our data structures and the functions that operate on them. To begin, a Post will have a body, which holds the rendered HTML string, and a title. A string of markdown can be turned into HTML by calling Post.render(post, markdown). I think that just about covers it!

First, let’s define our struct in lib/frampton/post.ex:

defmodule Frampton.Post do
  defstruct body: "", title: ""

  def render(%__MODULE__{} = post, markdown) do
    # Fill me in!
  end
end

Now the failing test (in test/frampton/post_test.exs):

describe "render/2" do
  test "returns our post with the body set" do
    markdown = "# Hello world!"                                                                                                                 
    assert Post.render(%Post{}, markdown) == {:ok, %Post{body: "<h1>Hello World</h1>
"}}
  end
end

Our render method will just be a wrapper around Earmark.as_html!/2 that puts the result into the body of the post. Add {:earmark, "~> 1.4.3"} to your deps in mix.exs, run mix deps.get and fill out the render function:

def render(%__MODULE__{} = post, markdown) do
  html = Earmark.as_html!(markdown)
  {:ok, Map.put(post, :body, html)}
end

Our test should now pass, and we can render posts! [Note: we’re using the as_html! method, which prints error messages instead of passing them back to the user. A smarter version of this would handle any errors and show them to the user. I leave that as an exercise for the reader…] Time to play around with this in an IEx prompt (run iex -S mix in your terminal):

iex(1)> alias Frampton.Post
Frampton.Post
iex(2)> post = %Post{}
%Frampton.Post{body: "", title: ""}
iex(3)> {:ok, updated_post} = Post.render(post, "# Hello world!")
{:ok, %Frampton.Post{body: "<h1>Hello world!</h1>\n", title: ""}}
iex(4)> updated_post
%Frampton.Post{body: "<h1>Hello world!</h1>\n", title: ""}

Great! That’s exactly what we’d expect. You can find the final code for this in the render_post branch.

LiveView Editor

Now for the fun part: Editing this live!

First, we’ll need a route for the editor to live at: /editor sounds good to me. LiveViews can be rendered from a controller, or directly in the router. We don’t have any initial state, so let’s go straight from the router.

First, let's put up a minimal test. In test/frampton_web/live/editor_live_test.exs:

defmodule FramptonWeb.EditorLiveTest do
  use FramptonWeb.ConnCase
  import Phoenix.LiveViewTest

  test "the editor renders" do
    conn = get(build_conn(), "/editor")
    assert html_response(conn, 200) =~ ~s(data-test="editor")
  end
end

This test doesn’t do much yet, but notice that it isn’t live view specific. Our first render is just the same as any other controller test we’d write. The page’s content is there right from the beginning, without the need to parse JavaScript or make API calls back to the server. Nice.

To make that test pass, add a route to lib/frampton_web/router.ex. First, we import the LiveView code, then we render our Editor:

import Phoenix.LiveView.Router
# … Code skipped ...
# Inside of `scope "/"`:
live "/editor", EditorLive

Now place a minimal EditorLive module, in lib/frampton_web/live/editor_live.ex:

defmodule FramptonWeb.EditorLive do
  use Phoenix.LiveView

  def render(assigns) do
    ~L"""
      <div data-test="editor">
        <h1>Hello world!</h1>
      </div>
      """
  end

  def mount(_params, _session, socket) do
    {:ok, socket}
  end
end

And we have a passing test suite! The ~L sigil designates that LiveView should track changes to the content inside. We could keep all of our markup in this render/1 method, but let’s break it out into its own template for demonstration purposes.

Move the contents of render into lib/frampton_web/templates/editor/show.html.leex, and replace EditorLive.render/1 with this one liner: def render(assigns), do: FramptonWeb.EditorView.render("show.html", assigns). And finally, make an EditorView module in lib/frampton_web/views/editor_view.ex:

defmodule FramptonWeb.EditorView do
  use FramptonWeb, :view
  import Phoenix.LiveView
end

Our test should now be passing, and we’ve got a nicely separated out template, view and “live” server. We can keep markup in the template, helper functions in the view, and reactive code on the server. Now let’s move forward to actually render some posts!

Handling User Input

We’ve got four tasks to accomplish before we are done:

  1. Take markdown input from the textarea
  2. Send that input to the LiveServer
  3. Turn that raw markdown into HTML
  4. Return the rendered HTML to the page.

Event binding

To start with, we need to annotate our textarea with an event binding. This tells the liveview.js framework to forward DOM events to the server, using our liveview channel. Open up lib/frampton_web/templates/editor/show.html.leex and annotate our textarea:

<textarea phx-keyup="render_post"></textarea>

This names the event (render_post) and sends it on each keyup. Let’s crack open our web inspector and look at the web socket traffic. Using Chrome, open the developer tools, navigate to the network tab and click WS. In development you’ll see two socket connections: one is Phoenix LiveReload, which polls your filesystem and reloads pages appropriately. The second one is our LiveView connection. If you let it sit for a while, you’ll see that it's emitting a “heartbeat” call. If your server is running, you’ll see that it responds with an “ok” message. This lets LiveView clients know when they've lost connection to the server and respond appropriately.

Now, type some text and watch as it sends down each keystroke. However, you’ll also notice that the server responds with a “phx_error” message and wipes out our entered text. That's because our server doesn’t know how to handle the event yet and is throwing an error. Let's fix that next.

Event handling

We’ll catch the event in our EditorLive module. The LiveView behavior defines a handle_event/3 callback that we need to implement. Open up lib/frampton_web/live/editor_live.ex and key in a basic implementation that lets us catch events:

def handle_event("render_post", params, socket) do
  IO.inspect(params)

  {:noreply, socket}
end

The first argument is the name we gave to our event in the template, the second is the data from that event, and finally the socket we’re currently talking through. Give it a try, typing in a few characters. Look at your running server and you should see a stream of events that look something like this:

There’s our keystrokes! Next, let’s pull out that value and use it to render HTML.

Rendering Markdown

Let’s adjust our handle_event to pattern match out the value of the textarea:

def handle_event("render_post", %{"value" => raw}, socket) do

Now that we’ve got the raw markdown string, turning it into HTML is easy thanks to the work we did earlier in our Post module. Fill out the body of the function like this:

{:ok, post} = Post.render(%Post{}, raw)
IO.inspect(post)

If you type into the textarea you should see output that looks something like this:

Perfect! Lastly, it’s time to send that rendered html back to the page.

Returning HTML to the page

In a LiveView template, we can identify bits of dynamic data that will change over time. When they change, LiveView will compare what has changed and send over a diff. In our case, the dynamic content is the post body.

Open up show.html.leex again and modify it like so:

<div class="rendered-output">
  <%= @post.body %>
</div>

Refresh the page and see:

Whoops!

The @post variable will only be available after we put it into the socket’s assigns. Let’s initialize it with a blank post. Open editor_live.ex and modify our mount/3 function:

def mount(_params, _session, socket) do
  post = %Post{}
  {:ok, assign(socket, post: post)}
end

In the future, we could retrieve this from some kind of storage, but for now, let's just create a new one each time the page refreshes. Finally, we need to update the Post struct with user input. Update our event handler like this:

def handle_event("render_post", %{"value" => raw}, %{assigns: %{post: post}} = socket) do
  {:ok, post} = Post.render(post, raw)
  {:noreply, assign(socket, post: post)}
end

Let's load up http://localhost:4000/editor and see it in action.

Nope, that’s not quite right! Phoenix won’t render this as HTML because it’s unsafe user input. We can get around this (very good and useful) security feature by wrapping our content in a raw/1 call, as shown below. We don’t have a database, and user processes are isolated from each other by Elixir. The worst thing a malicious user could do would be crash their own session, which doesn’t bother me one bit.
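
Concretely, the body line in show.html.leex becomes the following (using Phoenix.HTML’s raw/1, which is what’s meant above):

<div class="rendered-output">
  <%= raw @post.body %>
</div>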

Check the edit_posts branch for the final version.

Conclusion

That’s a good place to stop for today. We’ve accomplished a lot! We’ve got a dynamically rendering editor that takes user input, processes it and updates the page. And we haven’t written any JavaScript, which means we don’t have to maintain or update any JavaScript. Our server code is built on the rock-solid foundation of the BEAM virtual machine, giving us a great deal of confidence in its reliability and resilience.

In the next post, we’ll tackle making a shared editor, allowing multiple users to edit the same post. This project will highlight Elixir’s concurrency capabilities and demonstrate how LiveView builds on them to enable some incredible user experiences.




Why's it so hard to get the cool stuff approved?

The classic adage is “good design speaks for itself.” Which would mean that if something’s as good of an idea as you think it is, a client will instantly see that it’s good too, right?

Here at Viget, we’re always working with new and different clients. Each with their own challenges and sensibilities. But after ten years of client work, I can’t help but notice a pattern emerge when we’re trying to get approval on especially cool, unconventional parts of a design.

So let’s break down some of those patterns to hopefully better understand why clients hesitate, and what strategies we’ve been using lately to help get the work we’re excited about approved.

Imagine this: the parallax homepage with elements that move around in surprising ways or a unique navigation menu that conceptually reinforces a site’s message. The way the content cards on a page will, like, be literal cards that will shuffle and move around. Basically, any design that feels like an exciting, novel challenge will need the client to “get it.” And that often turns out to be the biggest challenge of all.

There are plenty of practical reasons cool designs get shot down. A client is usually more than one stakeholder, and more than the team of people you’re working with directly. On any project, there’s an amount of telephone you end up playing. Or, there’s always the classic foes: budgets and deadlines. Any idea should fit in those predetermined constraints. But as a project goes along, budgets and deadlines find a way to get tighter than you planned.

But innovative designs and interactions can seem especially scary for clients to approve. There are three fears that often pop up on projects:

The fear of change. 

Maybe the client expected something simple, a light refresh. Something that doesn’t challenge their design expectations or require more time and effort to understand. And on our side, maybe we didn’t sufficiently ease them into our way of thinking and open them up to why we think something bigger and bolder is the right solution for them. Baby steps, y’all.

The fear of the unknown. 

Or, less dramatically, a lack of understanding of the medium. In the past, we have struggled with how to present an interactive, animated design to a client before it’s actually built. Looking at a site that does something conceptually similar as an example can be tough. It’s asking a lot of a client’s imagination to show them a site about boots that has a cool spinning animation and get meaningful feedback about how a spinning animation would work on their site about after-school tutoring. Or maybe we’ve created static designs, then talked around what we envision happening. Again, what seems so clear in our minds as professionals entrenched in this stuff every day can be tough for someone outside the tech world to clearly understand.

The fear of losing control. 

We’re all about learning from past mistakes. So let’s say, after dealing with that fear of the unknown on a project, next time you go in the opposite direction. You invest time up front creating something polished. Maybe you even get the developer to build a prototype that moves and looks like the real thing. You’ve taken all the vague mystery out of the process, so a client will be thrilled, right? Surprise, probably not! Most clients are working with you because they want to conquer the noble quest that is their redesign together. When we jump straight to showing something that looks polished, even if it’s not really, it can feel like we jumped ahead without keeping them involved. Like we took away their input. They can also feel demotivated to give good, meaningful feedback on a polished prototype because it looks “done.”

So what to do? Lately we have found low-fidelity prototypes to be a great tool for combating these fears and better communicating our ideas.

What are low-fidelity prototypes?

Low-fidelity prototypes are a tool that designers can create quickly to illustrate an idea, without sinking time into making it pixel-perfect. Some recent examples of prototypes we've created include a clickable Figma or InVision prototype put together with Whimsical wireframes:

A rough animation created in Principle illustrating less programmatic animation:

And even creating an animated storyboard in Photoshop:

They’re rough enough that there’s no way they could be confused for a final product. But customized so that a client can immediately understand what they’re looking at and what they need to respond to. Low-fidelity prototypes hit a sweet spot that addresses those client fears head on.

That fear of change? A lo-fi prototype starts rough and small, so it can ease a client into a dramatic change without overwhelming them. It’s just a first step. It gives them time to react and warm up to something that’ll ultimately be a big change.

It also cuts out the fear of the unknown. Seeing something moving around, even if it’s rough, can be so much more clear than talking ourselves in circles about how we think it will move, and hoping the client can imagine it. The feature is no longer an enigma cloaked in mystery and big talk, but something tangible they can point at and ask concrete questions about.

And finally, a lo-fi prototype doesn’t threaten a client’s sense of control. Low-fidelity means it’s clearly still a work in progress! It’s just an early step in the creative process, and therefore communicates that we’re still in the middle of that process together. There’s still plenty of room for their ideas and feedback.

Lo-fi prototypes: client-tested, internal team-approved

There are a lot of reasons to love lo-fi prototypes internally, too!

They’re quick and easy. 

We can whip up multiple ideas within a few hours, without sinking the time into getting our hearts set on any one thing. In an agency setting especially, time is limited, so the faster we can get an idea out of our own heads, the better.

They’re great to share with developers. 

Ideally, the whole team is working together simultaneously, collaborating every step of the way. Realistically, a developer often doesn’t have time during a project’s early design phase. Lo-fi prototypes are concrete enough that a developer can quickly tell if building an idea will be within scope. It helps us catch impractical ideas early and helps us all collaborate to create something that’s both cool and feasible.

Stay tuned for posts in the near future diving into some of our favorite processes for creating lo-fi prototypes!



      • Design & Content

      to

      Committed to the wrong branch? -, @{upstream}, and @{-1} to the rescue

I get into this situation sometimes. Maybe you do too. I merge feature work into a branch used to collect features, and then continue development, but on that branch instead of back on the feature branch:

      git checkout feature
      # ... bunch of feature commits ...
      git push
      git checkout qa-environment
      git merge --no-ff --no-edit feature
      git push
      # deploy qa-environment to the QA remote environment
      # ... more feature commits ...
      # oh. I'm not committing in the feature branch like I should be

and then I have to move those commits to the feature branch where they belong and take them out of the throwaway accumulator branch:

      git checkout feature
      git cherry-pick origin/qa-environment..qa-environment
      git push
      git checkout qa-environment
      git reset --hard origin/qa-environment
      git merge --no-ff --no-edit feature
      git checkout feature
      # ready for more feature commits

      Maybe you prefer

      git branch -D qa-environment
      git checkout qa-environment

      over

      git checkout qa-environment
      git reset --hard origin/qa-environment

Either way, that works. But it'd be nicer if we didn't have to type, or even remember, the branches' names and the remote's name. Those names are what keep this from being a context-independent string of commands you can run any time this mistake happens. That's what we're going to solve here.

      Shorthands for longevity

      I like to use all possible natively supported shorthands. There are two broad motivations for that.

1. Fingers have a limited number of movements in them. Save as many as possible now so there are still some left late in life.
2. Current research suggests that multitasking has detrimental effects on memory. Development tends to be very heavy on multitasking. Maybe relieving some of the pressure on quick-access short-term memory (like knowing all the relevant branch names) adds up to a healthier memory down the line.

      First up for our scenario: the - shorthand, which refers to the previously checked out branch. There are a few places we can't use it, but it helps a lot:

      Bash
      # USING -
      
      git checkout feature
      # hack hack hack
      git push
      git checkout qa-environment
git merge --no-ff --no-edit -        # 🎉
git push
# hack hack hack
# whoops
git checkout -        # now on feature 🎉
git cherry-pick origin/qa-environment..qa-environment
git push
git checkout - # now on qa-environment 🎉
git reset --hard origin/qa-environment
git merge --no-ff --no-edit -        # 🎉
git checkout -                       # 🎉
      # on feature and ready for more feature commits
      Bash
      # ORIGINAL
      
      git checkout feature
      # hack hack hack
      git push
      git checkout qa-environment
      git merge --no-ff --no-edit feature
      git push
      # hack hack hack
      # whoops
      git checkout feature
      git cherry-pick origin/qa-environment..qa-environment
      git push
      git checkout qa-environment
      git reset --hard origin/qa-environment
      git merge --no-ff --no-edit feature
      git checkout feature
      # ready for more feature commits

      We cannot use - when cherry-picking a range

      > git cherry-pick origin/-..-
      fatal: bad revision 'origin/-..-'
      
      > git cherry-pick origin/qa-environment..-
      fatal: bad revision 'origin/qa-environment..-'

and even if we could, we'd still have to provide the remote's name (here, origin).

That shorthand doesn't apply to the later reset --hard command, and we cannot use it in the branch -D && checkout approach either. branch -D does not support the - shorthand, and once the branch is deleted, checkout can't reach it with -:

      # assuming that branch-a has an upstream origin/branch-a
      > git checkout branch-a
      > git checkout branch-b
      > git checkout -
      > git branch -D -
      error: branch '-' not found.
      > git branch -D branch-a
      > git checkout -
      error: pathspec '-' did not match any file(s) known to git

      So we have to remember the remote's name (we know it's origin because we are devoting memory space to knowing that this isn't one of those times it's something else), the remote tracking branch's name, the local branch's name, and we're typing those all out. No good! Let's figure out some shorthands.

      @{-<n>} is hard to say but easy to fall in love with

We can do a little better by using @{-<n>} (you'll also sometimes see it referred to by the older @{-N}). It is a special construct for referring to the nth previously checked out ref.

      > git checkout branch-a
      > git checkout branch-b
> git rev-parse --abbrev-ref @{-1} # the name of the previously checked out branch
      branch-a
      > git checkout branch-c
> git rev-parse --abbrev-ref @{-2} # the name of the branch checked out before the previous one
      branch-a

      Back in our scenario, we're on qa-environment, we switch to feature, and then want to refer to qa-environment. That's @{-1}! So instead of

      git cherry-pick origin/qa-environment..qa-environment

      We can do

      git cherry-pick origin/qa-environment..@{-1}

      Here's where we are (🎉 marks wins from -, 💥 marks the win from @{-1})

      Bash
      # USING - AND @{-1}
      
      git checkout feature
      # hack hack hack
      git push
      git checkout qa-environment
git merge --no-ff --no-edit -                # 🎉
git push
# hack hack hack
# whoops
git checkout -                               # 🎉
git cherry-pick origin/qa-environment..@{-1} # 💥
git push
git checkout -                               # 🎉
git reset --hard origin/qa-environment
git merge --no-ff --no-edit -                # 🎉
git checkout -                               # 🎉
      # ready for more feature commits
      Bash
      # ORIGINAL
      
      git checkout feature
      # hack hack hack
      git push
      git checkout qa-environment
      git merge --no-ff --no-edit feature
      git push
      # hack hack hack
      # whoops
      git checkout feature
      git cherry-pick origin/qa-environment..qa-environment
      git push
      git checkout qa-environment
      git reset --hard origin/qa-environment
      git merge --no-ff --no-edit feature
      git checkout feature
      # ready for more feature commits

      One down, two to go: we're still relying on memory for the remote's name and the remote branch's name and we're still typing both out in full. Can we replace those with generic shorthands?

Since @{-1} is the ref itself, not the ref's name, we can't do

      > git cherry-pick origin/@{-1}..@{-1}
      origin/@{-1}
      fatal: ambiguous argument 'origin/@{-1}': unknown revision or path not in the working tree.
      Use '--' to separate paths from revisions, like this:
      'git <command> [<revision>...] -- [<file>...]'

      because there is no branch origin/@{-1}. For the same reason, @{-1} does not give us a generalized shorthand for the scenario's later git reset --hard origin/qa-environment command.

      But good news!

      Do @{u} @{push}

@{upstream}, or its shorthand @{u}, is the remote branch that would be pulled from if git pull were run. @{push} is the remote branch that would be pushed to if git push were run.

      > git checkout branch-a
      Switched to branch 'branch-a'
      Your branch is ahead of 'origin/branch-a' by 3 commits.
        (use "git push" to publish your local commits)
      > git reset --hard origin/branch-a
      HEAD is now at <the SHA origin/branch-a is at>

      we can

      > git checkout branch-a
      Switched to branch 'branch-a'
      Your branch is ahead of 'origin/branch-a' by 3 commits.
        (use "git push" to publish your local commits)
      > git reset --hard @{u}                                # <-- So Cool!
      HEAD is now at <the SHA origin/branch-a is at>

      Tacking either onto a branch name will give that branch's @{upstream} or @{push}. For example

      git checkout branch-a@{u}

      is the branch branch-a pulls from.

      In the common workflow where a branch pulls from and pushes to the same branch, @{upstream} and @{push} will be the same, leaving @{u} as preferable for its terseness. @{push} shines in triangular workflows where you pull from one remote and push to another (see the external links below).
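If you ever want to double-check what these will resolve to before running something destructive, git rev-parse (which we used above with @{-<n>}) can print the names. A quick sketch, assuming branch-a has an upstream origin/branch-a and a centralized pull/push workflow:

> git checkout branch-a
> git rev-parse --abbrev-ref @{u}
origin/branch-a
> git rev-parse --abbrev-ref @{push}
origin/branch-a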

      Going back to our scenario, it means short, portable commands with a minimum human memory footprint. (🎉 marks wins from -, 💥 marks the win from @{-1}, 😎 marks the wins from @{u}.)

      Bash
      # USING - AND @{-1} AND @{u}
      
      git checkout feature
      # hack hack hack
      git push
      git checkout qa-environment
git merge --no-ff --no-edit -    # 🎉
git push
# hack hack hack
# whoops
git checkout -                   # 🎉
git cherry-pick @{-1}@{u}..@{-1} # 💥😎
git push
git checkout -                   # 🎉
git reset --hard @{u}            # 😎
git merge --no-ff --no-edit -    # 🎉
git checkout -                   # 🎉
      # ready for more feature commits
      Bash
      # ORIGINAL
      
      git checkout feature
      # hack hack hack
      git push
      git checkout qa-environment
      git merge --no-ff --no-edit feature
      git push
      # hack hack hack
      # whoops
      git checkout feature
      git cherry-pick origin/qa-environment..qa-environment
      git push
      git checkout qa-environment
      git reset --hard origin/qa-environment
      git merge --no-ff --no-edit feature
      git checkout feature
      # ready for more feature commits

      Make the things you repeat the easiest to do

      Because these commands are generalized, we can run some series of them once, maybe

      git checkout - && git reset --hard @{u} && git checkout -

      or

git checkout - && git cherry-pick @{-1}@{u}..@{-1} && git checkout - && git reset --hard @{u} && git checkout -

and then those will be in the shell history just waiting to be retrieved and run again the next time, whether with Ctrl-R incremental search or history substring searching bound to the up arrow or however your interactive shell is configured. Or make it an alias, or even better an abbreviation if your interactive shell supports them. Save the body wear and tear, give memory a break, and level up in Git.
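As a minimal sketch, that second sequence could live behind a Git alias (the name fixup is made up; pick your own):

git config --global alias.fixup '!git checkout - && git cherry-pick @{-1}@{u}..@{-1} && git checkout - && git reset --hard @{u} && git checkout -'

# then, the next time you commit to the wrong branch:
git fixup

Because every ref in the chain is relative, the alias works no matter which feature and accumulator branches you happen to be juggling.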

      And keep going

      The GitHub blog has a good primer on triangular workflows and how they can polish your process of contributing to external projects.

      The FreeBSD Wiki has a more in-depth article on triangular workflow process (though it doesn't know about @{push} and @{upstream}).

The construct @{-<n>} and the suffixes @{push} and @{upstream} are all documented in the gitrevisions spec.



        • Code
        • Front-end Engineering
        • Back-end Engineering

        to

        TrailBuddy: Using AI to Create a Predictive Trail Conditions App

        Viget is full of outdoor enthusiasts and, of course, technologists. For this year's Pointless Weekend, we brought these passions together to build TrailBuddy. This app aims to solve that eternal question: Is my favorite trail dry so I can go hike/run/ride?

        While getting muddy might rekindle fond childhood memories for some, exposing your gear to the elements isn’t great – it’s bad for your equipment and can cause long-term, and potentially expensive, damage to the trail.

There are some trail apps out there, but we wanted one that would focus on current conditions. Currently, our favorite trail apps, like mtbproject.com, trailrunproject.com, and hikingproject.com (all owned by REI), rely on user-reported conditions. While this can be effective, the reports are frequently unreliable, as condition reports can become outdated in just a few days.

        Our goal was to solve this problem by building an app that brought together location, soil type, and weather history data to create on-demand condition predictions for any trail in the US.

        We built an initial version of TrailBuddy by tapping into several readily-available APIs, then running the combined data through a machine learning algorithm. (Oh, and also by bringing together a bunch of smart and motivated people and combining them with pizza and some of the magic that is our Pointless Weekends. We'll share the other Pointless Project, Scurry, with you soon.)

        The quest for data.

We knew from the start this app would require data from a number of sources. As previously mentioned, we used REI’s APIs (i.e. https://www.hikingproject.com/data) as the source for basic trail information. We used each trail’s latitude and longitude coordinates, as well as its elevation, to query weather and soil type. We also found data points such as a trail’s total distance to be relevant to our app users and decided to include those on the front-end, too. Since we wanted to go beyond relying solely on user-reported metrics, which is how REI’s current MTB project works, we came up with a list of factors that could affect the trail for that day.

        First on that list was weather.

        We not only considered the impacts of the current forecast, but we also looked at the previous day’s forecast. For example, it’s safe to assume that if it’s currently raining or had been raining over the last several days, it would likely lead to muddy and unfavorable conditions for that trail. We utilized the DarkSky API (https://darksky.net/dev) to get the weather forecasts for that day, as well as the records for previous days. This included expected information, like temperature and precipitation chance. It also included some interesting data points that we realized may be factors, like precipitation intensity, cloud cover, and UV index. 

But weather alone can’t predict how muddy or dry a trail will be. To determine that for sure, we also wanted to use soil data to help predict how well a trail’s unique soil composition recovers after precipitation. Similar amounts of rain on trails of very different soil types could lead to vastly different trail conditions. A more clay-based soil would hold water much longer, and therefore be much more unfavorable, than loamy soil. Finding a reliable source for soil type and soil drainage proved incredibly difficult. After many hours, we finally found a source through the USDA that we could use. As a side note: the USDA keeps track of lots of data points on soil information that’s actually pretty interesting! We can’t say we’re soil experts, but we felt like we got pretty close.

        We used Whimsical to build our initial wireframes.

        Putting our design hats on.

From the very first pitch for this app, TrailBuddy’s main differentiator from peer trail resources has been its ability to surface real-time information reliably and simply. However complicated the technology needed to collect and interpret that information, the front-end app design needed to be clean and unencumbered.

        We thought about how users would naturally look for information when setting out to find a trail and what factors they’d think about when doing so. We posed questions like:

        • How easy or difficult of a trail are they looking for?
        • How long is this trail?
        • What does the trail look like?
        • How far away is the trail in relation to my location?
• What activity do I need a trail for?
        • Is this a trail I’d want to come back to in the future?

        By putting ourselves in our users’ shoes we quickly identified key features TrailBuddy needed to have to be relevant and useful. First, we needed filtering, so users could filter between difficulty and distance to narrow down their results to fit the activity level. Next, we needed a way to look up trails by activity type—mountain biking, hiking, and running are all types of activities REI’s MTB API tracks already so those made sense as a starting point. And lastly, we needed a way for the app to find trails based on your location; or at the very least the ability to find a trail within a certain distance of your current location.

        We used Figma to design, prototype, and gather feedback on TrailBuddy.

        Using machine learning to predict trail conditions.

As stated earlier, none of us are actual soil or data scientists. So, in order to achieve the real-time conditions reporting TrailBuddy promised, we decided to leverage machine learning to make predictions for us. Digging into the utility of machine learning was a first for everyone on this team. Luckily, there was an excellent tutorial that laid out the basics of building an ML model in Python. Provided a CSV file with inputs in the left columns and the desired output on the right, the script we generated was able to test out multiple different model strategies and output the effectiveness of each in predicting results, shown below.

We assembled all of the historical weather and soil data we could find for a given latitude/longitude coordinate, compiled a roughly 1000 × 100 CSV, ran it through the Python evaluator, and found that the CART and SVM models consistently outranked the others in terms of predicting trail status. In other words, we found a working model through which to run our data and get (hopefully) reliable predictions. The next step was to figure out which data fields were actually critical in predicting the trail status. The more we could refine our data set, the faster and smarter our predictive model could become.

We pulled in some Ruby code to take the original (and quite massive) CSV and output smaller versions to test with. Again, we’re no data scientists here, but we were able to cull out a good majority of the data and still get a model that performed at 95% accuracy.

With our trained model in hand, we could serialize it into a model.pkl file (pkl stands for “pickle,” as in we’ve “pickled” the model), move that file into our Rails app along with a Python script to deserialize it, pass in a dynamic set of data, and generate real-time predictions. At the end of the day, our model has a propensity to predict fantastic trail conditions (about 99% of the time, in fact…). Just one of those optimistic machine learning models, we guess.

        Where we go from here.

It was clear that after two days, our team still wanted to do more. As a first refinement, we’d love to work more with our data set and ML model. Something quite surprising during the weekend was that we found we could remove all but two days’ worth of weather data, and all of the soil data we worked so hard to dig up, and still hit 95% accuracy. Which … doesn’t make a ton of sense. Perhaps the data we chose to predict trail conditions just isn’t a great empirical predictor of trail status. While these are questions too big to solve in just a single weekend, we'd love to spend more time digging into this in a future iteration.



        • News & Culture

        to

        Our New Normal, Together

        As the world works to mitigate the impact of the COVID-19 pandemic, our thoughts are foremost with those already ill from the virus and those on the frontlines, slowing its spread. The bravery and commitment of healthcare workers everywhere is an inspiration.

        While Viget’s physical offices are effectively closed, we’re continuing to work with our clients on projects that evolve by the day. Viget has been working with distributed teams to varying degrees for most of our 20-year history, and while we’re comfortable with the tools and best practices that make doing so effective, we realize that some of our clients are learning as they go. We’re here to help.

        These are unprecedented times, but our business playbook is clear: Take care of each other. We’re in this together.

        Our People Team is meeting with everyone on our staff to confirm their work-from-home situation. Do they have family or roommates they can rely on in an emergency? How are they feeling physically and mentally? Do they have what they need to be productive? As a team, we’re working extra hard to communicate. Andy hosts and records video calls to answer questions anyone has about the crisis, and our weekly staff meeting schedule will continue. Recognizing that our daily informal group lunches are a vital social glue in our offices, Aubrey has organized a virtual lunch table Hangout, allowing our now fully-distributed team to catch up over video. It ensures we have some laughs and helps keep us feeling connected.

        Our project teams are well-versed in remote collaboration, but we understand that not all client projects can proceed as planned. We’re doing our best to accommodate evolving schedules while keeping the momentum on as many projects as possible. For all of our clients, we’re making clear that we think long-term. We’re partners through this, and can adapt to help our clients not just weather the storm, but come through it stronger when possible. Some clients have been forced to pause work entirely, while others are busier than ever.

        Viget has persevered through many downturns -- the dot com crash, 9/11, the 2008 financial crisis, and a few self-inflicted close-calls. In retrospect, it’s easy to reflect on how these situations made us stronger, but mid-crisis it can be hard to stay positive. The consistent lesson has been that taking care of each other -- co-workers, clients, partners, community peers -- is what gets us through. It motivates our hard work, it focuses our priorities and collaboration, and inspires us to do what needs to be done.

        I don’t know for certain how this crisis will play out, but I know that all of us at Viget will be doing everything we can to support each other as we go through it together.



        • News & Culture

        to

        Scurry: A Race-To-Finish Scavenger Hunt App

        We have a lot of traditions here at Viget, many of which you may have read about - TTT, FLF, Pointless Weekend. There are others, but you have to be an insider for more information on those.

Pointless Weekend is one of our favorite traditions, though. It’s been around for over a decade, and some pretty fun work has come out of it over the years, like Storyboard, Baby Bookie, and Short Order. At a high level, we take 48 hours to build a tool, experiment, or stunt as a team, across all four of our offices. These projects are entirely separate from our client work, and we use them to try out new technologies, explore roles on the team, and stress-test our processes.

        The first step for a Pointless Weekend is assembling the teams. We had two teams this year, with a record number of participants. You can read about TrailBuddy, what the other team built, here.

        The Scurry team was split between the DC and Durham offices, so all meetings were held via Hangout.

        Once we were assembled, we set out to understand the constraints and the goals of our Pointless Project. We went into this weekend with an extra pep in our step, as we were determined to build something for the upcoming Viget 20th anniversary TTT this summer. Here’s what we knew we wanted:

        1. An activity all Vigets could do together, where they could create memories, and share broadly on social
        2. Something that we could use in a spotty network at C Lazy U Ranch in Colorado
        3. A product we can share with others: corporate groups, families and friends, schools, bachelor/ette parties

We landed on a scavenger hunt native app, which we named Scurry (Scavenger + Hurry = Scurry. Brilliant, right?). There are already a few scavenger hunt apps available, so we set out to create something that was:

        • Quick and easy to set up hunts
        • Free and intuitive for users
        • A nice combination of trivia and activities
        • Social! We wanted to enable teams to share photos and progress

One of the main reasons we have Pointless Weekends is to test out new technologies and processes. In that vein, we tried out Notion as our central organizing tool - we used it for user journeys, data modeling, and even writing tickets, which we typically use GitHub for.

        We tested out Notion as our primary tool, writing tickets and tracking progress.

        When we built the app, we needed to prepare for spotty network service, as internet connectivity isn’t guaranteed at C Lazy U Ranch – where our Viget20 celebration will be. A Progressive Web Application (PWA) didn't make sense for our tech requirements, so we chose the route of creating a native application.

There are a number of options available to build native applications. But, as we were looking to make as much progress as possible in 48 hours, we chose one of our favorite frameworks: React Native. React Native allows developers to build true, cross-platform native applications using some of our favorite technologies: JavaScript, the React framework, and a native-specific variant of CSS. We decided on the turn-key solution Expo. Expo has extra tooling allowing for easy development, deployment, and debugging.
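For the curious, bootstrapping an Expo project at the time looked roughly like this (a sketch; the app name is illustrative):

npm install -g expo-cli   # Expo's command-line tooling
expo init scurry          # scaffold a new React Native app
cd scurry
expo start                # launch the development server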

This is a snapshot of our app and Expo.

Our frontend developers were able to immediately dive into making screens and styling components, and quickly made the mockups in Whimsical a reality.

On the backend, we used the supported library to connect to the backend datastore, Firebase. Firebase is a hosted solution for data storage, with key features built in, like authentication, realtime updates, and offline support. Our backend developer worked just behind the frontend developers, hooking those views up to live data.

        Both of these tools, Expo and Firebase, were easy to use and allowed us to focus on building a working application quickly, rather than being mired in setup or bespoke solutions to common problems.

        Whimsical is one of our favorite tools for building out mockups of an app.

        We made impressive progress in our 48-hour sprint, but there’s still some work to do. We have some additional features we hope to add before TTT, which will require additional testing and refining. For now, stay tuned and sign up for our newsletter. We’ll be sure to share when Scurry is ready for the world!



        • News & Culture

        to

        Together We Flourish, Remotely

        Like many other companies, Viget is working through the new challenge of suddenly being a fully-distributed company. We don’t know how long it will last or every challenge that will arise because of these unfortunate circumstances, but we know the health and well-being of our people is paramount. As Employee Engagement Manager, I feel inspired by these new challenges, eager to step up, and committed to seeing what good can come of this.

        Now more than ever, we want to maintain the culture that has sustained us over the last 20 years – a culture that I think is best captured by our mantra, “do great work and be a great teammate.” As everyone is adjusting to new work environments, schedules, and distractions, I am adjusting my approach to employee engagement, and the People Team is looking for new ways to nurture and protect the culture we treasure.

        The backbone of being a great teammate is knowing each other and caring about each other. For years the People Team has focused on making sure people who work at Viget are known, accepted, and cared about. From onboarding to events to weekly and monthly touchpoints, we invest in coworkers knowing each other. On top of that, we have well-appointed offices where people like to be, and friendships unfold over time. Abruptly becoming fully distributed makes it impossible for some of these connections to happen organically, like they would have around the coffee machine and the lunch tables. These microinteractions between colleagues in the same office, the hellos when you get off the elevator or the “what’d you get up to this weekend” chit chat near the seltzer refrigerator, all add up. We realize more than ever how valuable those moments are, and I know I will feel extra grateful for them when we are all back together.

        Until that time, we are working to make sure everyone at Viget feels connected, safe, healthy, and most importantly, together, even when we are physically apart. We are keeping up our weekly staff meetings and monthly team lunches, and we just onboarded a new hire last week as thoroughly as ever. There are some other, new ways we’re sparking connections, too.

        New ways we're sparking connections:

Connecting Intentionally: We are making the most of the tools that we’ve been using for years. New Slack channels have spun up, including #exercise, where folks are sharing how they are making do without a gym, and #igotyou, a place where folks can post where they’ve found supplies in stock as grocery stores are being emptied at an alarming pace.
Remote Lunch Tables: We have teammates in three different time zones, on different project teams, and at different stages of life. We’ve created two virtual lunch tables, one at 12PM EST and one at 12PM MST, where folks can join with or without their lunches and with or without their kids, partners, or pets. There are no rules or structure, just an opportunity to chat and see a friendly face as a touchpoint to your day.
Last Weekend This Morning: Catching up Monday morning is a great way to kick off your week. Historically, I’ve done this from my desk over coffee as I greet folks coming off the elevator (I usually have the privilege of sitting at our front desk). I now do this from my desk, at home, over coffee as folks pop in or out of our Zoom call. One upshot of the new normal is I can “greet” anyone who shows up, not just people who work from my same office. Again, no structure, just a way to start our week, together.
Munch Madness: Yes, you read that right. Most of the sports world is enjoying an intermission. Since our CEO can’t cheer on his beloved Cavaliers and our VP of Design can’t cheer on his Gators, we’ve created something potentially much better. A definitive snack bracket. There is a minimal time commitment and folks with no sports knowledge can participate. The rules are simple: create and submit your bracket, ranking who you believe will win each snack faceoff. Then as we move through the rounds, vote on your favorite snacks. The competition has already sparked tons of conversation and plenty of snack hot takes. Want to start a munch-off of your own? Check out our bracket as a starting point.
Virtual Happy Hours: Signing off for the day and shutting down your machine is incredibly important for maintaining a work-life balance. Casually checking in, unwinding, and being able to chat about your day is also important. We have big, beautiful kitchens in each of our offices, along with casual spaces where at the end of any given day you can find a few Vigets catching up before heading home. This is something we don’t want to miss! So we’re setting up weekly happy hours where folks can hop in and say hi to each other face-to-face. We’ve found Zoom to be a great platform so we can see the maximum number of our teammates possible. Like all of our other events, it’s optional. There is also an understanding that your roommate, kid, significant other, or pet might show up on screen (and are welcome!). No one is shamed for multitasking and we encourage our teammates to join as they can. So far we’ve toasted new teammates, played a song or two, and up next we’ll play trivia.

        At the end of the day, we are all here for one reason: to do great work. Our award-winning work is made possible by the trust we’ve built within our teams. Staying focused and accountable to ourselves and our clients is what drives our motivation to continue to show up and do our best. In our new working environment, it is crucial that we can both stay connected and productive; a lot of teammates are stepping up to support one another. Here are a few ways we are continuing to foster our “do great work” mantra.

        New ways we're fostering great work:

Staying in Touch: The People Team is actively touching base with every employee. Our focus is on their health, productivity, and connection. These 1:1s have given us a baseline for how we can provide the best support for our team, from making sure they're aware of flexible work options to setting them up with the tools they need to be successful. We’ve delivered chairs, monitors, and helped troubleshoot in-home wifi issues. We are committed to making sure every Viget is set up for success.
Sharing is Caring: We’re no stranger to remote teams. We have four offices across the U.S. and a handful of full-time remote folks, and we’ve leaned on our inside experts to share their expertise on remote work. Most recently, our Data & Analytics Director, who has been working remotely full time for five years, gave a presentation on best practices for working from home. His top tips for working from home include:
        • Minimize other windows in remote meetings.
        • Set a schedule and avoid midday chores.
        • Take breaks away from the screen.
        • Plan your workday on your shared calendar.
        • Be mindful of Slack and social media as a distraction.
        • Use timers.
        • Keep your work area separate from where you relax.
        • Pretend that you’re still working from work.
        • Experiment and figure out what works for you.

        Our UX Research Director also stepped up to share her expertise to aid in adjusting to our new working conditions. She led a microclass on remote facilitation where she shared best practices and went over tools that support remote collaboration. Some of the tools she highlighted included Miro, Mural, Whimsical, and Jamboard. During the microclass she demonstrated use of Whimsical’s voting feature, which makes it easy for distributed groups to establish discussion topic priorities.

Always Prepared: Having all of our project materials stored in the cloud in a consistent, predictable way is a cornerstone of our business continuity plan. It is more important than ever for our team to follow the established best practices and ensure that project files are accessible to the full Viget team in the event of unplanned time off. Our VP of Client Services is leading efforts to ensure everyone is aware of and following our established guidelines with tools like Drive, Slack, Github, and Figma. Our priorities are that clients’ needs are met, quality is high, and timelines are honored.

        As the pandemic unfolds, our approach to employee engagement will evolve. We have more things in the works to build and maintain connections while distributed, including trivia and game nights, book clubs, virtual movie nights, and community service opportunities, just to name a few. No matter what we’re doing or what tool we’re using to connect, we’ll be in it together: doing great work, being great teammates, and looking forward.



        • News & Culture

        to

        5 things to Note in a New Phoenix 1.5 App

Yesterday (Apr 22, 2020) Phoenix 1.5 was officially released 🎉

There’s a long list of changes and improvements, but the big feature is better integration with LiveView. I’ve previously written about why LiveView interests me, so I was quite excited to dive into this release. After watching this awesome Twitter clone in 15 minutes demo from Chris McCord, I had to try out some of the new features. I generated a new Phoenix app with the --live flag, installed dependencies, and started a server. Here are five new features I noticed.
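For reference, those setup steps look roughly like this (a sketch; the app name is illustrative):

mix phx.new demo --live   # generate a new app with LiveView wired up
cd demo
mix deps.get              # install dependencies
mix phx.server            # start the server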

        1. Database actions in browser

        Oops! Looks like I forgot to configure the database before starting the server. There’s now a helpful message and a button in the browser that can run the command for me. There’s a similar button when migrations are pending. This is a really smooth UX to fix a very common error while developing.

        2. New Tagline!

        Peace-of-mind from prototype to production

        This phrase looked unfamiliar, so I went digging. Turns out that the old tagline was “A productive web framework that does not compromise speed or maintainability.” (I also noticed that it was previously “speed and maintainability” until this PR from 2019 was opened on a dare to clarify the language.)

Chris McCord updated the language while adding phx.new --live. I love this framing, particularly for LiveView. I am very excited about the progressive enhancement path for LiveView apps. A project can start out with regular, server-rendered HTML templates. This is a very productive way to work, and a great way to start a prototype for just about any website. Updating those templates to work with LiveView is an easier lift than a full rebuild in React. And finally, when you’re in production you have the peace-of-mind that the reliable BEAM provides.

        3. Live dependency search

        There’s now a big search bar right in the middle of the page. You can search through the dependencies in your app and navigate to the hexdocs for them. This doesn’t seem terribly useful, but is a cool demo of LiveView. The implementation is a good illustration of how compact a feature like this can be using LiveView.

        4. LiveDashboard

        This is the really cool one. In the top right of that page you see a link to LiveDashboard. Clicking it will take you to a page that looks like this.

        This page is built with LiveView, and gives you a ton of information about your running system. This landing page has version numbers, memory usage, and atom count.

        Clicking over to metrics brings you to this page.

        By default it will tell you how long average queries are taking, but the metrics are configurable so you can define your own custom telemetry options.

        The other tabs include process info, so you can monitor specific processes in your system:

And ETS tables, the in-memory storage that many apps use for caching:

        The dashboard is a really nice thing to get out of the box and makes it free for application developers to monitor their running system. It’s also developing very quickly. I tried an earlier version a week ago which didn’t support ETS tables, ports or sockets. I made a note to look into adding them, but it's already done! I’m excited to follow along and see where this project goes.

        5. New LiveView generators

1.5 introduces a new generator, mix phx.gen.live. Like other generators, it will create all the code you need for a basic resource in your app, including the LiveView modules. The interesting part here is that it introduces patterns for organizing LiveView code, which is something I have previously been unsure about. At first glance, the new organization makes sense and feels like a good approach. I look forward to seeing how this works on a real project.
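As a rough sketch, the invocation follows the same shape as the existing resource generators (the context, schema, and fields here are illustrative):

mix phx.gen.live Accounts User users name:string email:string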

        Conclusion

        The 1.5 release brings more changes under the hood of course, but these are the first five differences you’ll notice after generating a new Phoenix 1.5 app with LiveView. Congratulations to the entire Phoenix team, but particularly José Valim and Chris McCord for getting this work released.



        • Code
        • Back-end Engineering

        to

        Unsolved Zoom Mysteries: Why We Have to Say “You’re Muted” So Much

        Video conference tools are an indispensable part of the Plague Times. Google Meet, Microsoft Teams, Zoom, and their compatriots are keeping us close and connected in a physically distanced world.

        As tech-savvy folks with years of cross-office collaboration, we’ve laughed at the sketches and memes about vidconf mishaps. We practice good Zoomiquette, including muting ourselves when we’re not talking.

        Yet even we can’t escape one vidconf pitfall. (There but for the grace of Zoom go I.) On nearly every vidconf, someone starts to talk, and then someone else says: “Oop, you’re muted.” And, inevitably: “Oop, you’re still muted.”

        That’s right: we’re trying to follow Zoomiquette by muting, but then we forget or struggle to unmute when we do want to talk.

        In this post, I’ll share my theories for why the You’re Muted Problems are so pervasive, using Google Meet, Microsoft Teams, and Zoom as examples. Spoiler alert: While I hope this will help you be more mindful of the problem, I can’t offer a good solution. It still happens to me. All. The. Time.

        Skip the why and go straight to the vidconf app keyboard shortcuts you should memorize right now.

        Why we don't realize we’re muted before talking

        Why does this keep happening?!?

        Simply put: UX and design decisions make it harder to remember that you’re muted before you start to talk.

        Here’s a common scenario: You haven’t talked for a bit, so you haven’t interacted with the Zoom screen for a few seconds. Then you start to talk — and that’s when someone tells you, “You’re muted.”

        We forget so easily in these scenarios because when our mouse has been idle for a few seconds, the apps hide or downplay the UI elements that tell us we’re muted.

        Zoom and Teams are the worst offenders:

        • Zoom hides both the toolbar with the main in-app controls (the big mute button) and the mute status indicator on your video pane thumbnail.
        • Teams hides the toolbar, and doesn't show a mute status indicator on your video thumbnail in the first place.

        Meet is only slightly better:

        • Meet hides the toolbar, and shows only a small mute status icon in your video thumbnail.

        Even when our mouse is active, the apps’ subtle approach to muted state UI can make it easy to forget that we’re muted:

        Teams is the worst offender:

        • The mute button is an icon rather than words.
        • The muted-state icon's styling could be confused with unmuted state: Teams does not follow the common pattern of using red to denote muted state.
        • The mute button is not differentiated in visual hierarchy from all the other controls.
        • As mentioned above, Teams never shows a secondary mute status indicator.

        Zoom is a bit better, but still makes it pretty easy to forget that you’re muted:

        • Pros:
          • Zoom is the only app to use words on the mute button, in this case to denote the button action (rather than the muted state).
          • The muted-state icon’s styling (red line) is less likely to be confused with the unmuted-state icon.
        • Cons:
          • The mute button’s placement (bottom left corner of the page) is easy to overlook.
          • The mute button is not differentiated in visual hierarchy from the other toolbar buttons — and Zoom has a lot of toolbar buttons, especially when logged in as host.
          • The secondary mute status indicator is a small icon.
          • The mute button’s muted-state icon is styled slightly differently from the secondary mute status indicator.
        • Potential Cons:
          • While words denote the button action, only an icon denotes the muted state.

        Meet is probably the clearest of the three apps, but still has pitfalls:

        • Pros:
          • The mute button is visually prominent in the UI: It’s clearly differentiated in the visual hierarchy relative to other controls (styled as a primary button); is a large button; and is placed closer to the center of the controls bar.
          • The muted-state icon’s styling (red fill) is less likely to be confused with the unmuted-state icon.
        • Cons:
          • Uses only an icon rather than words to denote the muted state.
        • Unrelated Con:
          • While the mute button is visually prominent, it’s also placed next to the hang-up button. So in Meet’s active state you might be less likely to forget you’re muted … but more likely to accidentally hang up when trying to unmute. 😬

        I know modern app design leans toward minimalism. There’s often good rationale to use icons rather than words, or to de-emphasize controls and indicators when not in use.

        But again: This happens on basically every call! Often multiple times per call!! And we’re supposed to be tech-savvy!!! Imagine what it’s like for the tens of millions of vidconf newbs.

        I would argue that “knowing your muted state” has turned out to be a major vidconf user need. At this point, it’s certainly worth rethinking UX patterns for.

        Why we keep unsuccessfully unmuting once we realize we’re muted

        So we can blame the You’re Muted Problem on UX and design. But what causes the You’re Still Muted Problem? Once we know we’re muted, why do we sometimes fail to unmute before talking again?

        This one is more complicated — and definitely more speculative. To start making sense of this scenario, here’s the sequence I’m guessing most commonly plays out (I did this a couple times before I became aware of it):

        The crucial part is when the person tries to unmute by pressing the keyboard Volume On/Off key.

        If that’s in fact what’s happening (again, this is just a hypothesis), I’m guessing they did that because when someone says “You’re muted” or “I can’t hear you,” our subconscious thought process is: “Oh, Audio is Off. Press the keyboard key that I usually press when I want to change Audio Off to Audio On.”

        There are two traps in this reflexive thought process:

        First, the keyboard volume keys control the speaker volume, not the microphone volume. (More specifically, they control the system sound output settings, rather than the system sound input settings or the vidconf app’s sound input settings.)

        In fact, there isn’t a keyboard key to control the microphone volume. You can’t unmute your mic via a dedicated keyboard key, the way that you can turn the speaker volume on/off via a keyboard key while watching a movie or listening to music.

        Second, I think we reflexively press the keyboard key anyway because our mental model of the keyboard audio keys is just: Audio. Not microphone vs. speaker.

        This fuzzy mental model makes sense: There’s only one set of keyboard keys related to audio, so why would I think to distinguish between microphone and speaker? 

So my best guess is that hardware design causes the You’re Still Muted Problem. After all, keyboard designs are from a pre-Zoom era, when the average person rarely used the computer’s microphone.

        If that is the cause, one potential solution is for hardware manufacturers to start including dedicated keys to control microphone volume:

        Video conference keyboard shortcuts you should memorize right now

        Let me know if you have other theories for the You’re Still Muted Problem!

        In the meantime, the best alternative is to learn all of the vidconf app keyboard shortcuts for muting/unmuting:

        • Meet
          • Mac: Command(⌘) + D
          • Windows: Control + D
        • Teams
          • Mac: Command(⌘) + Shift + M
          • Windows: Ctrl + Shift + M
        • Zoom
          • Mac: Command(⌘) + Shift + A
          • Windows: Alt + A
          • Hold Spacebar: Temporarily unmute

        Other vidconf apps not included in my analysis:

        • Cisco Webex Meetings
          • Mac: Ctrl + Alt + M
          • Windows: Ctrl + Shift + M
        • GoToMeeting

        Bonus protip from Jackson Fox: If you use multiple vidconf apps, pick a keyboard shortcut that you like and manually change each app’s mute/unmute shortcut to that. Then you only have to remember one shortcut!




        to

        A Parent’s Guide to Working From Home, During a Global Pandemic, Without Going Insane

        Though I usually enjoy working from Viget’s lovely Boulder office, during quarantine I am now working from home while simultaneously parenting my 3-year-old daughter Audrey. My husband works in healthcare and though he is not on the front lines battling COVID-19, he is still an essential worker and as such leaves our home to work every day.

        Some working/parenting days are great! I somehow get my tasks accomplished, my kid is happy, and we spend some quality time together.

        And some days are awful. I have to ignore my daughter having a meltdown and try to focus on meetings, and I wish I wasn’t in this situation at all. Most days are somewhere in the middle; I’m just doing my best to get by.

I’ve seen enough working parent memes and cries for help on social media to know that I’m not alone. There are many parents out there who now get to experience the stress and anxiety of living through a global pandemic while simultaneously navigating ways to stay productive working from home and being an effective parent. Fun, isn’t it?

        I’m not an expert on the matter, but I have found a few small things that are making me feel a bit more sane. I hope sharing them will make someone else’s life easier too.

        Truths to Accept

        First, let’s acknowledge some truths about this new situation we find ourselves in:

        Truth 1: We’ve lost something.

        Parents have lost more than daycare and schools during this epidemic. We’ve lost any time that we had for ourselves, and that was really valuable. We no longer have small moments in the day to catch up on our personal lives. I no longer have a commute to separate my work duties from my mom duties, or catch up with my friends, or just be quiet.

        Truth 2: We’re human.

        The reason you can’t be a great employee and a great parent and a great friend and a great partner or spouse all day every day isn’t because you’re doing a bad job, it’s because being constantly wonderful in all aspects of your life is impossible. Pick one or two of those things a day to focus on.

        Truth 3: We’re all doing our best.

        This is the most important part of this article. Be kind to yourselves. This isn’t easy, and putting so much pressure on yourself that you break isn’t going to make it any easier.

        Work from Home Goals

        Now that we’ve accepted some truths about our current situation, let’s set some goals.

        Goal 1: Do Good Work

At Viget, and wherever you work, with kids or without, we all want to make sure that the quality of our work stays up throughout the pandemic and that we can continue to be reliable team members and employees to the best of our abilities.

        Goal 2: Stay Sane

        We need to figure out ways to do this without sacrificing ourselves entirely. For me, this means fitting my work into normal work hours as much as possible so that I can still have some downtime in the evenings.

        Goal 3: Make This Sustainable

None of us knows how long this will last, but we may as well begin mentally preparing for the long haul.

        Work from Home Rules

Now, there are some great Work from Home Rules that apply to everyone, with or without kids. My coworker Paul Koch shared these with the Viget team a Jeremy Bearimy ago, and I agree they’re also the foundation for working from home with kids.

        1. When you’re in a remote meeting, minimize other windows to stay focused
        2. Set a schedule and avoid chores*
        3. Take breaks away from the screen
        4. Plan your workday on the calendar+
        5. Be mindful of Slack and social media as a distraction
        6. Use timers+
        7. Keep your work area separate from where you relax
        8. Pretend that you’re still WFW
        9. Experiment and figure out what works for you

In the improv spirit I say “Yes, AND…” to these tips, and so here are my adjusted rules for WFH while the kiddos are around. Day planning and timers have both been really solid tools for me, so let’s dig in.

        Daily flexible schedule for kids

        Day Planning: Calendars and Timers

A few small tweaks and adjustments make this even more doable for me and my 3-year-old. First: I don’t avoid chores entirely. If I’m going up and down the stairs all day anyway, I might as well throw in a load of laundry while I’m at it. The more I can get done during the day, the greater the chance of some down time in the evening.

        Each morning I plan my day and Audrey’s day:

My Work Day:

• Identify the times of day you are most likely to be focused and protect them. For me, I know I have a block of time from 5-7a before Audrey wakes up and again during “nap time” from 1-3p.
• Look at your calendar first thing and make adjustments, either in your plans or by moving meetings if you have to.
• Make goals for your day: tackle time-sensitive tasks first. Take care of things that your co-workers or clients are waiting on from you first; this will help your day be a lot less stressful. Non-time-sensitive tasks come next; these can be done at any time of day.

Audrey’s Day:

• I built a construction paper “schedule” that we update and reorganize daily. We make the schedule together each day. She feels ownership over it and she gets to be the one who tells me what we do next.
• I’m strategic about screen time: I try to schedule it when I have meetings. It also helps to schedule a physical activity before screen time, as she is less likely to get bored.
• We always include “nap time” even though she rarely naps anymore. This is mostly a time for us both to be alone.

        When we make the schedule together it also helps me understand her favorite parts of the day and reminds me to include them.

        Once our days are planned, I also use timers to help keep the structure of the day. (I bought a great alarm clock for kids on Amazon that turns colors to signal bedtime and quiet time. It’s been hugely worth it for me.)

Timers for Me:

• More than ever, I rely on a time-tracking timer. At Viget we use Harvest to track time, and it has a handy built-in timer, but there are many apps or online tools that could help you keep track of your time as well.
• I need a timer because the days and hours are bleeding together; without tracking as I go, it would be really hard for me to remember when I worked on certain projects or know for certain if I gave Viget enough time for the day.
• Starting and stopping the timer helps me turn “work mode” on and off, which is a helpful sanity bonus.

Timers for Audrey:

• Audrey knows what time she can come out of her room in the morning. If she wakes up before the light is green, she plays quietly in her room.
• She knows how long “nap time” is in the afternoon.
• Perhaps best of all, I am not the bad guy! “Sorry honey, the light isn’t green yet and there really isn’t anything mommy can do about it” is my new favorite way to ensure we both get some quiet time.

        Work from Home Rules: Updated for Parents

        Finally, I have a few more Work from Home Rules for parents to add to the list:

        1. Minimize other windows in remote meetings
        2. Set a schedule and fit in some chores if time allows
        3. Take breaks away from the screen
        4. Schedule both your and your kids’ days
        5. Be mindful of Slack and social media as a distraction
        6. Use timers to track your own time and help your kids understand the day
        7. Keep your work area separate from where you relax
        8. Pretend that you’re still WFW
        9. Experiment and figure out what works for you
        10. Be prepared with a few activities
• Each morning, have just ONE thing ready to go. This can be a worksheet you printed out, a coloring station set up, a new bag of kinetic sand you just got delivered from Amazon, a kids’ dance video on YouTube, or an iPad game. Recently I started enlisting my mom to read stories on FaceTime. The activity doesn’t have to be new each day, but (especially for young kids) it has to be handy for you to start up quickly if your schedule changes.
        11. Clearly communicate your availability with your team and project PMs
          • Life happens. Some days are going to be hard. Whatever you do, don’t burn yourself out or leave your team hanging. If you need to move a meeting or take a day off, communicate that as early and as clearly as you can.
        12. Take PTO if you can
          • None of us are superheroes. If you’re feeling overwhelmed- take a look at the next few days and figure out which one makes the most sense for you to take a break.
        13. Take breaks to be alone without doing a task
• Work and family responsibilities have blended together; there’s almost no room for being alone. If you can find some precious alone time, don’t use it to fold laundry or clean the bathroom. Just zone out. I think we all really need this.

        Last but not least, enjoy your time at home if you can. This is an unusual circumstance and even though it’s really hard, there are parts that are really great too.

        If you have some great WFH tips we’d love to hear about them in the comments!




        to

        What happens if my visa is refused or cancelled due to my character?

If you have your visa refused or cancelled, you need to get expert advice as soon as possible. Strict time limits apply to drafting submissions and appeals. A visa refusal or cancellation can limit the type of visas you can apply for in the future or even prohibit you from applying for any visa to […]

        The post What happens if my visa is refused or cancelled due to my character? appeared first on Visa Australia - Immigration Lawyers & Registered Migration Agents.




        to

        Occupations that may be taken off or put onto the skilled migration occupation lists

The Department of Employment, Skills, Small and Family Business is considering removing the following occupations from the Skilled Migration Occupation Lists (Skills List) in March 2020: Careers Counsellor, Vehicle Trimmer, Business Machine Mechanic, Animal Attendants and Trainers, Gardener (General), Hairdresser, Wood Machinist, Massage Therapist, Community Worker, Diving Instructor (Open Water), Gymnastics Coach or Instructor. At […]

        The post Occupations that may be taken off or put onto the skilled migration occupation lists appeared first on Visa Australia - Immigration Lawyers & Registered Migration Agents.




        to

        Visa cancelled due to incorrect information given or provided to the Department of Home Affairs

It is a requirement that a visa applicant must fill in or complete his or her application form in a manner such that all questions are answered and no incorrect answers are given or provided. There is also a requirement that visa applicants must not provide incorrect information during interviews with the Minister for Immigration (‘Minister’), […]

        The post Visa cancelled due to incorrect information given or provided to the Department of Home Affairs appeared first on Visa Australia - Immigration Lawyers & Registered Migration Agents.



        • Visa Cancellation
        • 1703474 (Refugee) [2017] AATA 2985
        • cancel a visa
• cancelled visa
        • Citizenship and Multicultural Affairs
        • Department of Home Affairs
        • migration act 1958
        • minister for immigration
        • NOICC
        • notice of intention to consider cancellation
        • Sanaee (Migration) [2019] AATA 4506
        • section 109
        • time limits

        to

        7 Best WordPress Membership Plugins to Generate Recurring Revenue

        Do you want to turn your WordPress blog into a membership site? Businesses around the globe use this model to sell their physical products or offer exclusive digital content, and many of them are super successful. CopyBlogger, a site with content marketing lessons, offers premium courses to members and they’re currently an eight-figure business. Meanwhile, the owner of the razor […]




        to

        9 Things You Can Do To Your WordPress Website During Quarantine

If you’d told us at WPZOOM six months ago about the current situation we find ourselves in, we wouldn’t have believed you. It’s all we can see when we turn on the TV, and it’s clear that right now, humanity has taken a break. Worrying about loved ones, ensuring we stay safe, and for heaven’s sake, staying inside. Staying inside […]




        to

        How to Create an Online Ordering Page for Restaurants with WooCommerce

Until recently, it was normal for any restaurant to have a well-maintained website. Even so, it seems that for many restaurants this was difficult to achieve. In these difficult times, for many restaurant owners and other businesses in this field, owning just a simple website is no longer enough. If you still want to remain in business you […]




        to

        How to Foster Real-Time Client Engagement During Moderated Research

When we conduct moderated research, like user interviews or usability tests, for our clients, we encourage them to observe as many sessions as possible. We find that when clients see us interview their users and hear responses firsthand, they learn about their users’ needs in real-time and become more active participants in the process. One way we help clients feel engaged with the process during remote sessions is to establish a real-time communication backchannel that empowers clients to flag responses they’d like to dig into further and to share their ideas for follow-up questions.

        There are several benefits to establishing a communication backchannel for moderated sessions:

        • Everyone on the team, including both internal and client team members, can be actively involved throughout the data collection process rather than waiting to passively consume findings.
        • Team members can identify follow-up questions in real-time which allows the moderator to incorporate those questions during the current session, rather than just considering them for future sessions.
        • Subject matter experts can identify more detailed and specific follow-up questions that the moderator may not think to ask.
        • Even though the whole team is engaged, a single moderator still maintains control over the conversation which creates a consistent experience for the participant.

        If you’re interested in creating your own backchannel, here are some tips to make the process work smoothly:

        • Use the chat tool that is already being used on the project. In most cases, we use a joint Slack workspace for the session backchannel but we’ve also used Microsoft Teams.
        • Create a dedicated channel like #moderated-sessions. Conversation in this channel should be limited to backchannel discussions during sessions. This keeps the communication consolidated and makes it easier for the moderator to stay focused during the session.
        • Keep communication limited. Channel participants should ask basic questions that are easy to consume quickly. Supplemental commentary and analysis should not take place in the dedicated channel.
        • Use emoji responses. The moderator can add a quick thumbs up to indicate that they’ve seen a question.

        Introducing backchannels for communication during remote moderated sessions has been a beneficial change to our research process. It not only provides an easy way for clients to stay engaged during the data collection process but also increases the moderator’s ability to focus on the most important topics and to ask the most useful follow-up questions.




        to

        Markdown Comes Alive! Part 1, Basic Editor

        In my last post, I covered what LiveView is at a high level. In this series, we’re going to dive deeper and implement a LiveView powered Markdown editor called Frampton. This series assumes you have some familiarity with Phoenix and Elixir, including having them set up locally. Check out Elizabeth’s three-part series on getting started with Phoenix for a refresher.

        This series has a companion repository published on GitHub. Get started by cloning it down and switching to the starter branch. You can see the completed application on master. Our goal today is to make a Markdown editor, which allows a user to enter Markdown text on a page and see it rendered as HTML next to it in real-time. We’ll make use of LiveView for the interaction and the Earmark package for rendering Markdown. The starter branch provides some styles and installs LiveView.

        Rendering Markdown

Let’s set aside the LiveView portion and start with our data structures and the functions that operate on them. To begin, a Post will have a body, which holds the rendered HTML string, and a title. A string of markdown can be turned into HTML by calling Post.render(post, markdown). I think that just about covers it!

        First, let’s define our struct in lib/frampton/post.ex:

        defmodule Frampton.Post do
          defstruct body: "", title: ""
        
  def render(%__MODULE__{} = post, markdown) do
            # Fill me in!
          end
        end

        Now the failing test (in test/frampton/post_test.exs):

        describe "render/2" do
          test "returns our post with the body set" do
            markdown = "# Hello world!"                                                                                                                 
            assert Post.render(%Post{}, markdown) == {:ok, %Post{body: "<h1>Hello World</h1>
        "}}
          end
        end

Our render method will just be a wrapper around Earmark.as_html!/2 that puts the result into the body of the post. Add {:earmark, "~> 1.4.3"} to your deps in mix.exs, run mix deps.get, and fill out the render function:

def render(%__MODULE__{} = post, markdown) do
          html = Earmark.as_html!(markdown)
          {:ok, Map.put(post, :body, html)}
        end

        Our test should now pass, and we can render posts! [Note: we’re using the as_html! method, which prints error messages instead of passing them back to the user. A smarter version of this would handle any errors and show them to the user. I leave that as an exercise for the reader…] Time to play around with this in an IEx prompt (run iex -S mix in your terminal):

        iex(1)> alias Frampton.Post
        Frampton.Post
        iex(2)> post = %Post{}
        %Frampton.Post{body: "", title: ""}
        iex(3)> {:ok, updated_post} = Post.render(post, "# Hello world!")
{:ok, %Frampton.Post{body: "<h1>Hello world!</h1>\n", title: ""}}
iex(4)> updated_post
%Frampton.Post{body: "<h1>Hello world!</h1>\n", title: ""}

        Great! That’s exactly what we’d expect. You can find the final code for this in the render_post branch.

        LiveView Editor

        Now for the fun part: Editing this live!

        First, we’ll need a route for the editor to live at: /editor sounds good to me. LiveViews can be rendered from a controller, or directly in the router. We don’t have any initial state, so let's go straight from a router.

        First, let's put up a minimal test. In test/frampton_web/live/editor_live_test.exs:

        defmodule FramptonWeb.EditorLiveTest do
          use FramptonWeb.ConnCase
          import Phoenix.LiveViewTest
        
          test "the editor renders" do
            conn = get(build_conn(), "/editor")
    assert html_response(conn, 200) =~ ~s(data-test="editor")
          end
        end

        This test doesn’t do much yet, but notice that it isn’t live view specific. Our first render is just the same as any other controller test we’d write. The page’s content is there right from the beginning, without the need to parse JavaScript or make API calls back to the server. Nice.

        To make that test pass, add a route to lib/frampton_web/router.ex. First, we import the LiveView code, then we render our Editor:

        import Phoenix.LiveView.Router
        # … Code skipped ...
        # Inside of `scope "/"`:
        live "/editor", EditorLive

        Now place a minimal EditorLive module, in lib/frampton_web/live/editor_live.ex:

        defmodule FramptonWeb.EditorLive do
          use Phoenix.LiveView
        
          def render(assigns) do
            ~L"""
      <div data-test="editor">
                <h1>Hello world!</h1>
              </div>
              """
          end
        
          def mount(_params, _session, socket) do
            {:ok, socket}
          end
        end

        And we have a passing test suite! The ~L sigil designates that LiveView should track changes to the content inside. We could keep all of our markup in this render/1 method, but let’s break it out into its own template for demonstration purposes.

        Move the contents of render into lib/frampton_web/templates/editor/show.html.leex, and replace EditorLive.render/1 with this one liner: def render(assigns), do: FramptonWeb.EditorView.render("show.html", assigns). And finally, make an EditorView module in lib/frampton_web/views/editor_view.ex:

        defmodule FramptonWeb.EditorView do
          use FramptonWeb, :view
          import Phoenix.LiveView
        end

        Our test should now be passing, and we’ve got a nicely separated out template, view and “live” server. We can keep markup in the template, helper functions in the view, and reactive code on the server. Now let’s move forward to actually render some posts!

        Handling User Input

        We’ve got four tasks to accomplish before we are done:

        1. Take markdown input from the textarea
        2. Send that input to the LiveServer
        3. Turn that raw markdown into HTML
        4. Return the rendered HTML to the page.

        Event binding

        To start with, we need to annotate our textarea with an event binding. This tells the liveview.js framework to forward DOM events to the server, using our liveview channel. Open up lib/frampton_web/templates/editor/show.html.leex and annotate our textarea:

        <textarea phx-keyup="render_post"></textarea>

        This names the event (render_post) and sends it on each keyup. Let’s crack open our web inspector and look at the web socket traffic. Using Chrome, open the developer tools, navigate to the network tab and click WS. In development you’ll see two socket connections: one is Phoenix LiveReload, which polls your filesystem and reloads pages appropriately. The second one is our LiveView connection. If you let it sit for a while, you’ll see that it's emitting a “heartbeat” call. If your server is running, you’ll see that it responds with an “ok” message. This lets LiveView clients know when they've lost connection to the server and respond appropriately.

        Now, type some text and watch as it sends down each keystroke. However, you’ll also notice that the server responds with a “phx_error” message and wipes out our entered text. That's because our server doesn’t know how to handle the event yet and is throwing an error. Let's fix that next.

        Event handling

        We’ll catch the event in our EditorLive module. The LiveView behavior defines a handle_event/3 callback that we need to implement. Open up lib/frampton_web/live/editor_live.ex and key in a basic implementation that lets us catch events:

        def handle_event("render_post", params, socket) do
          IO.inspect(params)
        
          {:noreply, socket}
        end

        The first argument is the name we gave to our event in the template, the second is the data from that event, and finally the socket we’re currently talking through. Give it a try, typing in a few characters. Look at your running server and you should see a stream of events that look something like this:
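
The exact payload shape can vary by LiveView version, but as a rough sketch, each inspected params map carries the key pressed and the full current value of the textarea:

%{"key" => "d", "value" => "# Hello world"}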

        There’s our keystrokes! Next, let’s pull out that value and use it to render HTML.

        Rendering Markdown

Let’s adjust our handle_event to pattern match out the value of the textarea:

        def handle_event("render_post", %{"value" => raw}, socket) do

        Now that we’ve got the raw markdown string, turning it into HTML is easy thanks to the work we did earlier in our Post module. Fill out the body of the function like this:

        {:ok, post} = Post.render(%Post{}, raw)
        IO.inspect(post)

        If you type into the textarea you should see output that looks something like this:
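
As a rough sketch, with “# Hello world!” typed in, the inspected struct would be:

%Frampton.Post{body: "<h1>Hello world!</h1>\n", title: ""}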

        Perfect! Lastly, it’s time to send that rendered html back to the page.

        Returning HTML to the page

        In a LiveView template, we can identify bits of dynamic data that will change over time. When they change, LiveView will compare what has changed and send over a diff. In our case, the dynamic content is the post body.

        Open up show.html.leex again and modify it like so:

        <div class="rendered-output">
          <%= @post.body %>
        </div>

        Refresh the page and see:

        Whoops!

        The @post variable will only be available after we put it into the socket’s assigns. Let’s initialize it with a blank post. Open editor_live.ex and modify our mount/3 function:

        def mount(_params, _session, socket) do
          post = %Post{}
          {:ok, assign(socket, post: post)}
        end

        In the future, we could retrieve this from some kind of storage, but for now, let's just create a new one each time the page refreshes. Finally, we need to update the Post struct with user input. Update our event handler like this:

        def handle_event("render_post", %{"value" => raw}, %{assigns: %{post: post}} = socket) do
          {:ok, post} = Post.render(post, raw)
  {:noreply, assign(socket, post: post)}
        end

        Let's load up http://localhost:4000/editor and see it in action.

Nope, that’s not quite right! Phoenix won’t render this as HTML because it’s unsafe user input. We can get around this (very good and useful) security feature by wrapping our content in a raw/1 call. That’s acceptable here: we don’t have a database, and user processes are isolated from each other by Elixir. The worst thing a malicious user could do would be crash their own session, which doesn’t bother me one bit.
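
Here’s show.html.leex with the body wrapped in raw/1 (raw/1 comes from Phoenix.HTML, which our view already pulls in via use FramptonWeb, :view):

<div class="rendered-output">
  <%= raw(@post.body) %>
</div>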

        Check the edit_posts branch for the final version.

        Conclusion

        That’s a good place to stop for today. We’ve accomplished a lot! We’ve got a dynamically rendering editor that takes user input, processes it and updates the page. And we haven’t written any JavaScript, which means we don’t have to maintain or update any JavaScript. Our server code is built on the rock-solid foundation of the BEAM virtual machine, giving us a great deal of confidence in its reliability and resilience.

        In the next post, we’ll tackle making a shared editor, allowing multiple users to edit the same post. This project will highlight Elixir’s concurrency capabilities and demonstrate how LiveView builds on them to enable some incredible user experiences.



        • Code
        • Back-end Engineering

        to

        Why's it so hard to get the cool stuff approved?

The classic adage is “good design speaks for itself.” Which would mean that if something’s as good an idea as you think it is, a client will instantly see that it’s good too, right?

        Here at Viget, we’re always working with new and different clients. Each with their own challenges and sensibilities. But after ten years of client work, I can’t help but notice a pattern emerge when we’re trying to get approval on especially cool, unconventional parts of a design.

        So let’s break down some of those patterns to hopefully better understand why clients hesitate, and what strategies we’ve been using lately to help get the work we’re excited about approved.

Imagine this: the parallax homepage with elements that move around in surprising ways, or a unique navigation menu that conceptually reinforces a site’s message. The way the content cards on a page will, like, be literal cards that shuffle and move around. Basically, any design that feels like an exciting, novel challenge will need the client to “get it.” And that often turns out to be the biggest challenge of all.

        There are plenty of practical reasons cool designs get shot down. A client is usually more than one stakeholder, and more than the team of people you’re working with directly. On any project, there’s an amount of telephone you end up playing. Or, there’s always the classic foes: budgets and deadlines. Any idea should fit in those predetermined constraints. But as a project goes along, budgets and deadlines find a way to get tighter than you planned.

But innovative designs and interactions can seem especially scary for clients to approve. There are three fears that often pop up on projects:

        The fear of change. 

        Maybe the client expected something simple, a light refresh. Something that doesn’t challenge their design expectations or require more time and effort to understand. And on our side, maybe we didn’t sufficiently ease them into our way of thinking and open them up to why we think something bigger and bolder is the right solution for them. Baby steps, y’all.

        The fear of the unknown. 

        Or, less dramatically, a lack of understanding of the medium. In the past, we have struggled with how to present an interactive, animated design to a client before it’s actually built. Looking at a site that does something conceptually similar as an example can be tough. It’s asking a lot of a client’s imagination to show them a site about boots that has a cool spinning animation and get meaningful feedback about how a spinning animation would work on their site about after-school tutoring. Or maybe we’ve created static designs, then talked around what we envision happening. Again, what seems so clear in our minds as professionals entrenched in this stuff every day can be tough for someone outside the tech world to clearly understand.

          The fear of losing control. 

We’re all about learning from past mistakes. So let’s say, after dealing with that fear of the unknown on a project, next time you go in the opposite direction. You invest time up front creating something polished. Maybe you even get the developer to build a prototype that moves and looks like the real thing. You’ve taken all the vague mystery out of the process, so a client will be thrilled, right? Surprise, probably not! Most clients are working with you because they want to conquer the noble quest that is their redesign together. When we jump straight to showing something that looks polished, even if it’s not really, it can feel like we jumped ahead without keeping them involved. Like we took away their input. They can also feel demotivated to give good, meaningful feedback on a polished prototype because it looks “done.”

          So what to do? Lately we have found low-fidelity prototypes to be a great tool for combating these fears and better communicating our ideas.

          What are low-fidelity prototypes?

Low-fidelity prototypes are a tool that designers can create quickly to illustrate an idea, without sinking time into making it pixel-perfect. Some recent examples of prototypes we've created include a clickable Figma or InVision prototype put together with Whimsical wireframes:

A rough animation created in Principle, illustrating less programmatic animation:

          And even creating an animated storyboard in Photoshop:

          They’re rough enough that there’s no way they could be confused for a final product. But customized so that a client can immediately understand what they’re looking at and what they need to respond to. Low-fidelity prototypes hit a sweet spot that addresses those client fears head on.

          That fear of change? A lo-fi prototype starts rough and small, so it can ease a client into a dramatic change without overwhelming them. It’s just a first step. It gives them time to react and warm up to something that’ll ultimately be a big change.

          It also cuts out the fear of the unknown. Seeing something moving around, even if it’s rough, can be so much more clear than talking ourselves in circles about how we think it will move, and hoping the client can imagine it. The feature is no longer an enigma cloaked in mystery and big talk, but something tangible they can point at and ask concrete questions about.

          And finally, a lo-fi prototype doesn’t threaten a client’s sense of control. Low-fidelity means it’s clearly still a work in progress! It’s just an early step in the creative process, and therefore communicates that we’re still in the middle of that process together. There’s still plenty of room for their ideas and feedback.

          Lo-fi prototypes: client-tested, internal team-approved

          There are a lot of reasons to love lo-fi prototypes internally, too!

          They’re quick and easy. 

          We can whip up multiple ideas within a few hours, without sinking the time into getting our hearts set on any one thing. In an agency setting especially, time is limited, so the faster we can get an idea out of our own heads, the better.

          They’re great to share with developers. 

          Ideally, the whole team is working together simultaneously, collaborating every step of the way. Realistically, a developer often doesn’t have time during a project’s early design phase. Lo-fi prototypes are concrete enough that a developer can quickly tell if building an idea will be within scope. It helps us catch impractical ideas early and helps us all collaborate to create something that’s both cool and feasible.

            Stay tuned for posts in the near future diving into some of our favorite processes for creating lo-fi prototypes!



            • Design & Content

            to

            Committed to the wrong branch? -, @{upstream}, and @{-1} to the rescue

            I get into this situation sometimes. Maybe you do too. I merge feature work into a branch used to collect features, and then continue development but on that branch instead of back on the feature branch

            git checkout feature
            # ... bunch of feature commits ...
            git push
            git checkout qa-environment
            git merge --no-ff --no-edit feature
            git push
            # deploy qa-environment to the QA remote environment
            # ... more feature commits ...
            # oh. I'm not committing in the feature branch like I should be

            and have to move those commits to the feature branch they belong in and take them out of the throwaway accumulator branch

            git checkout feature
            git cherry-pick origin/qa-environment..qa-environment
            git push
            git checkout qa-environment
            git reset --hard origin/qa-environment
            git merge --no-ff --no-edit feature
            git checkout feature
            # ready for more feature commits

            Maybe you prefer

            git branch -D qa-environment
            git checkout qa-environment

            over

            git checkout qa-environment
            git reset --hard origin/qa-environment

            Either way, that works. But it'd be nicer if we didn't have to type or even remember the branches' names and the remote's name. They are what is keeping this from being a context-independent string of commands you run any time this mistake happens. That's what we're going to solve here.

            Shorthands for longevity

            I like to use all possible natively supported shorthands. There are two broad motivations for that.

1. Fingers have a limited number of movements in them. Save as many as possible now so there are some left late in life.
2. Current research suggests that multitasking has detrimental effects on memory. Development tends to be very heavy on multitasking. Maybe relieving some of the pressure on quick-access short-term memory (like knowing all relevant branch names) adds up to leave a healthier memory down the line.

            First up for our scenario: the - shorthand, which refers to the previously checked out branch. There are a few places we can't use it, but it helps a lot:

            Bash
            # USING -
            
            git checkout feature
            # hack hack hack
            git push
            git checkout qa-environment
git merge --no-ff --no-edit -        # 🎉
            git push
            # hack hack hack
            # whoops
git checkout -        # now on feature 🎉
            git cherry-pick origin/qa-environment..qa-environment
            git push
git checkout - # now on qa-environment 🎉
            git reset --hard origin/qa-environment
git merge --no-ff --no-edit -        # 🎉
git checkout -                       # 🎉
            # on feature and ready for more feature commits
            Bash
            # ORIGINAL
            
            git checkout feature
            # hack hack hack
            git push
            git checkout qa-environment
            git merge --no-ff --no-edit feature
            git push
            # hack hack hack
            # whoops
            git checkout feature
            git cherry-pick origin/qa-environment..qa-environment
            git push
            git checkout qa-environment
            git reset --hard origin/qa-environment
            git merge --no-ff --no-edit feature
            git checkout feature
            # ready for more feature commits

            We cannot use - when cherry-picking a range

            > git cherry-pick origin/-..-
            fatal: bad revision 'origin/-..-'
            
            > git cherry-pick origin/qa-environment..-
            fatal: bad revision 'origin/qa-environment..-'

and even if we could, we'd still have to provide the remote's name (here, origin).

            That shorthand doesn't apply in the later reset --hard command, and we cannot use it in the branch -D && checkout approach either. branch -D does not support the - shorthand and once the branch is deleted checkout can't reach it with -:

            # assuming that branch-a has an upstream origin/branch-a
            > git checkout branch-a
            > git checkout branch-b
            > git checkout -
            > git branch -D -
            error: branch '-' not found.
            > git branch -D branch-a
            > git checkout -
            error: pathspec '-' did not match any file(s) known to git

            So we have to remember the remote's name (we know it's origin because we are devoting memory space to knowing that this isn't one of those times it's something else), the remote tracking branch's name, the local branch's name, and we're typing those all out. No good! Let's figure out some shorthands.

            @{-<n>} is hard to say but easy to fall in love with

We can do a little better by using @{-<n>} (you'll also sometimes see it referred to by the older @{-N}). It is a special construct for referring to the nth previously checked out ref.

            > git checkout branch-a
            > git checkout branch-b
> git rev-parse --abbrev-ref @{-1} # the name of the previously checked out branch
            branch-a
            > git checkout branch-c
> git rev-parse --abbrev-ref @{-2} # the name of the branch checked out before the previously checked out one
            branch-a

            Back in our scenario, we're on qa-environment, we switch to feature, and then want to refer to qa-environment. That's @{-1}! So instead of

            git cherry-pick origin/qa-environment..qa-environment

            We can do

            git cherry-pick origin/qa-environment..@{-1}

            Here's where we are (🎉 marks wins from -, 💥 marks the win from @{-1})

            Bash
            # USING - AND @{-1}
            
            git checkout feature
            # hack hack hack
            git push
            git checkout qa-environment
git merge --no-ff --no-edit -                # 🎉
            git push
            # hack hack hack
            # whoops
git checkout -                               # 🎉
git cherry-pick origin/qa-environment..@{-1} # 💥
git push
git checkout -                               # 🎉
git reset --hard origin/qa-environment
git merge --no-ff --no-edit -                # 🎉
git checkout -                               # 🎉
            # ready for more feature commits
            Bash
            # ORIGINAL
            
            git checkout feature
            # hack hack hack
            git push
            git checkout qa-environment
            git merge --no-ff --no-edit feature
            git push
            # hack hack hack
            # whoops
            git checkout feature
            git cherry-pick origin/qa-environment..qa-environment
            git push
            git checkout qa-environment
            git reset --hard origin/qa-environment
            git merge --no-ff --no-edit feature
            git checkout feature
            # ready for more feature commits

            One down, two to go: we're still relying on memory for the remote's name and the remote branch's name and we're still typing both out in full. Can we replace those with generic shorthands?

Because @{-1} is the ref itself, not the ref's name, we can't do

            > git cherry-pick origin/@{-1}..@{-1}
            origin/@{-1}
            fatal: ambiguous argument 'origin/@{-1}': unknown revision or path not in the working tree.
            Use '--' to separate paths from revisions, like this:
            'git <command> [<revision>...] -- [<file>...]'

            because there is no branch origin/@{-1}. For the same reason, @{-1} does not give us a generalized shorthand for the scenario's later git reset --hard origin/qa-environment command.

            But good news!

            Do @{u} @{push}

@{upstream}, or its shorthand @{u}, is the remote branch that would be pulled from if git pull were run. @{push} is the remote branch that would be pushed to if git push were run.

            > git checkout branch-a
            Switched to branch 'branch-a'
            Your branch is ahead of 'origin/branch-a' by 3 commits.
              (use "git push" to publish your local commits)
            > git reset --hard origin/branch-a
            HEAD is now at <the SHA origin/branch-a is at>

            we can

            > git checkout branch-a
            Switched to branch 'branch-a'
            Your branch is ahead of 'origin/branch-a' by 3 commits.
              (use "git push" to publish your local commits)
            > git reset --hard @{u}                                # <-- So Cool!
            HEAD is now at <the SHA origin/branch-a is at>

            Tacking either onto a branch name will give that branch's @{upstream} or @{push}. For example

            git checkout branch-a@{u}

            is the branch branch-a pulls from.
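
To see what that resolves to without checking anything out (assuming branch-a's upstream is origin/branch-a):

> git rev-parse --abbrev-ref branch-a@{u}
origin/branch-a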

            In the common workflow where a branch pulls from and pushes to the same branch, @{upstream} and @{push} will be the same, leaving @{u} as preferable for its terseness. @{push} shines in triangular workflows where you pull from one remote and push to another (see the external links below).

            Going back to our scenario, it means short, portable commands with a minimum human memory footprint. (🎉 marks wins from -, 💥 marks the win from @{-1}, 😎 marks the wins from @{u}.)

            Bash
            # USING - AND @{-1} AND @{u}
            
            git checkout feature
            # hack hack hack
            git push
            git checkout qa-environment
git merge --no-ff --no-edit -    # 🎉
            git push
            # hack hack hack
            # whoops
git checkout -                   # 🎉
git cherry-pick @{-1}@{u}..@{-1} # 💥😎
git push
git checkout -                   # 🎉
git reset --hard @{u}            # 😎
git merge --no-ff --no-edit -    # 🎉
git checkout -                   # 🎉
            # ready for more feature commits
            Bash
            # ORIGINAL
            
            git checkout feature
            # hack hack hack
            git push
            git checkout qa-environment
            git merge --no-ff --no-edit feature
            git push
            # hack hack hack
            # whoops
            git checkout feature
            git cherry-pick origin/qa-environment..qa-environment
            git push
            git checkout qa-environment
            git reset --hard origin/qa-environment
            git merge --no-ff --no-edit feature
            git checkout feature
            # ready for more feature commits

            Make the things you repeat the easiest to do

            Because these commands are generalized, we can run some series of them once, maybe

            git checkout - && git reset --hard @{u} && git checkout -

            or

git checkout - && git cherry-pick @{-1}@{u}..@{-1} && git checkout - && git reset --hard @{u} && git checkout -

and then those will be in the shell history, just waiting to be retrieved and run again the next time, whether with Ctrl-R incremental search, history substring searching bound to the up arrow, or however your interactive shell is configured. Or make it an alias, or even better an abbreviation if your interactive shell supports them. Save the body wear and tear, give memory a break, and level up in Git.
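
For example, to collapse the whole recovery into one memorable command (the alias name unwhoops is just an illustration), configure it once:

git config --global alias.unwhoops '!git checkout - && git cherry-pick @{-1}@{u}..@{-1} && git checkout - && git reset --hard @{u} && git checkout -'

Then, right after the mistake and while still on the accumulator branch, git unwhoops runs the whole sequence.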

            And keep going

            The GitHub blog has a good primer on triangular workflows and how they can polish your process of contributing to external projects.

            The FreeBSD Wiki has a more in-depth article on triangular workflow process (though it doesn't know about @{push} and @{upstream}).

The construct @{-<n>} and the suffixes @{push} and @{upstream} are all part of the gitrevisions spec.



              • Code
              • Front-end Engineering
              • Back-end Engineering

              to

              TrailBuddy: Using AI to Create a Predictive Trail Conditions App

Viget is full of outdoor enthusiasts and, of course, technologists. For this year's Pointless Weekend, we brought these passions together to build TrailBuddy. This app aims to answer that eternal question: Is my favorite trail dry so I can go hike/run/ride?

              While getting muddy might rekindle fond childhood memories for some, exposing your gear to the elements isn’t great – it’s bad for your equipment and can cause long-term, and potentially expensive, damage to the trail.

There are some trail apps out there, but we wanted one that would focus on current conditions. Currently, our favorite trail apps, like mtbproject.com, trailrunproject.com, and hikingproject.com (all owned by REI), rely on user-reported conditions. While this can be effective, the reports are frequently unreliable, as condition reports can become outdated in just a few days.

              Our goal was to solve this problem by building an app that brought together location, soil type, and weather history data to create on-demand condition predictions for any trail in the US.

              We built an initial version of TrailBuddy by tapping into several readily-available APIs, then running the combined data through a machine learning algorithm. (Oh, and also by bringing together a bunch of smart and motivated people and combining them with pizza and some of the magic that is our Pointless Weekends. We'll share the other Pointless Project, Scurry, with you soon.)

              The quest for data.

We knew from the start this app would require data from a number of sources. As previously mentioned, we used REI’s APIs (i.e. https://www.hikingproject.com/data) as the source for basic trail information. We used each trail’s latitude and longitude coordinates, as well as its elevation, to query weather and soil type. We also found data points such as a trail’s total distance to be relevant to our app users and decided to include that on the front-end, too. Since we wanted to go beyond relying solely on user-reported metrics, which is how REI’s current MTB project works, we came up with a list of factors that could affect the trail for a given day.

              First on that list was weather.

              We not only considered the impacts of the current forecast, but we also looked at the previous day’s forecast. For example, it’s safe to assume that if it’s currently raining or had been raining over the last several days, it would likely lead to muddy and unfavorable conditions for that trail. We utilized the DarkSky API (https://darksky.net/dev) to get the weather forecasts for that day, as well as the records for previous days. This included expected information, like temperature and precipitation chance. It also included some interesting data points that we realized may be factors, like precipitation intensity, cloud cover, and UV index. 

              But weather alone can’t predict how muddy or dry a trail will be. To determine that for sure, we also wanted to use soil data to help predict how well a trail’s unique soil composition recovers after precipitation. Similar amounts of rain on trails of very different soil types could lead to vastly different trail conditions. A more clay-based soil would hold water much longer, and therefore be much more unfavorable, than loamy soil. Finding a reliable source for soil type and soil drainage proved incredibly difficult. After many hours, we finally found a source through the USDA that we could use. As a side note—the USDA keeps track of lots of data points on soil information that’s actually pretty interesting! We can’t say we’re soil experts but, we felt like we got pretty close.

              We used Whimsical to build our initial wireframes.

              Putting our design hats on.

From the very first pitch for this app, TrailBuddy’s main differentiator from peer trail resources has been its ability to surface real-time information reliably and simply. As complicated as the technology needed to collect and interpret that information is, the front-end app design needed to be clean and unencumbered.

              We thought about how users would naturally look for information when setting out to find a trail and what factors they’d think about when doing so. We posed questions like:

              • How easy or difficult of a trail are they looking for?
              • How long is this trail?
              • What does the trail look like?
              • How far away is the trail in relation to my location?
• What activity do I need a trail for?
              • Is this a trail I’d want to come back to in the future?

By putting ourselves in our users’ shoes, we quickly identified key features TrailBuddy needed to be relevant and useful. First, we needed filtering, so users could filter by difficulty and distance to narrow down their results to fit their activity level. Next, we needed a way to look up trails by activity type; mountain biking, hiking, and running are all types of activities REI’s MTB API tracks already, so those made sense as a starting point. And lastly, we needed a way for the app to find trails based on your location, or at the very least the ability to find a trail within a certain distance of your current location.

              We used Figma to design, prototype, and gather feedback on TrailBuddy.

              Using machine learning to predict trail conditions.

As stated earlier, none of us are actual soil or data scientists. So, in order to achieve the real-time conditions reporting TrailBuddy promised, we decided to leverage machine learning to make predictions for us. Digging into the utility of machine learning was a first for all of us on this team. Luckily, there was an excellent tutorial that laid out the basics of building an ML model in Python. Provided a CSV file with inputs in the left columns and the desired output on the right, the script we generated was able to test out multiple model strategies and report the effectiveness of each at predicting results.
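
A minimal sketch of that spot-checking approach using scikit-learn (the file and column names here are hypothetical, not our actual data set):

# Spot-check candidate models on the trail data (sketch; hypothetical CSV layout).
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC                      # SVM
from sklearn.tree import DecisionTreeClassifier  # CART

data = pd.read_csv("trail_conditions.csv")   # inputs in the left columns...
X = data.drop(columns=["trail_status"])      # ...desired output on the right
y = data["trail_status"]

models = {"CART": DecisionTreeClassifier(), "SVM": SVC(gamma="auto")}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
    print(f"{name}: mean accuracy {scores.mean():.3f} (std {scores.std():.3f})")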

We assembled all of the historical weather and soil data we could find for a given latitude/longitude coordinate, compiled a 1000 × 100 CSV, ran it through the Python evaluator, and found that the CART and SVM models consistently outranked the others in terms of predicting trail status. In other words, we found a working model to run our data through and (hopefully) get reliable predictions from. The next step was to figure out which data fields were actually critical in predicting the trail status. The more we could refine our data set, the faster and smarter our predictive model could become.

We pulled in some Ruby code to take the original (and quite massive) CSV and output smaller versions to test with. Now again, we’re no data scientists here, but we were able to cull out a good majority of the data and still get a model that performed at 95% accuracy.

With our trained model in hand, we could serialize it into a model.pkl file (pkl stands for “pickle,” as in we’ve “pickled” the model), move that file into our Rails app along with a Python script to deserialize it, pass in a dynamic set of data, and generate real-time predictions. At the end of the day, our model has a propensity to predict fantastic trail conditions (about 99% of the time, in fact…). Just one of those optimistic machine learning models, we guess.
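
The round-trip itself is small. A sketch of both halves (the script layout and JSON-over-stdin interface are assumptions for illustration, not our production setup):

# After training: persist the model to model.pkl.
import pickle

with open("model.pkl", "wb") as f:
    pickle.dump(model, f)  # `model` is the trained classifier from above

# predict.py: the Rails app shells out to this with JSON features on stdin.
import json, pickle, sys

with open("model.pkl", "rb") as f:
    model = pickle.load(f)

features = json.loads(sys.stdin.read())              # e.g. [[temp, precip_mm, clay_pct]]
print(json.dumps(model.predict(features).tolist()))  # e.g. ["dry"]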

              Where we go from here.

              It was clear that after two days, our team still wanted to do more. As a first refinement, we’d love to work more with our data set and ML model. Something that was quite surprising during the weekend was that we found we could remove all but two days worth of weather data, and all of the soil data we worked so hard to dig up, and still hit 95% accuracy. Which … doesn’t make a ton of sense. Perhaps the data we chose to predict trail conditions just isn’t a great empirical predictor of trail status. While these are questions too big to solve in just a single weekend, we'd love to spend more time digging into this in a future iteration.



              • News & Culture

              to

              Our New Normal, Together

              As the world works to mitigate the impact of the COVID-19 pandemic, our thoughts are foremost with those already ill from the virus and those on the frontlines, slowing its spread. The bravery and commitment of healthcare workers everywhere is an inspiration.

              While Viget’s physical offices are effectively closed, we’re continuing to work with our clients on projects that evolve by the day. Viget has been working with distributed teams to varying degrees for most of our 20-year history, and while we’re comfortable with the tools and best practices that make doing so effective, we realize that some of our clients are learning as they go. We’re here to help.

              These are unprecedented times, but our business playbook is clear: Take care of each other. We’re in this together.

              Our People Team is meeting with everyone on our staff to confirm their work-from-home situation. Do they have family or roommates they can rely on in an emergency? How are they feeling physically and mentally? Do they have what they need to be productive? As a team, we’re working extra hard to communicate. Andy hosts and records video calls to answer questions anyone has about the crisis, and our weekly staff meeting schedule will continue. Recognizing that our daily informal group lunches are a vital social glue in our offices, Aubrey has organized a virtual lunch table Hangout, allowing our now fully-distributed team to catch up over video. It ensures we have some laughs and helps keep us feeling connected.

              Our project teams are well-versed in remote collaboration, but we understand that not all client projects can proceed as planned. We’re doing our best to accommodate evolving schedules while keeping the momentum on as many projects as possible. For all of our clients, we’re making clear that we think long-term. We’re partners through this, and can adapt to help our clients not just weather the storm, but come through it stronger when possible. Some clients have been forced to pause work entirely, while others are busier than ever.

              Viget has persevered through many downturns -- the dot com crash, 9/11, the 2008 financial crisis, and a few self-inflicted close-calls. In retrospect, it’s easy to reflect on how these situations made us stronger, but mid-crisis it can be hard to stay positive. The consistent lesson has been that taking care of each other -- co-workers, clients, partners, community peers -- is what gets us through. It motivates our hard work, it focuses our priorities and collaboration, and inspires us to do what needs to be done.

              I don’t know for certain how this crisis will play out, but I know that all of us at Viget will be doing everything we can to support each other as we go through it together.



              • News & Culture

              to

              Scurry: A Race-To-Finish Scavenger Hunt App

              We have a lot of traditions here at Viget, many of which you may have read about - TTT, FLF, Pointless Weekend. There are others, but you have to be an insider for more information on those.

Pointless Weekend is one of our favorite traditions, though. It’s been around for over a decade, and some pretty fun work has come out of it over the years, like Storyboard, Baby Bookie, and Short Order. At a high level, we take 48 hours to build a tool, experiment, or stunt as a team, across all four of our offices. These projects are entirely separate from our client work and we use them to try out new technologies, explore roles on the team, and stress-test our processes.

              The first step for a Pointless Weekend is assembling the teams. We had two teams this year, with a record number of participants. You can read about TrailBuddy, what the other team built, here.

              The Scurry team was split between the DC and Durham offices, so all meetings were held via Hangout.

              Once we were assembled, we set out to understand the constraints and the goals of our Pointless Project. We went into this weekend with an extra pep in our step, as we were determined to build something for the upcoming Viget 20th anniversary TTT this summer. Here’s what we knew we wanted:

              1. An activity all Vigets could do together, where they could create memories, and share broadly on social
              2. Something that we could use in a spotty network at C Lazy U Ranch in Colorado
              3. A product we can share with others: corporate groups, families and friends, schools, bachelor/ette parties

              We landed on a scavenger hunt native app, which we named Scurry (Scavenger + Hurry = Scurry. Brilliant, right?). There are already a few scavenger apps available, so we set out to create something that was

              • Quick and easy to set up hunts
              • Free and intuitive for users
              • A nice combination of trivia and activities
              • Social! We wanted to enable teams to share photos and progress

              One of the main reasons we have Pointless Weekends is to test out new technologies and processes. In that vein, we tried out Notion as our central organizing tool - we used it for user journeys, data modeling, and even writing tickets, which we typically use Github for.

              We tested out Notion as our primary tool, writing tickets and tracking progress.

              When we built the app, we needed to prepare for spotty network service, as internet connectivity isn’t guaranteed at C Lazy U Ranch – where our Viget20 celebration will be. A Progressive Web Application (PWA) didn't make sense for our tech requirements, so we chose the route of creating a native application.

There are a number of options available to build native applications. But, as we were looking to make as much progress as possible in 48 hours, we chose one of our favorite frameworks: React Native. React Native allows developers to build true cross-platform native applications using some of our favorite technologies: JavaScript, the React framework, and a native-specific variant of CSS. We decided on the turn-key solution Expo, which has extra tooling allowing for easy development, deployment, and debugging.

This is a snapshot of our app and Expo.

              Our frontend developers were able to immediately dive in making screens and styling components, and quickly made the mockups in Whimsical a reality.

On the backend, we used the supported library to connect to the backend datastore, Firebase. Firebase is a hosted solution for data storage, with key features built in, like authentication, realtime updates, and offline support. Our backend developer worked behind the frontend developers, hooking those views up to live data.

              Both of these tools, Expo and Firebase, were easy to use and allowed us to focus on building a working application quickly, rather than being mired in setup or bespoke solutions to common problems.

              Whimsical is one of our favorite tools for building out mockups of an app.

              We made impressive progress in our 48-hour sprint, but there’s still some work to do. We have some additional features we hope to add before TTT, which will require additional testing and refining. For now, stay tuned and sign up for our newsletter. We’ll be sure to share when Scurry is ready for the world!



              • News & Culture

              to

              Together We Flourish, Remotely

              Like many other companies, Viget is working through the new challenge of suddenly being a fully-distributed company. We don’t know how long it will last or every challenge that will arise because of these unfortunate circumstances, but we know the health and well-being of our people is paramount. As Employee Engagement Manager, I feel inspired by these new challenges, eager to step up, and committed to seeing what good can come of this.

              Now more than ever, we want to maintain the culture that has sustained us over the last 20 years – a culture that I think is best captured by our mantra, “do great work and be a great teammate.” As everyone is adjusting to new work environments, schedules, and distractions, I am adjusting my approach to employee engagement, and the People Team is looking for new ways to nurture and protect the culture we treasure.

              The backbone of being a great teammate is knowing each other and caring about each other. For years the People Team has focused on making sure people who work at Viget are known, accepted, and cared about. From onboarding to events to weekly and monthly touchpoints, we invest in coworkers knowing each other. On top of that, we have well-appointed offices where people like to be, and friendships unfold over time. Abruptly becoming fully distributed makes it impossible for some of these connections to happen organically, like they would have around the coffee machine and the lunch tables. These microinteractions between colleagues in the same office, the hellos when you get off the elevator or the “what’d you get up to this weekend” chit chat near the seltzer refrigerator, all add up. We realize more than ever how valuable those moments are, and I know I will feel extra grateful for them when we are all back together.

              Until that time, we are working to make sure everyone at Viget feels connected, safe, healthy, and most importantly, together, even when we are physically apart. We are keeping up our weekly staff meetings and monthly team lunches, and we just onboarded a new hire last week as thoroughly as ever. There are some other, new ways we’re sparking connections, too.

              New ways we're sparking connections:

Connecting Intentionally: We are making the most of the tools that we’ve been using for years. New Slack channels have spun up, including #exercise, where folks are sharing how they are making do without a gym, and #igotyou, a place where folks can post where they’ve found supplies in stock as grocery stores are being emptied at an alarming pace.
Remote Lunch Tables: We have teammates in three different time zones, on different project teams, and at different stages of life. We’ve created two virtual lunch tables, one at 12PM EST and one at 12PM MST, where folks can join with or without their lunches and with or without their kids, partners, or pets. There are no rules or structure, just an opportunity to chat and see a friendly face as a touchpoint to your day.
Last Weekend This Morning: Catching up Monday morning is a great way to kick off your week. Historically, I’ve done this from my desk over coffee as I greet folks coming off the elevator (I usually have the privilege of sitting at our front desk). I now do this from my desk, at home, over coffee as folks pop in or out of our Zoom call. One upshot of the new normal is that I can “greet” anyone who shows up, not just people who work from my same office. Again, no structure, just a way to start our week, together.
Munch Madness: Yes, you read that right. Most of the sports world is enjoying an intermission. Since our CEO can’t cheer on his beloved Cavaliers and our VP of Design can’t cheer on his Gators, we’ve created something potentially much better. A definitive snack bracket. There is a minimal time commitment and folks with no sports knowledge can participate. The rules are simple: create and submit your bracket, ranking who you believe will win each snack faceoff. Then as we move through the rounds, vote on your favorite snacks. The competition has already sparked tons of conversation and plenty of snack hot takes. Want to start a munch-off of your own? Check out our bracket as a starting point.
Virtual Happy Hours: Signing off for the day and shutting down your machine is incredibly important for maintaining a work-life balance. Casually checking in, unwinding, and being able to chat about your day is also important. We have big, beautiful kitchens in each of our offices, along with casual spaces where at the end of any given day you can find a few Vigets catching up before heading home. This is something we don’t want to miss! So we’re setting up weekly happy hours where folks can hop in and say hi to each other face-to-face. We’ve found Zoom to be a great platform so we can see the maximum number of our teammates possible. Like all of our other events, it’s optional. There is also an understanding that your roommate, kid, significant other, or pet might show up on screen (and is welcome!). No one is shamed for multitasking and we encourage our teammates to join as they can. So far we’ve toasted new teammates, played a song or two, and up next we’ll play trivia.

              At the end of the day, we are all here for one reason: to do great work. Our award-winning work is made possible by the trust we’ve built within our teams. Staying focused and accountable to ourselves and our clients is what drives our motivation to continue to show up and do our best. In our new working environment, it is crucial that we can both stay connected and productive; a lot of teammates are stepping up to support one another. Here are a few ways we are continuing to foster our “do great work” mantra.

              New ways we're fostering great work:

Staying in Touch: The People Team is actively touching base with every employee. Our focus is on their health, productivity, and connection. These 1:1s have given us a baseline for how we can provide the best support for our team, from making sure they’re aware of flexible work options to setting them up with the tools they need to be successful. We’ve delivered chairs and monitors and helped troubleshoot in-home wifi issues. We are committed to making sure every Viget is set up for success.
Sharing is Caring: We’re no stranger to remote teams. We have four offices across the U.S. and a handful of full-time remote folks, and we’ve leaned on our inside experts for advice on remote work. Most recently, our Data & Analytics Director, who has been working remotely full time for five years, gave a presentation on best practices for working from home. His top tips for working from home include:
              • Minimize other windows in remote meetings.
              • Set a schedule and avoid midday chores.
              • Take breaks away from the screen.
              • Plan your workday on your shared calendar.
              • Be mindful of Slack and social media as a distraction.
              • Use timers.
              • Keep your work area separate from where you relax.
              • Pretend that you’re still working from work.
              • Experiment and figure out what works for you.

              Our UX Research Director also stepped up to share her expertise to aid in adjusting to our new working conditions. She led a microclass on remote facilitation where she shared best practices and went over tools that support remote collaboration. Some of the tools she highlighted included Miro, Mural, Whimsical, and Jamboard. During the microclass she demonstrated use of Whimsical’s voting feature, which makes it easy for distributed groups to establish discussion topic priorities.

Always Prepared: Having all of our project materials stored in the cloud in a consistent, predictable way is a cornerstone of our business continuity plan. It is more important than ever for our team to follow the established best practices and ensure that project files are accessible to the full Viget team in the event of unplanned time off. Our VP of Client Services is leading efforts to ensure everyone is aware of and following our established guidelines with tools like Drive, Slack, GitHub, and Figma. Our priorities are that clients’ needs are met, quality is high, and timelines are honored.

              As the pandemic unfolds, our approach to employee engagement will evolve. We have more things in the works to build and maintain connections while distributed, including trivia and game nights, book clubs, virtual movie nights, and community service opportunities, just to name a few. No matter what we’re doing or what tool we’re using to connect, we’ll be in it together: doing great work, being great teammates, and looking forward.



              • News & Culture

              to

              5 things to Note in a New Phoenix 1.5 App

Yesterday (Apr 22, 2020) Phoenix 1.5 was officially released.

There’s a long list of changes and improvements, but the big feature is better integration with LiveView. I’ve previously written about why LiveView interests me, so I was quite excited to dive into this release. After watching this awesome Twitter clone in 15 minutes demo from Chris McCord, I had to try out some of the new features. I generated a new Phoenix app with the --live flag, installed dependencies, and started a server. Here are five new features I noticed.
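For concreteness, the setup boils down to a handful of commands (“demo” is just a placeholder app name):

```shell
mix archive.install hex phx_new 1.5.0  # grab the Phoenix 1.5 installer
mix phx.new demo --live                # generate an app with LiveView wired up
cd demo
mix deps.get                           # install dependencies
mix ecto.create                        # the database step I forgot (see below)
mix phx.server                         # then visit http://localhost:4000
```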

              1. Database actions in browser

Oops! Looks like I forgot to configure the database before starting the server. There’s now a helpful message and a button in the browser that can run the command for me. There’s a similar button when migrations are pending. This is a really smooth UX for fixing a very common error during development.

              2. New Tagline!

              Peace-of-mind from prototype to production

              This phrase looked unfamiliar, so I went digging. Turns out that the old tagline was “A productive web framework that does not compromise speed or maintainability.” (I also noticed that it was previously “speed and maintainability” until this PR from 2019 was opened on a dare to clarify the language.)

Chris McCord updated the language while adding phx.new --live. I love this framing, particularly for LiveView. I am very excited about the progressive enhancement path for LiveView apps. A project can start out with regular, server-rendered HTML templates. This is a very productive way to work, and a great way to start a prototype for just about any website. Updating those templates to work with LiveView is an easier lift than a full rebuild in React. And finally, when you’re in production, you have the peace of mind that the reliable BEAM provides.

              3. Live dependency search

There’s now a big search bar right in the middle of the page. You can search through the dependencies in your app and navigate to the hexdocs for them. This doesn’t seem terribly useful, but it is a cool demo, and the implementation is a good illustration of how compact a feature like this can be with LiveView.

              4. LiveDashboard

              This is the really cool one. In the top right of that page you see a link to LiveDashboard. Clicking it will take you to a page that looks like this.

              This page is built with LiveView, and gives you a ton of information about your running system. This landing page has version numbers, memory usage, and atom count.

              Clicking over to metrics brings you to this page.

              By default it will tell you how long average queries are taking, but the metrics are configurable so you can define your own custom telemetry options.

              The other tabs include process info, so you can monitor specific processes in your system:

              And ETS tables, the in memory storage that many apps use for caching:

The dashboard is a really nice thing to get out of the box, and it means application developers get monitoring of their running system for free. It’s also developing very quickly: I tried an earlier version a week ago that didn’t support ETS tables, ports, or sockets. I made a note to look into adding them, but it’s already done! I’m excited to follow along and see where this project goes.

              5. New LiveView generators

1.5 introduces a new generator, mix phx.gen.live. Like other generators, it will create all the code you need for a basic resource in your app, including the LiveView modules. The interesting part here is that it introduces patterns for organizing LiveView code, which is something I had previously been unsure about. At first glance, the new organization makes sense and feels like a good approach. I look forward to seeing how this works on a real project.
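For reference, an invocation has the same shape as the other phx.gen.* generators; this sketch mirrors the example from the generator’s docs:

```shell
# Generates the LiveView modules, templates, context, schema, and
# migration for a User resource in an Accounts context.
mix phx.gen.live Accounts User users name:string age:integer
```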

              Conclusion

              The 1.5 release brings more changes under the hood of course, but these are the first five differences you’ll notice after generating a new Phoenix 1.5 app with LiveView. Congratulations to the entire Phoenix team, but particularly José Valim and Chris McCord for getting this work released.



              • Code
              • Back-end Engineering

              to

              Unsolved Zoom Mysteries: Why We Have to Say “You’re Muted” So Much

              Video conference tools are an indispensable part of the Plague Times. Google Meet, Microsoft Teams, Zoom, and their compatriots are keeping us close and connected in a physically distanced world.

              As tech-savvy folks with years of cross-office collaboration, we’ve laughed at the sketches and memes about vidconf mishaps. We practice good Zoomiquette, including muting ourselves when we’re not talking.

              Yet even we can’t escape one vidconf pitfall. (There but for the grace of Zoom go I.) On nearly every vidconf, someone starts to talk, and then someone else says: “Oop, you’re muted.” And, inevitably: “Oop, you’re still muted.”

              That’s right: we’re trying to follow Zoomiquette by muting, but then we forget or struggle to unmute when we do want to talk.

              In this post, I’ll share my theories for why the You’re Muted Problems are so pervasive, using Google Meet, Microsoft Teams, and Zoom as examples. Spoiler alert: While I hope this will help you be more mindful of the problem, I can’t offer a good solution. It still happens to me. All. The. Time.

              Skip the why and go straight to the vidconf app keyboard shortcuts you should memorize right now.

              Why we don't realize we’re muted before talking

              Why does this keep happening?!?

              Simply put: UX and design decisions make it harder to remember that you’re muted before you start to talk.

              Here’s a common scenario: You haven’t talked for a bit, so you haven’t interacted with the Zoom screen for a few seconds. Then you start to talk — and that’s when someone tells you, “You’re muted.”

              We forget so easily in these scenarios because when our mouse has been idle for a few seconds, the apps hide or downplay the UI elements that tell us we’re muted.

              Zoom and Teams are the worst offenders:

              • Zoom hides both the toolbar with the main in-app controls (the big mute button) and the mute status indicator on your video pane thumbnail.
              • Teams hides the toolbar, and doesn't show a mute status indicator on your video thumbnail in the first place.

              Meet is only slightly better:

              • Meet hides the toolbar, and shows only a small mute status icon in your video thumbnail.

              Even when our mouse is active, the apps’ subtle approach to muted state UI can make it easy to forget that we’re muted:

              Teams is the worst offender:

              • The mute button is an icon rather than words.
• The muted-state icon's styling could be confused with the unmuted state: Teams does not follow the common pattern of using red to denote muted state.
              • The mute button is not differentiated in visual hierarchy from all the other controls.
              • As mentioned above, Teams never shows a secondary mute status indicator.

              Zoom is a bit better, but still makes it pretty easy to forget that you’re muted:

              • Pros:
                • Zoom is the only app to use words on the mute button, in this case to denote the button action (rather than the muted state).
                • The muted-state icon’s styling (red line) is less likely to be confused with the unmuted-state icon.
              • Cons:
                • The mute button’s placement (bottom left corner of the page) is easy to overlook.
                • The mute button is not differentiated in visual hierarchy from the other toolbar buttons — and Zoom has a lot of toolbar buttons, especially when logged in as host.
                • The secondary mute status indicator is a small icon.
                • The mute button’s muted-state icon is styled slightly differently from the secondary mute status indicator.
              • Potential Cons:
                • While words denote the button action, only an icon denotes the muted state.

              Meet is probably the clearest of the three apps, but still has pitfalls:

              • Pros:
                • The mute button is visually prominent in the UI: It’s clearly differentiated in the visual hierarchy relative to other controls (styled as a primary button); is a large button; and is placed closer to the center of the controls bar.
                • The muted-state icon’s styling (red fill) is less likely to be confused with the unmuted-state icon.
              • Cons:
                • Uses only an icon rather than words to denote the muted state.
              • Unrelated Con:
                • While the mute button is visually prominent, it’s also placed next to the hang-up button. So in Meet’s active state you might be less likely to forget you’re muted … but more likely to accidentally hang up when trying to unmute. 😬

              I know modern app design leans toward minimalism. There’s often good rationale to use icons rather than words, or to de-emphasize controls and indicators when not in use.

              But again: This happens on basically every call! Often multiple times per call!! And we’re supposed to be tech-savvy!!! Imagine what it’s like for the tens of millions of vidconf newbs.

              I would argue that “knowing your muted state” has turned out to be a major vidconf user need. At this point, it’s certainly worth rethinking UX patterns for.

              Why we keep unsuccessfully unmuting once we realize we’re muted

              So we can blame the You’re Muted Problem on UX and design. But what causes the You’re Still Muted Problem? Once we know we’re muted, why do we sometimes fail to unmute before talking again?

              This one is more complicated — and definitely more speculative. To start making sense of this scenario, here’s the sequence I’m guessing most commonly plays out (I did this a couple times before I became aware of it):

              The crucial part is when the person tries to unmute by pressing the keyboard Volume On/Off key.

              If that’s in fact what’s happening (again, this is just a hypothesis), I’m guessing they did that because when someone says “You’re muted” or “I can’t hear you,” our subconscious thought process is: “Oh, Audio is Off. Press the keyboard key that I usually press when I want to change Audio Off to Audio On.”

              There are two traps in this reflexive thought process:

              First, the keyboard volume keys control the speaker volume, not the microphone volume. (More specifically, they control the system sound output settings, rather than the system sound input settings or the vidconf app’s sound input settings.)

              In fact, there isn’t a keyboard key to control the microphone volume. You can’t unmute your mic via a dedicated keyboard key, the way that you can turn the speaker volume on/off via a keyboard key while watching a movie or listening to music.

              Second, I think we reflexively press the keyboard key anyway because our mental model of the keyboard audio keys is just: Audio. Not microphone vs. speaker.

              This fuzzy mental model makes sense: There’s only one set of keyboard keys related to audio, so why would I think to distinguish between microphone and speaker? 

So my best guess is that hardware design causes the You’re Still Muted Problem. After all, keyboard designs date from a pre-Zoom era, when the average person rarely used the computer’s microphone.

If that is the cause, one potential solution is for hardware manufacturers to start including dedicated keys to control microphone volume.

              Video conference keyboard shortcuts you should memorize right now

              Let me know if you have other theories for the You’re Still Muted Problem!

              In the meantime, the best alternative is to learn all of the vidconf app keyboard shortcuts for muting/unmuting:

              • Meet
                • Mac: Command(⌘) + D
                • Windows: Control + D
              • Teams
                • Mac: Command(⌘) + Shift + M
                • Windows: Ctrl + Shift + M
              • Zoom
                • Mac: Command(⌘) + Shift + A
                • Windows: Alt + A
                • Hold Spacebar: Temporarily unmute

              Other vidconf apps not included in my analysis:

              • Cisco Webex Meetings
                • Mac: Ctrl + Alt + M
                • Windows: Ctrl + Shift + M
              • GoToMeeting

              Bonus protip from Jackson Fox: If you use multiple vidconf apps, pick a keyboard shortcut that you like and manually change each app’s mute/unmute shortcut to that. Then you only have to remember one shortcut!




              to

              A Parent’s Guide to Working From Home, During a Global Pandemic, Without Going Insane

              Though I usually enjoy working from Viget’s lovely Boulder office, during quarantine I am now working from home while simultaneously parenting my 3-year-old daughter Audrey. My husband works in healthcare and though he is not on the front lines battling COVID-19, he is still an essential worker and as such leaves our home to work every day.

              Some working/parenting days are great! I somehow get my tasks accomplished, my kid is happy, and we spend some quality time together.

              And some days are awful. I have to ignore my daughter having a meltdown and try to focus on meetings, and I wish I wasn’t in this situation at all. Most days are somewhere in the middle; I’m just doing my best to get by.

I’ve seen enough working parent memes and cries for help on social media to know that I’m not alone. There are many parents out there who now get to experience the stress and anxiety of living through a global pandemic while figuring out how to stay productive working from home and be an effective parent. Fun, isn’t it?

              I’m not an expert on the matter, but I have found a few small things that are making me feel a bit more sane. I hope sharing them will make someone else’s life easier too.

              Truths to Accept

              First, let’s acknowledge some truths about this new situation we find ourselves in:

              Truth 1: We’ve lost something.

              Parents have lost more than daycare and schools during this epidemic. We’ve lost any time that we had for ourselves, and that was really valuable. We no longer have small moments in the day to catch up on our personal lives. I no longer have a commute to separate my work duties from my mom duties, or catch up with my friends, or just be quiet.

              Truth 2: We’re human.

The reason you can’t be a great employee and a great parent and a great friend and a great partner or spouse all day every day isn’t that you’re doing a bad job; it’s that being constantly wonderful in all aspects of your life is impossible. Pick one or two of those things a day to focus on.

              Truth 3: We’re all doing our best.

              This is the most important part of this article. Be kind to yourselves. This isn’t easy, and putting so much pressure on yourself that you break isn’t going to make it any easier.

              Work from Home Goals

              Now that we’ve accepted some truths about our current situation, let’s set some goals.

              Goal 1: Do Good Work

At Viget, and wherever you work, with kids or without, we all want to make sure that the quality of our work stays high throughout the pandemic and that we can continue to be reliable team members and employees to the best of our abilities.

              Goal 2: Stay Sane

              We need to figure out ways to do this without sacrificing ourselves entirely. For me, this means fitting my work into normal work hours as much as possible so that I can still have some downtime in the evenings.

              Goal 3: Make This Sustainable

None of us knows how long this will last, but we may as well begin mentally preparing for the long haul.

              Work from Home Rules

Now, there are some great Work from Home Rules that apply to everyone, with or without kids. My coworker Paul Koch shared these with the Viget team a Jeremy Bearimy ago, and I agree they’re also the foundation for working from home with kids.

              1. When you’re in a remote meeting, minimize other windows to stay focused
              2. Set a schedule and avoid chores*
              3. Take breaks away from the screen
              4. Plan your workday on the calendar+
              5. Be mindful of Slack and social media as a distraction
              6. Use timers+
              7. Keep your work area separate from where you relax
              8. Pretend that you’re still WFW
              9. Experiment and figure out what works for you

In the improv spirit I say “Yes, AND….” to these tips. So here are my adjusted rules for WFH while the kiddos are around, starting with day planning: calendars and timers have both been really solid tools for me, so let’s dig in.

              Daily flexible schedule for kids

              Day Planning: Calendars and Timers

A few small tweaks and adjustments make this even more doable for me and my 3-year-old. First, I don’t avoid chores entirely: if I’m going up and down the stairs all day anyway, I might as well throw in a load of laundry while I’m at it. The more I can get done during the day, the greater the chance of some downtime in the evening.

              Each morning I plan my day and Audrey’s day:

My Work Day:

• Identify the times of day when you are most likely to be focused and protect them. For me, I know I have a block of time from 5-7a before Audrey wakes up and again during “nap time” from 1-3p.
• Look at your calendar first thing and make adjustments, either in your plans or by moving meetings if you have to.
• Make goals for your day and tackle time-sensitive tasks first. Take care of things that your co-workers or clients are waiting on from you first; this will make your day a lot less stressful. Non-time-sensitive tasks come next and can be done at any time of day.

Audrey's Day:

• I built a construction paper “schedule” that we update and reorganize daily. We make the schedule together each day. She feels ownership over it and she gets to be the one who tells me what we do next.
• I’m strategic about screen time: I try to schedule it when I have meetings. It also helps to schedule a physical activity before screen time, as she is less likely to get bored.
• We always include “nap time” even though she rarely naps anymore. This is mostly a time for us both to be alone.

              When we make the schedule together it also helps me understand her favorite parts of the day and reminds me to include them.

              Once our days are planned, I also use timers to help keep the structure of the day. (I bought a great alarm clock for kids on Amazon that turns colors to signal bedtime and quiet time. It’s been hugely worth it for me.)

Timers for Me:

• More than ever, I rely on a time-tracking timer. At Viget we use Harvest to track time, and it has a handy built-in timer, but there are many apps or online tools that can help you keep track of your time as well.
• I need a timer because the days and hours are bleeding together; without tracking as I go, it would be really hard for me to remember when I worked on certain projects or to know for certain whether I gave Viget enough time for the day.
• Starting and stopping the timer helps me turn “work mode” on and off, which is a helpful sanity bonus.

Timers for Audrey:

• Audrey knows what time she can come out of her room in the morning. If she wakes up before the light is green, she plays quietly in her room.
• She knows how long “nap time” is in the afternoon.
• Perhaps best of all, I am not the bad guy! “Sorry honey, the light isn’t green yet and there really isn’t anything mommy can do about it” is my new favorite way to ensure we both get some quiet time.

              Work from Home Rules: Updated for Parents

              Finally, I have a few more Work from Home Rules for parents to add to the list:

              1. Minimize other windows in remote meetings
              2. Set a schedule and fit in some chores if time allows
              3. Take breaks away from the screen
              4. Schedule both your and your kids’ days
              5. Be mindful of Slack and social media as a distraction
              6. Use timers to track your own time and help your kids understand the day
              7. Keep your work area separate from where you relax
              8. Pretend that you’re still WFW
              9. Experiment and figure out what works for you
              10. Be prepared with a few activities
• Each morning, have just ONE thing ready to go. This can be a worksheet you printed out, a coloring station set up, a new bag of kinetic sand you just got delivered from Amazon, a kids’ dance video on YouTube, or an iPad game. Recently I started enlisting my mom to read stories over FaceTime. The activity doesn’t have to be new each day, but (especially for young kids) it has to be handy for you to start up quickly if your schedule changes.
              11. Clearly communicate your availability with your team and project PMs
                • Life happens. Some days are going to be hard. Whatever you do, don’t burn yourself out or leave your team hanging. If you need to move a meeting or take a day off, communicate that as early and as clearly as you can.
              12. Take PTO if you can
• None of us are superheroes. If you’re feeling overwhelmed, take a look at the next few days and figure out which one makes the most sense for you to take a break.
              13. Take breaks to be alone without doing a task
• Work and family responsibilities have blended together, and there’s almost no room for being alone. If you can find some precious alone time, don’t use it to fold laundry or clean the bathroom. Just zone out. I think we all really need this.

              Last but not least, enjoy your time at home if you can. This is an unusual circumstance and even though it’s really hard, there are parts that are really great too.

              If you have some great WFH tips we’d love to hear about them in the comments!




              to

              How to restart a blog after five years

              This is not the post I had planned for resuming my blog. I had in mind a lengthy article about design and its role in communication at this point in digital evolution. Deep. Thought-provoking. But I know that it’s better to start with ideas that are a little less ambitious in scope. Plus, to tell you […]




              to

              How not to overwhelm people

              When you’re putting together information (for customers, or your target audience) how much is too much? Details, details. Is it better to go light or heavy on the details? You want to be open and forthcoming with information, but on the other hand you don’t want to overwhelm people, do you? Here’s a good way to […]




              to

              I'm Loyal to Nothing Except the Dream

              There is much I take for granted in my life, and the normal functioning of American government is one of those things. In my 46 years, I've lived under nine different presidents. The first I remember is Carter. I've voted in every presidential election since 1992, but I do not




              to

              To Serve Man, with Software

              I didn't choose to be a programmer. Somehow, it seemed, the computers chose me. For a long time, that was fine, that was enough; that was all I needed. But along the way I never felt that being a programmer was this unambiguously great-for-everyone career field with zero downsides. There




              to

              What does Stack Overflow want to be when it grows up?

              I sometimes get asked by regular people in the actual real world what it is that I do for a living, and here's my 15 second answer:

              We built a sort of Wikipedia website for computer programmers to post questions and answers. It's called Stack Overflow.

              As of last month,




              to

              Adding Block Patterns to Your Theme

              Block patterns are unique, predefined combinations of blocks you can use and tweak to create stunningly designed sections of your website.




              to

              New website design launch for Automated Irrigation Systems in Zionsville, Indiana

We’re delighted to launch the first ever website for this local irrigation company that has been around since 1989! Automated...




              to

              My First Business Mentorship Meeting

Today was my very first one-on-one business mentorship meeting with Marie Poulin at Digital Strategy School. This was the first of what will be monthly one-hour sessions with Marie during the 6-month Digital Strategy School course, and I can already tell these next 6 months are going to be a whirlwind! The course officially […]




              to

              How To Design An Iconic Logo

              https://www.noupe.com/design/how-to-design-an-iconic-logo.html




              to

              Should Designers Learn How to Code?

              https://thenextweb.com/growth-quarters/2020/05/08/should-designers-learn-how-to-code-syndication/




              to

              Microsoft bundled its beautiful Bing wallpapers into a free Android app

              https://thenextweb.com/microsoft/2020/05/08/microsoft-bundled-its-beautiful-bing-wallpapers-into-a-free-android-app/