
We, Who Are Web Designers

In 2003, my wife Lowri and I went to a christening party. We were friends of the hosts but we knew almost no-one else there. Sitting next to me was a thirty-something woman and her husband, both dressed in the corporate ‘smart casual’ uniform: jersey, knitwear, and ready-faded jeans for her; formal shoes and a tucked-in formal shirt for him (plus the jeans of course; that’s the casual bit). Both appeared polite, neutral, and neat in every respect.

I smiled and said hello, and asked how they knew our hosts. The conversation stalled pretty quickly, the way all conversations will when only one participant is engaged. I persevered, asking about the children they had mentioned, trying to be a good friend to our hosts by being friendly to other guests. It must have prompted her to reciprocate. With reluctant interest she asked the default question: ‘What do you do?’ I paused, uncertain for a second. ‘I’m a web designer,’ I managed after a bit of nervous confusion at what exactly it was that I did. Her face managed to drop even as she smiled condescendingly. ‘Oh. White backgrounds!’ she replied with a mixture of scorn and delight. I paused. ‘Much of the time,’ I nodded with an attempt at a self-deprecating smile, trying to maintain the camaraderie of the occasion. ‘What do you do?’ I asked, curious to see where her dismissal was coming from. ‘I’m the creative director for … agency,’ she said smugly, overbearingly confident in the knowledge that she had a trump card, and had played it. The conversation was over.

I’d like to say her reaction didn’t matter to me, but it did. It stung to be regarded so disdainfully by someone who I would naturally have considered a colleague. I thought to try and explain. To mention how I started in print, too. To find out why she had such little respect for web design, but that was me wanting to be understood. I already knew why. Anything I said would sound defensive. She may have been rude, but at least she was honest.

I am a web designer. I concentrate not on the party venue, the food, the music, the guest list, or the entertainment alone, but on it all. On the feeling people enter with and walk away remembering. That’s my job. It’s probably yours too.

I’m self-actualised, without the stamp of approval from any guild, curriculum authority, or academic institution. I’m web taught. Colleague taught. Empirically taught. Tempered by over fifteen years of failed experiments on late nights with misbehaving browsers. I learnt how to create venues because none existed. I learnt what music to play for the people I wanted at the event, and how to keep them entertained when they arrived. I empathised, failed, re-empathised, and did it again. I make sites that work. That’s my certificate. That’s my validation.

I try, just like you, to imbue my practice with an abiding sense of responsibility for the universality of the Web as Tim Berners-Lee described it. After all, it’s that very universality that’s allowed our profession and the Web to thrive. From Mosaic shipping with <img> tag support in 1993, to the founding of the W3C in 1994, the Web Standards Project in 1998, and the CSS Zen Garden in 2003, those who care have been instrumental in shaping the Web. Web designers included. In more recent times I look to the web type revolution, driven and curated by web designers, developers, and the typography community alike. Again, we’re teaching ourselves. The venues are open to all, and getting more amazing by the day.

Apart from the sites we’ve built, all the best peripheral resources that support our work are made by us. We’ve contributed vast amounts of code to our collective toolkit. We’ve created inspirational conferences like Brooklyn Beta, New Adventures, Web Directions, Build, An Event Apart, dConstruct, and Webstock. As a group, we’ve produced, written for, and supported forward-thinking magazines like A List Apart, 8 Faces, Smashing Mag, and The Manual. We’ve written the books that distill our knowledge, either independently or with publishers from our own community like Five Simple Steps and A Book Apart. We’ve created services and tools like jQuery, Fontdeck, Typekit, Hashgrid, Teuxdeux, and Firebug. That’s just a sample; there are so many I haven’t mentioned. We did these things. What an extraordinary industry.

I know I flushed with anger and embarrassment that day at the christening party. Afterwards, I started to look a little deeper into what I do. I started to ask what exactly it means to be a web designer. I started to realise how extraordinary our community is. How extraordinary this profession is that we’ve created. How good the work is that we do. How delightful it is when it does work; for audiences, clients, and us. How fantastic it is that I help build the Web. Long may that feeling last. May it never go away. There’s so much still to learn, create, and make. This is our party. Hi, I’m Jon; my friends and I are making Mapalong, and I’m a web designer.





Auphonic Leveler 1.8 and Auphonic Multitrack 1.4 Updates

Today we released free updates for the Auphonic Leveler Batch Processor and the Auphonic Multitrack Processor with many algorithm improvements and bug fixes for Mac and Windows.

Changelog

  • Linear Filtering Algorithms to avoid Asymmetric Waveforms:
    New zero-phase adaptive filtering algorithms avoid asymmetric waveforms (see the sketch after this list).
    In asymmetric waveforms, the positive and negative amplitude values are disproportionate - please see Asymmetric Waveforms: Should You Be Concerned?.
    Asymmetric waveforms are quite natural and not necessarily a problem. They are particularly common in recordings of speech and vocals, and can be caused by low-end filtering. However, they limit the amount of gain that can be safely applied without introducing distortion or clipping due to aggressive limiting.
  • Noise Reduction Improvements:
    New and improved noise profile estimation algorithms and bug fixes for parallel Noise Reduction Algorithms.
  • Processing Finished Notification on Mac:
    A system notification (including a short glass sound) is now displayed on Mac OS when the Auphonic Leveler or Auphonic Multitrack has finished processing - thanks to Timo Hetzel.
  • Improved Dithering:
    Improved dithering algorithms - using SoX - if a bit-depth reduction is necessary during file export.
  • Auphonic Multitrack Fixes:
    Fixes for ducking and background tracks and for very short music tracks.
  • New Desktop Apps Documentation:
    The documentation of our desktop apps is now integrated in our new help system:
    see Auphonic Leveler Batch Processor and Auphonic Multitrack Processor.
  • Bug Fixes and Audio Algorithm Improvements:
    This release also includes many small bug fixes and all audio algorithms come with improvements and updated classifiers using the data from our Web Service.
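
For the technically curious, the zero-phase filtering mentioned in the first changelog item can be illustrated in a few lines of Python with SciPy. This is only a sketch of the underlying idea - a plain high-pass filter run forwards and backwards - not the adaptive algorithms used in the apps:

from scipy.signal import butter, filtfilt

def remove_dc_and_rumble(audio, sample_rate, cutoff_hz=40.0):
    # 2nd-order Butterworth high-pass: removes DC offset and sub-bass
    # rumble, two common causes of asymmetric waveforms.
    b, a = butter(2, cutoff_hz / (sample_rate / 2), btype="highpass")
    # filtfilt applies the filter forwards and backwards, so the combined
    # phase response is zero and the waveform shape is not skewed further.
    return filtfilt(b, a, audio)

Applied to a float array of samples, this keeps the waveform symmetric around zero, so more gain can be applied safely before limiting.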

About the Auphonic Desktop Apps

We offer two desktop programs that include only our audio algorithms. The algorithms are computed offline on your device and are exactly the same as those implemented in our Web Service.

The Auphonic Leveler Batch Processor is a batch audio file processor and includes all our (Singletrack) Audio Post Production Algorithms. It can process multiple productions at once.

Auphonic Multitrack includes our Multitrack Post Production Algorithms and requires multiple parallel input audio tracks, which will be analyzed and processed individually as well as combined to create one final mixdown.

Upgrade now

Everyone is encouraged to download the latest binaries.

Please let us know if you have any questions or feedback!







New Auphonic Transcript Editor and Improved Speech Recognition Services

Back in late 2016, we introduced Speech Recognition at Auphonic. This allows our users to create transcripts of their recordings, and more usefully, this means podcasts become searchable.
Now we have integrated two more speech recognition engines: Amazon Transcribe and Speechmatics. Whilst integrating these services, we also took the opportunity to develop a completely new Transcript Editor:

Screenshot of our Transcript Editor with word confidence highlighting and the edit bar.
Try out the Transcript Editor Examples yourself!


The new Auphonic Transcript Editor is included directly in our HTML transcript output file. It displays word confidence values so you can instantly see which sections should be checked manually, supports direct audio playback and HTML/PDF/WebVTT export, and allows you to share the editor with someone else for further editing.

The new services, Amazon Transcribe and Speechmatics, offer transcription quality improvements compared to our other integrated speech recognition services.
They also return word confidence values, timestamps and some punctuation, which are exported to our output files.

The Auphonic Transcript Editor

With the integration of the two new services offering improved recognition quality and word timestamps alongside confidence scores, we realized that we could leverage these improvements to give our users easy-to-use transcription editing.
Therefore we developed a new, open source transcript editor, which is embedded directly in our HTML output file and has been designed to make checking and editing transcripts as easy as possible.

Main features of our transcript editor:
  • Edit the transcription directly in the HTML document.
  • Show/hide word confidence, to instantly see which sections should be checked manually (if you use Amazon Transcribe or Speechmatics as speech recognition engine).
  • Listen to audio playback of specific words directly in the HTML editor.
  • Share the transcript editor with others: as the editor is embedded directly in the HTML file (no external dependencies), you can just send the HTML file to someone else to manually check the automatically generated transcription.
  • Export the edited transcript to HTML, PDF or WebVTT (a minimal WebVTT writer is sketched after this list).
  • Completely usable on all mobile devices and desktop browsers.
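
To illustrate the WebVTT export mentioned in the list above: WebVTT is a simple plain-text cue format, and a minimal writer takes only a few lines. The sketch below is purely illustrative (it is not our actual exporter) and assumes you already have cues as (start, end, text) tuples in seconds:

def to_webvtt(cues):
    # cues: iterable of (start_seconds, end_seconds, text) tuples
    def timestamp(t):
        hours, rest = divmod(t, 3600)
        minutes, seconds = divmod(rest, 60)
        return "%02d:%02d:%06.3f" % (hours, minutes, seconds)
    lines = ["WEBVTT", ""]
    for start, end, text in cues:
        lines.append("%s --> %s" % (timestamp(start), timestamp(end)))
        lines.append(text)
        lines.append("")
    return "\n".join(lines)

# Example: to_webvtt([(0.0, 2.5, "Hello and welcome to the show.")])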

Examples: Try Out the Transcript Editor

Here are two examples of the new transcript editor, taken from our speech recognition audio examples page:

1. Singletrack Transcript Editor Example
Singletrack speech recognition example from the first 10 minutes of Common Sense 309 by Dan Carlin. Speechmatics was used as the speech recognition engine, without any keywords or further manual editing.
2. Multitrack Transcript Editor Example
A multitrack automatic speech recognition transcript example from the first 20 minutes of TV Eye on Marvel - Luke Cage S1E1. Amazon Transcribe was used as the speech recognition engine, without any further manual editing.
As this is a multitrack production, the transcript includes exact speaker names as well (try to edit them!).

Transcript Editing

By clicking the Edit Transcript button, a dashed box appears around the text. This indicates that the text is now freely editable on this page. Your changes can be saved by using one of the export options (see below).
If you make a mistake whilst editing, you can simply use the undo/redo function of the browser to undo or redo your changes.


When working with multitrack productions, another helpful feature is the ability to change all speaker names at once throughout the whole transcript just by editing one speaker. Simply click on an instance of a speaker title and change it to the appropriate name, and that name will then appear throughout the whole transcript.

Word Confidence Highlighting

Word confidence values are shown visually in the transcript editor, highlighted in shades of red (see screenshot above). The shade of red is dependent on the actual word confidence value: The darker the red, the lower the confidence value. This means you can instantly see which sections you should check/re-work manually to increase the accuracy.
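
Conceptually, the mapping from confidence value to highlight color is as simple as the following sketch (a hypothetical illustration in Python, not our actual style sheet):

def confidence_style(confidence):
    # confidence is in [0, 1]; the lower it is, the stronger the red highlight
    alpha = max(0.0, min(1.0, 1.0 - confidence))
    return "background-color: rgba(255, 0, 0, %.2f);" % alpha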

Once you have edited the highlighted text, it will be set to white again, so it’s easy to see which sections still require editing.
Use the button Add/Remove Highlighting to disable/enable word confidence highlighting.

NOTE: Word confidence values are only available in Amazon Transcribe or Speechmatics, not if you use our other integrated speech recognition services!

Audio Playback

The button Activate/Stop Play-on-click allows you to hear the audio playback of the section you click on (by clicking directly on the word in the transcript editor).
This is helpful in allowing you to check the accuracy of certain words by being able to listen to them directly whilst editing, without having to go back and try to find that section within your audio file.

If you use an External Service in your production to export the resulting audio file, we will automatically use the exported file in the transcript editor.
Otherwise we will use the output file generated by Auphonic. Please note that this file is password-protected for the current Auphonic user and will be deleted after 21 days.

If no audio file is available in the transcript editor, or if it cannot be played because of the password protection, you will see the button Add Audio File to add a new audio file for playback.

Export Formats, Save/Share Transcript Editor

Click on the button Export... to see all export and saving/sharing options:

Save/Share Editor
The Save Editor button stores the whole transcript editor with all its current changes into a new HTML file. Use this button to save your changes for further editing or if you want to share your transcript with someone else for manual corrections (as the editor is embedded directly in the HTML file without any external dependencies).
Export HTML / Export PDF / Export WebVTT
Use one of these buttons to export the edited transcript to HTML (for WordPress, Word, etc.), to PDF (via the browser print function) or to WebVTT (so that the edited transcript can be used as subtitles or imported into web audio players like the Podlove Publisher or Podigee).
Every export format is rendered directly in the browser, no server needed.

Amazon Transcribe

The first of the two new services, Amazon Transcribe, offers accurate transcriptions in English and Spanish at low costs, including keywords, word confidence, timestamps, and punctuation.

UPDATE 2019:
Amazon Transcribe offers more languages now - please see Amazon Transcribe Features!

Pricing
The free tier offers 60 minutes of free usage a month for 12 months. After that, it is billed monthly at a rate of $0.0004 per second ($1.44/h).
More information is available at Amazon Transcribe Pricing.
Custom Vocabulary (Keywords) Support
Custom Vocabulary (called Keywords in Auphonic) gives you the ability to expand and customize the speech recognition vocabulary, specific to your use case (e.g. product names, domain-specific terminology, or names of individuals).
The same feature is also available in the Google Cloud Speech API.
Timestamps, Word Confidence, and Punctuation
Amazon Transcribe returns a timestamp and confidence value for each word so that you can easily locate the audio in the original recording by searching for the text.
It also adds some punctuation, which is combined with our own punctuation and formatting automatically.
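
If you want to experiment with Amazon Transcribe directly, outside of Auphonic, a minimal job submission with the boto3 SDK looks roughly like this (the bucket, job, and vocabulary names are made up for illustration):

import boto3

transcribe = boto3.client("transcribe", region_name="us-east-1")

# Start an asynchronous transcription job for a file that is already in S3.
transcribe.start_transcription_job(
    TranscriptionJobName="episode-42",                        # hypothetical
    Media={"MediaFileUri": "s3://my-bucket/episode-42.mp3"},  # hypothetical
    MediaFormat="mp3",
    LanguageCode="en-US",
    Settings={"VocabularyName": "my-podcast-terms"},  # optional custom vocabulary
)

# Poll until finished; the result JSON contains a timestamp and a
# confidence value for every recognized word.
job = transcribe.get_transcription_job(TranscriptionJobName="episode-42")
print(job["TranscriptionJob"]["TranscriptionJobStatus"])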

The high quality (especially in combination with keywords) and low cost of Amazon Transcribe make it attractive, despite it currently supporting only two languages.
However, the processing time of Amazon Transcribe is much slower than that of all our other integrated services!

Try it yourself:
Connect your Auphonic account with Amazon Transcribe at our External Services Page.

Speechmatics

Speechmatics offers accurate transcriptions in many languages including word confidence values, timestamps, and punctuation.

Many Languages
Speechmatics’ clear advantage is the sheer number of languages it supports (all major European and some Asian languages).
It also has a Global English feature, which supports different English accents during transcription.
Timestamps, Word Confidence, and Punctuation
Like Amazon, Speechmatics creates timestamps, word confidence values, and punctuation.
Pricing
Speechmatics is the most expensive speech recognition service at Auphonic.
Pricing starts at £0.06 per minute of audio and can be purchased in blocks of £10 or £100. This equates to a starting rate of about $4.78/h. A reduced rate of £0.05 per minute ($3.98/h) is available when purchasing £1,000 blocks.
They offer significant discounts for users requiring higher volumes. At this further reduced price point it is a similar cost to the Google Speech API (or lower). If you process a lot of content, you should contact them directly at sales@speechmatics.com and say that you wish to use it with Auphonic.
More information is available at Speechmatics Pricing.

Speechmatics offers high-quality transcripts in many languages. But these features do come at a price: it is the most expensive speech recognition service at Auphonic.

Unfortunately, their existing Custom Dictionary (keywords) feature, which would further improve the results, is not available in the Speechmatics API yet.

Try it yourself:
Connect your Auphonic account with Speechmatics at our External Services Page.

What do you think?

Any feedback about the new speech recognition services, especially about the recognition quality in various languages, is highly appreciated.

We would also like to hear any comments you have on the transcript editor particularly - is there anything missing, or anything that could be implemented better?
Please let us know!







Leveler Presets, LRA Target and Advanced Audio Parameters (Beta)

In the past, lots of users have asked us for more customization and control over the sound of our audio algorithms, so today we are introducing some advanced algorithm parameters for our singletrack version in a private beta program!

The following new parameters are available:

UPDATE Nov. 2018:
We released a complete rework of the Adaptive Leveler parameters and the description here is not valid anymore!
Please see Auphonic Adaptive Leveler Customization (Beta Update)!

Please join our private beta program and let us know how you use these new features or if you need even more control!

Leveler Presets

Our Adaptive Leveler corrects level differences between speakers and between music and speech, and also applies dynamic range compression to achieve a balanced overall loudness. If you don't know the Leveler yet, take a look at our Audio Examples.

Leveler presets are essentially completely new leveling algorithms, which we have been working on over the past few months:
Our current Leveler tries to normalize all speakers to the same loudness. However, in some cases you might want more or less loudness difference (dynamic range / loudness range) between the speakers and music segments, or more or less compression, etc.
For these use cases, we have developed additional Leveler Presets and the parameter Maximum Loudness Range.

The following Leveler presets are now available:
Preset Medium:
This is our current leveling algorithm as demonstrated in the Audio Examples.
Preset Hard:
The hard preset reacts faster and applies more gain and compression compared to the medium preset. It is built for recordings with extreme loudness differences, for example very quiet questions from the audience in a lecture recording, extremely soft and loud voices within one audio track, etc.
Preset Soft:
This preset reacts more slowly and applies less gain and compression than the medium preset. Use it if you want to keep more loudness differences (dynamic narration), if you want your voices to sound "less compressed/processed", for dynamic music (concert/classical recordings), background music, etc.
Preset Softer:
Like soft, but softer :)
Preset Speech Medium, Music Soft:
Uses the medium preset in speech segments and the soft preset in music segments. It is built for music live recordings or dynamic music mixes, where you want to amplify all speakers but keep the loudness differences within and between music segments.
Preset Medium, No Compressor:
Like the medium preset, but only (mid-term) leveling and no (short-term) compression is applied. This preset is optimal if you just use a Maximum Loudness Range Target and want to avoid any additional compression as much as possible.
Please let us know your use case, if you need more/other controls or if anything is confusing. The Leveler presets are still in private beta and can be changed as necessary!

Maximum Loudness Range (LRA) Target

The loudness range (LRA) indicates the variation of loudness over the course of a program and is measured in LU (loudness units) - for more details see Loudness Measurement and Normalization or EBU Tech 3342.

The parameter Max Loudness Range controls how much leveling is applied:
volume changes of our Adaptive Leveler will be restricted so that the loudness range of the output file is below the selected value.
High loudness range values will result in very dynamic output files, low loudness range values in compressed output audio. If the LRA value of your input file is already below the maximum loudness range value, no leveling at all will be applied.

The selected Leveler Preset also matters: for example, if you use the soft(er) preset, it won't be possible to achieve very low loudness range targets.

Also, the Max Loudness Range parameter is not as precise a target value as the Loudness Target. The LRA of your output file might be off by a few LU, as it is not always reasonable to reach the exact target value.

Use Cases: The Maximum LRA parameter allows you to control the strength of our leveling algorithms, in combination with the parameter Leveler Preset. This might be used for automatic mixdowns with different LRA values for different target platforms (very compressed ones like mobile devices or Alexa, very dynamic ones like home cinema, etc.).

Maximum True Peak Level

This parameter sets the maximum allowed true peak level of the processed output file, which is controlled by the True Peak Limiter after our Global Loudness Normalization algorithms.

If set to Auto (the current default), a reasonable value according to the selected loudness target is used: -1 dBTP for -23 LUFS (EBU R128) and higher, -2 dBTP for -24 LUFS (ATSC A/85) and lower loudness targets.

The maximum true peak level parameter is already available in our desktop program.

Better Hum and Noise Reduction Controls

In addition to the parameter (Noise) Reduction Amount, we now offer two more parameters to control the combination of our Noise and Hum Reduction algorithms:
Hum Base Frequency:
Set the hum base frequency to 50Hz or 60Hz (if you know it), or use Auto to automatically detect the hum base frequency in each speech region.
Hum Reduction Amount:
Maximum hum reduction amount in dB; higher values remove more hum.
In Auto mode, a classifier decides how much hum reduction is necessary in each speech region. Set it to a custom value (> 0) if you prefer more hum reduction or want to bypass our classifier. Use Disable Dehum to disable hum reduction and use our noise reduction algorithms only.
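
For intuition, the textbook core of hum reduction is a set of narrow notch filters at the base frequency and its harmonics. The Python sketch below shows that idea only; it is not our actual algorithm, which decides per speech region how much reduction to apply:

from scipy.signal import iirnotch, filtfilt

def simple_dehum(audio, sample_rate, base_freq=50.0, harmonics=4, q=30.0):
    # Notch out the mains base frequency (50 or 60 Hz) and its first harmonics.
    out = audio
    for k in range(1, harmonics + 1):
        freq = base_freq * k
        if freq >= sample_rate / 2:  # stay below the Nyquist frequency
            break
        b, a = iirnotch(freq, q, fs=sample_rate)
        out = filtfilt(b, a, out)  # zero-phase application
    return out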

Behavior of noise and hum reduction parameter combinations:

Noise Reduction Amount | Hum Base Frequency | Hum Reduction Amount | Behavior
Auto                   | Auto               | Auto                 | Automatic hum and noise reduction
Auto or > 0            | *                  | Disabled             | No hum reduction, only denoise
Disabled               | 50Hz               | Auto or > 0          | Force 50Hz hum reduction, no denoise
Disabled               | Auto               | Auto or > 0          | Automatic dehum, no denoise
12dB                   | 60Hz               | Auto or > 0          | Always do dehum (60Hz) and denoise (12dB)

Advanced Parameters Private Beta and Feedback

At the moment the advanced algorithm parameters are for beta users only. This is to allow us to get user feedback, so we can change the parameters to suit user needs.
Please let us know your case studies, if you need any other algorithm parameters or if you have any questions!

Here are some private beta invitation codes:

y6KCBI4yo0 ksIFEsmI1y BDZec2a21V i4XRGLlVm2 0UDxuS0vbu aaBxi35sKN aaiDSZUbmY bu8lPF80Ih eMsSl6Sf8K DaWpsUnyjo
2YM00m8zDW wh7K2pPmSa jCX7mMy2OJ ZGvvhzCpTF HI0lmGhjVO eXqVhN6QLU t4BH0tYcxY LMjQREVuOx emIogTCAth 0OTPNB7Coz
VIFY8STj2f eKzRSWzOyv 40cMMKKCMN oBruOxBkqS YGgPem6Ne7 BaaFG9I1xZ iSC0aNXoLn ZaS4TykKIa l32bTSBbAx xXWraxS40J
zGtwRJeAKy mVsx489P5k 6SZM5HjkxS QmzdFYOIpf 500AHHtEFA 7Kvk6JRU66 z7ATzwado6 4QEtpzeKzC c9qt9Z1YXx pGSrDzbEED
MP3JUTdnlf PDm2MOLJIG 3uDietVFSL 1i7jZX0Y9e zPkSgmAqqP 5OhcmHIZUP E0vNsPxZ4s FzTIyZIG2r 5EywA0M7r5 FMhpcFkVN5
oRLbRGcRmI 2LTh8GlN7h Cjw6Z3cveP fayCewjE55 GbkyX89Lxu 4LpGZGZGgc iQV7CXYwkH pGLyQPgaha e3lhKDRUMs Skrei1tKIa
We are happy to send further invitation codes to all interested users - please do not hesitate to contact us!

If you have an invitation code, you can enter it here to activate the advanced audio algorithm parameters:
Auphonic Algorithm Parameters Private Beta Activation








Resumable File Uploads to Auphonic

Large file uploads in a web browser are problematic, even in 2018. If working with a poor network connection, uploads can fail and have to be retried from the start.

At Auphonic, our users have to upload large audio and video files, or multiple media files when creating a multitrack production. To minimize any potential issues, we integrated various external services which are specialized for large file transfers, like FTP, SFTP, Dropbox, Google Drive, S3, etc.

To further minimize issues, as of today we have also released resumable and chunked direct file uploads in the web browser to auphonic.com.

If you are not interested in the technical details, please just go to the section Resumable Uploads in Auphonic below.

The Problem with Large File Uploads in the Browser

If using either mobile networks (which remain fragile) or unstable WiFi connections, file uploads are often interrupted and will fail. There are also many areas in the world where connections are quite poor, which makes uploading big files frustrating.

After an interrupted file upload, the web browser must restart the whole upload from the start, which is a problem when it happens in the middle of a 4GB video file upload on a slow connection.
Furthermore, the longer an upload takes, the more likely it is to have a network glitch interrupting the upload, which then has to be retried from the start.

The Solution: Chunked, Resumable Uploads

To avoid user frustration, we need to be able to detect network errors and potentially resume an upload without having to restart it from the beginning.

To achieve this, we have to split a file upload into smaller chunks directly within the web browser, so that these chunks can then be sent to the server afterwards.
If an upload fails or the user wants to pause, it is possible to resume it later and only send those chunks that have not already been uploaded.
If there is a network interruption or change, the upload will be retried automatically.

Companies like Dropbox, Google, Amazon AWS etc. all have their own protocols and APIs for chunked uploads, but there are also some open source implementations available which offer resumable uploads:

resumable.js [link]:
"A JavaScript library providing multiple simultaneous, stable and resumable uploads via the HTML5 File API"
This solution is a JavaScript library only and requires the protocol to be implemented on the server as well.
tus.io [link]:
"Open Protocol for Resumable File Uploads"
Tus.io offers a simple, cheap and reusable stack for clients and servers (in many languages). They have a blog with further information about resumable uploads, see tus blog.
plupload [link]:
A JavaScript library, similar to resumable.js, which requires a separate server implementation.

We chose to use resumable.js and developed our own server implementation.
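
To sketch what such a server implementation involves, here is a hypothetical minimal example with Flask (not our production code; the query parameter names are resumable.js defaults). The client probes each chunk with a GET request before sending it, so an interrupted upload only re-sends the chunks that are missing:

import os
from flask import Flask, request

app = Flask(__name__)
CHUNK_DIR = "/tmp/upload-chunks"  # hypothetical chunk store

def chunk_path(identifier, number):
    return os.path.join(CHUNK_DIR, "%s.part%s" % (identifier, number))

@app.route("/upload", methods=["GET"])
def test_chunk():
    # Does this chunk already exist from a previous, interrupted upload?
    args = request.args
    if os.path.exists(chunk_path(args["resumableIdentifier"], args["resumableChunkNumber"])):
        return "", 200  # chunk already uploaded: the client skips it
    return "", 204      # chunk missing: the client uploads it

@app.route("/upload", methods=["POST"])
def save_chunk():
    os.makedirs(CHUNK_DIR, exist_ok=True)
    form = request.form
    request.files["file"].save(chunk_path(form["resumableIdentifier"], form["resumableChunkNumber"]))
    return "", 200

Once all chunks of a file have arrived (resumableTotalChunks tells the server how many to expect), they can be concatenated into the final file.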

Resumable Uploads in Auphonic

If you upload files to a singletrack or multitrack production, you will see the upload progress bar and a pause button, which is one way to pause and resume an upload:

It is also possible to close the browser completely or shut down your computer during the upload, then edit the production and upload the file again later. This will just resume the file upload from the position where it was stopped before.
(Previously uploaded chunks are saved for 24h on our servers, after that you have to start the whole upload again.)

In case of a network problem or if you switch to a different connection, we will resume the upload automatically.
This should solve many problems which were reported by some users in the past!

You can of course also use any of our external services for stable incoming and outgoing file transfers!

Do you still have Uploading Issues?

We hope that uploads to Auphonic are much more reliable now, even on poor connections.

If you still experience any problems, please let us know.
We are very happy about any bug reports and will do our best to fix them!








More Languages for Amazon Transcribe Speech Recognition

Until recently, Amazon Transcribe supported speech recognition in English and Spanish only.
Now they have added French, Italian and Portuguese as well - and a few other languages (including German) are in private beta.

Update March 2019:
Now Amazon Transcribe supports German and Korean as well.

The Auphonic Audio Inspector on the status page of a finished Multitrack Production including speech recognition.
Please click on the screenshot to see it in full resolution!


Amazon Transcribe is integrated as speech recognition engine within Auphonic and offers accurate transcriptions (compared to other services) at low costs, including keywords / custom vocabulary support, word confidence, timestamps, and punctuation.
See the following AWS blog post and video for more information about recent Amazon Transcribe developments: Transcribe speech in three new languages: French, Italian, and Brazilian Portuguese.

Amazon Transcribe is also a perfect fit if you want to use our Transcript Editor because you will be able to see word timestamps and confidence values to instantly check which section/words should be corrected manually to increase the transcription accuracy:


Screenshot of our Transcript Editor with word confidence highlighting and the edit bar.

These features are also available if you use Speechmatics, but unfortunately not in our other integrated speech recognition services.

About Speech Recognition within Auphonic

Auphonic has built a layer on top of a few external speech recognition services to make audio searchable:
Our classifiers generate metadata during the analysis of an audio signal (music segments, silence, multiple speakers, etc.) to divide the audio file into small and meaningful segments, which are processed by the speech recognition engine. The results from all segments are then combined, and meaningful timestamps, simple punctuation and structuring are added to the resulting text.

To learn more about speech recognition within Auphonic, take a look at our Speech Recognition and Transcript Editor help pages or listen to our Speech Recognition Audio Examples.

A comparison table of our integrated services (price, quality, languages, speed, features, etc.) can be found here: Speech Recognition Services Comparison.

Conclusion

We hope that Amazon and others will continue to add new languages, so that accurate and inexpensive automatic speech recognition becomes available in many languages.

Don't hesitate to contact us if you have any questions or feedback about speech recognition or our transcript editor!







The New Loudness Target War

In the classic loudness war, music and radio producers tried to make their recordings as loud as possible, and loudness normalization was introduced to stop that. Now we can see the start of a new loudness target war, in which podcasters set their loudness targets higher and higher, mainly triggered by the high target recommendations of platforms like Spotify or Amazon Alexa.
In this article, we will show how to resist the loudness target war and still be compliant with major platforms.

Resist the loudness target war! (Photo by Nayani Teixeira)

What's the problem?

“Two or three years ago it seemed that many stations were finally realizing that better radio could improve ratings. And the major myth brought over from AM radio – that a louder signal, regardless of quality, attracts more listeners – appeared to be losing its strength,” writes Robert Orban. The times of excessively compressed audio, putting loudness over sound quality, were coming to an end. We were hoping the same when we wrote about the CALM Act and EBU R128 in 2012. Those measures were meant to make programs sound more evened out and set a standard for a reasonable loudness level.

Except, Orban's article was published in 1979 (PDF, p. 60 ff.), and he concludes: "The loudness war has escalated, and quality is once again being sacrificed." 40 years later, a new loudness target war is emerging. While, yes, radio and TV stations have widely adopted the new standards, prominent competitors in the audio market are pushing for higher loudness targets once again.

Loudness war: the trend of increasing audio levels in recorded music over time. (Screenshot delamar.de)

Why LOUDER is not better

Historically, the loudness war escalated with the advent of digital technology. Peak level normalization and quasi peak program meters (QPPM) encouraged producers to push audio signals to the limit. Just shy of clipping, signals could now be compressed to the highest possible levels, using multiband compressors and limiters. This lifted quieter signals up, transforming waveforms into bricks, and marketers thought that louder songs on CDs and almost-yelling voices on FM radio would attract listeners. On the other hand, reduced dynamics make audio less interesting and can lead to listener fatigue, as Rip Rowan pointedly illustrated in his 2002 article "Over the Limit":

WHY IS THE LOUDER IS BETTER APPROACH THE WRONG APPROACH? BECAUSE WHEN ALL OF THE SIGNAL IS AT THE MAXIMUM LEVEL, THEN THERE IS NO WAY FOR THE SIGNAL TO HAVE ANY PUNCH. THE WHOLE THING COMES SCREAMING AT YOU LIKE A MESSAGE IN ALL CAPITAL LETTERS. AS WE ALL KNOW, WHEN YOU TYPE IN ALL CAPITAL LETTERS THERE ARE NO CUES TO HELP THE BRAIN MAKE SENSE OF THE SIGNAL, AND THE MIND TIRES QUICKLY OF TRYING TO PROCESS WHAT IS, BASICALLY, WHITE NOISE. LIKEWISE, A SIGNAL THAT JUST PEGS THE METERS CAUSES THE BRAIN TO REACT AS THOUGH IT IS BEING FED WHITE NOISE. WE SIMPLY FILTER IT OUT AND QUIT TRYING TO PROCESS IT.

Hence, many spoken word producers and broadcasters luckily wised up and committed to new standards, based on loudness normalization instead of peak normalization. LUFS, Loudness Units relative to Full Scale, is a unit that measures an audio track's average loudness. All segments of a program can then be normalized to a certain LUFS value. As we have discussed before, -23 LUFS is now the standard for broadcasters of the EBU, which for example has led to advertising segments no longer being much louder than the rest of any particular program. For a short while, it seemed as if a peace agreement, or at least a truce, had been achieved in the loudness war.

Human loudness perception is based on average levels instead of peak levels. (Screenshot theproaudiofiles.com)
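
As a concrete illustration, normalizing a file to the -16 LUFS podcast target takes only a few lines with the open source pyloudnorm and soundfile packages (the file names are hypothetical). Note that a pure gain change can push true peaks above the allowed ceiling, so a true-peak limiter is still needed afterwards:

import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("episode.wav")          # hypothetical input file
meter = pyln.Meter(rate)                     # ITU-R BS.1770 loudness meter
loudness = meter.integrated_loudness(data)   # e.g. -19.3 LUFS

# Apply a static gain so the integrated loudness lands at -16 LUFS.
normalized = pyln.normalize.loudness(data, loudness, -16.0)
sf.write("episode-16lufs.wav", normalized, rate)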

Loudness Targets and Dynamic Range

However, while LUFS adoption is increasing across the industry, this doesn't mean that people have stopped trying to be louder than everybody else. And indeed -23 LUFS is not the one-size-fits-all value. We have recommended -16 LUFS for podcasts ourselves. The loudness of a production played in a cinema should be different from one made for earphones and noisy listening environments. Some headphones expect a louder signal and don't have enough gain to work with -23 LUFS under all circumstances.

The closer a production gets to 0 LUFS, though, the less dynamic range can be reproduced. The available dynamic range also depends on the dBTP, the maximum True Peak level, which caps the highest possible peak value. For instance, the aforementioned EBU R128 standard of -23 LUFS also defines a maximum of -1 dBTP. The difference between the dBTP and LUFS levels is called the Peak to Loudness Range, PLR. For EBU R128 the PLR is 22 LU (Loudness Units); a podcast with -16 LUFS and -1 dBTP has a PLR of 15 LU. Thus, the PLR is a measure of the maximum possible dynamic range of an audio production.

Platforms: Apple, Google, Amazon, and Spotify

Our already pretty high recommendation of -16 LUFS, -1 dBTP for mobile listening was also adopted by Apple's best practices for podcasts and recommendations for Google Assistant. Some are pushing it too far, though. A daily updated analysis by podnews shows that many podcasts are much louder than -16 LUFS.

Distribution of loudness across a selection of podcasts. (Screenshot podnews.net)

This might in part be due to specs published by other competitors in the audio space:

Amazon, for instance, recommends -14 LUFS at -2 dBTP for Alexa skills, meaning a PLR of only 12 LU. While this might work for Alexa's synthesized voice, which doesn't have much variability, it produces a dynamic range too low for spoken word content.
However, Amazon also says that a skill may be rejected if the program loudness is lower than -19 LUFS or higher than -9 LUFS, so a target of -16 LUFS is perfectly fine; it just means that Alexa's robot voice leads by 2 LU compared to the audio content.

Spotify normalizes audio to the equivalent of about -14 LUFS, -1 dBTP. They still use ReplayGain, which is not exactly the same as LUFS but gives similar results. Spotify mainly decreases the volume of overly loud productions, but can also increase the volume on some playback devices if the audio is much too soft.
For pop music, -14 LUFS is acceptable, but for podcasts or classical music, it is too high. However, (pop) music can always be played a bit louder compared to speech, therefore a loudness target of -16 LUFS for podcasts is fine on Spotify as well.

Make LUFS, not war

If producers simply adopt the highest recommended target, and the loudness (target) war thus goes into another battle, we will once again hear more compressed, distorted voices, lacking the emotion and many of the subtleties that are reflected in the dynamic range of how we speak.

Setting a loudness target higher than -16 LUFS does not improve the listening experience in any way. However, many productions would benefit from sensitively adjusting differences in dynamics throughout the production, lifting up quieter segments, lowering loud segments, and treating speech and music differently. As pointed out in a presentation recently (video, in German), you can do that directly in your DAW or use a leveler to automate dynamics processing.

Instead of just raising the loudness target, adjusting differences in levels can help to make the listening experience much more enjoyable.

Recent album releases suggest that the music industry is still in the middle of the loudness war, often limiting the peak to loudness range to single-digit LU values. (By the way, your vinyl-buying friend is right: vinyl records do sound better, because they don't work with the highest compression rates.) Hopefully, podcasters, radio producers and other spoken word artists, as well as the platforms that host and publish their productions, can resist the temptation of louder and louder audio.
Make (no more than -16) LUFS, not war!

Conclusion

With some podcasts, smart speakers and streaming platforms trying to be louder than the competition, the listening experience deteriorates. However, podcasts can sound great and loud enough even in noisy environments, when well produced:

  • Never set a loudness target higher than -16 LUFS for spoken word audio.
  • If your audio is too quiet, try lifting up quieter sections and reducing the volume of louder sections, directly in your DAW or by using a leveler.
  • These settings will work fine on all current platforms, including Amazon Alexa and Spotify.

Resist the loudness target war!








Dynamic Range Processing in Audio Post Production

If listeners find themselves using the volume up and down buttons a lot, the level differences within your podcast or audio file are too big.
In this article, we discuss why audio dynamic range processing (or leveling) is more important than loudness normalization, why it depends on factors like the listening environment and the individual character of the content, and why the loudness range descriptor (LRA) is only reliable for speech programs.

Photo by Alexey Ruban.

Why loudness normalization is not enough

Everybody who has lived in an apartment building knows the problem: you want to enjoy a movie late at night, but you're constantly on the edge - not only because of the thrilling story, but because your index finger is hovering over the volume down button of your remote. The next loud sound effect is going to come sooner rather than later, and you want to avoid waking up your neighbors with some gunshot sounds blasting from your TV.

In our previous post, we talked about the overall loudness of a production. While that's certainly important to keep in mind, the loudness target is only an average value, ignoring how much the loudness varies within a production. The loudness target of your movie might be in the ideal range, yet the level differences between a gunshot and someone whispering can still be enormous - having you turn the volume down for the former and up for the latter.

While the average loudness might be perfect, level differences can lead to an unpleasant listening experience.

Of course, this doesn't apply to movies alone. The image above shows a podcast or radio production. The loud section is music, the very quiet section just breathing, and the remaining sections are different voices.

To be clear, we're not saying that the above example is problematic per se. There are many situations where a big difference in levels - a high dynamic range - is justified: for instance, in a movie theater, optimized for listening and without any outside noise, or in classical music.
Also, if the dynamic range is too small, listening can be tiring.

But if you watch the same movie in an outdoor screening in the summer on a beach next to the crashing waves or in the middle of a noisy city, it can be tricky to hear the softer parts.
Spoken word usually has a smaller dynamic range, and if you produce your podcast for a target audience of train or car commuters, the dynamic range should be even smaller, adjusting for the listening situation.

Therefore, hitting the loudness target has less impact on the listening experience than the level differences (dynamic range) within one file!
What makes a suitable dynamic range depends not only on the listening environment, but also on the nature of the content itself. If the dynamic range is too small, the audio can be tiring to listen to, whereas more variability in levels can make a program more interesting, but might not work in all environments, such as a noisy car.

Dynamic range experiment in a car

Wolfgang Rein, audio technician at SWR, a public broadcaster in Germany, did an experiment to test how drivers react to programs with different dynamic ranges. They monitored to what level drivers set the car stereo depending on speed (thus noise level) and audio dynamic range.
While the results are preliminary, it seems like drivers set the volume as low as possible so that they can still understand the content, but don't get distracted by loud sounds.

As drivers adjust the volume to the loudest voice in a program, they won't understand quieter speakers in content with a high dynamic range anymore. To some degree and for short periods of time, they can compensate by focusing more on the radio program, but over time that's tiring. Therefore, if the loudness varies too much, drivers tend to switch to another program rather than adjusting the volume.
Similar results have been found in a study conducted by NPR Labs and Towson University.

On the other hand, the perception was different in pure music programs. When drivers set the volume according to louder parts, they weren't able to hear softer segments or the beginning of a song very well. But that did not matter to them as much and didn't make them want to turn up the volume or switch the program.

Listener's reaction in response to frequent loudness changes. (from John Kean, Eli Johnson, Dr. Ellyn Sheffield: Study of Audio Loudness Range for Consumers in Various Listening Modes and Ambient Noise Levels)

Loudness comfort zone

The reaction of drivers to variable loudness hints at something that BBC sound engineer Mike Thornton calls the loudness comfort zone.

Tests (...) have shown that if the short-term loudness stays within the "comfort zone" then the consumer doesn’t feel the need to reach for the remote control to adjust the volume.
In a blog post, he highlights how the series Blue Planet 2 and Planet Earth 2 might not always have been the easiest to listen to. The graph below shows an excerpt with very loud music, followed by commentary just at the bottom of the green comfort zone. Thornton writes: "with the volume set at a level that was comfortable when the music was playing we couldn’t always hear the excellent commentary from Sir David Attenborough and had to resort to turning on the subtitles to be sure we knew what Sir David was saying!"

Planet Earth 2 Loudness Plot Excerpt. Colored green: comfort zone of +3 to -5LU around the loudness target. (from Mike Thornton: BBC Blue Planet 2 Latest Show In Firing Line For Sound Issues - Are They Right?)

As already mentioned above, a good mix considers the maximum and minimum possible loudness in the target listening environment.
In a movie theater the loudness comfort zone is big (loudness can vary a lot), and loud music is part of the fun, while quiet scenes work just as well. The opposite was true in the aforementioned experiment with drivers, where the loudness comfort zone is much smaller and quiet voices are difficult to understand.

Hence, the loudness comfort zone determines how much dynamic range an audio signal can use in a specific listening environment.

How to measure dynamic range: LRA

When producing audio for various environments, it would be great to have a target value for dynamic range (the difference between the smallest and largest signal values of an audio signal) as well. Then you could simply set a dynamic range target, similar to a loudness target.

Theoretically, the maximum possible dynamic range of a production is defined by the bit depth of the audio format: each bit contributes about 6 dB (20·log10(2) ≈ 6.02 dB), so a 16-bit recording can have a dynamic range of 96 dB and a 24-bit recording 144 dB - well above the approx. 120 dB the human ear can handle. However, most of those bits are typically used to get to a reasonable base volume. Picture a glass of water: you want it to be almost full, with some headroom so that it doesn't spill when there's a sudden movement, i.e. a bigger amplitude wave at the top.

Determining the dynamic range of a production is easier said than done, though. It depends on which signals are included in the measurement: for example, if something like background music or breathing should be considered at all.
The currently preferred method for broadcasting is called Loudness Range, LRA. It is measured in Loudness Units (LU), and takes into account everything between the 10th and the 95th percentile of a loudness distribution, after an additional gating method. In other words, the loudest 5% and quietest 10% of the audio signal are being ignored. This way, quiet breathing or an occasional loud sound effect won't affect the measurement.

Loudness distribution and LRA for the film 'The Matrix'. Figure from EBU Tech Doc 3343 (p.13).
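
In code, the statistical part of the LRA measurement is compact. The Python sketch below follows the gating scheme of EBU Tech 3342 and assumes you already have short-term loudness values (3-second windows) from a BS.1770 meter, e.g. from pyloudnorm or ffmpeg:

import numpy as np

def loudness_range(short_term_lufs):
    st = np.asarray(short_term_lufs, dtype=float)
    st = st[st > -70.0]  # absolute gate: drop near-silence below -70 LUFS
    if st.size == 0:
        return 0.0
    # Relative gate: drop everything more than 20 LU below the
    # power-domain mean of the absolutely gated values.
    power_mean = 10.0 * np.log10(np.mean(10.0 ** (st / 10.0)))
    st = st[st > power_mean - 20.0]
    if st.size == 0:
        return 0.0
    p10, p95 = np.percentile(st, [10, 95])
    return p95 - p10  # LRA in LU: 95th minus 10th percentile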

However, the main difficulty is which signals should be included in the loudness range measurement and which ones should be gated. This is unfortunately often very subjective and difficult to define with a purely statistical method like LRA.

Where LRA falls short

Therefore, only pure speech programs give reliable LRA values that are comparable!
For instance, a typical LRA for news programs is 3 LU; for talks and discussions 5 LU is common. LRA values for features, radio dramas, movies or music very much depend on the individual character and might be in the range between 5 and 25 LU.

To further illustrate this, here are some typical LRA values, according to a paper by Thomas Lund (table 2):

Program                                           | Loudness Range (LU)
Matrix, full movie                                | 25.0
NBC Interstitials, Jan. 2008, all together (3:30) | 9.4
Friends Episode 16                                | 6.6
Speak Ref., Male, German, SQUAM Trk 54            | 6.2
Speak Ref., Female, French, SQUAM Trk 51          | 4.8
Speak Ref., Male, English, Sound Check            | 3.3
Wish You Were Here, Pink Floyd                    | 22.1
Gilgamesh, Battle of Titans, Osaka Symph.         | 19.7
Don’t Cry For Me Arg., Sinead O’Conner            | 13.7
Beethoven Son in F, Op17, Kliegel & Tichman       | 12.0
Rock’n Roll Train, AC/DC                          | 6.0
I.G.Y., Donald Fagen                              | 3.6

LRA values of music are very unpredictable as well.
For instance, Tom Frampton measured the LRA of songs in multiple genres, and the differences within each genre are quite big. The ten pop songs that he analyzed varied in LRA between 3.7 and 12 LU, country songs between 3.6 and 14.9 LU. In the Electronic genre the individual LRAs were between 3.7 and 15.2 LU. Please see the tables at the bottom of his blog post for more details.

We at Auphonic also tried to base our Adaptive Leveler parameters on the LRA descriptor. Although it worked, it turned out that it is very difficult to set a loudness range target for diverse audio content, which does include speech, background sounds, music parts, etc. The results were not predictable and it was hard to find good target values. Therefore we developed our own algorithm to measure the dynamic range of audio signals.

In conclusion, LRA comparisons are only useful for productions with spoken word only, and the LRA value is therefore not applicable as a general dynamic range target value. The more complex a production gets, the more difficult it is to make any judgment based on the LRA.
This is because the definition of LRA is purely statistical. There's no smart measurement using classifiers that distinguish between music, speech, quiet breathing, background noises and other types of audio. One would need a more intelligent algorithm (as we use in our Adaptive Leveler) that knows which audio segments should be included in and excluded from the measurement.

From theory to application: tools

Loudness and dynamic range are clearly complicated topics. Luckily, there are tools that can help. To keep short-term loudness in range, a compressor can help control sudden changes in loudness - such as p-pops or consonants like t or k. To achieve a good mid-term loudness, i.e. a signal that doesn't leave the comfort zone too much, a leveler is a good option. Or just use a fader or manually adjust volume curves. And to make sure that separate productions sound consistent, loudness normalization is the way to go. We have covered all of this in depth before.

Looking at the audio from above again, with an adaptive leveler applied it looks like this:

Leveler example. Output at the top, input with leveler envelope at the bottom.

Now, the voices are evened out and the music is at a comfortable level, while the breathing has not been touched at all.
We recently extended Auphonic's adaptive leveler, so that it is now possible to customize the dynamic range - please see adaptive leveler customization and advanced multitrack audio algorithms.
If you wanted to increase the loudness comfort zone (or dynamic range) of the standard preset by 10 dB (or LU), for example, the envelope would look like this:

Leveler with higher dynamic range, only touching sections with extremely low or extremely high loudness to fit into a specific loudness comfort zone.

When a production is done, our adaptive leveler uses classifiers to also calculate the integrated loudness and loudness range of dialog and music sections separately. This way it is possible to just compare the dialog LRA and loudness of complex productions.

Assessing the LRA and loudness of dialog and music separately.

Conclusion

Getting audio dynamics right is not easy. Yet, it is an important thing to keep in mind, because focusing on loudness normalization alone is not enough. In fact, hitting the loudness target often has less impact on the listening experience than level differences, i.e. audio dynamics.

If the dynamic range is too small, the audio can be tiring to listen to, whereas a bigger dynamic range can make a program more interesting, but might not work in loud environments, such as a noisy train.
Therefore, a good mix adapts the audio dynamic range according to the target listening environment (different loudness comfort zones in cinema, at home, in a car) and according to the nature of the content (radio feature, movie, podcast, music, etc.).

Furthermore, because the definition of the loudness range / LRA is purely statistical, only speech programs give reliable LRA values that are comparable.
More "intelligent" algorithms are in development, which use classifiers to decide which signals should be included and excluded from the dynamic range measurement.

If you understand German, take a look at our presentation about audio dynamic processing in podcasts for further information:








Winter Stand Up Paddling on Horsetooth Reservoir

I love paddling on the Horsetooth Reservoir in the cold season. Boat ramps are closed, there is no power boat traffic, and it is usually quiet and calm. Snow and ice can enhance the scenery. A great time to paddle, train, relax or photograph. The Horsetooth stays […]










How to Foster Real-Time Client Engagement During Moderated Research

When we conduct moderated research, like user interviews or usability tests, for our clients, we encourage them to observe as many sessions as possible. We find that when clients see us interview their users and get real-time responses, they learn about their users’ needs first-hand and become more active participants in the process. One way we help clients feel engaged during remote sessions is to establish a real-time communication backchannel that empowers them to flag responses they’d like to dig into further and to share their ideas for follow-up questions.

There are several benefits to establishing a communication backchannel for moderated sessions:

  • Everyone on the team, including both internal and client team members, can be actively involved throughout the data collection process rather than waiting to passively consume findings.
  • Team members can identify follow-up questions in real-time which allows the moderator to incorporate those questions during the current session, rather than just considering them for future sessions.
  • Subject matter experts can identify more detailed and specific follow-up questions that the moderator may not think to ask.
  • Even though the whole team is engaged, a single moderator still maintains control over the conversation which creates a consistent experience for the participant.

If you’re interested in creating your own backchannel, here are some tips to make the process work smoothly:

  • Use the chat tool that is already being used on the project. In most cases, we use a joint Slack workspace for the session backchannel but we’ve also used Microsoft Teams.
  • Create a dedicated channel like #moderated-sessions. Conversation in this channel should be limited to backchannel discussions during sessions. This keeps the communication consolidated and makes it easier for the moderator to stay focused during the session.
  • Keep communication limited. Channel participants should ask basic questions that are easy to consume quickly. Supplemental commentary and analysis should not take place in the dedicated channel.
  • Use emoji responses. The moderator can add a quick thumbs up to indicate that they’ve seen a question.

Introducing backchannels for communication during remote moderated sessions has been a beneficial change to our research process. It not only provides an easy way for clients to stay engaged during the data collection process but also increases the moderator’s ability to focus on the most important topics and to ask the most useful follow-up questions.





Markdown Comes Alive! Part 1, Basic Editor

In my last post, I covered what LiveView is at a high level. In this series, we’re going to dive deeper and implement a LiveView powered Markdown editor called Frampton. This series assumes you have some familiarity with Phoenix and Elixir, including having them set up locally. Check out Elizabeth’s three-part series on getting started with Phoenix for a refresher.

This series has a companion repository published on GitHub. Get started by cloning it down and switching to the starter branch. You can see the completed application on master. Our goal today is to make a Markdown editor, which allows a user to enter Markdown text on a page and see it rendered as HTML next to it in real-time. We’ll make use of LiveView for the interaction and the Earmark package for rendering Markdown. The starter branch provides some styles and installs LiveView.

Rendering Markdown

Let’s set aside the LiveView portion and start with our data structures and the functions that operate on them. To begin, a Post will have a body, which holds the rendered HTML string, and a title. A string of Markdown can be turned into HTML by calling Post.render(post, markdown). I think that just about covers it!

First, let’s define our struct in lib/frampton/post.ex:

defmodule Frampton.Post do
  defstruct body: "", title: ""

  def render(%__MODULE__{} = post, markdown) do
    # Fill me in!
  end
end

Now the failing test (in test/frampton/post_test.exs):

describe "render/2" do
  test "returns our post with the body set" do
    markdown = "# Hello world!"                                                                                                                 
    assert Post.render(%Post{}, markdown) == {:ok, %Post{body: "<h1>Hello World</h1>
"}}
  end
end

Our render function will just be a wrapper around Earmark.as_html!/2 that puts the result into the body of the post. Add {:earmark, "~> 1.4.3"} to your deps in mix.exs, run mix deps.get, and fill out the render function:

def render(%__MODULE__{} = post, markdown) do
  html = Earmark.as_html!(markdown)
  {:ok, Map.put(post, :body, html)}
end

Our test should now pass, and we can render posts! [Note: we’re using the as_html! function, which prints error messages instead of passing them back to the caller. A smarter version of this would handle any errors and show them to the user. I leave that as an exercise for the reader…] Time to play around with this in an IEx prompt (run iex -S mix in your terminal):

iex(1)> alias Frampton.Post
Frampton.Post
iex(2)> post = %Post{}
%Frampton.Post{body: "", title: ""}
iex(3)> {:ok, updated_post} = Post.render(post, "# Hello world!")
{:ok, %Frampton.Post{body: "<h1>Hello world!</h1>\n", title: ""}}
iex(4)> updated_post
%Frampton.Post{body: "<h1>Hello world!</h1>\n", title: ""}

Great! That’s exactly what we’d expect. You can find the final code for this in the render_post branch.

LiveView Editor

Now for the fun part: Editing this live!

First, we’ll need a route for the editor to live at: /editor sounds good to me. LiveViews can be rendered from a controller, or directly in the router. We don’t have any initial state, so let's go straight from the router.

First, let's put up a minimal test. In test/frampton_web/live/editor_live_test.exs:

defmodule FramptonWeb.EditorLiveTest do
  use FramptonWeb.ConnCase
  import Phoenix.LiveViewTest

  test "the editor renders" do
    conn = get(build_conn(), "/editor")
    assert html_response(conn, 200) =~ ~s(data-test="editor")
  end
end

This test doesn’t do much yet, but notice that it isn’t LiveView-specific. Our first render is just the same as any other controller test we’d write. The page’s content is there right from the beginning, without the need to parse JavaScript or make API calls back to the server. Nice.

To make that test pass, add a route to lib/frampton_web/router.ex. First, we import the LiveView code, then we render our Editor:

import Phoenix.LiveView.Router
# … Code skipped ...
# Inside of `scope "/"`:
live "/editor", EditorLive

Now place a minimal EditorLive module, in lib/frampton_web/live/editor_live.ex:

defmodule FramptonWeb.EditorLive do
  use Phoenix.LiveView

  def render(assigns) do
    ~L"""
      <div data-test="editor">
        <h1>Hello world!</h1>
      </div>
      """
  end

  def mount(_params, _session, socket) do
    {:ok, socket}
  end
end

And we have a passing test suite! The ~L sigil designates that LiveView should track changes to the content inside. We could keep all of our markup in this render/1 function, but let’s break it out into its own template for demonstration purposes.

Move the contents of render into lib/frampton_web/templates/editor/show.html.leex, and replace EditorLive.render/1 with this one-liner: def render(assigns), do: FramptonWeb.EditorView.render("show.html", assigns). And finally, make an EditorView module in lib/frampton_web/views/editor_view.ex:

defmodule FramptonWeb.EditorView do
  use FramptonWeb, :view
  import Phoenix.LiveView
end

Our test should now be passing, and we’ve got a nicely separated out template, view and “live” server. We can keep markup in the template, helper functions in the view, and reactive code on the server. Now let’s move forward to actually render some posts!

Handling User Input

We’ve got four tasks to accomplish before we are done:

  1. Take markdown input from the textarea
  2. Send that input to the LiveServer
  3. Turn that raw markdown into HTML
  4. Return the rendered HTML to the page.

Event binding

To start with, we need to annotate our textarea with an event binding. This tells the liveview.js framework to forward DOM events to the server, using our liveview channel. Open up lib/frampton_web/templates/editor/show.html.leex and annotate our textarea:

<textarea phx-keyup="render_post"></textarea>

This names the event (render_post) and sends it on each keyup. Let’s crack open our web inspector and look at the web socket traffic. Using Chrome, open the developer tools, navigate to the network tab and click WS. In development you’ll see two socket connections: one is Phoenix LiveReload, which polls your filesystem and reloads pages appropriately. The second one is our LiveView connection. If you let it sit for a while, you’ll see that it's emitting a “heartbeat” call. If your server is running, you’ll see that it responds with an “ok” message. This lets LiveView clients know when they've lost connection to the server and respond appropriately.
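
If you peek at an individual frame, you can see the message format. As a rough illustration (the exact wire format depends on your Phoenix version; recent channel serializers send JSON arrays of the shape [join_ref, ref, topic, event, payload]), a heartbeat exchange looks something like:

[null,"4","phoenix","heartbeat",{}]
[null,"4","phoenix","phx_reply",{"response":{},"status":"ok"}]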

Now, type some text and watch as it sends down each keystroke. However, you’ll also notice that the server responds with a “phx_error” message and wipes out our entered text. That's because our server doesn’t know how to handle the event yet and is throwing an error. Let's fix that next.

Event handling

We’ll catch the event in our EditorLive module. The LiveView behavior defines a handle_event/3 callback that we need to implement. Open up lib/frampton_web/live/editor_live.ex and key in a basic implementation that lets us catch events:

def handle_event("render_post", params, socket) do
  IO.inspect(params)

  {:noreply, socket}
end

The first argument is the name we gave to our event in the template, the second is the data from that event, and finally the socket we’re currently talking through. Give it a try, typing in a few characters. Look at your running server and you should see a stream of events that look something like this:
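
Something like this, roughly (illustrative only — the exact params map varies by LiveView version, but for a phx-keyup binding it includes the key pressed and the input’s current value):

%{"key" => "H", "value" => "# H"}
%{"key" => "e", "value" => "# He"}
%{"key" => "l", "value" => "# Hel"}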

There’s our keystrokes! Next, let’s pull out that value and use it to render HTML.

Rendering Markdown

Lets adjust our handle_event to pattern match out the value of the textarea:

def handle_event("render_post", %{"value" => raw}, socket) do

Now that we’ve got the raw markdown string, turning it into HTML is easy thanks to the work we did earlier in our Post module. Fill out the body of the function like this:

{:ok, post} = Post.render(%Post{}, raw)
IO.inspect(post)

If you type into the textarea you should see output that looks something like this:
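
Roughly the following, matching what we saw in IEx earlier:

%Frampton.Post{body: "<h1>Hello world!</h1>\n", title: ""}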

Perfect! Lastly, it’s time to send that rendered html back to the page.

Returning HTML to the page

In a LiveView template, we can identify bits of dynamic data that will change over time. When they change, LiveView will compare what has changed and send over a diff. In our case, the dynamic content is the post body.

Open up show.html.leex again and modify it like so:

<div class="rendered-output">
  <%= @post.body %>
</div>

Refresh the page and see:
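
An error, roughly like this (the exact wording depends on your Phoenix/LiveView version):

** (ArgumentError) assign @post not available in eex template.
Available assigns: [:socket]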

Whoops!

The @post variable will only be available after we put it into the socket’s assigns. Let’s initialize it with a blank post. Open editor_live.ex and modify our mount/3 function:

def mount(_params, _session, socket) do
  post = %Post{}
  {:ok, assign(socket, post: post)}
end

In the future, we could retrieve this from some kind of storage, but for now, let's just create a new one each time the page refreshes. Finally, we need to update the Post struct with user input. Update our event handler like this:

def handle_event("render_post", %{"value" => raw}, %{assigns: %{post: post}} = socket) do
  {:ok, post} = Post.render(post, raw)
  {:noreply, assign(socket, post: post)}
end

Let's load up http://localhost:4000/editor and see it in action.

Nope, that's not quite right! Phoenix won’t render this as HTML because it’s unsafe user input. We can get around this (very good and useful) security feature by wrapping our content in a raw/1 call. We don’t have a database and user processes are isolated from each other by Elixir. The worst thing a malicious user could do would be crash their own session, which doesn’t bother me one bit.
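
With that caveat noted, the fix itself is a one-line template change in show.html.leex, wrapping the body in raw/1 (from Phoenix.HTML, which Phoenix view modules typically import):

<div class="rendered-output">
  <%= raw @post.body %>
</div>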

Check the edit_posts branch for the final version.

Conclusion

That’s a good place to stop for today. We’ve accomplished a lot! We’ve got a dynamically rendering editor that takes user input, processes it, and updates the page. And we haven’t written any JavaScript, which means there’s no JavaScript for us to maintain or update. Our server code is built on the rock-solid foundation of the BEAM virtual machine, giving us a great deal of confidence in its reliability and resilience.

In the next post, we’ll tackle making a shared editor, allowing multiple users to edit the same post. This project will highlight Elixir’s concurrency capabilities and demonstrate how LiveView builds on them to enable some incredible user experiences.



  • Code
  • Back-end Engineering

es

Committed to the wrong branch? -, @{upstream}, and @{-1} to the rescue

I get into this situation sometimes. Maybe you do too. I merge feature work into a branch used to collect features, and then continue development, but on that branch instead of back on the feature branch:

git checkout feature
# ... bunch of feature commits ...
git push
git checkout qa-environment
git merge --no-ff --no-edit feature
git push
# deploy qa-environment to the QA remote environment
# ... more feature commits ...
# oh. I'm not committing in the feature branch like I should be

and have to move those commits to the feature branch they belong in and take them out of the throwaway accumulator branch:

git checkout feature
git cherry-pick origin/qa-environment..qa-environment
git push
git checkout qa-environment
git reset --hard origin/qa-environment
git merge --no-ff --no-edit feature
git checkout feature
# ready for more feature commits

Maybe you prefer

git branch -D qa-environment
git checkout qa-environment

over

git checkout qa-environment
git reset --hard origin/qa-environment

Either way, that works. But it'd be nicer if we didn't have to type, or even remember, the branches' names and the remote's name. They're what's keeping this from being a context-independent string of commands you could run any time this mistake happens. That's what we're going to solve here.

Shorthands for longevity

I like to use all possible natively supported shorthands. There are two broad motivations for that.

  1. Fingers have a limited number of movements in them. Save as many as possible for later in life.
  2. Current research suggests that multitasking has detrimental effects on memory, and development tends to be very heavy on multitasking. Maybe relieving some of the pressure on quick-access short-term memory (like knowing all the relevant branch names) adds up to a healthier memory down the line.

First up for our scenario: the - shorthand, which refers to the previously checked out branch. There are a few places we can't use it, but it helps a lot:

Bash
# USING -

git checkout feature
# hack hack hack
git push
git checkout qa-environment
git merge --no-ff --no-edit -        # 🎉
git push
# hack hack hack
# whoops
git checkout -        # now on feature 🎉
git cherry-pick origin/qa-environment..qa-environment
git push
git checkout - # now on qa-environment 🎉
git reset --hard origin/qa-environment
git merge --no-ff --no-edit -        # 🎉
git checkout -                       # 🎉
# on feature and ready for more feature commits
Bash
# ORIGINAL

git checkout feature
# hack hack hack
git push
git checkout qa-environment
git merge --no-ff --no-edit feature
git push
# hack hack hack
# whoops
git checkout feature
git cherry-pick origin/qa-environment..qa-environment
git push
git checkout qa-environment
git reset --hard origin/qa-environment
git merge --no-ff --no-edit feature
git checkout feature
# ready for more feature commits

We cannot use - when cherry-picking a range:

> git cherry-pick origin/-..-
fatal: bad revision 'origin/-..-'

> git cherry-pick origin/qa-environment..-
fatal: bad revision 'origin/qa-environment..-'

and even if we could, we'd still have to provide the remote's name (here, origin).

That shorthand doesn't apply to the later reset --hard command, and we cannot use it in the branch -D && checkout approach either. branch -D does not support the - shorthand, and once the branch is deleted, checkout can't reach it with -:

# assuming that branch-a has an upstream origin/branch-a
> git checkout branch-a
> git checkout branch-b
> git checkout -
> git branch -D -
error: branch '-' not found.
> git branch -D branch-a
> git checkout -
error: pathspec '-' did not match any file(s) known to git

So we have to remember the remote's name (we know it's origin because we are devoting memory space to knowing that this isn't one of those times it's something else), the remote-tracking branch's name, and the local branch's name, and we're typing them all out. No good! Let's figure out some shorthands.

@{-<n>} is hard to say but easy to fall in love with

We can do a little better by using @{-<n>} (you'll also sometimes see it written in the older @{-N} form). It is a special construct for referring to the nth previously checked out ref.

> git checkout branch-a
> git checkout branch-b
> git rev-parse --abbrev-ref @{-1} # the name of the previously checked out branch
branch-a
> git checkout branch-c
> git rev-parse --abbrev-ref @{-2} # the name of the branch checked out before the previous one
branch-a

Back in our scenario, we're on qa-environment, we switch to feature, and then want to refer to qa-environment. That's @{-1}! So instead of

git cherry-pick origin/qa-environment..qa-environment

We can do

git cherry-pick origin/qa-environment..@{-1}

Here's where we are (🎉 marks wins from -, 💥 marks the win from @{-1}):

Bash
# USING - AND @{-1}

git checkout feature
# hack hack hack
git push
git checkout qa-environment
git merge --no-ff --no-edit -                # 🎉
git push
# hack hack hack
# whoops
git checkout -                               # 🎉
git cherry-pick origin/qa-environment..@{-1} # 💥
git push
git checkout -                               # 🎉
git reset --hard origin/qa-environment
git merge --no-ff --no-edit -                # 🎉
git checkout -                               # 🎉
# ready for more feature commits
Bash
# ORIGINAL

git checkout feature
# hack hack hack
git push
git checkout qa-environment
git merge --no-ff --no-edit feature
git push
# hack hack hack
# whoops
git checkout feature
git cherry-pick origin/qa-environment..qa-environment
git push
git checkout qa-environment
git reset --hard origin/qa-environment
git merge --no-ff --no-edit feature
git checkout feature
# ready for more feature commits

One down, two to go: we're still relying on memory for the remote's name and the remote branch's name and we're still typing both out in full. Can we replace those with generic shorthands?

Because @{-1} is the ref itself, not the ref's name, we can't do

> git cherry-pick origin/@{-1}..@{-1}
origin/@{-1}
fatal: ambiguous argument 'origin/@{-1}': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'

because there is no branch origin/@{-1}. For the same reason, @{-1} does not give us a generalized shorthand for the scenario's later git reset --hard origin/qa-environment command.

But good news!

Do @{u} @{push}

@{upstream}, or its shorthand @{u}, is the remote branch that would be pulled from if git pull were run. @{push} is the remote branch that would be pushed to if git push were run.

> git checkout branch-a
Switched to branch 'branch-a'
Your branch is ahead of 'origin/branch-a' by 3 commits.
  (use "git push" to publish your local commits)
> git reset --hard origin/branch-a
HEAD is now at <the SHA origin/branch-a is at>

we can

> git checkout branch-a
Switched to branch 'branch-a'
Your branch is ahead of 'origin/branch-a' by 3 commits.
  (use "git push" to publish your local commits)
> git reset --hard @{u}                                # <-- So Cool!
HEAD is now at <the SHA origin/branch-a is at>

Tacking either onto a branch name will give that branch's @{upstream} or @{push}. For example

git checkout branch-a@{u}

checks out the branch that branch-a pulls from.

In the common workflow where a branch pulls from and pushes to the same branch, @{upstream} and @{push} will be the same, leaving @{u} as preferable for its terseness. @{push} shines in triangular workflows where you pull from one remote and push to another (see the external links below).

Going back to our scenario, this gives us short, portable commands with a minimal human memory footprint. (🎉 marks wins from -, 💥 marks the win from @{-1}, 😎 marks the wins from @{u}.)

Bash
# USING - AND @{-1} AND @{u}

git checkout feature
# hack hack hack
git push
git checkout qa-environment
git merge --no-ff --no-edit -    # 🎉
git push
# hack hack hack
# whoops
git checkout -                   # 🎉
git cherry-pick @{-1}@{u}..@{-1} # 💥😎
git push
git checkout -                   # 🎉
git reset --hard @{u}            # 😎
git merge --no-ff --no-edit -    # 🎉
git checkout -                   # 🎉
# ready for more feature commits
Bash
# ORIGINAL

git checkout feature
# hack hack hack
git push
git checkout qa-environment
git merge --no-ff --no-edit feature
git push
# hack hack hack
# whoops
git checkout feature
git cherry-pick origin/qa-environment..qa-environment
git push
git checkout qa-environment
git reset --hard origin/qa-environment
git merge --no-ff --no-edit feature
git checkout feature
# ready for more feature commits

Make the things you repeat the easiest to do

Because these commands are generalized, we can run some series of them once, maybe

git checkout - && git reset --hard @{u} && git checkout -

or

git checkout - && git cherry-pick @{-1}@{u}..@{-1} && git checkout - && git reset --hard @{u} && git checkout -

and then those will be in the shell history, just waiting to be retrieved and run again the next time, whether with Ctrl-R incremental search or history substring searching bound to the up arrow or however your interactive shell is configured. Or make it an alias, or even better an abbreviation if your interactive shell supports them. Save the body wear and tear, give memory a break, and level up in Git.
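
For instance, here's a sketch of the alias route (the alias name fix-branch is just an illustration, not a convention from this workflow):

git config --global alias.fix-branch '!git checkout - && git cherry-pick @{-1}@{u}..@{-1} && git checkout - && git reset --hard @{u} && git checkout -'

After that, the whole recovery is a single git fix-branch, run from where the original sequence starts: on qa-environment, with feature as the previously checked out branch.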

And keep going

The GitHub blog has a good primer on triangular workflows and how they can polish your process of contributing to external projects.

The FreeBSD Wiki has a more in-depth article on triangular workflow process (though it doesn't know about @{push} and @{upstream}).

The construct @{-<n>} and the suffixes @{push} and @{upstream} are all part of the gitrevisions spec, which is worth reading in full.



    • Code
    • Front-end Engineering
    • Back-end Engineering

    es

    Setting New Project Managers Up for Success

    At Viget, we’ve brought on more than a few new Project Managers over the past couple of years, as we continue to grow. The awesome new people we’ve hired have ranged in their levels of experience, but some of them are earlier in their careers and need support from more experienced PMs to develop their skills and flourish.

    We have different levels of training and support for new PMs. These broadly fall into four categories:

    • Onboarding: Learning about Viget tools and processes
    • Shadowing: Learning by watching others
    • Pairing: Learning by doing collaboratively
    • Leading: Learning by doing solo

    Onboarding

    In addition to conducting intro sessions to each discipline at Viget, new Viget PMs go through a lengthy set of training sessions that are specific to the PM lab. These include intros to:

    • PM tools and resources
    • Project processes
    • Project types
    • Project checklists
    • Project tasking
    • Project planning
    • Budgets, schedules, and resourcing
    • Retrospectives
    • Working with remote teams
    • Project kickoffs
    • Thinking about development
    • Github and development workflow
    • Tickets, definition, and documentation
    • QA testing
    • Account management

    Shadowing

    After PMs complete the onboarding process, they start shadowing other PMs’ projects to get exposure to the different types of projects we run (since the variety is large). We cater length and depth of shadowing based on how much experience a PM has coming in. We also try to expose PMs to multiple project managers, so they can see how PM style differs person-to-person.

    We’ve found that it can be most effective to have PMs shadow activities that are more difficult to teach in theory, such as shadowing a PM having a difficult conversation with a client, or shadowing a front-end build-out demo to see how the PM positions the meeting and our process to the client. More straightforward tasks like setting up a Harvest project could be done via pairing, since it’s easy to get the hang of with a little guidance.

    Pairing

    While shadowing is certainly helpful, we try to get PMs into pairing mode pretty quickly, since we’ve found that most folks learn better by doing than by watching. Sometimes this means having a new PM set up an invoice or budget sheet for a client while a more experienced PM sits next to them, talking them through the process. We’ve found that having a newer PM lead straightforward activities with guidance tends to be more effective than having them merely watch the more experienced PM do that activity.

    Another tactic we take is to have both PMs complete a task independently, and then meet and talk through their work, with the more experienced PM giving the less experienced PM feedback. That helps the newer PM think through a task on their own, and gain experience, but still have the chance to see how someone else would have approached the task and get meaningful feedback.

    Leading

    Once new PMs are ready to be in the driver’s seat, they are staffed as the lead on projects. The timing of when someone shifts into a lead role depends on how much prior experience that person has, as well as what types of projects are actively ready to be worked on.

    Most early-career project managers have a behind-the-scenes project mentor (another PM) on at least their first couple projects, so they have a dedicated person to ask questions and get advice from who also has more detailed context than that person’s manager would. For example, mentors often shadow key client and internal meetings and have more frequent check-ins with mentees. This might be less necessary at a company where all the projects are fairly similar, but at Viget, our projects vary widely in scale and services provided, as well as client needs. Because of this, there’s no “one size fits all” process and we have a significant amount of customization per project, which can be daunting to new PMs who are still getting the hang of things.

    For these mentorship pairings, we use a mentorship plan document (template here) to help the mentor and mentee work together to define goals, mentorship focuses, and touchpoints. Sometimes the mentee’s manager will take a first stab at filling out the plan; other times, the mentor will start that process.

    Management Touchpoints

    Along the way, we make sure new PMs have touchpoints with their managers to get the level of support they need to grow and succeed. Managers have regular 1:1s with PMs, referred to as “project 1:1s”, that the managee uses to talk through and get advice on challenges or questions related to the projects they’re working on (though really, they can be used for whatever topics are on the managee’s mind). PMs typically have 1:1s with their managers daily during the first week, two to three times per week for the first month or so, then once per week, and eventually bi-weekly after the first six months.

    In addition to project 1:1s, we also have monthly 1:1s that are bigger-picture, focused on goal-setting and progress, project feedback from that person’s peers, reflection on how satisfied and fulfilled they’re feeling in their role, and talking through project/industry interests, which informs what projects we should advocate for them to be staffed on. We have a progress log template that we customize per PM to keep track of goals and progress.

    We try to foster a supportive environment that encourages growth, feedback, and experiential learning, but also that lets folks have the autonomy to get in the driver’s seat as soon as they’re comfortable. Interested in learning more about what it’s like to work at Viget? Check out our open positions here.




    es

    Our WFH Best Practices

    Our first remote office opened in 2007 when a designer and a developer left our HQ office and moved to Durham. Ever since, we've been fine-tuning our ability to collaborate across locations. Today, we have team members across the country in our four offices, and we have fully remote employees in Charleston, Kansas City, New York City, Dallas, and Charlottesville.

    Because of the coronavirus outbreak, a lot of people recently started working from their homes across the world, the country, and Viget. We wanted to share some of our best practices for being great teammates and doing great work, regardless of locale, and we’d love to hear yours in the comments.

    Communicate Often and Write It Down

    We want every person at Viget to be informed and connected. We do this in a few ways. We have a company Knowledge Base, which contains critical information including HR policies, office processes, brand guidelines, project resources, etc. We also have a well-organized Google Drive that everyone can access.

    My favorite communication tool we use, however, is our Internal Lab Report. Every week, we create a Google Doc with HR updates, birthdays, upcoming events we’re attending, relevant publicity we or a client received, and timely updates on projects, sales, and recruiting. This report allows the entire team to have the same information, regardless of PTO schedules, and it provides a record that can be referenced weeks, months, or years later.

    I have also found our Slack habits really helpful. We try to make our availability easily known, mostly via a passive Slack status. We each update our status daily, sometimes multiple times, so people can see if we’re working from home, out of the office for an appointment, in a meeting, or offline for a personal phone call. We also have a few Slack channels with very specific purposes: announcing PTO, sharing important announcements, and, recently, tracking the evolving coronavirus situation.

    My work from home station.

    Figure Out Your Boundaries

    This looks different for everyone and can be an ever-changing target. Understanding your boundaries requires you to be honest with yourself – Are you easily distracted? Can you successfully work in pajama pants? Will your dog actually allow you to get work done? Does working from the couch result in good work, or do you need a designated work spot? For some, working from home requires setting boundaries to ensure the work gets done. For others, working from home requires setting start and stop times to ensure you don’t overwork yourself.

    Viget has a flexible work policy, so many of us work from home fairly often and have gotten our routines set up. As such, we have written about this before! Check out Trevor’s article about working remotely.

    Show Your Face

    When I first started at Viget, I’d never worked anywhere that used a Google Hangout for nearly every meeting. At first, I was tempted to call into meetings and leave the camera off because I found it exposing. Now, I can’t imagine not using it, and I’ve even embraced it in my personal life with friends and family. I realized the value in face-to-face conversations even in virtual form, the ability to see body language, and the connection you establish when you see each other's faces — even if your hair isn't perfect or you haven't arranged your plants just-so in the view behind you. Whenever possible, use your camera during a meeting. It increases trust, communication, and in my personal-not-backed-by-science-opinion, lightness, which frankly, I think we can all use a bit more of right now.

    Here's a screen shot from our Saint Patrick's Day Happy Hour.

    Create Shared Experiences

    As a company with project teams often distributed across our four locations, cross-office experiences are vital to our culture, and we’ve spent years working to keep our remote offices in sync. A few of our ongoing group activities include a monthly virtual Book Club, our weekly full-team Free Lunch Friday tradition, Donut for Slack, and, of course, our Pointless Weekends.

    The current global health crisis now requires almost all of the company to work remote, so we’ve gotten creative with our attempts to increase non-project time together, in order to keep up the vibes we’ve worked hard to create.

    What we’ve recently started:

      • Last Weekend this Morning - Monday mornings, we have an optional virtual coffee, where anyone who’d like to chat can join and share the latest gardening lesson or bingeable tv show. It lets us start our week off as we would when we’re all in the office — saying hello to each other.
      • Virtual Happy Hours - We are a company that likes to socialize, and a bit of distance doesn’t stop us. This week, we set up an after-hours Happy Hour for St. Patrick’s Day.
      • Daily Lunch Table- If you’ve ever visited our HQ office in Falls Church, you’ll notice our large kitchen table. We have an informal tradition of gathering around noon to eat together, whether it’s just a couple folks or the whole team. We now do this lunch virtually. So far, we’re mostly taking turns discussing who is eating what, and of course, sharing said recipes.

    I crowdsourced some ideas from the Viget team, and here are some noteworthy takeaways:

    "In remote meetings, minimize all your other windows and be fully present. It’s easy to allow your attention to accidentally drift if you see a new Slack channel light up, especially if you’re in a larger meeting. Suddenly, you find yourself multitasking. Treat the meeting as if you were there in person: unless you’re taking notes, minimize your other tabs, and give the conversation your full attention."
    - Paul Koch

    “I try to reach out to more folks I don’t consistently work with. Since there’s less interaction in general, I want to be more intentional about staying connected.”
    - Laura Sweltz

    “Good habits are hard to form and bad habits are hard to break, and it’s often hard to find the right time to make a change. Most of us are experiencing a disruption to our usual behaviors right now, but that doesn’t have to be entirely bad. Be deliberate now and when this is over, we might all end up with some new work habits worth keeping.”
    - Emily Bloom

    “I’ve found it helpful to create a physical space similar to the one I had at work. While this isn’t exactly possible, small things like setting up a laptop stand and second screen make it so I’m less likely to get distracted and wander to the couch or kitchen (aka the snack danger zone).”
    - Aubrey Lear

    “It’s easy to get stuck in one spot all day, so be proactive about moving around, or creating excuses to do so. Whether that’s making yourself a cup of coffee, eating lunch away from your computer, or going for a quick walk outside for some fresh air. This will help reduce the risk of going stir crazy.”
    - Zach Robbins

    True to Viget form, our remote work is all about “Progress, Not Perfection.” While remote collaboration is ingrained in our company, we’re looking for opportunities to fine-tune our approach and improve our habits.

    We’d love to hear from you: What are your best practices? Lessons learned?




    es

    Pursuing A Professional Certification In Scrum

    Professional certifications have become increasingly popular in this age of career switchers and the freelance gig economy. A certification can be a useful way to advance your skill set quickly or make your resume stand out, which can be especially important for those trying to break into a new industry or attract business while self-employed. Whatever your reason may be for pursuing a professional certificate, there is one question only you can answer for yourself: is it worth it?

    Finding first-hand experiences from professionals with similar career goals and passions was the most helpful research I used to answer that question for myself. So, here’s mine: why I decided to get Scrum certified, how I evaluated my options, and whether it was really worth it.

    A shift in mindset

    My background originates in brand strategy, where it’s typical for work to follow a predictable order, each step informing the next. This made linear techniques – like waterfall timelines, completing one phase of work in its entirety before moving onto the next, and documenting granular tasks weeks in advance – helpful and easy to implement. When I made the move to more digitally focused work, tasks followed a much looser set of ‘typical’ milestones. While the general outline remained the same (strategy, design, development, launch), there was a lot more overlap in how tasks informed each other, and they would keep informing and re-informing each other as an iterative workflow encourages.

    Trying to fit a very fluid process into my very stiff linear approach to project planning didn’t work so well. I didn’t have the right strategies to manage risks productively without feeling like the whole project was off track; in the habit of accounting for granular details all the time, I struggled to lean on others to help define what we should work on and when, and to be okay if that changed once, or twice, or three times. Everything I learned about the process of product development came from learning on the job and making a ton of mistakes, and I knew I wanted to get better.

    Photo by Christin Hume on Unsplash

    I was fortunate enough to work with a group of developers who were looking to make a change, too. These ‘agile’ enthusiasts were desperately looking for ways to infuse our approach to product work with agile-minded principles (the broad definition of ‘agile’ comes from ‘The Agile Manifesto’, which has influenced frameworks for organizing people and information, often applied in product development). This applied not only to how I worked with them, but to how they worked with each other, and to the way we all onboarded clients to these new expectations. This was a huge eye-opener for me. Soon enough, I started applying these agile strategies to my day-to-day: running stand-ups, setting up backlogs, and reorganizing the way I thought about work output. It’s from this experience that I decided it might be worth learning these principles more formally.

    The choice to get certified

    There is a lot of literature out there about agile methodologies and a lot to be learned from casual research. This benefitted me for a while until I started to work on more complicated projects, or projects with more ambitious feature requests. My decision to ultimately pursue a formal agile certification really came down to three things:

    1. An increased use of agile methods across my team. Within my day-to-day I would encounter more team members who were familiar with these tactics and wanted to use them to structure the projects they worked on.
    2. The need for a clear definition of what processes to follow. I needed to grasp a real understanding of how to implement agile processes and stay consistent with using them to be an effective champion of these principles.
    3. Being able to diversify my experience. Finding ways to differentiate my resume from others with similar experience would be an added benefit to getting a certification. If nothing else, it would demonstrate that I’m curious-minded and proactive about my career.

    To achieve these things, I gravitated towards a more foundational education in a specific agile-methodology. This made Scrum the most logical choice given it’s the basis for many of the agile strategies out there and its dominance in the field.

    Evaluating all the options

    For Scrum education and certification, there are really two major players to consider.

    1. Scrum Alliance - Probably the most well-known Scrum organization, Scrum Alliance is highly recognizable and does a lot to further the broader understanding of Scrum as a practice.
    2. Scrum.org - Led by the original co-founder of Scrum, Ken Schwaber, Scrum.org is well-respected and touted for its authority in the industry.

    Each has their own approach to teaching and awarding certifications as well as differences in price point and course style that are important to be aware of.

    SCRUM ALLIANCE

    Pros

    • Strong name recognition and leaders in the Scrum field
    • Offers both in-person and online courses
    • Hosts in-person events, webinars, and global conferences
    • Provides robust amounts of educational resources for its members
    • Has specialization tracks for folks looking to apply Scrum to their specific discipline
    • Members are required to keep their skills up to date by earning educational credits throughout the year to retain their certification
    • Consistent information across all course administrators, ensuring you'll be set up to succeed when taking your certification test

    Cons

    • High cost creates a significant barrier to entry (we’re talking in the thousands of dollars here)
    • Courses are required to take the certification test
    • Certification expires after two years, requiring additional investment in time and/or money to retain credentials
    • Difficult to find sample course material ahead of committing to a course
    • Courses are several days long which may mean taking time away from a day job to complete them

    SCRUM.ORG

    Pros

    • Strong clout due to its founder, Ken Schwaber, who is the originator of Scrum
    • Offers in-person classes and self-paced options
    • Hosts in-person events and meetups around the world
    • Provides free resources and materials to the public, including practice tests
    • Has specialization tracks for folks looking to apply Scrum to their specific discipline
    • Minimum score on certification test required to pass; certification lasts for life
    • Lower cost for certification when compared to peers

    Cons

    • Much lesser known to the general public, as compared to its counterpart
    • Less sophisticated educational resources (mostly confined to PDFs or online forums) making digesting the material challenging
    • Practice tests are slightly out of date making them less effective as a study tool
    • Self-paced education is not structured and therefore can’t ensure you’re learning everything you need to know for the test
    • Lack of active and engaging community will leave something to be desired

    Before coming to a decision, it was helpful to me to weigh these pros and cons against a set of criteria. Here’s a helpful scorecard I used to compare the two institutions.

    [Scorecard comparing Scrum Alliance and Scrum.org across ten criteria: Affordability, Rigor, Reputation, Recognition, Community, Access, Flexibility, Specialization, Requirements, and Longevity.]

    The four areas that were most important to me were:

    • Affordability - I’d be self-funding this certificate so the investment of cost would need to be manageable.
    • Self-paced - Not having a lot of time to devote in one sitting, the ability to chip away at coursework was appealing to me.
    • Reputation - Having a certificate backed by a well-respected institution was important to me if I was going to put in the time to achieve this credential.
    • Access - Because I wanted to be a champion for this framework for others in my organization, having access to resources and materials would help me do that more effectively.

    Ultimately, I decided on a Professional Scrum Master certification from Scrum.org! The price and flexibility of learning course content were most important to me. I found a ton of free materials on Scrum.org that I could study myself, and their practice tests gave me a good idea of how well I was progressing before I committed to the cost of actually taking the test. And the pedigree of the certification felt comparable to that of Scrum Alliance, especially considering that the founder of Scrum himself ran the organization.

    Putting a certificate to good use

    I don’t work in a formal Agile company, and not everyone I work with knows the ins and outs of Scrum. I didn’t use my certification to leverage a career change or new job title. So after all that time, money, and energy, was it worth it?

    I think so. I feel like I use my certification every day and employ many of the principles of Scrum in my day-to-day management of projects and people.

    • Self-organizing teams are really important for fostering trust and collaboration among project members. This means leaning on each other’s past experiences and lessons learned to inform our own approach to work. It also means taking a step back as a project manager to recognize the strengths on your team and trust their lead.
    • Approaching things in bite size pieces is also a best practice I use every day. Even when there isn't a mandated sprint rhythm, breaking things down into effort level, goals, and requirements is an excellent way to approach work confidently and avoid getting too overwhelmed.
    • Retrospectives and stand ups are also absolute musts for Scrum practices, and these can be modified to work for companies and project teams of all shapes and sizes. Keeping a practice of collective communication and reflection will keep a team humming and provides a safe space to vent and improve.

    Photo by Gautam Lakum on Unsplash

    Parting advice

    I think furthering your understanding of industry standards and keeping yourself open to new ways of working will always benefit you as a professional. Professional certifications are readily available and may be more relevant than ever.

    If you’re on this path, good luck! And here are some things to consider:

    • Do your research – With so many educational institutions out there, you can definitely find the right one for you, with the level of rigor you’re looking for.
    • Look for company credits or incentives – some companies cover part or all of the cost for continuing education.
    • Get started ASAP – You don’t need a full certification to start applying small tactics to your workflows. Implementing learnings gradually will help you determine if it’s really something you want to pursue more formally.




    es

    Unsolved Zoom Mysteries: Why We Have to Say “You’re Muted” So Much

    Video conference tools are an indispensable part of the Plague Times. Google Meet, Microsoft Teams, Zoom, and their compatriots are keeping us close and connected in a physically distanced world.

    As tech-savvy folks with years of cross-office collaboration, we’ve laughed at the sketches and memes about vidconf mishaps. We practice good Zoomiquette, including muting ourselves when we’re not talking.

    Yet even we can’t escape one vidconf pitfall. (There but for the grace of Zoom go I.) On nearly every vidconf, someone starts to talk, and then someone else says: “Oop, you’re muted.” And, inevitably: “Oop, you’re still muted.”

    That’s right: we’re trying to follow Zoomiquette by muting, but then we forget or struggle to unmute when we do want to talk.

    In this post, I’ll share my theories for why the You’re Muted Problems are so pervasive, using Google Meet, Microsoft Teams, and Zoom as examples. Spoiler alert: While I hope this will help you be more mindful of the problem, I can’t offer a good solution. It still happens to me. All. The. Time.

    Skip the why and go straight to the vidconf app keyboard shortcuts you should memorize right now.

    Why we don't realize we’re muted before talking

    Why does this keep happening?!?

    Simply put: UX and design decisions make it harder to remember that you’re muted before you start to talk.

    Here’s a common scenario: You haven’t talked for a bit, so you haven’t interacted with the Zoom screen for a few seconds. Then you start to talk — and that’s when someone tells you, “You’re muted.”

    We forget so easily in these scenarios because when our mouse has been idle for a few seconds, the apps hide or downplay the UI elements that tell us we’re muted.

    Zoom and Teams are the worst offenders:

    • Zoom hides both the toolbar with the main in-app controls (the big mute button) and the mute status indicator on your video pane thumbnail.
    • Teams hides the toolbar, and doesn't show a mute status indicator on your video thumbnail in the first place.

    Meet is only slightly better:

    • Meet hides the toolbar, and shows only a small mute status icon in your video thumbnail.

    Even when our mouse is active, the apps’ subtle approach to muted state UI can make it easy to forget that we’re muted:

    Teams is the worst offender:

    • The mute button is an icon rather than words.
    • The muted-state icon's styling could be confused with the unmuted state: Teams does not follow the common pattern of using red to denote the muted state.
    • The mute button is not differentiated in visual hierarchy from all the other controls.
    • As mentioned above, Teams never shows a secondary mute status indicator.

    Zoom is a bit better, but still makes it pretty easy to forget that you’re muted:

    • Pros:
      • Zoom is the only app to use words on the mute button, in this case to denote the button action (rather than the muted state).
      • The muted-state icon’s styling (red line) is less likely to be confused with the unmuted-state icon.
    • Cons:
      • The mute button’s placement (bottom left corner of the page) is easy to overlook.
      • The mute button is not differentiated in visual hierarchy from the other toolbar buttons — and Zoom has a lot of toolbar buttons, especially when logged in as host.
      • The secondary mute status indicator is a small icon.
      • The mute button’s muted-state icon is styled slightly differently from the secondary mute status indicator.
    • Potential Cons:
      • While words denote the button action, only an icon denotes the muted state.

    Meet is probably the clearest of the three apps, but still has pitfalls:

    • Pros:
      • The mute button is visually prominent in the UI: It’s clearly differentiated in the visual hierarchy relative to other controls (styled as a primary button); is a large button; and is placed closer to the center of the controls bar.
      • The muted-state icon’s styling (red fill) is less likely to be confused with the unmuted-state icon.
    • Cons:
      • Uses only an icon rather than words to denote the muted state.
    • Unrelated Con:
      • While the mute button is visually prominent, it’s also placed next to the hang-up button. So in Meet’s active state you might be less likely to forget you’re muted … but more likely to accidentally hang up when trying to unmute. 😬

    I know modern app design leans toward minimalism. There’s often good rationale to use icons rather than words, or to de-emphasize controls and indicators when not in use.

    But again: This happens on basically every call! Often multiple times per call!! And we’re supposed to be tech-savvy!!! Imagine what it’s like for the tens of millions of vidconf newbs.

    I would argue that “knowing your muted state” has turned out to be a major vidconf user need. At this point, it’s certainly worth rethinking UX patterns for.

    Why we keep unsuccessfully unmuting once we realize we’re muted

    So we can blame the You’re Muted Problem on UX and design. But what causes the You’re Still Muted Problem? Once we know we’re muted, why do we sometimes fail to unmute before talking again?

    This one is more complicated — and definitely more speculative. To start making sense of this scenario, here’s the sequence I’m guessing most commonly plays out (I did this a couple times before I became aware of it):

    The crucial part is when the person tries to unmute by pressing the keyboard Volume On/Off key.

    If that’s in fact what’s happening (again, this is just a hypothesis), I’m guessing they did that because when someone says “You’re muted” or “I can’t hear you,” our subconscious thought process is: “Oh, Audio is Off. Press the keyboard key that I usually press when I want to change Audio Off to Audio On.”

    There are two traps in this reflexive thought process:

    First, the keyboard volume keys control the speaker volume, not the microphone volume. (More specifically, they control the system sound output settings, rather than the system sound input settings or the vidconf app’s sound input settings.)

    In fact, there isn’t a keyboard key to control the microphone volume. You can’t unmute your mic via a dedicated keyboard key, the way that you can turn the speaker volume on/off via a keyboard key while watching a movie or listening to music.

    Second, I think we reflexively press the keyboard key anyway because our mental model of the keyboard audio keys is just: Audio. Not microphone vs. speaker.

    This fuzzy mental model makes sense: There’s only one set of keyboard keys related to audio, so why would I think to distinguish between microphone and speaker? 

    So my best guess is hardware design causes the You’re Still Muted Problem. After all, keyboard designs are from a pre-Zoom era, when the average person rarely used the computer’s microphone.

    If that is the cause, one potential solution is for hardware manufacturers to start including dedicated keys to control microphone volume.

    Video conference keyboard shortcuts you should memorize right now

    Let me know if you have other theories for the You’re Still Muted Problem!

    In the meantime, the best alternative is to learn all of the vidconf app keyboard shortcuts for muting/unmuting:

    • Meet
      • Mac: Command(⌘) + D
      • Windows: Control + D
    • Teams
      • Mac: Command(⌘) + Shift + M
      • Windows: Ctrl + Shift + M
    • Zoom
      • Mac: Command(⌘) + Shift + A
      • Windows: Alt + A
      • Hold Spacebar: Temporarily unmute

    Other vidconf apps not included in my analysis:

    • Cisco Webex Meetings
      • Mac: Ctrl + Alt + M
      • Windows: Ctrl + Shift + M
    • GoToMeeting

    Bonus protip from Jackson Fox: If you use multiple vidconf apps, pick a keyboard shortcut that you like and manually change each app’s mute/unmute shortcut to that. Then you only have to remember one shortcut!




    es

    So You've Written a Bad Design Take

    So you’ve just written a blog post or tweet about why wireframes are becoming obsolete, the dangers of “too accessible” design, or how a certain style of icon creates “cognitive fatigue.”

    Your post went viral, but now you’re getting ratioed by rude people on the Internet. That sucks! You were just trying to start a conversation and you probably didn’t deserve all that negativity (except for you, “too accessible” guy).

    Most likely, you made one of these common mistakes:

    1. You made generalizations about “design”

    You, a good user-centered designer, know that you are not your user. Nor are you every designer.

    First of all, let's acknowledge that there is no universal definition of design. Even if we narrow it down to software design, it’s still hard to make generalizations. Agency, in-house, product, startup, enterprise, non-profit, website, app, connected hardware, etc. – there are a lot of different work contexts and cultures for people with “designer” in their titles.

    "The Design Industry" is not a thing, but even if it were, you don't speak for it. Don’t assume that the kind of design work you do is the universal default.

    2. You didn’t share enough context

    There are many great design books and few great design blog posts. (There are, to my knowledge, no great design tweets, but I am open to your suggestions.) Writing about design is not well suited to short formats, because context plays such an important role and there’s always a lot of it to cover.

    Writing about your work should include as much context as you would include if you were presenting your portfolio for a job interview. What kind of organization did you work for? Who was your client and/or your stakeholders? What was the goal of the project? Your timeline? What was the makeup of your team? What were the notable business rules and constraints? How are you defining effectiveness and success?

    Without these kinds of details, it’s not possible for other designers to know if what you’ve written is credible or applicable to them.

    3. You were too certain

    A blog post doesn’t need to be a dissertation. It’s okay to share hunches and anecdotes, but give the necessary caveats. And if you're making claims about science, bruh, you gotta cite your sources.

    Be humble in your takes. Your account of what worked for you and why is more valuable to your peers than making sweeping claims and reheating the same old arguments. Be prepared to be told you’re wrong, and have the humility to realize that your perspective is just your perspective. Real conversations, like good design, are built on feedback and diverse viewpoints.

    Together, we can improve the discourse in our information ecosystems. Don't generalize. Give context. Be humble.




    es

    Global Gitignore Files Are Cool and So Are You

    Setting it up

    First, here's the config setup you need to even allow for such a radical concept.

    1. Define the global gitignore file as a global Git configuration:

      git config --global core.excludesfile ~/.gitignore
      

      If you're on OSX, this command will add the following config lines to your ~/.gitconfig file.

      [core]
        excludesfile = /Users/triplegirldad/.gitignore
      
    2. Load that ~/.gitignore file up with whatever you want. It probably doesn't exist as a file yet so you might have to create it first.

    Harnessing its incredible power

    There are only two lines in my global gitignore file and they are both fairly useful pretty much all the time.

    $ cat ~/.gitignore
    TODO.md
    playground
    

    This 2-line file means that no matter where I am, what project I'm working on, or where in the project I'm doing so, I have an easy space to stash notes, thoughts, in-progress ideas, spikes, etc.
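
    A quick way to confirm the global file is doing its job in any given repo is git check-ignore, a stock Git command that reports which rule matches a path (output sketched here using my file):

    $ git check-ignore -v TODO.md
    /Users/triplegirldad/.gitignore:1:TODO.md    TODO.md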

    TODO.md

    More often than not, I'm fiddling around with a TODO.md file. Something about writing markdown in your familiar text editor speaks to my soul. It's quick, it's easy, you have all the text editing tricks available to you, and it never does anything you wouldn't expect (looking at you auto-markdown-formatting editors). I use one or two # for headings, I use nested lists, and I ask for nothing more. Nothing more than more TODO.md files that is!

    In practice I tend to just have one TODO.md file per project, right at the top, ready to pull up in a few keystrokes. Which I do often. I pull this doc up if:

    • I'm in a meeting and I just said "oh yeah that's a small thing, I'll knock it out this afternoon".
    • I'm halfway through some feature development and realize I want to make a sweeping refactor elsewhere. Toss some thoughts in the doc, and then get back to the task at hand.
    • It's the end of the day and I have to switch my brain into "feed small children" mode, thus obliterating everything work-related from my short term memory. When I open things up the next day, I know exactly what the next thing to dive into was.
    • I'm preparing for a big enough refactor and I can't hold it all in my brain at once. What I'd give to have an interactive 3D playground for brain thoughts, but in the meantime a 2D text file isn't a terrible way to plan out dev work.

    playground

    Sometimes you need more than some human words in a markdown file to move an idea along. This is where my playground directory comes in. I can load this directory up with code that's related to a given project and keep it out of the git history. Because who doesn't like a place to play around?

    I find that this directory is more useful on long-running maintenance projects than on fast-moving greenfield ones. On the maintenance projects, I tend to find myself assembling a pile of scripts and experiments for various situations:

    • The client requests a one-time obscure data export. Whip up some CSV generation code and save that code in the playground directory.
    • The client requests a different obscure data export. Pull up the last time you did something vaguely similar and save yourself the startup time.
    • A batch of data needs to be imported just once. Might as well stash that on the chance that "just once" is actually "just a few times".
    • Kicking the tires on an integration with a third party service.

    Some of these playground files end up being useful more times than I can count (e.g. the ever-changing user_export.rb script). Some items get promoted into application code, which is always fun. But most files here serve their purpose and then wither away. And that's fine. It's a playground, anything goes.

    Wrapping up

    Having a personal space for project-specific notes and code has been helpful to me over the years as a developer on multiple projects. If you have your own organizational trick, or just want to brag about how you memorize everything without any markdown files, let me know in the comments below!




    es

    Australia’s global talent visa for individuals and businesses

    In late 2019 the Australian Government launched the Global Talent – Independent program which offers a streamlined, priority visa pathway for highly skilled and talented individuals to work and live permanently in Australia. There are two streams. The first is the Global Talent Independent Program (GTI) and the second is the Global Talent Employer Sponsored (GTES). […]





    es

    Reel 3.0: New Color Schemes, Portfolio Styles & More!

    We’re very excited to announce a new major update for our Reel theme. The new 3.0 version brings new color schemes and many improvements to the Portfolio Showcase widget. What’s new in 3.0? Five new color schemes plus two new theme styles, a full-width header option, and new styles & options for the Portfolio Showcase widget. 5 New Color Schemes: After long research […]




    es

    7 Best WordPress Membership Plugins to Generate Recurring Revenue

    Do you want to turn your WordPress blog into a membership site? Businesses around the globe use this model to sell their physical products or offer exclusive digital content, and many of them are super successful. CopyBlogger, a site with content marketing lessons, offers premium courses to members and they’re currently an eight-figure business. Meanwhile, the owner of the razor […]




    es

    Meet the Remote Workplaces of the WPZOOM Team

    The world has turned upside down lately, forcing the majority of people to work from their homes. For the WPZOOM team, working remotely is not something new. Some of our team members have been working remotely since they joined us; others have had the experience of both working from home and from the office (hello, Pavel). However, we’ve now gone completely remote, without […]




    es

    9 Things You Can Do To Your WordPress Website During Quarantine

    If you’d told us at WPZOOM six months ago about the current situation we find ourselves in, we wouldn’t have believed you. It’s all we can see if we turn on the TV, and it’s clear right now: humanity has taken a break. Worrying about loved ones, ensuring we stay safe, and, for heaven’s sake, staying inside. Staying inside […]




    es

    Presence 2.0: Beaver Builder Integration, Dark Skin & More!

    Great news for the users of Presence — our multipurpose theme. We have finally released the long-awaited 2.0 version, which features major changes and improvements. What’s new in Presence 2.0? Beaver Builder integration, a dark skin, a new demo (Organic Shop), new Typography and Colors options in the Customizer, and new templates in the Page Builder. Beaver Builder Integration: If you have followed recent […]




    es

    How to Create an Online Ordering Page for Restaurants with WooCommerce

    Until recently, it was normal for any restaurant to have a well-maintained website. Even so, it seems that for many restaurants this was difficult to achieve. In these difficult times, for many restaurant owners and other businesses in this field, owning just a simple website is no longer enough. If you still want to remain in business, you […]




    es

    20+ Best WordPress Video Themes for 2020

    If you’re a video producer or vlogger looking to set up your own video website to showcase your content, you’ll most likely need one that reflects your own unique style. You’ll need to think about the gallery options you’d want, color schemes, customizations, and the type of business you’re running. You should also consider the different technology you’ll need to […]





    es

    How to Foster Real-Time Client Engagement During Moderated Research

    When we conduct moderated research, like user interviews or usability tests, for our clients, we encourage them to observe as many sessions as possible. We find that when clients see us interview their users and hear responses as they happen, they learn more about their users’ needs and become more active participants in the process. One way we help clients feel engaged with the process during remote sessions is to establish a real-time communication backchannel that empowers clients to flag responses they’d like to dig into further and to share their ideas for follow-up questions.

    There are several benefits to establishing a communication backchannel for moderated sessions:

    • Everyone on the team, including both internal and client team members, can be actively involved throughout the data collection process rather than waiting to passively consume findings.
    • Team members can identify follow-up questions in real time, which allows the moderator to incorporate those questions during the current session rather than just considering them for future sessions.
    • Subject matter experts can identify more detailed and specific follow-up questions that the moderator may not think to ask.
    • Even though the whole team is engaged, a single moderator still maintains control over the conversation, which creates a consistent experience for the participant.

    If you’re interested in creating your own backchannel, here are some tips to make the process work smoothly:

    • Use the chat tool that is already being used on the project. In most cases, we use a joint Slack workspace for the session backchannel but we’ve also used Microsoft Teams.
    • Create a dedicated channel like #moderated-sessions. Conversation in this channel should be limited to backchannel discussions during sessions. This keeps the communication consolidated and makes it easier for the moderator to stay focused during the session.
    • Keep communication limited. Channel participants should ask basic questions that are easy to consume quickly. Supplemental commentary and analysis should not take place in the dedicated channel.
    • Use emoji responses. The moderator can add a quick thumbs up to indicate that they’ve seen a question.

    Introducing backchannels for communication during remote moderated sessions has been a beneficial change to our research process. It not only provides an easy way for clients to stay engaged during the data collection process but also increases the moderator’s ability to focus on the most important topics and to ask the most useful follow-up questions.




    es

    Markdown Comes Alive! Part 1, Basic Editor

    In my last post, I covered what LiveView is at a high level. In this series, we’re going to dive deeper and implement a LiveView powered Markdown editor called Frampton. This series assumes you have some familiarity with Phoenix and Elixir, including having them set up locally. Check out Elizabeth’s three-part series on getting started with Phoenix for a refresher.

    This series has a companion repository published on GitHub. Get started by cloning it down and switching to the starter branch. You can see the completed application on master. Our goal today is to make a Markdown editor, which allows a user to enter Markdown text on a page and see it rendered as HTML next to it in real-time. We’ll make use of LiveView for the interaction and the Earmark package for rendering Markdown. The starter branch provides some styles and installs LiveView.

    Rendering Markdown

    Let’s set aside the LiveView portion and start with our data structures and the functions that operate on them. To begin, a Post will have a body, which holds the rendered HTML string, and a title. A string of markdown can be turned into HTML by calling Post.render(post, markdown). I think that just about covers it!

    First, let’s define our struct in lib/frampton/post.ex:

    defmodule Frampton.Post do
      defstruct body: "", title: ""

      def render(%__MODULE__{} = post, markdown) do
        # Fill me in!
      end
    end

    Now the failing test (in test/frampton/post_test.exs):

    describe "render/2" do
      test "returns our post with the body set" do
        markdown = "# Hello world!"                                                                                                                 
        assert Post.render(%Post{}, markdown) == {:ok, %Post{body: "<h1>Hello World</h1>
    "}}
      end
    end

    Our render method will just be a wrapper around Earmark.as_html!/2 that puts the result into the body of the post. Add {:earmark, "~> 1.4.3"} to your deps in mix.exs, run mix deps.get, and fill out the render function:

    def render(%__MODULE__{} = post, markdown) do
      html = Earmark.as_html!(markdown)
      {:ok, Map.put(post, :body, html)}
    end

    Our test should now pass, and we can render posts! [Note: we’re using the as_html! method, which prints error messages instead of passing them back to the user. A smarter version of this would handle any errors and show them to the user. I leave that as an exercise for the reader…] Time to play around with this in an IEx prompt (run iex -S mix in your terminal):

    iex(1)> alias Frampton.Post
    Frampton.Post
    iex(2)> post = %Post{}
    %Frampton.Post{body: "", title: ""}
    iex(3)> {:ok, updated_post} = Post.render(post, "# Hello world!")
    {:ok, %Frampton.Post{body: "<h1>Hello world!</h1>\n", title: ""}}
    iex(4)> updated_post
    %Frampton.Post{body: "<h1>Hello world!</h1>\n", title: ""}

    Great! That’s exactly what we’d expect. You can find the final code for this in the render_post branch.
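
    As an aside, the error-handling exercise from the note above might start from the non-raising Earmark.as_html/2, which returns errors as data instead of printing them. Here's a sketch (the function name render_safely is my own, not part of the tutorial's branches):

    def render_safely(%__MODULE__{} = post, markdown) do
      case Earmark.as_html(markdown) do
        {:ok, html, _warnings} -> {:ok, Map.put(post, :body, html)}
        {:error, _html, messages} -> {:error, messages}
      end
    end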

    LiveView Editor

    Now for the fun part: Editing this live!

    First, we’ll need a route for the editor to live at: /editor sounds good to me. LiveViews can be rendered from a controller, or directly in the router. We don’t have any initial state, so let's go straight from the router.

    First, let's put up a minimal test. In test/frampton_web/live/editor_live_test.exs:

    defmodule FramptonWeb.EditorLiveTest do
      use FramptonWeb.ConnCase
      import Phoenix.LiveViewTest

      test "the editor renders" do
        conn = get(build_conn(), "/editor")
        assert html_response(conn, 200) =~ ~s(data-test="editor")
      end
    end

    This test doesn’t do much yet, but notice that it isn’t LiveView-specific. Our first render is just the same as any other controller test we’d write. The page’s content is there right from the beginning, without the need to parse JavaScript or make API calls back to the server. Nice.

    To make that test pass, add a route to lib/frampton_web/router.ex. First, we import the LiveView code, then we render our Editor:

    import Phoenix.LiveView.Router
    # … Code skipped ...
    # Inside of `scope "/"`:
    live "/editor", EditorLive

    Now place a minimal EditorLive module, in lib/frampton_web/live/editor_live.ex:

    defmodule FramptonWeb.EditorLive do
      use Phoenix.LiveView

      def render(assigns) do
        ~L"""
        <div data-test="editor">
          <h1>Hello world!</h1>
        </div>
        """
      end

      def mount(_params, _session, socket) do
        {:ok, socket}
      end
    end

    And we have a passing test suite! The ~L sigil designates that LiveView should track changes to the content inside. We could keep all of our markup in this render/1 method, but let’s break it out into its own template for demonstration purposes.

    Move the contents of render into lib/frampton_web/templates/editor/show.html.leex, and replace EditorLive.render/1 with this one-liner: def render(assigns), do: FramptonWeb.EditorView.render("show.html", assigns). And finally, make an EditorView module in lib/frampton_web/views/editor_view.ex:

    defmodule FramptonWeb.EditorView do
      use FramptonWeb, :view
      import Phoenix.LiveView
    end

    Our test should now be passing, and we’ve got a nicely separated out template, view and “live” server. We can keep markup in the template, helper functions in the view, and reactive code on the server. Now let’s move forward to actually render some posts!

    Handling User Input

    We’ve got four tasks to accomplish before we are done:

    1. Take markdown input from the textarea
    2. Send that input to the LiveServer
    3. Turn that raw markdown into HTML
    4. Return the rendered HTML to the page.

    Event binding

    To start with, we need to annotate our textarea with an event binding. This tells the liveview.js framework to forward DOM events to the server, using our liveview channel. Open up lib/frampton_web/templates/editor/show.html.leex and annotate our textarea:

    <textarea phx-keyup="render_post"></textarea>

    This names the event (render_post) and sends it on each keyup. Let’s crack open our web inspector and look at the web socket traffic. Using Chrome, open the developer tools, navigate to the network tab and click WS. In development you’ll see two socket connections: one is Phoenix LiveReload, which polls your filesystem and reloads pages appropriately. The second one is our LiveView connection. If you let it sit for a while, you’ll see that it's emitting a “heartbeat” call. If your server is running, you’ll see that it responds with an “ok” message. This lets LiveView clients know when they've lost connection to the server and respond appropriately.

    Now, type some text and watch as it sends down each keystroke. However, you’ll also notice that the server responds with a “phx_error” message and wipes out our entered text. That's because our server doesn’t know how to handle the event yet and is throwing an error. Let's fix that next.

    Event handling

    We’ll catch the event in our EditorLive module. The LiveView behavior defines a handle_event/3 callback that we need to implement. Open up lib/frampton_web/live/editor_live.ex and key in a basic implementation that lets us catch events:

    def handle_event("render_post", params, socket) do
      IO.inspect(params)
    
      {:noreply, socket}
    end

    The first argument is the name we gave to our event in the template, the second is the data from that event, and finally the socket we’re currently talking through. Give it a try, typing in a few characters. Look at your running server and you should see a stream of events that look something like this:

    There’s our keystrokes! Next, let’s pull out that value and use it to render HTML.

    Rendering Markdown

    Let’s adjust our handle_event to pattern match out the value of the textarea:

    def handle_event("render_post", %{"value" => raw}, socket) do

    Now that we’ve got the raw markdown string, turning it into HTML is easy thanks to the work we did earlier in our Post module. Fill out the body of the function like this:

    {:ok, post} = Post.render(%Post{}, raw)
    IO.inspect(post)

    If you type into the textarea you should see output that looks something like this:

    Perfect! Lastly, it’s time to send that rendered html back to the page.

    Returning HTML to the page

    In a LiveView template, we can identify bits of dynamic data that will change over time. When they change, LiveView will compare what has changed and send over a diff. In our case, the dynamic content is the post body.

    Open up show.html.leex again and modify it like so:

    <div class="rendered-output">
      <%= @post.body %>
    </div>

    Refresh the page and… whoops!

    The @post variable will only be available after we put it into the socket’s assigns. Let’s initialize it with a blank post. Open editor_live.ex and modify our mount/3 function:

    def mount(_params, _session, socket) do
      post = %Post{}
      {:ok, assign(socket, post: post)}
    end

    In the future, we could retrieve this from some kind of storage, but for now, let's just create a new one each time the page refreshes. Finally, we need to update the Post struct with user input. Update our event handler like this:

    def handle_event("render_post", %{"value" => raw}, %{assigns: %{post: post}} = socket) do
      {:ok, post} = Post.render(post, raw)
      {:noreply, assign(socket, post: post)
    end

    Let's load up http://localhost:4000/editor and see it in action.

    Nope, that's not quite right! Phoenix won’t render this as HTML because it’s unsafe user input. We can get around this (very good and useful) security feature by wrapping our content in a raw/1 call. Since we don’t have a database and user processes are isolated from each other by Elixir, the worst thing a malicious user could do is crash their own session, which doesn’t bother me one bit.
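
    Concretely, that’s a one-word change in show.html.leex (raw/1 comes from Phoenix.HTML, which a standard Phoenix view pulls in by default):

    <div class="rendered-output">
      <%= raw @post.body %>
    </div>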

    Check the edit_posts branch for the final version.

    Conclusion

    That’s a good place to stop for today. We’ve accomplished a lot! We’ve got a dynamically rendering editor that takes user input, processes it and updates the page. And we haven’t written any JavaScript, which means we don’t have to maintain or update any JavaScript. Our server code is built on the rock-solid foundation of the BEAM virtual machine, giving us a great deal of confidence in its reliability and resilience.

    In the next post, we’ll tackle making a shared editor, allowing multiple users to edit the same post. This project will highlight Elixir’s concurrency capabilities and demonstrate how LiveView builds on them to enable some incredible user experiences.




    es

    Committed to the wrong branch? -, @{upstream}, and @{-1} to the rescue

    I get into this situation sometimes. Maybe you do too. I merge feature work into a branch used to collect features, and then continue development but on that branch instead of back on the feature branch:

    git checkout feature
    # ... bunch of feature commits ...
    git push
    git checkout qa-environment
    git merge --no-ff --no-edit feature
    git push
    # deploy qa-environment to the QA remote environment
    # ... more feature commits ...
    # oh. I'm not committing in the feature branch like I should be

    and have to move those commits to the feature branch they belong in and take them out of the throwaway accumulator branch:

    git checkout feature
    git cherry-pick origin/qa-environment..qa-environment
    git push
    git checkout qa-environment
    git reset --hard origin/qa-environment
    git merge --no-ff --no-edit feature
    git checkout feature
    # ready for more feature commits

    Maybe you prefer

    git branch -D qa-environment
    git checkout qa-environment

    over

    git checkout qa-environment
    git reset --hard origin/qa-environment

    Either way, that works. But it'd be nicer if we didn't have to type, or even remember, the branches' names and the remote's name. They're what keeps this from being a context-independent string of commands you can run any time this mistake happens. That's what we're going to solve here.

    Shorthands for longevity

    I like to use all possible natively supported shorthands. There are two broad motivations for that.

    1. Fingers have a limited number of movements in them. Save as many as possible now so there are some left for late in life.
    2. Current research suggests that multitasking has detrimental effects on memory, and development tends to be very heavy on multitasking. Maybe relieving some of the pressure on quick-access short-term memory (like knowing all the relevant branch names) adds up to a healthier memory down the line.

    First up for our scenario: the - shorthand, which refers to the previously checked out branch. There are a few places we can't use it, but it helps a lot:

    Bash
    # USING -
    
    git checkout feature
    # hack hack hack
    git push
    git checkout qa-environment
    git merge --no-ff --no-edit -        # 🎉
    git push
    # hack hack hack
    # whoops
    git checkout -        # now on feature 🎉
    git cherry-pick origin/qa-environment..qa-environment
    git push
    git checkout - # now on qa-environment 🎉
    git reset --hard origin/qa-environment
    git merge --no-ff --no-edit -        # 🎉
    git checkout -                       # 🎉
    # on feature and ready for more feature commits
    Bash
    # ORIGINAL
    
    git checkout feature
    # hack hack hack
    git push
    git checkout qa-environment
    git merge --no-ff --no-edit feature
    git push
    # hack hack hack
    # whoops
    git checkout feature
    git cherry-pick origin/qa-environment..qa-environment
    git push
    git checkout qa-environment
    git reset --hard origin/qa-environment
    git merge --no-ff --no-edit feature
    git checkout feature
    # ready for more feature commits

    We cannot use - when cherry-picking a range

    > git cherry-pick origin/-..-
    fatal: bad revision 'origin/-..-'
    
    > git cherry-pick origin/qa-environment..-
    fatal: bad revision 'origin/qa-environment..-'

    and even if we could, we'd still have to provide the remote's name (here, origin).

    That shorthand doesn't apply to the later reset --hard command, and we cannot use it in the branch -D && checkout approach either. branch -D does not support the - shorthand, and once the branch is deleted, checkout can't reach it with -:

    # assuming that branch-a has an upstream origin/branch-a
    > git checkout branch-a
    > git checkout branch-b
    > git checkout -
    > git branch -D -
    error: branch '-' not found.
    > git branch -D branch-a
    > git checkout -
    error: pathspec '-' did not match any file(s) known to git

    So we have to remember the remote's name (we know it's origin because we are devoting memory space to knowing that this isn't one of those times it's something else), the remote tracking branch's name, the local branch's name, and we're typing those all out. No good! Let's figure out some shorthands.

    @{-<n>} is hard to say but easy to fall in love with

    We can do a little better by using @{-<n>} (you'll also sometimes see it referred to by the older spelling @{-N}). It is a special construct for referring to the nth previously checked out ref.

    > git checkout branch-a
    > git checkout branch-b
    > git rev-parse --abbrev-ref @{-1} # the name of the previously checked out branch
    branch-a
    > git checkout branch-c
    > git rev-parse --abbrev-ref @{-2} # the name of the branch checked out before the previous one
    branch-a

    Back in our scenario, we're on qa-environment, we switch to feature, and then want to refer to qa-environment. That's @{-1}! So instead of

    git cherry-pick origin/qa-environment..qa-environment

    We can do

    git cherry-pick origin/qa-environment..@{-1}

    Here's where we are (🎉 marks wins from -, 💥 marks the win from @{-1})

    Bash
    # USING - AND @{-1}
    
    git checkout feature
    # hack hack hack
    git push
    git checkout qa-environment
    git merge --no-ff --no-edit -                # 🎉
    git push
    # hack hack hack
    # whoops
    git checkout -                               # 🎉
    git cherry-pick origin/qa-environment..@{-1} # 💥
    git push
    git checkout -                               # 🎉
    git reset --hard origin/qa-environment
    git merge --no-ff --no-edit -                # 🎉
    git checkout -                               # 🎉
    # ready for more feature commits
    Bash
    # ORIGINAL
    
    git checkout feature
    # hack hack hack
    git push
    git checkout qa-environment
    git merge --no-ff --no-edit feature
    git push
    # hack hack hack
    # whoops
    git checkout feature
    git cherry-pick origin/qa-environment..qa-environment
    git push
    git checkout qa-environment
    git reset --hard origin/qa-environment
    git merge --no-ff --no-edit feature
    git checkout feature
    # ready for more feature commits

    One down, two to go: we're still relying on memory for the remote's name and the remote branch's name and we're still typing both out in full. Can we replace those with generic shorthands?

    Because @{-1} is the ref itself, not the ref's name, we can't do

    > git cherry-pick origin/@{-1}..@{-1}
    origin/@{-1}
    fatal: ambiguous argument 'origin/@{-1}': unknown revision or path not in the working tree.
    Use '--' to separate paths from revisions, like this:
    'git <command> [<revision>...] -- [<file>...]'

    because there is no branch origin/@{-1}. For the same reason, @{-1} does not give us a generalized shorthand for the scenario's later git reset --hard origin/qa-environment command.

    But good news!

    @{u} and @{push}

    @{upstream}, or its shorthand @{u}, is the remote branch that would be pulled from if git pull were run. @{push} is the remote branch that would be pushed to if git push were run.

    So instead of

    > git checkout branch-a
    Switched to branch 'branch-a'
    Your branch is ahead of 'origin/branch-a' by 3 commits.
      (use "git push" to publish your local commits)
    > git reset --hard origin/branch-a
    HEAD is now at <the SHA origin/branch-a is at>

    we can

    > git checkout branch-a
    Switched to branch 'branch-a'
    Your branch is ahead of 'origin/branch-a' by 3 commits.
      (use "git push" to publish your local commits)
    > git reset --hard @{u}                                # <-- So Cool!
    HEAD is now at <the SHA origin/branch-a is at>

    Tacking either onto a branch name gives you that branch's @{upstream} or @{push}. For example, branch-a@{u} is the branch that branch-a pulls from, so

    git checkout branch-a@{u}

    checks out that remote-tracking branch (as a detached HEAD).

    In the common workflow where a branch pulls from and pushes to the same branch, @{upstream} and @{push} will be the same, leaving @{u} as preferable for its terseness. @{push} shines in triangular workflows where you pull from one remote and push to another (see the external links below).
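
    As a quick sketch of that triangular case (the remote names upstream and origin, and the config values here, are illustrative assumptions):

    # pull from "upstream", push to your fork on "origin"
    git config remote.pushdefault origin
    git config push.default current

    # with branch-a tracking upstream/branch-a:
    git rev-parse --abbrev-ref branch-a@{u}     # upstream/branch-a
    git rev-parse --abbrev-ref branch-a@{push}  # origin/branch-a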

    Going back to our scenario, it means short, portable commands with a minimum human memory footprint. (🎉 marks wins from -, 💥 marks the win from @{-1}, 😎 marks the wins from @{u}.)

    Bash
    # USING - AND @{-1} AND @{u}
    
    git checkout feature
    # hack hack hack
    git push
    git checkout qa-environment
    git merge --no-ff --no-edit -    # 🎉
    git push
    # hack hack hack
    # whoops
    git checkout -                   # 🎉
    git cherry-pick @{-1}@{u}..@{-1} # 💥😎
    git push
    git checkout -                   # 🎉
    git reset --hard @{u}            # 😎
    git merge --no-ff --no-edit -    # 🎉
    git checkout -                   # 🎉
    # ready for more feature commits
    Bash
    # ORIGINAL
    
    git checkout feature
    # hack hack hack
    git push
    git checkout qa-environment
    git merge --no-ff --no-edit feature
    git push
    # hack hack hack
    # whoops
    git checkout feature
    git cherry-pick origin/qa-environment..qa-environment
    git push
    git checkout qa-environment
    git reset --hard origin/qa-environment
    git merge --no-ff --no-edit feature
    git checkout feature
    # ready for more feature commits

    Make the things you repeat the easiest to do

    Because these commands are generalized, we can run some series of them once, maybe

    git checkout - && git reset --hard @{u} && git checkout -

    or

    git checkout - && git cherry-pick @{-1}@{u}..@{-1} && git checkout - && git reset --hard @{u} && git checkout -

    and then those will be in the shell history, just waiting to be retrieved and run again the next time, whether with Ctrl-R incremental search, history substring search bound to the up arrow, or however your interactive shell is configured. Or make it an alias, or even better an abbreviation if your interactive shell supports them. Save the body wear and tear, give memory a break, and level up in Git.
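
    For example, as a git alias it might look like this (the alias name unfixup is my own invention; name it whatever you'll remember):

    git config --global alias.unfixup '!git checkout - && git cherry-pick @{-1}@{u}..@{-1} && git checkout - && git reset --hard @{u} && git checkout -'

    After that, recovering from this mistake is a single git unfixup run from the accumulator branch.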

    And keep going

    The GitHub blog has a good primer on triangular workflows and how they can polish your process of contributing to external projects.

    The FreeBSD Wiki has a more in-depth article on triangular workflow process (though it doesn't know about @{push} and @{upstream}).

    The construct @{-<n>} and the suffixes @{push} and @{upstream} are all part of the gitrevisions spec.




      es

      Setting New Project Managers Up for Success

      At Viget, we’ve brought on more than a few new Project Managers over the past couple of years, as we continue to grow. The awesome new people we’ve hired have ranged in their levels of experience, but some of them are earlier in their careers and need support from more experienced PMs to develop their skills and flourish.

      We have different levels of training and support for new PMs. These broadly fall into four categories:

      • Onboarding: Learning about Viget tools and processes
      • Shadowing: Learning by watching others
      • Pairing: Learning by doing collaboratively
      • Leading: Learning by doing solo

      Onboarding

      In addition to conducting intro sessions to each discipline at Viget, new Viget PMs go through a lengthy set of training sessions that are specific to the PM lab. These include intros to:

      • PM tools and resources
      • Project processes
      • Project types
      • Project checklists
      • Project tasking
      • Project planning
      • Budgets, schedules, and resourcing
      • Retrospectives
      • Working with remote teams
      • Project kickoffs
      • Thinking about development
      • Github and development workflow
      • Tickets, definition, and documentation
      • QA testing
      • Account management

      Shadowing

      After PMs complete the onboarding process, they start shadowing other PMs’ projects to get exposure to the different types of projects we run (since the variety is large). We cater length and depth of shadowing based on how much experience a PM has coming in. We also try to expose PMs to multiple project managers, so they can see how PM style differs person-to-person.

      We’ve found that it can be most effective to have PMs shadow activities that are more difficult to teach in theory, such as shadowing a PM having a difficult conversation with a client, or shadowing a front-end build-out demo to see how the PM positions the meeting and our process to the client. More straightforward tasks like setting up a Harvest project could be done via pairing, since it’s easy to get the hang of with a little guidance.

      Pairing

      While shadowing is certainly helpful, we try to get PMs into pairing mode pretty quickly, since we’ve found that most folks learn better by doing than by watching. Sometimes this might mean having a new PM set up an invoice or budget sheet for a client while a more experienced PM sits next to them, talking them through the process. We’ve found that having a newer PM lead straightforward activities with guidance tends to be more effective than the newer PM merely watching the more experienced PM do that activity.

      Another tactic we take is to have both PMs complete a task independently, and then meet and talk through their work, with the more experienced PM giving the less experienced PM feedback. That helps the newer PM think through a task on their own, and gain experience, but still have the chance to see how someone else would have approached the task and get meaningful feedback.

      Leading

      Once new PMs are ready to be in the driver’s seat, they are staffed as the lead on projects. The timing of when someone shifts into a lead role depends on how much prior experience that person has, as well as what types of projects are actively ready to be worked on.

      Most early-career project managers have a behind-the-scenes project mentor (another PM) on at least their first couple projects, so they have a dedicated person to ask questions and get advice from who also has more detailed context than that person’s manager would. For example, mentors often shadow key client and internal meetings and have more frequent check-ins with mentees. This might be less necessary at a company where all the projects are fairly similar, but at Viget, our projects vary widely in scale and services provided, as well as client needs. Because of this, there’s no “one size fits all” process and we have a significant amount of customization per project, which can be daunting to new PMs who are still getting the hang of things.

      For these mentorship pairings, we use a mentorship plan document (template here) to help the mentor and mentee work together to define goals, mentorship focuses, and touchpoints. Sometimes the mentee’s manager will take a first stab at filling out the plan; other times, the mentor will start that process.

      Management Touchpoints

      Along the way, we make sure new PMs have touchpoints with their managers to get the level of support they need to grow and succeed. Managers have regular 1:1s with PMs that are referred to as “project 1:1s”, used for the managee to talk through and get advice on challenges or questions related to the projects they’re working on—though really, they can be used for whatever topics are on the managee’s mind. PMs typically have 1:1s with managers daily during the first week, two to three times per week for the first month or so, then once per week, and then bi-weekly after the first six months.

      In addition to project 1:1s, we also have monthly 1:1s that are bigger-picture, focused on goal-setting and progress, project feedback from that person’s peers, reflection on how satisfied and fulfilled they’re feeling in their role, and talking through project and industry interests, which informs what projects we should advocate for them to be staffed on. We have a progress log template that we customize per PM to keep track of goals and progress.

      We try to foster a supportive environment that encourages growth, feedback, and experiential learning, but also that lets folks have the autonomy to get in the driver’s seat as soon as they’re comfortable. Interested in learning more about what it’s like to work at Viget? Check out our open positions here.




      es

      Our WFH Best Practices

      Our first remote office opened in 2007 when a designer and a developer left our HQ office and moved to Durham. Ever since, we've been fine-tuning our ability to collaborate across locations. Today, we have team members across the country in our four offices, and we have fully remote employees in Charleston, Kansas City, New York City, Dallas, and Charlottesville.

      Because of the coronavirus outbreak, a lot of people recently started working from their homes across the world, the country, and Viget. We wanted to share some of our best practices for being great teammates and doing great work, regardless of locale, and we’d love to hear yours in the comments.

      Communicate Often and Write It Down

      We want every person at Viget to be informed and connected. We do this in a few ways. We have a company Knowledge Base, which contains critical information including HR policies, office processes, brand guidelines, project resources, etc. We also have a well-organized Google Drive that everyone can access.

      My favorite communication tool we use, however, is our Internal Lab Report. Every week, we create a Google Doc with HR updates, birthdays, upcoming events we’re attending, relevant publicity we or a client received, and timely updates on projects, sales, and recruiting. This report allows the entire team to have the same information, regardless of PTO schedules, and it provides a record that can be referenced weeks, months, or years later.

      I have also found our Slack habits really helpful. We try to make our availability easily known, mostly via a passive Slack status. We each update our status daily, sometimes multiple times, so people can see if we’re working from home, out of the office for an appointment, in a meeting, or offline for a personal phone call. We also have a few Slack channels we use very specifically to announce PTO and important announcements, and recently, one specific to the evolving coronavirus situation.

      My work from home station.

      Figure Out Your Boundaries

      This looks different for everyone and can be an ever-changing target. Understanding your boundaries requires you to be honest with yourself – Are you easily distracted? Can you successfully work in pajama pants? Will your dog actually allow you to get work done? Does working from the couch result in good work, or do you need a designated work spot? For some, working from home requires setting boundaries to ensure the work gets done. For others, working from home requires setting start and stop times to ensure you don’t overwork yourself.

      Viget has a flexible work policy, so many of us work from home fairly often and have gotten our routines set up. As such, we have written about this before! Check out Trevor’s article about working remotely.

      Show Your Face

      When I first started at Viget, I’d never worked anywhere that used a Google Hangout for nearly every meeting. At first, I was tempted to call into meetings and leave the camera off because I found it exposing. Now, I can’t imagine not using it, and I’ve even embraced it in my personal life with friends and family. I realized the value in face-to-face conversations even in virtual form, the ability to see body language, and the connection you establish when you see each other's faces — even if your hair isn't perfect or you haven't arranged your plants just-so in the view behind you. Whenever possible, use your camera during a meeting. It increases trust, communication, and in my personal-not-backed-by-science-opinion, lightness, which frankly, I think we can all use a bit more of right now.

      Here's a screen shot from our Saint Patrick's Day Happy Hour.

      Create Shared Experiences

      As a company with project teams often distributed across our four locations, cross-office experiences are vital to our culture, and we’ve spent years working to keep our remote offices in sync. A few of our ongoing group activities include a monthly virtual Book Club, our weekly full-team Free Lunch Friday tradition, Donut for Slack, and, of course, our Pointless Weekends.

      The current global health crisis now requires almost all of the company to work remotely, so we’ve gotten creative with our attempts to increase non-project time together, in order to keep up the vibes we’ve worked hard to create.

      What we’ve recently started:

        • Last Weekend this Morning - Monday mornings, we have an optional virtual coffee, where anyone who’d like to chat can join and share the latest gardening lesson or bingeable tv show. It lets us start our week off as we would when we’re all in the office — saying hello to each other.
        • Virtual Happy Hours - We are a company that likes to socialize, and a bit of distance doesn’t stop us. This week, we set up an after-hours Happy Hour for St. Patrick’s Day.
        • Daily Lunch Table - If you’ve ever visited our HQ office in Falls Church, you’ll notice our large kitchen table. We have an informal tradition of gathering around noon to eat together, whether it’s just a couple folks or the whole team. We now do this lunch virtually. So far, we’re mostly taking turns discussing who is eating what, and of course, sharing said recipes.

      I crowdsourced some ideas from the Viget team, and here are some noteworthy takeaways:

      "In remote meetings, minimize all your other windows and be fully present. It’s easy to allow your attention to accidentally drift if you see a new Slack channel light up, especially if you’re in a larger meeting. Suddenly, you find yourself multitasking. Treat the meeting as if you were there in person: unless you’re taking notes, minimize your other tabs, and give the conversation your full attention."
      - Paul Koch

      “I try to reach out to more folks I don’t consistently work with. Since there’s less interaction in general, I want to be more intentional about staying connected.”
      - Laura Sweltz

      “Good habits are hard to form and bad habits are hard to break, and it’s often hard to find the right time to make a change. Most of us are experiencing a disruption to our usual behaviors right now, but that doesn’t have to be entirely bad. Be deliberate now and when this is over, we might all end up with some new work habits worth keeping.”
      - Emily Bloom

      “I’ve found it helpful to create a physical space similar to the one I had at work. While this isn’t exactly possible, small things like setting up a laptop stand and second screen make it so I’m less likely to get distracted and wander to the couch or kitchen (aka the snack danger zone).”
      - Aubrey Lear

      “It’s easy to get stuck in one spot all day, so be proactive about moving around, or creating excuses to do so. Whether that’s making yourself a cup of coffee, eating lunch away from your computer, or going for a quick walk outside for some fresh air. This will help reduce the risk of going stir crazy.”
      -Zach Robbins

      True to Viget form, our remote work is all about “Progress, Not Perfection.” While remote collaboration is ingrained in our company, we’re looking for opportunities to fine-tune our approach and improve our habits.

      We’d love to hear from you: What are your best practices? Lessons learned?




      es

      Pursuing A Professional Certification In Scrum

      Professional certifications have become increasingly popular in this age of career switchers and the freelance gig economy. A certification can be a useful way to advance your skill set quickly or make your resume stand out, which can be especially important for those trying to break into a new industry or attract business while self-employed. Whatever your reason may be for pursuing a professional certificate, there is one question only you can answer for yourself: is it worth it?

      Finding first-hand experiences from professionals with similar career goals and passions was the most helpful research I used to answer that question for myself. So, here’s mine: why I decided to get Scrum certified, how I evaluated my options, and whether it was really worth it.

      A shift in mindset

      My background originates in brand strategy, where it’s typical for work to follow a predictable order, each step informing the next. This made linear techniques like waterfall timelines (completing one phase of work in its entirety before moving on to the next) and documenting granular tasks weeks in advance helpful and easy to implement. When I made the move to more digitally focused work, tasks followed a much looser set of ‘typical’ milestones. While the general outline remained the same (strategy, design, development, launch), there was a lot more overlap in how tasks informed each other, and they would keep informing and re-informing each other as an iterative workflow would encourage.

      Trying to fit a very fluid process into my very stiff, linear approach to project planning didn’t work so well. I didn’t have the right strategies to manage risks productively without feeling like the whole project was off track; with the habit of accounting for granular details all the time, I struggled to lean on others to help define what we should work on and when, and to be okay if that changed once, or twice, or three times. Everything I learned about the process of product development came from learning on the job and making a ton of mistakes, and I knew I wanted to get better.

      Photo by Christin Hume on Unsplash

      I was fortunate enough to work with a group of developers who were looking to make a change, too. Being ‘agile’ enthusiasts, they were desperately looking for ways to infuse our approach to product work with agile-minded principles (the broad definition of ‘agile’ comes from ‘The Agile Manifesto’, which has influenced frameworks for organizing people and information, often applied in product development). This applied not only to how I worked with them, but to how they worked with each other and the way we all onboarded clients to these new expectations. This was a huge eye-opener for me. Soon enough, I started applying these agile strategies to my day-to-day: running stand-ups, setting up backlogs, and reorganizing the way I thought about work output. It’s from this experience that I decided it might be worth learning these principles more formally.

      The choice to get certified

      There is a lot of literature out there about agile methodologies and a lot to be learned from casual research. This benefitted me for a while until I started to work on more complicated projects, or projects with more ambitious feature requests. My decision to ultimately pursue a formal agile certification really came down to three things:

      1. An increased use of agile methods across my team. Within my day-to-day I would encounter more team members who were familiar with these tactics and wanted to use them to structure the projects they worked on.
      2. The need for a clear definition of what processes to follow. I needed to grasp a real understanding of how to implement agile processes and stay consistent with using them to be an effective champion of these principles.
      3. Being able to diversify my experience. Finding ways to differentiate my resume from others with similar experience would be an added benefit to getting a certification. If nothing else, it would demonstrate that I’m curious-minded and proactive about my career.

      To achieve these things, I gravitated towards a more foundational education in a specific agile methodology. This made Scrum the most logical choice, given that it’s the basis for many of the agile strategies out there and dominant in the field.

      Evaluating all the options

      For Scrum education and certification, there are really two major players to consider.

      1. Scrum Alliance - Probably the most well-known Scrum organization. Scrum Alliance is highly recognizable and does a lot to further the broader understanding of Scrum as a practice.
      2. Scrum.org - Led by the original co-founder of Scrum, Ken Schwaber, Scrum.org is well-respected and touted for its authority in the industry.

      Each has its own approach to teaching and awarding certifications, as well as differences in price point and course style, that are important to be aware of.

      SCRUM ALLIANCE

      Pros

      • Strong name recognition and leaders in the Scrum field
      • Offers both in-person and online courses
      • Hosts in-person events, webinars, and global conferences
      • Provides robust amounts of educational resources for its members
      • Has specialization tracks for folks looking to apply Scrum to their specific discipline
      • Members are required to keep their skills up to date by earning educational credits throughout the year to retain their certification
      • Consistent information across all course administrators, ensuring you'll be set up to succeed when taking your certification test

      Cons

      • High cost creates a significant barrier to entry (we’re talking in the thousands of dollars here)
      • Courses are required to take the certification test
      • Certification expires after two years, requiring additional investment in time and/or money to retain credentials
      • Difficult to find sample course material ahead of committing to a course
      • Courses are several days long which may mean taking time away from a day job to complete them

      SCRUM.ORG

      Pros

      • Strong clout due to its founder, Ken Schwaber, who is the originator of Scrum
      • Offers in-person classes and self-paced options
      • Hosts in-person events and meetups around the world
      • Provides free resources and materials to the public, including practice tests
      • Has specialization tracks for folks looking to apply Scrum to their specific discipline
      • Minimum score on certification test required to pass; certification lasts for life
      • Lower cost for certification when compared to peers

      Cons

      • Much lesser known to the general public, as compared to its counterpart
      • Less sophisticated educational resources (mostly confined to PDFs or online forums) making digesting the material challenging
      • Practice tests are slightly out of date making them less effective as a study tool
      • Self-paced education is not structured and therefore can’t ensure you’re learning everything you need to know for the test
      • The lack of an active and engaging community leaves something to be desired

      Before coming to a decision, it was helpful to me to weigh these pros and cons against a set of criteria. Here’s a helpful scorecard I used to compare the two institutions.

      [Scorecard comparing Scrum Alliance and Scrum.org on Affordability, Rigor, Reputation, Recognition, Community, Access, Flexibility, Specialization, Requirements, and Longevity.]

      For me, the four most important areas were:

      • Affordability - I’d be self-funding this certificate, so the cost would need to be manageable.
      • Self-paced - Since I didn’t have a lot of time to devote in one sitting, the ability to chip away at coursework was appealing to me.
      • Reputation - Having a certificate backed by a well-respected institution was important to me if I was going to put in the time to achieve this credential.
      • Access - Because I wanted to be a champion for this framework for others in my organization, having access to resources and materials would help me do that more effectively.

      Ultimately, I decided upon a Professional Scrum Master certification from Scrum.org! The price and flexibility of learning course content were most important to me. I found a ton of free materials on Scrum.org that I could study myself and their practice tests gave me a good idea of how well I was progressing before I committed to the cost of actually taking the test. And, the pedigree of certification felt comparable to that of Scrum Alliance, especially considering that the founder of Scrum himself ran the organization.

      Putting a certificate to good use

      I don’t work in a formal Agile company, and not everyone I work with knows the ins and outs of Scrum. I didn’t use my certification to leverage a career change or new job title. So after all that time, money, and energy, was it worth it?

      I think so. I feel like I use my certification every day and employ many of the principles of Scrum in my day-to-day management of projects and people.

      • Self-organizing teams is really important when fostering trust and collaboration among project members. This means leaning on each other’s past experiences and lessons learned to inform our own approach to work. It also means taking a step back as a project manager to recognize the strengths on your team and trust their lead.
      • Approaching things in bite size pieces is also a best practice I use every day. Even when there isn't a mandated sprint rhythm, breaking things down into effort level, goals, and requirements is an excellent way to approach work confidently and avoid getting too overwhelmed.
      • Retrospectives and stand ups are also absolute musts for Scrum practices, and these can be modified to work for companies and project teams of all shapes and sizes. Keeping a practice of collective communication and reflection will keep a team humming and provides a safe space to vent and improve.
      Photo by Gautam Lakum on Unsplash

      Parting advice

      I think furthering your understanding of industry standards and keeping yourself open to new ways of working will always benefit you as a professional. Professional certifications are readily available and may be more relevant than ever.

      If you’re on this path, good luck! And here are some things to consider:

      • Do your research – With so many educational institutions out there, you can definitely find the right one for you, with the level of rigor you’re looking for.
      • Look for company credits or incentives – Some companies cover part or all of the cost for continuing education.
      • Get started ASAP – You don’t need a full certification to start implementing small tactics in your workflows. Implementing learnings gradually will help you determine whether it’s really something you want to pursue more formally.




      es

      Unsolved Zoom Mysteries: Why We Have to Say “You’re Muted” So Much

      Video conference tools are an indispensable part of the Plague Times. Google Meet, Microsoft Teams, Zoom, and their compatriots are keeping us close and connected in a physically distanced world.

      As tech-savvy folks with years of cross-office collaboration, we’ve laughed at the sketches and memes about vidconf mishaps. We practice good Zoomiquette, including muting ourselves when we’re not talking.

      Yet even we can’t escape one vidconf pitfall. (There but for the grace of Zoom go I.) On nearly every vidconf, someone starts to talk, and then someone else says: “Oop, you’re muted.” And, inevitably: “Oop, you’re still muted.”

      That’s right: we’re trying to follow Zoomiquette by muting, but then we forget or struggle to unmute when we do want to talk.

      In this post, I’ll share my theories for why the You’re Muted Problems are so pervasive, using Google Meet, Microsoft Teams, and Zoom as examples. Spoiler alert: While I hope this will help you be more mindful of the problem, I can’t offer a good solution. It still happens to me. All. The. Time.

      Skip the why and go straight to the vidconf app keyboard shortcuts you should memorize right now.

      Why we don't realize we’re muted before talking

      Why does this keep happening?!?

      Simply put: UX and design decisions make it harder to remember that you’re muted before you start to talk.

      Here’s a common scenario: You haven’t talked for a bit, so you haven’t interacted with the Zoom screen for a few seconds. Then you start to talk — and that’s when someone tells you, “You’re muted.”

      We forget so easily in these scenarios because when our mouse has been idle for a few seconds, the apps hide or downplay the UI elements that tell us we’re muted.

      Zoom and Teams are the worst offenders:

      • Zoom hides both the toolbar with the main in-app controls (the big mute button) and the mute status indicator on your video pane thumbnail.
      • Teams hides the toolbar, and doesn't show a mute status indicator on your video thumbnail in the first place.

      Meet is only slightly better:

      • Meet hides the toolbar, and shows only a small mute status icon in your video thumbnail.

      Even when our mouse is active, the apps’ subtle approach to muted state UI can make it easy to forget that we’re muted:

      Teams is the worst offender:

      • The mute button is an icon rather than words.
        • The muted-state icon’s styling could be confused with the unmuted state: Teams does not follow the common pattern of using red to denote being muted.
      • The mute button is not differentiated in visual hierarchy from all the other controls.
      • As mentioned above, Teams never shows a secondary mute status indicator.

      Zoom is a bit better, but still makes it pretty easy to forget that you’re muted:

      • Pros:
        • Zoom is the only app to use words on the mute button, in this case to denote the button action (rather than the muted state).
        • The muted-state icon’s styling (red line) is less likely to be confused with the unmuted-state icon.
      • Cons:
        • The mute button’s placement (bottom left corner of the page) is easy to overlook.
        • The mute button is not differentiated in visual hierarchy from the other toolbar buttons — and Zoom has a lot of toolbar buttons, especially when logged in as host.
        • The secondary mute status indicator is a small icon.
        • The mute button’s muted-state icon is styled slightly differently from the secondary mute status indicator.
      • Potential Cons:
        • While words denote the button action, only an icon denotes the muted state.

      Meet is probably the clearest of the three apps, but still has pitfalls:

      • Pros:
        • The mute button is visually prominent in the UI: It’s clearly differentiated in the visual hierarchy relative to other controls (styled as a primary button); is a large button; and is placed closer to the center of the controls bar.
        • The muted-state icon’s styling (red fill) is less likely to be confused with the unmuted-state icon.
      • Cons:
        • Uses only an icon rather than words to denote the muted state.
      • Unrelated Con:
        • While the mute button is visually prominent, it’s also placed next to the hang-up button. So in Meet’s active state you might be less likely to forget you’re muted … but more likely to accidentally hang up when trying to unmute. 😬

      I know modern app design leans toward minimalism. There’s often good rationale to use icons rather than words, or to de-emphasize controls and indicators when not in use.

      But again: This happens on basically every call! Often multiple times per call!! And we’re supposed to be tech-savvy!!! Imagine what it’s like for the tens of millions of vidconf newbs.

      I would argue that “knowing your muted state” has turned out to be a major vidconf user need. At this point, it’s certainly worth rethinking the UX patterns around it.

      Why we keep unsuccessfully unmuting once we realize we’re muted

      So we can blame the You’re Muted Problem on UX and design. But what causes the You’re Still Muted Problem? Once we know we’re muted, why do we sometimes fail to unmute before talking again?

      This one is more complicated — and definitely more speculative. To start making sense of this scenario, here’s the sequence I’m guessing most commonly plays out (I did this a couple of times before I became aware of it):

      1. You’re muted, and you start to talk.
      2. Someone says, “You’re muted.”
      3. You try to unmute by pressing the keyboard Volume On/Off key.
      4. You start talking again, and someone says, “You’re still muted.”

      The crucial part is step 3, when the person tries to unmute by pressing the keyboard Volume On/Off key.

      If that’s in fact what’s happening (again, this is just a hypothesis), I’m guessing we do it because when someone says “You’re muted” or “I can’t hear you,” our subconscious thought process is: “Oh, Audio is Off. Press the keyboard key that I usually press when I want to change Audio Off to Audio On.”

      There are two traps in this reflexive thought process:

      First, the keyboard volume keys control the speaker volume, not the microphone volume. (More specifically, they control the system sound output settings, rather than the system sound input settings or the vidconf app’s sound input settings.)

      In fact, there isn’t a keyboard key to control the microphone volume. You can’t unmute your mic via a dedicated keyboard key, the way that you can turn the speaker volume on/off via a keyboard key while watching a movie or listening to music.

      Second, I think we reflexively press the keyboard key anyway because our mental model of the keyboard audio keys is just: Audio. Not microphone vs. speaker.

      This fuzzy mental model makes sense: There’s only one set of keyboard keys related to audio, so why would I think to distinguish between microphone and speaker? 

      So my best guess is that hardware design causes the You’re Still Muted Problem. After all, keyboard designs date from a pre-Zoom era, when the average person rarely used the computer’s microphone.

      If that is the cause, one potential solution is for hardware manufacturers to start including dedicated keys to control microphone volume.
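
      Until those keys exist, you can approximate one in software. Here’s a minimal sketch on macOS (assuming the osascript command-line tool, which ships with the OS) that illustrates the underlying point: the system tracks speaker and microphone levels separately, and the keyboard volume keys only ever touch the first one.

      # The two levels are independent; the volume keys only affect the speakers.
      osascript -e 'output volume of (get volume settings)'   # speaker level, 0-100
      osascript -e 'input volume of (get volume settings)'    # microphone level, 0-100

      # A DIY “mic key”: bind these to a hotkey to toggle the mic system-wide.
      osascript -e 'set volume input volume 0'    # mic off
      osascript -e 'set volume input volume 75'   # mic back on

      Note that this sets the system input level; some vidconf apps manage their own input gain on top of it, so treat it as a workaround rather than a fix.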

      Video conference keyboard shortcuts you should memorize right now

      Let me know if you have other theories for the You’re Still Muted Problem!

      In the meantime, the best alternative is to learn all of the vidconf app keyboard shortcuts for muting/unmuting:

      • Meet
        • Mac: Command(⌘) + D
        • Windows: Ctrl + D
      • Teams
        • Mac: Command(⌘) + Shift + M
        • Windows: Ctrl + Shift + M
      • Zoom
        • Mac: Command(⌘) + Shift + A
        • Windows: Alt + A
        • Hold Spacebar: Temporarily unmute

      Other vidconf apps not included in my analysis:

      • Cisco Webex Meetings
        • Mac: Ctrl + Alt + M
        • Windows: Ctrl + Shift + M
      • GoToMeeting

      Bonus protip from Jackson Fox: If you use multiple vidconf apps, pick a keyboard shortcut that you like and manually change each app’s mute/unmute shortcut to that. Then you only have to remember one shortcut!




      es

      So You've Written a Bad Design Take

      So you’ve just written a blog post or tweet about why wireframes are becoming obsolete, the dangers of “too accessible” design, or how a certain style of icon creates “cognitive fatigue.”

      Your post went viral, but now you’re getting ratioed by rude people on the Internet. That sucks! You were just trying to start a conversation and you probably didn’t deserve all that negativity (except for you, “too accessible” guy).

      Most likely, you made one of these common mistakes:

      1. You made generalizations about “design”

      You, a good user-centered designer, know that you are not your user. Nor are you every designer.

      First of all, let’s acknowledge that there is no universal definition of design. Even if we narrow it down to software design, it’s still hard to make generalizations. Agency, in-house, product, startup, enterprise, non-profit, website, app, connected hardware, etc. – there are a lot of different work contexts and cultures for people with “designer” in their titles.

      “The Design Industry” is not a thing, but even if it were, you don’t speak for it. Don’t assume that the kind of design work you do is the universal default.

      2. You didn’t share enough context

      There are many great design books and few great design blog posts. (There are, to my knowledge, no great design tweets, but I am open to your suggestions.) Writing about design is not well suited to short formats, because context plays such an important role and there’s always a lot of it to cover.

      Writing about your work should include as much context as you would include if you were presenting your portfolio for a job interview. What kind of organization did you work for? Who was your client and/or your stakeholders? What was the goal of the project? Your timeline? What was the makeup of your team? What were the notable business rules and constraints? How are you defining effectiveness and success?

      Without these kinds of details, it’s not possible for other designers to know if what you’ve written is credible or applicable to them.

      3. You were too certain

      A blog post doesn’t need to be a dissertation. It’s okay to share hunches and anecdotes, but give the necessary caveats. And if you’re making claims about science, bruh, you gotta cite your sources.

      Be humble in your takes. Your account of what worked for you and why is more valuable to your peers than making sweeping claims and reheating the same old arguments. Be prepared to be told you’re wrong, and have the humility to realize that your perspective is just your perspective. Real conversations, like good design, are built on feedback and diverse viewpoints.

      Together, we can improve the discourse in our information ecosystems. Don't generalize. Give context. Be humble.




      es

      Global Gitignore Files Are Cool and So Are You

      Setting it up

      First, here's the config setup you need to even allow for such a radical concept.

      1. Define the global gitignore file as a global Git configuration:

        git config --global core.excludesfile ~/.gitignore
        

        If you’re on macOS, this command will add the following lines to your ~/.gitconfig file:

        [core]
          excludesfile = /Users/triplegirldad/.gitignore
        
      2. Load that ~/.gitignore file up with whatever you want. It probably doesn't exist as a file yet, so you might have to create it first.
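
        For example, something like this gets you started (any Unix-ish shell; the pattern here is just a placeholder):

        touch ~/.gitignore                    # create the file if it doesn't exist yet
        echo "scratch.txt" >> ~/.gitignore    # append whatever pattern you like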

      Harnessing its incredible power

      There are only two lines in my global gitignore file and they are both fairly useful pretty much all the time.

      $ cat ~/.gitignore
      TODO.md
      playground
      

      This two-line file means that no matter where I am, what project I'm working on, or where in the project I'm doing so, I have an easy space to stash notes, thoughts, in-progress ideas, spikes, etc.
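
      If you ever want to confirm that a path really is being caught by the global file rather than a repo-local .gitignore, git check-ignore -v will name the source file and the matching pattern. From inside any repository:

      $ git check-ignore -v TODO.md playground
      /Users/triplegirldad/.gitignore:1:TODO.md	TODO.md
      /Users/triplegirldad/.gitignore:2:playground	playground

      Each line is source:line:pattern followed by the path you asked about, which is handy whenever an ignore rule isn't behaving the way you expect.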

      TODO.md

      More often than not, I'm fiddling around with a TODO.md file. Something about writing markdown in your familiar text editor speaks to my soul. It's quick, it's easy, you have all the text editing tricks available to you, and it never does anything you wouldn't expect (looking at you, auto-markdown-formatting editors). I use one or two # for headings, I use nested lists, and I ask for nothing more. Nothing more than more TODO.md files, that is!

      In practice I tend to just have one TODO.md file per project, right at the top, ready to pull up in a few keystrokes. Which I do often. I pull this doc up if:

      • I'm in a meeting and I just said “oh yeah, that's a small thing, I'll knock it out this afternoon”.
      • I'm halfway through some feature development and realize I want to make a sweeping refactor elsewhere. Toss some thoughts in the doc, and then get back to the task at hand.
      • It's the end of the day and I have to switch my brain into “feed small children” mode, thus obliterating everything work-related from my short-term memory. When I open things up the next day, I know exactly what to dive into next.
      • I'm preparing for a big enough refactor that I can't hold it all in my brain at once. What I'd give for an interactive 3D playground for brain thoughts, but in the meantime a 2D text file isn't a terrible way to plan out dev work.

      playground

      Sometimes you need more than some human words in a markdown file to move an idea along. This is where my playground directory comes in. I can load this directory up with code that's related to a given project and keep it out of the git history. Because who doesn't like a place to play around?

      I find that this directory is more useful on long-running maintenance projects than on fast-moving greenfield ones. On the maintenance projects, I tend to find myself assembling a pile of scripts and experiments for various situations:

      • The client requests a one-time obscure data export. Whip up some CSV generation code and save that code in the playground directory.
      • The client requests a different obscure data export. Pull up the last time you did something vaguely similar and save yourself the startup time.
      • A batch of data needs to be imported just once. Might as well stash that on the chance that “just once” is actually “just a few times”.
      • Kicking the tires on an integration with a third-party service.

      Some of these playground files end up being useful more times than I can count (e.g. the ever-changing user_export.rb script). Some items get promoted into application code, which is always fun. But most files here serve their purpose and then wither away. And that's fine. It's a playground, anything goes.
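
      To make that concrete, here's a purely hypothetical snapshot of such a directory (these file names are invented for illustration):

      $ ls playground
      csv_export_q3.rb    import_skus.rb    user_export.rb
      $ git status --short    # playground files never show up here

      Because the whole directory is globally ignored, none of this ever clutters the repository's history or anyone else's checkout.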

      Wrapping up

      Having a personal space for project-specific notes and code has been helpful to me over the years as a developer on multiple projects. If you have your own organizational trick, or just want to brag about how you memorize everything without any markdown files, let me know in the comments below!





      es

      Best WooCommerce Themes

      Savoy And here comes Savoy, the latest trending WordPress theme for creating interactive online stores. Its simple, elegant, AJAX-powered design delivers the best possible user experience for customers. Built on WooCommerce, Savoy enables you to manage the various options of your online shop from one location. The perfectly […]





      es

      Best Business WordPress Themes

      Kalium Kalium is an excellent WordPress theme that is intended for blogging and portfolio websites. It has plenty of layout design variations, along with an impressive drag and drop content builder. There are many features and elements, each designed to enhance your website and guarantee its success. Dalton A classy and clean theme for businesses […]





      es

      2017 Best Coffee Shop WordPress Themes

      Avada Avada is clean, versatile, and fully responsive! Avada sets the new standard with limitless possibilities, top-notch support, and free updates with newly requested features from our customers. It’s also one of the most easy-to-use themes on the market! Avada is very intuitive to use and completely able […]





      es

      2017 Best WordPress Themes for Boutiques

      Boutique Boutique gives you everything you need to create an amazing online store. Its modern design, different layouts, and limitless possibilities will help you put your products in focus. It’s also fully responsive, so you won’t have to worry about how your customers reach your store (it works great with both desktops and smartphones) Boutique […]





      es

      2017 Best Education WordPress Themes

      Education WP Education WP is the next generation and one of the best education WordPress themes around, containing all the power of eLearning WP but with a better UI/UX. This educational WordPress theme has been developed on top of LearnPress, the #1 LMS plugin in the official WordPress Plugins directory, which offers you a complete […]





      es

      2017 Best Blog WordPress Themes

      Authentic Authentic is a lightweight & minimalist WordPress theme perfect for lifestyle bloggers & magazines. It has so many superb features that will make your blog or magazine stand out among others. Let your visitors enjoy the clutter-free, fresh design of your new website powered by Authentic. Maple Maple is a bold, […]





      es

      designworkplan is hiring a wayfinding graphic designer (immediate start)

      designworkplan is looking for a graphic designer to join our wayfinding studio in Amsterdam, starting immediately




      es

      Visual Identity: ESA Annual Conference




      es

      Jude Graveson, Artist





      es

      How to restart a blog after five years

      This is not the post I had planned for resuming my blog. I had in mind a lengthy article about design and its role in communication at this point in digital evolution. Deep. Thought-provoking. But I know that it’s better to start with ideas that are a little less ambitious in scope. Plus, to tell you […]





      es

      What every business must do (and designers even more so)

      What should all businesses do at least once, and do properly, and (as the title of this blog post suggests) what should designers do repeatedly? The answer: understanding the target market they’re catering to. Sure, that makes sense—but why are graphic designers any different? Why do this repeatedly? When you’re in business, you’re in the […]