Why is mastering vanilla JS important before learning any JS framework?

1) If you don't know the underlying principles of the web, you'll eventually hit a wall, thanks to the evolution of the language itself and the constant arrival of new frameworks.

2) Knowing pure JS will make you a key engineer who can solve complex problems, because you can reason about them instead of frantically searching for ready-made answers.

3) It'll make you versatile and productive, both on the front end and the back end.

4) It'll give you the toolset to innovate, not just execute.

5) It'll help you judge when to use a framework and when to do without one.

6) It'll give you a better general understanding of how browsers and computers work.

http://vanilla-js.com/

'use strict' mode features in JavaScript

List of features (non-exhaustive) – a short demo follows the list

  1. Disallows implicit global variables (catches missing var declarations and typos in variable names).
  2. Assignments that would otherwise fail silently throw an error (e.g. NaN = 5; throws a TypeError).
  3. Attempts to delete undeletable properties throw an error (e.g. delete Object.prototype).
  4. Requires all property names in an object literal to be unique (var x = {x1: "1", x1: "2"} is a SyntaxError; ES2015 later lifted this restriction).
  5. Function parameter names must be unique (function sum (x, x) {...} is a SyntaxError).
  6. Forbids octal literal syntax (var x = 023; some developers wrongly assume the leading zero does nothing, when it actually makes the number octal).
  7. Forbids the with keyword.
  8. eval in strict mode does not introduce new variables into the surrounding scope.
  9. Forbids deleting plain names (delete x;).
  10. Forbids binding or assigning to the names eval and arguments in any form.
  11. Strict mode does not alias properties of the arguments object with the formal parameters. In non-strict mode, function sum (a, b) { return arguments[0] + b; } works because arguments[0] is bound to a; in strict mode, arguments keeps the values from the original call instead of tracking the parameters.
  12. arguments.callee is not supported.
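
For example, a minimal sketch of rules 1 and 3 in action (runnable in a browser console or in Node):

'use strict';

// Rule 1: assigning to an undeclared name throws instead of
// silently creating a global variable.
try {
  misspelledVariable = 42;
} catch (e) {
  console.log(e instanceof ReferenceError); // true
}

// Rule 3: deleting an undeletable property throws instead of failing silently.
try {
  delete Object.prototype;
} catch (e) {
  console.log(e instanceof TypeError); // true
}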

Image previews using base64 – JavaScript

var fileUpload = document.querySelector('input[type=file]');
var preview    = document.querySelector('.uploaded_file'); // the <img> tag

fileUpload.addEventListener('change', function () {
  var file   = fileUpload.files[0];
  var reader = new FileReader();
  reader.onload = function () {
    preview.src = reader.result; // set the base64 data URL as the <img> src
  };
  reader.readAsDataURL(file);
});

// Be aware of browser support (https://developer.mozilla.org/en/docs/Web/API/FileReader)
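
A simple feature check is enough to keep older browsers from throwing (a minimal sketch):

if (window.FileReader) {
  // FileReader is available; safe to build the preview as above
} else {
  console.warn('FileReader not supported in this browser; skipping image preview');
}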

AWS S3 – Pre-signed URL uploads

RUBY (create a pre-signed & public URL on the server side)

s3 = Aws::S3::Resource.new
obj = s3.bucket('bucket-name').object('files/hello.text')

put_url = obj.presigned_url(:put, acl: 'public-read', expires_in: 3600 * 24, content_type: 'multipart/form-data')
#=> "https://bucket-name.s3.amazonaws.com/filesp/hello.text?X-Amz-..."

obj.public_url
#=> "https://bucket-name.s3.amazonaws.com/files/hello.text"

You can fetch the pre-signed URL from the server side either directly when the page loads or via AJAX. The preferred method is to make an AJAX call to the server with the filename as a parameter, and have the server generate a pre-signed URL from that filename plus a timestamp.
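
As a minimal sketch, that round-trip could look like the following (the /presigned_url endpoint, its response fields and the uploadToS3 helper are illustrative assumptions, not part of any AWS API):

$.ajax({
  url: '/presigned_url',          // hypothetical server endpoint
  type: 'GET',
  data: { filename: file.name },  // the server combines this with a timestamp
  success: function (response) {
    // assumed response fields: put_url (to upload to) and public_url (to read back)
    uploadToS3(response.put_url, file); // a wrapper around the PUT call shown below
  }
});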

JAVASCRIPT (trigger a PUT call to AWS S3 using the pre-signed URL)

$.ajax({
    url: url, // pre-signed URL received from the server side
    type: 'PUT',
    data: file,
    processData: false, // don't let jQuery serialize the File object
    contentType: 'multipart/form-data', // must match the content_type used when signing the URL
    success: function () {
        console.log('Upload succeeded. The object (image/pdf/text/whatever it is) can now be downloaded via the public URL received from the server side.');
    },
    error: function () {
        console.log('Something went wrong');
    }
});

ADD CORS ON AWS S3 (do this only for a particular bucket, avoid doing it globally)

    
 <CORSConfiguration>
 <CORSRule>
 <AllowedOrigin>*.mydomain.com</AllowedOrigin>
 <AllowedMethod>PUT</AllowedMethod>
 <MaxAgeSeconds>3000</MaxAgeSeconds>
 <AllowedHeader>Content-*</AllowedHeader>
 <AllowedHeader>Authorization</AllowedHeader>
</CORSRule>
</CORSConfiguration>
    

SEO – Do’s and Don’ts

Do’s

  • Provide unique content for your users. Writing content just for the sake of populating a web page is a poor strategy; make sure your content offers a fresh perspective on the subject matter.
  • Perform keyword research to find the exact words people use when searching for your topic. Every industry has a set of common keywords, and it is important that your content is found when users search with them. Use the main keywords in the title, body, image tags and meta description.
  • Keep your whole website accessible to users. Design a good layout, preferably hierarchical, provide a search box for efficient site navigation and, most importantly, keep the homepage simple and easy to reach.
  • Label or tag all the media you embed on your website. This helps search engines interpret non-text data as valid content, and it looks more professional and appealing to users.
  • Build internal links to connect the best pieces of content on your website. Also try to earn back-links from reputable, relevant websites in your industry; these validate your content's credibility and increase its visibility across the web.
  • Include navigation aids such as breadcrumbs and previous/next buttons; they improve usability and make the content search-engine friendly. Use canonical URLs, as these tell search engines which version of a page is authoritative.
  • Earn social credit through social media distribution. Add links to Facebook, Twitter, Google+, LinkedIn etc., provide a share option at the bottom of each page so users can spread your content, and watermark your content with the company's branding to protect it from copyright misuse.

Don’ts

  • Don't spend money on bots to artificially share your content. Search engines identify them easily and may impose penalties that hurt your rankings.
  • Don't duplicate existing content. There are several tools for detecting plagiarized content, and it can adversely impact your search engine ratings.
  • Don't link to every single page from your homepage; this overloads the page, slows it down and can also hurt your SEO ratings.
  • Don't put irrelevant links on your website just for the sake of it. Use links that are genuinely related to your content.
  • Don't do keyword stuffing. Google is smart enough to recognize it and may penalize your site for it.
  • Don't use misleading titles and tags. Keep them as direct as possible to improve the site's optimization.
  • Don't overcrowd your website with back-links, and don't get links from pages that have too many external links; too many external links signal low page quality.

 

[Infographic: SEO do's and don'ts best practices]

How Do Search Engines Work?

Search engines perform several activities in order to deliver search results; a toy code sketch of this pipeline follows the list.

  • Crawling – The process of fetching all the web pages linked to a website. This task is performed by software called a crawler or a spider (Googlebot, in Google's case).
  • Indexing – The process of creating an index for all the fetched web pages and storing them in a giant database from which they can later be retrieved. Essentially, indexing means identifying the words and expressions that best describe a page and assigning the page to particular keywords.
  • Processing – When a search request comes in, the search engine processes it, i.e. it compares the search string in the request with the indexed pages in the database.
  • Calculating Relevancy – More than one page will usually contain the search string, so the search engine calculates the relevancy of each page in its index to that string.
  • Retrieving Results – The last step is retrieving the best-matched results and displaying them in the browser.
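
To make the indexing and relevancy steps concrete, here is a toy sketch in JavaScript; real search engines are vastly more sophisticated, but the shape of the pipeline is the same:

// Indexing: build an inverted index that maps each word to the pages containing it.
var pages = {
  page1: 'vanilla js makes you versatile',
  page2: 'js frameworks come and go',
  page3: 'learn vanilla js before frameworks'
};
var index = {};
Object.keys(pages).forEach(function (url) {
  pages[url].split(/\s+/).forEach(function (word) {
    (index[word] = index[word] || []).push(url);
  });
});

// Processing + calculating relevancy: score each page by how many query words it contains.
function search(query) {
  var scores = {};
  query.split(/\s+/).forEach(function (word) {
    (index[word] || []).forEach(function (url) {
      scores[url] = (scores[url] || 0) + 1;
    });
  });
  // Retrieving results: pages matching more query words come first.
  return Object.keys(scores).sort(function (a, b) {
    return scores[b] - scores[a];
  });
}

console.log(search('vanilla js')); // pages containing both words rank first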

Search engines such as Google and Yahoo! often update their relevancy algorithms dozens of times per month. When you see changes in your rankings, it is usually due to an algorithmic shift or something else outside your control.

Although the basic principle of operation of all search engines is the same, the minor differences between their relevancy algorithms lead to major changes in results relevancy.