Awesome Web APIs for your next web app — with examples.
You probably already know and use the more popular Web APIs available out there (Web Worker, Fetch, etc.), but there are a few other, less popular ones that I personally enjoy using and would recommend you try as well.
All live examples of the Web APIs described in this post can be found here:
All source code of the Web APIs described can be found here:
Tip: Create your own implementation of these APIs using your frontend framework of choice, and share them with your team or the entire open-source community. Use Bit (GitHub) to "harvest" your reusable components from your local repo and share them to a component collection in bit.dev. Make sure you never have to repeat yourself.
Bit supports React, React with TS, React Native, Angular, Vue and many others.
1. Web Audio API
The Web Audio API allows you to manipulate an audio stream on the web. It can be used to add effects and filters to an audio source.
The audio source can be an <audio> element, a video/audio source file, or an audio network stream.
Let’s see a simple example:
This example channels the audio from an <audio> element to an AudioContext. Sound effects (like panning) are added to the audio source before it is channeled to the audio output, the speakers.
The Init button calls the init function when clicked. This creates an AudioContext instance and assigns it to audioContext. Next, it creates a media source with createMediaElementSource(audio), passing the audio element as the audio source.
The volume node volNode is created with createGain; this is where we adjust the volume of the audio. Next, the panning effect is set up using a StereoPannerNode. Finally, the nodes are connected, starting from the media source.
The Play, Pause and Stop buttons play, pause and stop the audio.
We also have volume and panning range sliders; changing these affects the volume and the panning of the audio, as sketched below.
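Here is a minimal sketch of the setup described above, assuming an <audio id="audio"> element and volume/pan sliders with the ids volumeSlider and panSlider (these ids and the slider ranges are assumptions, not taken from the original demo):

```javascript
// Minimal sketch of the Web Audio setup described above (element ids are assumed).
let audioContext, volNode, pannerNode

// Called from the Init button click, so the AudioContext starts from a user gesture.
function init() {
  const audio = document.querySelector("#audio")
  audioContext = new AudioContext()

  // Wrap the <audio> element as a source node.
  const source = audioContext.createMediaElementSource(audio)

  // The gain node controls the volume.
  volNode = audioContext.createGain()
  volNode.gain.value = 1

  // The stereo panner pans the sound left (-1) or right (+1).
  pannerNode = new StereoPannerNode(audioContext, { pan: 0 })

  // source -> volume -> panner -> speakers
  source.connect(volNode).connect(pannerNode).connect(audioContext.destination)
}

// Assumed sliders: volume in [0, 2], pan in [-1, 1].
document.querySelector("#volumeSlider")
  .addEventListener("input", e => { volNode.gain.value = e.target.value })
document.querySelector("#panSlider")
  .addEventListener("input", e => { pannerNode.pan.value = e.target.value })
```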
Try this here:
Check out another example below:
2. Fullscreen API
This API enables fullscreen mode in web apps. It lets you select the element you want to view in fullscreen mode. On Android phones, it also hides the browser window chrome and the Android top status bar (where the network status, battery status, etc. are displayed).
The methods:
requestFullscreen displays the selected element in fullscreen mode, hiding everything else, including the browser and system UI elements.
exitFullscreen exits fullscreen mode and returns to the normal view.
Let’s see a simple example where we can use fullscreen mode to watch a video:
The video element sits inside the div#video-stage element, together with a Toggle Fullscreen button. We want the div#video-stage element to go fullscreen when we click the Toggle Fullscreen button.
See the function toggle:
```javascript
function toggle() {
  const videoStageEl = document.querySelector(".video-stage")

  if (!document.fullscreenElement)
    videoStageEl.requestFullscreen()
  else
    document.exitFullscreen()
}
```
It queries the div#video-stage element and holds its HTMLDivElement instance in videoStageEl. We use the document.fullscreenElement property to check whether the document is not already in fullscreen mode, in which case we call requestFullscreen() on videoStageEl. This makes div#video-stage take over the entire device view.
If we click the Toggle Fullscreen button while in fullscreen mode, exitFullscreen() is called on the document instead, and the UI returns to the normal view.
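As a rough sketch of how this might be wired up (the button id, and the fullscreenchange handler that keeps the label in sync, are assumptions rather than part of the original demo):

```javascript
// Assumed wiring: a button with id "toggleBtn" and the toggle() function above.
const toggleBtn = document.querySelector("#toggleBtn")
toggleBtn.addEventListener("click", toggle)

// Keep the button label in sync when fullscreen is entered or exited,
// e.g. when the user presses Esc instead of clicking the button.
document.addEventListener("fullscreenchange", () => {
  toggleBtn.textContent = document.fullscreenElement
    ? "Exit Fullscreen"
    : "Toggle Fullscreen"
})
```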
Try it here:
3. Web Speech API
This API provides us with the capabilities to add speech synthesis and speech recognition to our web app.
With this API we will be able to issue voice commands to our web apps, the same way we do on Android via its Google Speech or in Windows using Cortana.
Let’s see a simple example. We will see how to implement Text-to-Speech and Speech-to-Text using the Web Speech API.
The first demo, Demo - Text to Speech, demonstrates using this API with a simple input field to receive the input text and a button to execute the speech action.
See the speak function:
```javascript
function speak() {
  const speech = new SpeechSynthesisUtterance()
  speech.text = textToSpeech.value
  speech.volume = 1
  speech.rate = 1
  speech.pitch = 1

  window.speechSynthesis.speak(speech)
}
```
It instantiates a SpeechSynthesisUtterance object and sets the text to speak from the text typed into the input box. Then, calling the speechSynthesis#speak function with the speech object says the text in the input box out loud through our speakers.
The second demo, Demo - Speech to Text, is a voice recognition demo. We tap the Tap and Speak into Mic button and speak into the mic; the words we say are transcribed into text in the textarea.
When clicked, the Tap and Speak into Mic button calls the tapToSpeak function:
```javascript
function tapToSpeak() {
  // Chrome-based browsers still expose this API under the webkit prefix.
  const SpeechRecognition =
    window.SpeechRecognition || window.webkitSpeechRecognition
  const recognition = new SpeechRecognition()

  recognition.onstart = function() { }

  recognition.onresult = function(event) {
    const curr = event.resultIndex
    const transcript = event.results[curr][0].transcript
    speechToText.value = transcript
  }

  recognition.onerror = function(ev) {
    console.error(ev)
  }

  recognition.start()
}
```
Quite simply, SpeechRecognition is instantiated, then event handlers and callbacks are registered. onstart is called at the start of the voice recognition, onerror is called when an error occurs, and onresult is called whenever the voice recognition captures a result.
In the onresult callback, we extract the transcript and set it into the textarea. So when we speak into the mic, the words appear inside the textarea 😮
Try it out here:
4. Bluetooth API
Note: This is an experimental technology.
This API lets us access the Bluetooth Low Energy device on our phone and use it to share data from a webpage to another device.
Imagine being able to create a web chat app that can send and receive messages from other phones via Bluetooth.
The possibilities are endless.
The basic API is navigator.bluetooth.requestDevice. Calling it will make the browser prompt the user with a device chooser, where they can pick one device or cancel the request.
navigator.bluetooth.requestDevice takes a mandatory Object. This Object defines filters that are used to return only the Bluetooth devices matching the filters.
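For example, a call shaped like this (the battery service filter is only an illustrative assumption) would restrict the chooser to matching devices:

```javascript
// Only list devices advertising the standard battery service
// (the particular service here is just an illustration).
navigator.bluetooth.requestDevice({
  filters: [{ services: ["battery_service"] }]
})
  .then(device => console.log(device.name))
  .catch(err => console.error(err))
```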
Let’s see a simple demo. It will use the navigator.bluetooth.requestDevice API to retrieve basic device information from a BLE device.
The device’s information is then displayed. The Get BLE Device button calls the bluetoothAction function when clicked.
```javascript
function bluetoothAction() {
  navigator.bluetooth.requestDevice({
    acceptAllDevices: true
  }).then(device => {
    dname.innerHTML = device.name
    did.innerHTML = device.id
    dconnected.innerHTML = device.connected
  }).catch(err => {
    console.error("Oh my!! Something went wrong.")
  })
}
```
The bluetoothAction function calls the navigator.bluetooth.requestDevice API with the option acceptAllDevices: true, which makes it scan for and list all nearby Bluetooth-active devices. The call returns a promise, which we resolve to get a device param in the callback function; this device param holds the information of the chosen Bluetooth device, and we use its properties to display that information.
Try it here:
5. Channel Messaging API
This API allows two scripts in different browser contexts to communicate and pass messages to each other in a two-way channel.
The different browser contexts can be two scripts running in different tabs, two iframes on a page, the document and an iframe on the same page, etc.
It begins with creating a MessageChannel instance:
new MessageChannel()
This returns a MessageChannel object with two ports.
Then, each browser context can set up its port using MessageChannel.port1 or MessageChannel.port2.
The context that instantiated the MessageChannel will use MessageChannel.port1, while the other context will use MessageChannel.port2.
Then, messages can be sent using the postMessage API.
Each browser context listens for messages using the MessagePort.onmessage event handler.
Let’s see a simple example, where we can use MessageChannel to send text between a document and an iframe.
Notice the iframe tag. We load an iframe.content.html file into it. The button and text input are where we type and send a message to the iframe.
```html
<script>
  const channel = new MessageChannel()
  const port1 = channel.port1

  iframe.addEventListener("load", onLoad)

  function onLoad() {
    port1.onmessage = onMessage
    // Transfer port2 to the iframe so both contexts share the channel.
    iframe.contentWindow.postMessage("load", "*", [channel.port2])
  }

  function onMessage(e) {
    const newHTML = "<div>" + e.data + "</div>"
    displayMsg.innerHTML = displayMsg.innerHTML + newHTML
  }

  function sendMsg() {
    // port2 has already been transferred to the iframe, so we talk through port1.
    port1.postMessage(input.value)
  }
</script>
```
We initialize the MessageChannel and port1. We add a load event listener to the iframe; in its handler we register the onmessage listener on port1, then send a message to the iframe using the postMessage API. Notice that the channel's port2 is transferred down to the iframe.
Let’s look at the iframe’s iframe.content.html:
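The original snippet for iframe.content.html isn't included here, so below is a minimal sketch of what it might look like, based on the description that follows (the display element id and the replyToParent helper are assumptions):

```javascript
// Sketch of the script inside iframe.content.html (assumed, not the original code).
let port2

window.addEventListener("message", e => {
  // The parent transfers channel.port2 in the ports list of its first message.
  port2 = e.ports[0]
  port2.onmessage = ev => {
    // Display messages coming from the parent document (assumed element id).
    document.querySelector("#iframeDisplay").innerHTML += "<div>" + ev.data + "</div>"
  }
})

// Assumed helper for sending a message back to the parent document.
function replyToParent(text) {
  if (port2) port2.postMessage(text)
}
```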
Here, we register a message event handler. We retrieve port2 from the event and set an onmessage event handler on it. Now, we can receive messages from and send messages back to the parent document.
Try it here:
6. Vibration API
This API makes the device shake or vibrate as a means of notification or physical feedback for new data or info we should respond to.
The method that does this is navigator.vibrate(pattern).
The pattern is a single number or an array of numbers that describe the vibration pattern.
navigator.vibrate(200)
navigator.vibrate([200])
This will make the device vibrate for 200ms and stop.
navigator.vibrate([200, 300, 400])
This will make the device vibrate for 200ms, pause for 300ms, then vibrate for 400ms and stop.
Vibration can be cancelled by passing 0, an empty array [], or an array full of zeroes like [0,0,0].
Let’s see a simple demo:
We have an input box and a button. Enter the duration of the vibration in the input box and press the button.
Your device will vibrate for the amount of time entered.
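A minimal sketch of that demo might look like this (the element ids are assumptions):

```javascript
// Assumed markup: <input id="duration" type="number"> and <button id="vibrateBtn">.
document.querySelector("#vibrateBtn").addEventListener("click", () => {
  const ms = Number(document.querySelector("#duration").value) || 0
  // Vibrate for the requested number of milliseconds
  // (a no-op on devices/browsers without vibration support).
  navigator.vibrate(ms)
})
```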
Try it live here:
7. Broadcast Channel API
This API allows the communication of messages or data between different browsing contexts on the same origin.
The browsing contexts are windows, tabs, iframes, workers, etc.
The BroadcastChannel class is used to create or join a channel.
const politicsChannel = new BroadcastChannel("politics")
politics will be the name of the channel. Any context that initializes the BroadcastChannel constructor with politics will join the politics channel: it will receive any message sent on the channel and can send messages into the channel.
If it is the first to call the BroadcastChannel constructor with politics, the channel will be created.
To post to a channel, use the BroadcastChannel.postMessage API.
To subscribe to a channel (to listen for messages), use the BroadcastChannel.onmessage event.
To demonstrate the usage of Broadcast Channel, I built a simple chat app:
I began by setting up the UI view. It’s a simple text input and button. Type in your message and press the button to send it.
In the scripts section, I initialized the politicsChannel and set an onmessage event listener on the politicsChannel, so it receives and displays the messages.
The sendMsg function is called by the button. It sends the message to the politicsChannel via the BroadcastChannel#postMessage API. Any tab, iframe or worker that initializes this same script will receive the messages sent from here, and this page will likewise receive the messages sent from those other contexts.
Try it here:
8. Payment Request API
This API provides a way of selecting a payment method for goods and services and passing it on to a payment gateway.
It gives users a consistent way to provide payment details to different merchants without having to input the details over and over again.
It provides information like the billing address, shipping address, card details, etc. to the merchant.
Note: This API doesn’t bring a new payment method to the table. It provides the user's payment details.
Let’s see a demo on how we can use the Payment Request API to accept credit card payment.
networks, types, and supportedTypes all describe the method of payment; details lists our purchases and the total cost.
Now, we instantiate PaymentRequest, with the supportedTypes and details passed to the PaymentRequest constructor.
paymentRequest.show() will display the browser's payment UI. Then, we handle the data that the user provided when the Promise resolves.
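The original snippet isn't included above; a minimal sketch of this flow, using the basic-card method to match the networks/types description (note that basic-card support has since been removed from most browsers) and made-up item details, might look like this:

```javascript
// Sketch only: the method data and purchase details below are illustrative.
const networks = ["visa", "mastercard"]
const types = ["debit", "credit"]

const supportedPaymentMethods = [{
  supportedMethods: "basic-card",
  data: { supportedNetworks: networks, supportedTypes: types }
}]

const details = {
  displayItems: [
    { label: "T-shirt", amount: { currency: "USD", value: "20.00" } }
  ],
  total: { label: "Total", amount: { currency: "USD", value: "20.00" } }
}

const paymentRequest = new PaymentRequest(supportedPaymentMethods, details)

// show() opens the browser payment UI and resolves with the user's details.
paymentRequest.show()
  .then(paymentResponse => {
    console.log(paymentResponse.details)
    return paymentResponse.complete("success")
  })
  .catch(err => console.error(err))
```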
There are many configuration options for the Payment Request API; the above example at least shows how it is used and how it works.
Try out a live demo here:
9. Resize Observer API
This API provides a way for an observer to be notified when an element it is registered on is resized in any way.
The ResizeObserver constructor is given a callback that will be called on every resize.
```javascript
const resizeObserver = new ResizeObserver(entries => {
  for (const entry of entries) {
    if (entry.contentBoxSize)
      console.log("element re-sized")
  }
})

resizeObserver.observe(document.querySelector("div"))
```
Whenever the observed div is resized, "element re-sized" is printed to the console.
Let’s see an example of how to use the Resize Observer API:
We have range sliders here. Sliding them changes the height and width of the div#resizeBox. We registered a ResizeObserver on the div#resizeBox, with a callback that displays a message indicating that the box has been resized, along with the current values of its height and width.
Try sliding the range sliders: you will see the div#resizeBox change in width and height, and the info displayed in the div#stat box.
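A minimal sketch of that demo might look like this (the element ids are assumptions):

```javascript
// Sketch of the demo described above (element ids are assumed).
const resizeBox = document.querySelector("#resizeBox")
const stat = document.querySelector("#stat")

// Report the box's size whenever it changes.
const resizeObserver = new ResizeObserver(entries => {
  for (const entry of entries) {
    const { width, height } = entry.contentRect
    stat.textContent = `Box re-sized. Width: ${width}px, Height: ${height}px`
  }
})
resizeObserver.observe(resizeBox)

// The sliders drive the box size; the observer reports every change.
document.querySelector("#widthSlider").addEventListener("input", e => {
  resizeBox.style.width = e.target.value + "px"
})
document.querySelector("#heightSlider").addEventListener("input", e => {
  resizeBox.style.height = e.target.value + "px"
})
```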
Try it live here:
10. Pointer Lock API
This API gives unrestricted access to mouse input: coordinates, actions and movement, without the cursor being confined to the window.
It is ideal for games, 3D modeling, etc.
The APIs are:
requestPointerLock: This method removes the mouse cursor from the browser and keeps sending events of the mouse state. This persists until exitPointerLock is called.
exitPointerLock: This API releases the mouse pointer lock and restores the mouse cursor.
Let’s see an example:
We have a div#box with a div#ball inside it.
We set up a click event on the div#box, so that clicking it calls requestPointerLock(), which makes the cursor disappear.
The document has a pointerlockchange event listener. This event is emitted when the pointer lock state changes. Inside its callback, we attach a mousemove handler, whose callback fires when the mouse is moved on the current browser tab. In that callback, the current mouse position is available on the e argument, so we use it to get the current X and Y position of the mouse. With this info, we set the top and left style properties of the div#ball, so when we move our mouse around we see a dancing ball.
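A minimal sketch of that demo might look like this (the element ids, the absolutely positioned ball, and the use of movementX/movementY are assumptions about the original demo):

```javascript
// Sketch of the pointer lock demo (element ids are assumed; #ball is absolutely positioned).
const box = document.querySelector("#box")
const ball = document.querySelector("#ball")
let ballX = 0
let ballY = 0

// Clicking the box locks the pointer and hides the cursor.
box.addEventListener("click", () => box.requestPointerLock())

// Attach/detach the mousemove handler as the lock state changes.
document.addEventListener("pointerlockchange", () => {
  if (document.pointerLockElement === box) {
    document.addEventListener("mousemove", moveBall)
  } else {
    document.removeEventListener("mousemove", moveBall)
  }
})

function moveBall(e) {
  // movementX/movementY give the mouse delta since the last event.
  ballX += e.movementX
  ballY += e.movementY
  ball.style.left = ballX + "px"
  ball.style.top = ballY + "px"
}
```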
Try it here:
Conclusion
The Web is getting more sophisticated day by day. More native features are being brought on board, because the number of web users is far greater than the number of native app users. The experience users get in native apps is being brought to the web so they can stay there, without the need to go back to native apps.
If you have any questions regarding this or anything I should add, correct or remove, feel free to comment, email or DM me.
Thanks !!!