How to: run a Nexus 4 without a battery

The Nexus 4 is not bad, except that it is too old for new applications and system updates stopped long ago. This old fellow had been retired for years when I started hosting some services on it, with the charger always connected. It is nice for those jobs, much better than a Raspberry Pi, but then I found the fatal problem: the battery was swelling. So technically speaking, a device powered by a battery is never a good choice for a server.

I took the Nexus 4 offline for a while and did some research on the web, but it seems nobody cares about the Nexus 4 as much as I do. Luckily, I found a sort of solution for building a battery-free Nexus 7 here and some details of the Nexus 4 battery here, and that gave me a thought: the two devices are from the same era, so I could try porting the Nexus 7 battery-free solution to my Nexus 4.

First I needed a Li-ion battery interface PCB. I don't know the details, but the four pins are V-, Temp, Health and V+.

The temperature reading comes from the voltage across a thermistor (an NTC). I don't know exactly which one it is, but with the interface PCB everything needed is already in place; it does not have to be buried inside the Li-ion cell. From my research, on a typical 3-pin battery one pin is for temperature sampling, so if you don't have an interface PCB you can try soldering an ordinary resistor between that pin and GND (V-). We don't care about the real temperature; we just have to provide a plausible reading so the software doesn't shut the device down.

The second step is to power the device from the fake battery. Simple and straightforward, and it is ready for its rebirth.

 

Finally, some mechanical modification lets you put the rear cover back on if you want. Just cut a small hole in the left side and smooth it with a fine square file; the USB cable fits the hole perfectly.

Job done.

After a few days of test running, I found only one thing that doesn't satisfy me. The battery level drops quickly after the first boot and then stays at 1-2% capacity, even across scheduled reboots. My guess: the system thinks it is never charging and may be estimating remaining capacity from the current draw rather than the voltage. The battery voltage reading is stable at 4.5 ± 0.3 V and the current is about 200 mA, so theoretically a 2100 mAh battery would drain in about 10 hours. I can think of two possible fixes: plug another charger into the micro USB port so that it looks like the device is charging, or patch the battery calibration with a script so that the reported capacity never runs out. I don't know whether either of them would work because I haven't tried them yet; the system looks good to me as it is. Maybe I will spend some time on the mako battery driver (PM8921) to find the real cause and a software or hardware workaround.

Here is a dump of the battery status:

 

root@mako:/sys/class/power_supply/battery # cat capacity
2
root@mako:/sys/class/power_supply/battery # cat energy_full
2103000000
root@mako:/sys/class/power_supply/battery # ll energy_full
-r--r--r-- 1 root root 4096 Mar 27 03:07 energy_full
root@mako:/sys/class/power_supply/battery # cat status
Discharging
root@mako:/sys/class/power_supply/battery # cat type
Battery
root@mako:/sys/class/power_supply/battery # cat voltage_now
4508196
127|root@mako:/sys/class/power_supply/battery # cat current_now
261100
root@mako:/sys/class/power_supply/battery # cat technology
Li-ion
root@mako:/sys/class/power_supply/battery # cat present
1
root@mako:/sys/class/power_supply/battery # cat temp
251
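
If you want to keep an eye on the fake battery over time, the same sysfs nodes can be polled from a root shell and appended to a log. This is only a minimal sketch: the node names are the ones from the dump above, the log path is just an example, and note that sysfs reports temp in tenths of a degree Celsius, so the 251 above reads as 25.1 °C.

# log the battery readings every 10 minutes (run from a root shell)
while true; do
  date
  for f in status capacity voltage_now current_now temp; do
    echo "$f: $(cat /sys/class/power_supply/battery/$f)"
  done
  echo
  sleep 600
done >> /data/local/tmp/battery.log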


A way to fix DD-WRT not running on some Linksys EA2700 routers

I have a Linksys EA2700 router. The original firmware is very limited, and I would like to use this router as a wireless repeater and a gigabit switch for the computers near it, so of course DD-WRT is the best choice. The problem is, although I am a long-time DD-WRT user, no matter how I tried to flash DD-WRT, testing many different versions, I could not get it to run on my EA2700: after several reboots it always returned to the original stock firmware.

I wired out the serial port pins to figure out what was happening. After updating the firmware from the stock firmware's web UI, the boot log showed something like this:

List of all partitions:
1f00        512 mtdblock0 (driver?)
1f01       1536 mtdblock1 (driver?)
1f02      18432 mtdblock2 (driver?)
1f03    4175864 mtdblock3 (driver?)
1f04      18432 mtdblock4 (driver?)
1f05      16524 mtdblock5 (driver?)
1f06      25600 mtdblock6 (driver?)
No filesystem could mount root, tried:  squashfs ntfs fuseblk
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(31,5)

and also entries like this:

nand_read_bbt: Bad block at 0x00c84000

So it seems the NAND has some bad blocks that cannot be read during boot. The web UI said the update succeeded, but I doubted it. Then I tried to update the firmware the CFE way over tftp; before the tftp upload finished, CFE reported:

 I/O error
*** command status = -4

This error was repeatable, so I think that part of the NAND is damaged. The strange thing is, I could upload the Linksys stock firmware successfully every time!

The stock firmware is about 13 MB while the DD-WRT image is about 17 MB. So I figured out a way to modify the DD-WRT firmware: remove some unnecessary 'big' modules and resources and repack the firmware at about 12 MB. Uploaded over tftp, it works for me!

cat /proc/partitions

major minor  #blocks  name
  31        0      30720 mtdblock0
  31        1      31744 mtdblock1
  31        2        512 mtdblock2
  31        3       1536 mtdblock3
  31        4      30720 mtdblock4
  31        5      29518 mtdblock5
cat /proc/mtd
dev:    size   erasesize  name
mtd0: 01e00000 00004000   namelinux;
mtd1: 01f00000 00004000   nameddwrt;
mtd2: 00080000 00004000   namecfeot;
mtd3: 00180000 00004000   namenvram;
mtd4: 01e00000 00004000   namenandimage
mtd5: 01cd3800 00004000   namerootfsage

I know that mtd table looks strange, but most functions work as expected.

Here is how to modify a DD-WRT firmware image. I hope this helps.


How to Unpack/MOD/Repack A DD-WRT trx Firmware

DD-WRT firmware is packed as a '.trx' file. The purpose of this document is to show you how to mod a DD-WRT package without recompiling it from source. All the tools you need are an Ubuntu VM in VirtualBox, the DD-WRT svn source code, and a hex editor, HxD.

A modded firmware may brick your device, so make sure you know what you are doing and do enough googling before applying any changes to the router. Do this at your own risk.

The obvious way to modify the package is to download and modify the source code and build scripts, recompile from source, and repack the firmware. But it seems to me that the DD-WRT maintainers are not very keen on sharing their build system; the best resource I could find is Compiling DD-WRT. It was a miserable process for me. I didn't know which modules needed to be built for the EA2700, and endless errors showed up while I was reading and modifying the configuration, guessing, and rebuilding. I spent a whole weekend failing without even getting through the configuration step, so I gave up on that route.

I also found a tool called Firmware Mod Kit, but it seems to be unmaintained; at least it did not work for me.

During the build attempts I learned that for the EA2700 the main build script is src/router/Makefile.brcm3x. I figured the packing step must be in this script, so if I wanted to unpack the firmware I should be able to find clues there about how the package is generated. Here is a snippet from Makefile.brcm3x:

gzip -c9 mipsel-uclibc/lzma_vmlinus > mipsel-uclibc/lzma_vmlinuz
../../opt/tools/trx -m 32000000 -o $(ARCH)-uclibc/dd-wrt.v24-K3-nandboot.trx $(ARCH)-uclibc/lzma_vmlinuz  -a 1024 $(ARCH)-uclibc/rootfs.squashfs

According to the TRX header format documented by OpenWrt:

  0                   1                   2                   3   
  0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 
 +---------------------------------------------------------------+
 |                     magic number ('HDR0')                     |
 +---------------------------------------------------------------+
 |                  length (header size + data)                  |
 +---------------+---------------+-------------------------------+
 |                       32-bit CRC value                        |
 +---------------+---------------+-------------------------------+
 |           TRX flags           |          TRX version          |
 +-------------------------------+-------------------------------+
 |                      Partition offset[0]                      |
 +---------------------------------------------------------------+
 |                      Partition offset[1]                      |
 +---------------------------------------------------------------+
 |                      Partition offset[2]                      |
 +---------------------------------------------------------------|
  • offset[0] = lzma-loader
  • offset[1] = Linux-Kernel
  • offset[2] = rootfs

So the trx tool packs two files, lzma_vmlinuz and rootfs.squashfs. I also verified this by reading the source code of trx.c. lzma_vmlinuz should be a gzip-compressed Linux kernel and rootfs.squashfs should be the read-only root filesystem, which is what I am interested in.

Opening a prebuilt firmware in HxD, we can see the trx header very clearly:

[image: the trx header viewed in HxD]

The first part, vmlinuz, is at offset 0x0000001c; the second part, the rootfs, is at offset 0x0012c800 (the offsets in the header are little-endian).
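
If you would rather not read the offsets out of the hex editor by eye, the header words can also be dumped on Linux. A small sketch, assuming the firmware file is saved as dd-wrt.trx: od prints the first 28 bytes as 32-bit words, and on a little-endian PC each word is already byte-swapped for you, so the partition offsets appear exactly as quoted above.

# dump the TRX header: magic, length, crc32, flags/version,
# then offset[0] (the gzipped kernel here) and offset[1] (the rootfs)
od -A x -t x4 -N 28 dd-wrt.trx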

If we only wanted to extract the rootfs we could ignore the first part, but since we need to repack the firmware after modification and flash it to the router without rebuilding from source, we need a standalone lzma_vmlinuz. As we saw in the Makefile snippet above, lzma_vmlinuz is gzip-compressed, so we need a little knowledge of the gzip format.

 2.3. Member format

      Each member has the following structure:

         +---+---+---+---+---+---+---+---+---+---+
         |ID1|ID2|CM |FLG|     MTIME     |XFL|OS | (more-->)
         +---+---+---+---+---+---+---+---+---+---+

      (if FLG.FEXTRA set)

         +---+---+=================================+
         | XLEN  |...XLEN bytes of "extra field"...| (more-->)
         +---+---+=================================+

      (if FLG.FNAME set)

         +=========================================+
         |...original file name, zero-terminated...| (more-->)
         +=========================================+

      (if FLG.FCOMMENT set)

         +===================================+
         |...file comment, zero-terminated...| (more-->)
         +===================================+

      (if FLG.FHCRC set)

         +---+---+
         | CRC16 |
         +---+---+

         +=======================+
         |...compressed blocks...| (more-->)
         +=======================+

           0   1   2   3   4   5   6   7
         +---+---+---+---+---+---+---+---+
         |     CRC32     |     ISIZE     |
         +---+---+---+---+---+---+---+---+
[image: the gzip header viewed in HxD]

The gzip header is very clear; the original file name is lzma_vmlinus. The problem is that we don't know where the member ends. So we jump to offset 0x0012c800, where the second file, the rootfs, starts:

[image: offset 0x0012c800, where lzma_vmlinuz ends and the rootfs begins]

0x0012c800 is where the rootfs begins. The trx tool pads each part with zeros, according to the -a 1024 option, so we cannot tell exactly where vmlinuz ends, but we can guess. According to RFC 1952, a gzip member ends with a 32-bit CRC32 followed by a 32-bit ISIZE, the original file size, so the last four meaningful bytes give the uncompressed size. 0x12cd48bc is far too big for a file size, and 0x000012cd is far too small, but 0x0012cd48 = 1,232,200 sounds reasonable. So we copy the bytes from offset 0x1c to 0x12c739, open a new tab in HxD, save them to a new file named 'vmlinuz', and decompress it with 7-Zip on Windows or gzip on Linux to check whether the guess is correct. In this way we get lzma_vmlinuz extracted and verified. We don't really need the decompressed output, because we are not going to modify the kernel.
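
For those who prefer the command line over copying bytes in HxD, the same split can be done with dd. This is just a sketch using the offsets from my particular image (header ends at 0x1c, rootfs starts at 0x0012c800); it keeps the zero padding at the end of the kernel part, so gunzip will warn about trailing garbage but still confirms that the gzip stream itself is intact.

# cut out the gzipped kernel: from the end of the header up to the rootfs offset
dd if=dd-wrt.trx of=vmlinuz bs=1 skip=$((0x1c)) count=$((0x0012c800 - 0x1c))
# cut out the rootfs: from the rootfs offset to the end of the file
dd if=dd-wrt.trx of=rootfs.squashfs bs=1 skip=$((0x0012c800))
# verify that the kernel part decompresses (a "trailing garbage" warning is expected)
gunzip -c < vmlinuz > /dev/null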

From 0x0012c800 to the end of the file, save the bytes as another file, 'rootfs.squashfs'. We can treat this as the rootfs, but how do we extract it and apply a mod? Here is another snippet from Makefile.brcm3x:

$(LINUXDIR)/scripts/squashfs/mksquashfs-lzma $(ARCH)-uclibc/target $(ARCH)-uclibc/rootfs.squashfs -noappend -root-owned -le

So rootfs.squashfs is an LZMA-compressed SquashFS file system. How do we extract it? I did not read much of the SquashFS spec; instead I found the source code for mksquashfs-lzma/unsquashfs-lzma. Just build the tools on Ubuntu by typing 'make' and you get both mksquashfs-lzma and unsquashfs-lzma. That is much easier than building DD-WRT. Now we have tools for packing and unpacking the squashfs image.

./unsquashfs-lzma rootfs.squashfs

The result is a directory tree (I don't remember the default output name; see the tool's help) containing the run-time, read-only Linux rootfs:

[image: the extracted rootfs directory tree]

I was quite excited by what I saw. The unpacked size is more than 49 MB. Now we can start the mod. Since my goal was to reduce the size of the whole package, I located the big files and their related resources that I was sure were unnecessary and removed them from the directory tree. You could also add new tools, modify init scripts, modify services, make symlinks, whatever you need, I think.
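
To find the biggest candidates for removal, something like the following is handy (run inside the extracted tree; this is just a convenience, not part of the original workflow):

# list the largest files and directories in the extracted rootfs, biggest last
du -ah . | sort -h | tail -n 30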

After modification, the last step is to repack it. Things are straightforward:

# pack the modified rootfs directory back into a squashfs image
# (first argument is the extracted directory, second is the output image)
sudo ./mksquashfs-lzma <extracted-rootfs-dir> rootfs.squashfs -noappend -root-owned -le
# pack the firmware. if you don't have the trx tool, just build it yourself
sudo ./trx -m 32000000 -o new.trx vmlinuz -a 1024 rootfs.squashfs
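
Before flashing, it is worth a quick sanity check that the repacked image really came out smaller than the stock Linksys firmware (about 13 MB on my unit), since staying under that size was the whole point; the file names below are just the ones used in this example:

ls -lh vmlinuz rootfs.squashfs new.trx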

Then you can try uploading the new firmware using the web GUI, or via serial recovery if you already have the serial pins wired out. There is a web page with all the resources needed to flash DD-WRT to the router over the serial port, but it may be out of date; what I actually used was:

flash -noheader : flash1.trx

to flash the Linksys EA2700. Flashing firmware with CFE, especially a modded build, may be dangerous, but as I understand it, as long as CFE itself is not damaged, a Linksys router always keeps a known-good backup firmware on the other partition, which it automatically restores to the active partition if the active one fails to boot several times. That is why people report that their Linksys routers sometimes fall back to the original factory firmware while they are flashing DD-WRT.

JOB DONE!

I am sharing a Dropbox folder here with some tools (built on Ubuntu 16.04 LTS) and a modded firmware for the Linksys EA2700. I did not list what I removed, but you can always unpack 'dd-wrt-30949-ea2700.MOD.trx' and compare its rootfs directory tree with the original version downloaded from ftp.dd-wrt.com.


Share: Wireless protocols showdown: Why not Wi-Fi?

Original link: https://blog.silvair.com/2015/10/01/wireless-protocols-showdown-3/

 

As promised, with this episode we’ll start reviewing each of the leading connectivity solutions one by one, sharing the lessons we’ve learned while exploring their capabilities and limitations. And what else could we start with but Wi-Fi, arguably the most globally recognized wireless networking technology. According to the Wi-Fi Alliance, the standard carries roughly half of all Internet traffic for billions of users worldwide. It’s most commonly used to provide computers, smartphones and tablets with quick and reliable Internet access, but it can theoretically connect any two devices to enable the exchange of data between them. Widely used in private homes, offices and public spaces around the entire globe, Wi-Fi might seem to be perfectly positioned to take the early IoT market by storm. And while the dust is far from settling in the IoT communication standards war, this powerful technology is clearly nowhere near the top of the list of today’s hottest connectivity solutions for the Internet of Things. So what went wrong? Let us break it down for you.

The Wi-Fi technology is based on the family of wireless networking standards IEEE 802.11x. They define only the first two layers of the OSI reference model – the physical layer and the data link layer (a quick introduction to the 7 layers of the OSI model can be found in our previous blogpost). As far as the network and transport layers are concerned, Wi-Fi typically relies on other standard protocols, such as UDP or TCP (for transport) and IPv4 or IPv6 (for networking). Let’s see what this arrangement looks like on a simplified version of the OSI model:

[image: Wi-Fi mapped onto a simplified OSI model]

Note the empty space at the application layer; we’ll get back to it later.

Wi-Fi is a powerful and reliable wireless connectivity solution that the technology industry has successfully relied on over many years. 802.11 has emerged as a global communication standard because it offered numerous excellent features, and was continually developed and improved by the Institute of Electrical and Electronics Engineers (IEEE). As a result of these efforts, multiple “flavors” of 802.11 were developed over time, with 802.11n being the most commonly used in today’s homes and offices. A Wi-Fi network has a star topology, which means that all its nodes connect directly to the central hub, e.g. a wireless router. With this arrangement, devices can be added and removed from the network without disrupting its entire structure and flow of data. Designed for the rapid exchange of high data volumes over reasonable distances, Wi-Fi does that job just perfectly. Basic parameters, such as range or data transfer rate, vary between different 802.11 standards, but a typical wireless router is usually enough to provide decent network coverage for a standard apartment. In larger buildings, more access points or signal extenders can be deployed to increase coverage. As for the throughput, some versions of the 802.11 standard have a limit of 11 or 54 Mb/s, but the commonly used 802.11n is capable of transmitting hundreds of megabits per second, and 802.11ac is even faster. These numbers certainly look impressive, as the throughput of other wireless connectivity solutions for the IoT is expressed in kb/s rather than Mb/s. On top of that, one of Wi-Fi’s major strengths is the ubiquity of 802.11 infrastructure across the globe. The fact that it is commonly integrated into new laptops, smartphones and tablets is also extremely relevant from the perspective of IoT applications.

The features mentioned above are what made Wi-Fi the default technology for enabling wireless Internet access in our lives. It can easily transport high-definition video streams and its throughput limits are usually way higher than the needs of an average user. But the IoT is a completely different thing than the good ol’ Internet. Wi-Fi’s impressive data transfer rate is overkill for typical smart home/office applications, where instead of data-heavy content, devices broadcast simple commands (e.g. on/off), state-change signals or only tiny bits of information (e.g. sensor data). And while such overcapacity is not a big problem by itself, there is a cost for this enormous throughput. Being a high-bandwidth communication standard, Wi-Fi is also extremely power-intensive. This is a big problem in the resource-challenged IoT world, where multiple devices are supposed to operate without any wires. In the case of several other connectivity solutions, coin batteries can keep simple wireless devices running for years. But building a battery-powered Wi-Fi device that could last even one year with decent responsiveness is virtually impossible. The power-hungriness is obviously not a big deal if a particular device is connected to a power lead or wall outlet, but for all those applications where battery-powered operation is a must (e.g. sensors in remote places), Wi-Fi is just not capable of delivering reasonable performance.

Further limitations arise from the topology of a Wi-Fi network. Reliance upon a central gateway to handle all the traffic has one major drawback – once a hub fails, individual nodes of the network cannot communicate with each other, essentially making the entire network inoperable. Of course you don’t expect your hub to go down all that often, but each such incident could end up being extremely irritating if all your light bulbs, door locks and garage doors belong to a single smart network.

As already mentioned, Wi-Fi can be found in every new smartphone or laptop on the market. Out of all the communication protocols aspiring to connect the IoT, only Wi-Fi and Bluetooth have this advantage of being natively integrated into our phones, making them ultimate controllers for our smart environments. However, in the case of Wi-Fi this potential cannot be fully realized. Even though a smartphone and a Wi-Fi device use the same language to communicate, this communication is not direct as it always goes through the network’s central access point. This is why Wi-Fi devices cannot use proximity sensing features that have become a trademark of the Bluetooth technology.

Given that virtually every potential customer has a Wi-Fi enabled phone, one could assume that setting up a Wi-Fi network of smart devices would be a piece of cake. It is a bit more complicated, though. Before a smart device can be added to a Wi-Fi network, it has to know the password for this network. This is easy when you want to connect a laptop or a smartphone, but gets tricky when your device has no keyboard and no screen. It might seem that a smartphone could do the job, after all it also speaks Wi-Fi so why not use it to tell the device what the password is? This certainly can be done – but first the device has to be networked with the phone, which brings us back to where we started from. Manufacturers use various methods to make this setup process as easy and intuitive as possible, yet each of them introduces additional complexity and has certain drawbacks. The setup process is one of the biggest problems of the Wi-Fi technology in a smart home environment where a flawless user experience is top priority. To address this challenge, some of the vendors have gone so far as to add microUSB ports to their smart devices solely for configuration purposes. While this effectively solves the setup issues, we are not convinced that light switches with USB ports are where we want to get with the IoT.

In the first episode of our series, we kept emphasizing that interoperability tops the list of challenges which need to be addressed for the IoT to realize its full potential. So what does Wi-Fi offer in this regard? Not that much, unfortunately. As we already mentioned, Wi-Fi does not define the application layer, which means that machine-to-machine communication is basically impossible unless companies manufacturing two particular devices work in close cooperation to precisely define how they can communicate. Wi-Fi is often mistakenly considered interoperable, since we use it all the time to successfully enter into all kinds of interactions with each other. But all these interactions can happen only because there are humans on both ends of the communication process. Setting up a Skype conversation is what can be described as adding an ad-hoc application layer to the Wi-Fi based communication. Humans can do it by choosing the right tools and coordinating the entire process by themselves. “Things” can’t handle that, and for this reason Wi-Fi is a standard which by itself does not ensure any interoperability in the world of connected devices.

Finally, there is the price factor that always needs to be taken into consideration by manufacturers. Wi-Fi modules are relatively pricey, and although differences have decreased recently, they still remain between 50% and 100% more expensive than some of the competing radio modules used in connected devices. This is not something that can be easily ignored when drawing up mass production plans.

Now it must be emphasized that some of the disadvantages mentioned above apply to the vast majority of the leading communication technologies, just to mention the hub-based topology or the complicated setup process. But what really disqualifies Wi-Fi as the ultimate connectivity solution for the IoT is its power-hungriness. Despite numerous impressive features, it just cannot efficiently support wireless devices, such as sensors or controllers, which are an important part of what the IoT is expected to become.

There are certain scenarios where Wi-Fi can still get the job done really well. If you are a manufacturer of a device which needs a reliable connection with the cloud rather than with a dense network of other smart devices, and your product needs to be connected to a power lead or wall outlet anyway, and you manage to find a way to overcome setup challenges to make this process intuitive and user-friendly, and you don’t care all that much about the price of a radio module, then Wi-Fi becomes a totally reasonable solution for you. Otherwise, you should think twice. Wi-Fi is an excellent technology for performing data-heavy activities, such as streaming video content, and it is likely to cover this small fraction of the IoT space where such processes are required. But when it comes to smartening our homes and offices, there are simply more suitable solutions out there, the ones that were designed specifically to address the needs of the IoT. Next time we’ll take a look at one of them, so stay tuned.


Repost: 郭老学徒: Why Despotism Cannot Do Without Lies

The original post is here.

Five hundred years ago, someone already pointed out incisively that despotic rule must be propped up by two pillars: violence and lies. That man was Machiavelli, honored as the father of political science.

Machiavelli was a Florentine. He served the Florentine city republic as a state secretary and diplomat. Later the republic was overthrown and the Medici family took power; Machiavelli lost his post and was even imprisoned for a time. Hoping to win the favor of the new rulers and stage a comeback, he wrote a book of advice for the ruling Medici family, the famous work The Prince.

In The Prince, Machiavelli gives the ruler practical advice, speaks frankly, and boldly casts off the constraints of morality.

He tells the prince bluntly: for a ruler, doing evil pays better than doing good. A ruler who acts according to the virtuous qualities preached by moralists risks losing his power.

His advice to the prince: a great ruler must have both the ferocity of the lion and the cunning of the fox. He must make everyone fear and respect him. A prince must be a great pretender and hypocrite who knows how to use deception to confuse people and make them believe his disguise. The ability to blind people's eyes and ears and muddle their minds is an essential weapon of every successful ruler.

To ease the prince's moral burden, Machiavelli declares that the end always justifies the means: for the sake of the end, no means is off limits. To keep his position, a ruler must acquire power by means that are anything but virtuous. This is the "command of necessity."

Many people see Machiavelli as a symbol of evil. Shakespeare called him "the murderous Machiavel."

Others, however, think Machiavelli was clear-eyed and truthful. The honest words he spoke from the ruler's standpoint in fact reflect, or rather expose, the evil nature of despotism: under despotism the ruler is bound to do evil, and Machiavelli asks the prince to obey this "command of necessity." That necessity has real value for helping people see despotism for what it is. When Engels called Machiavelli a giant of the Renaissance, it was presumably from this angle.

As to why despotic rule must be propped up by lies, and why violence alone will not do, Machiavelli never explained it thoroughly. Let us try to analyze it.

No ruler can keep his position once he has lost all basis in popular opinion. Despotic rule too needs a basis in popular sentiment: if it cannot get the people's approval, it at least needs their resignation. As the philosopher Hume once said, government is founded on opinion.

Why can rule not be maintained by violence alone?

Despotic rule is, in the end, rule by the few: supreme power ultimately rests in the hands of one man, one family, or a handful of oligarchs.

The holder of supreme power is only a mortal. He has none of the Monkey King's magic, no body immune to blades and bullets; his personal strength is extremely limited. He can order people killed, but he can also be killed. Without popular support, if he relies on violence alone to rule, then the people who wield that violence become his greatest danger. The Roman emperors feared military coups, so they stationed the legions on the empire's frontiers and kept only the loyal Praetorian Guard in Rome to protect the emperor; yet it was precisely the Praetorian Guard that later became the decisive force in making and unmaking emperors, and several emperors were killed by it. Chinese emperors who died unnatural deaths were likewise mostly victims of powerful ministers, eunuchs or brothers at their side; very few were killed by rebelling commoners.

All those who carry out the work of violence (the army, the guards, the praetorians) have ties to the people: most of them come from the people and most are influenced by the people. Using violence indiscriminately to crush popular discontent only provokes stronger discontent and anger, and at some turning point the men holding the guns may stop obeying orders and turn their guns around. The more a ruler depends on violence, the more likely he is to perish by violence.

Power never lacks contenders, and despotic power least of all. How many have fought to the death to seize it: sons have killed fathers, mothers have killed sons, blood brothers have slaughtered one another, to say nothing of political rivals with no family ties. Once a ruler loses popular support, his rivals will certainly seize on that to "act on Heaven's behalf and rid the people of a scourge." So a ruler who loses popular support is not far from destruction; uprisings, rebellions, coups, assassinations, forced abdications and depositions lie in wait for him. Chinese and foreign history offers more examples than one can count.

From this we can see why popular support is indispensable to a ruler: in the face of popular opinion, even the most powerful ruler is weak.

Every despotic regime, whatever its form, is rule by a few people who take the benefits and monopolize power. It cannot represent the people's interests and it cannot genuinely win the people's consent, so it must use lies to swindle that consent.

In ancient societies, the ruler's main lie for extracting popular consent was the "divine right of kings", which in effect used the will of Heaven to force the will of the people into line.

In Europe, before Christianity achieved dominance, the authority of Heaven's will had not yet been established, and in that period the proportion of Roman emperors who were killed or deposed was very high. Once Christianity ruled Europe, emperors and kings were recognized and crowned by the pope, and since most of the population were believers, the divine right of kings came to be accepted: Heaven's will stood in for the people's will. Medieval monarchs were personally far safer than the Roman emperors had been. Even so, they dared not relax, and still had to rely on lies to shore up their rule. In the fifteenth century Gutenberg's printing press made publishing easy in Europe, and the monarchs of every country, without exception, restricted freedom of speech and of the press: even with Heaven's will behind them, they feared that if the lies were exposed their grip on power would be shaken.

The Chinese emperor's "Mandate of Heaven" relied mainly on the story of "ruling by Heaven's decree" and on the ritual teachings to deceive his subjects. At the same time the rulers practiced a policy of keeping the people ignorant, instilling and cultivating servility in every possible way; under the Ming and Qing this culture of servility reached its height.

In the modern era, the once-and-for-all lie of the divine right of kings no longer works. Heaven's will can no longer stand in for the people's will; the people's will has become the sole source of legitimate power.

In countries with democratic systems, the popular will is counted quantitatively through ballots, so an elected politician necessarily has a popular mandate. And because freedom of speech, of the press and of publication are protected, lying is extremely difficult and costly for those in power; it amounts to suicide. Democracy therefore does not need lies to sustain it.

Countries under despotic rule, having neither the backing of Heaven's will nor a mandate from the people, must intensify their lies in order to keep their hold on power. That is why such countries, without exception, control the media, restrict freedom of speech and of the press, monopolize the news, block or filter information from the outside world, and control information in every field of culture and education.

Yet lying keeps getting harder in the modern world, unless a country seals off all contact with the outside as North Korea does. Even there, many people have risked death to escape. Completely cut off from the outside world, how did they ever come to risk their lives fleeing into the dark world supposedly untouched by the sunshine of Juche? Evidently no wall keeps out every draft.

In the Soviet era all media were strictly controlled, and there were no mobile phones or Internet, yet popular discontent still spread far and wide through kitchen conversations and political jokes.

During the Cultural Revolution, the radio and the newspapers proclaimed every day that the Cultural Revolution was "just great, just great, just great", spreading rumors and lies daily, yet popular discontent still spread far and wide through the grapevine and inside information.

So in the modern world, even without mobile phones and the Internet, lying is already very hard; with them it is harder still. The effectiveness of lies will keep diminishing, and fewer and fewer people will believe them. When the lies are no longer believed, the disintegration of despotism will begin. That, too, is inevitable.

But this inevitability points in exactly the opposite direction from Machiavelli's.

"In the name of state ownership, practicing ownership by one family; in the name of the people, practicing the fleecing of the people; in the name of stability, practicing the preservation of tyranny." King Li of Zhou is long dead, but his lies have survived.

"Apart from lies, violence has nothing to shield itself with, and lies can survive only by relying on violence." Change begins with spurning the lies.


EPSON R270 print head replacement and troubleshooting

Recently one color on my R270 got badly clogged and could not be cleaned out, and during the cleaning I also damaged the print head's ink nozzles, so the old head had to be scrapped. I bought a new print head on Taobao, and two faults showed up during the replacement. I could not find answers for either symptom anywhere online, so now that they are solved I decided to write them down here.

Tutorials and videos on disassembling the carriage can be found online, so I won't repeat that here.

The first problem after installation: on power-up, the printer barely started its self-test before the ink-change and paper-feed indicators began flashing together. EPSON's software suggested that a foreign object might be preventing the carriage from moving normally, and searching online gave basically the same conclusion. Even with the print head removed, the symptom stayed the same. After careful checking I was sure that was not my problem; the carriage moved essentially freely. Looking more closely, I noticed that above the carriage drive belt there was an almost transparent strip, something like a plastic tape; I had no idea what it was, and it did not seem to be seated properly. Shining a flashlight behind the carriage (it cannot be removed; by "behind" I mean the side facing the guide rail), I saw a T-shaped slot that looked like it should hold exactly this transparent strip. I tried working the strip into the T-slot, and the self-test suddenly passed. I still don't know what the T-slot is for; my guess is that there is a sensor which uses the strip to detect the carriage position. Space was too tight to take a photo for the record.

The second problem was that the test page would not print correctly. The output was complete garbage, as if every nozzle of every color were firing uncontrollably at the same time, see the picture below:

[photo: the garbled test page]

I disassembled and reassembled it several times and nothing changed. Again, I could not find this symptom anywhere online; it seemed nobody had ever run into it. Another symptom: when the carriage reached the middle of the rail there was a clearly audible electrical buzzing, which definitely should not happen normally.

I contacted the seller, whose advice was to remove the print head and dry its two data connectors with a hair dryer, to rule out a short circuit. Having no better idea, I removed the print head for the fourth time; by then I was a super-skilled worker and no longer even needed to open the printer's case to remove or install the head. The moment I took the head out I finally saw it: of the print head's two data-cable connectors I had only plugged in the wider one. There is also a narrow one, lying flat against the bottom of the carriage, which I had always assumed was attached directly to the carriage and had never connected. After connecting both cables, the test page immediately came out fine, and I printed a few photos with results I am basically satisfied with. The nozzle check still shows a few broken lines, but it does not look like a big deal; I hope it will slowly improve, and cleaning the print head is no longer a mysterious operation for me anyway.

EPSON's waste-ink design looks seriously flawed. Only while taking the printer apart these past days did I discover that the bottom shell was full of waste ink; tilt the printer slightly and it flows everywhere. It took almost a whole roll of toilet paper to soak it up, and then the printer had to dry for a few more days. While searching for teardown videos I happened to see one in which the internal waste-ink drain tube was routed straight out of the case (originally the waste ink is led into pads of non-woven, cotton-like material, but judging from my teardown the ink actually ends up sprayed everywhere). So I copied that: I drilled a hole in the right side of the case and led the tube out, so the waste ink can now be collected in a container outside the printer and should no longer leave a pool in the bottom shell. From what I can see, every head cleaning wastes roughly 5 ml of ink, and sometimes a similar cleaning cycle also runs at power-on.

Overall, inkjet printers feel like a very dated product. Ink printing is a problem in itself; it is just that, for now, there is no more suitable cheap replacement.

 
