Friday, 29 October 2010

Limera1n in One Go: Jailbreak iPad Firmware 3.2.2 Yourself

No jailbreak has seen as many twists and turns as the one for iOS 4.1 and iOS 3.2.2. A month ago the famed Dev-Team announced GreenPois0n, a jailbreak based on the SHAtter exploit, and a week beforehand confirmed a release time of 10:10 on 10 October 2010. On the eve of that release, however, Geohot suddenly struck, releasing Limera1n, which handles the iPad iOS 3.2.2 and iPhone iOS 4.1 jailbreaks with ease and forced the Dev-Team to postpone GreenPois0n. Geohot has kept refining Limera1n since then; as of this writing, the latest version is Beta 4.




Jailbreaking iPad firmware 3.2.2 yourself with Limera1n



Shortly after Limera1n was released, we published a jailbreak guide for iPhone iOS 4.1. Next up, of course, is the iPad iOS 3.2.2 jailbreak. In fact, jailbreaking iOS 3.2.2 is much the same as jailbreaking iOS 4.1; anyone who is bold and careful can do it without help.


Jailbreaking iPad 3.2.2 Yourself: Preparation


Like every jailbreak tool Geohot has released before, Limera1n does not require reflashing iOS. The one difference this time is that Cydia is blocked in mainland China, so after jailbreaking we must use a VPN or proxy server before it will load.





We also need to upgrade iTunes to the latest version, iTunes 10, and make sure iTunes syncs normally with the iPad. Finally, download the latest Limera1n Beta 4 (unzip password: www.evolife.cn) and make a backup through iTunes.


Jailbreaking iPad 3.2.2 Yourself: Hands-on



With the preparation done, it's time to actually jailbreak the iPad 3.2.2 firmware. First, make sure the iPad and iTunes are connected normally and close all antivirus software and firewalls. Then locate the extracted limera1n.exe, right-click it, and choose "Run as administrator".










Limera1n's window will pop up. Click "make it ra1n". The iPad will automatically reboot into recovery mode, and iTunes will report that it has found a device in recovery state.









Next, following Limera1n's prompts, hold down the Home and power buttons together. When "Release Power Button" appears, keep holding Home, release the power button, and continue holding Home for about 10 seconds. Limera1n will then report "Entering DFU Mode"; at that point release all buttons and let Limera1n finish the rest of the work automatically.





When all of the above is done, your iPad will display a green raindrop logo and then shut itself down. Once Limera1n shows "Done" on screen, the jailbreak of iPad firmware 3.2.2/3.2.1 is complete. Next, turn the iPad back on to install Cydia and the AppSync patch.


Jailbreaking iPad 3.2.2 Yourself: Installation


After the jailbreak, a Limera1n icon appears among the iPad's apps. Because the Limera1n website and Cydia cannot be reached from mainland China for well-known reasons, the icon is plain white for now. Set up a VPN or proxy server, make sure you can reach the limera1n.com website, then tap the white icon and choose "Install Cydia" in the screen that appears. Limera1n will download Cydia to your iPad; the whole process takes about five minutes.






Once the download finishes, Cydia launches automatically; if it doesn't, launching it from the home screen works too. On first run, Cydia loads its configuration and then quits on its own. Make sure your network, proxy server, or VPN is working, then wait patiently while Cydia finishes configuring itself; the whole configuration takes roughly five minutes.






After configuration, the Cydia and Limera1n icons are no longer blank white. Launch Cydia again and it will ask which kind of user to configure for; if you don't need the command line, just tap "User". Cydia will then update itself online. We strongly recommend that you not skip this: tap "Complete Upgrade" and finish all updates before moving on. Cydia may restart or quit on its own during the update.







When the update is done, open Cydia again, choose Manage at the bottom, find Sources, tap Add at the top right, and add a new source: http://cydia.hackulo.us. Note that this source also requires a proxy server or VPN connection to download correctly.






Once the source is added, find "AppSync for OS 3.2" among its packages and install it. Then quit Cydia and restart the iPad, and the whole jailbreak is finished; from this point you can install any app without restriction. One warning: a jailbroken iPad on 3.2.2/3.2.1 must use the "AppSync for OS 3.2" patch only; installing any other version of the patch will send the device straight into a "white Apple" system crash.

Thursday, 28 October 2010

Prevent DOS with iptables



After a recent conversation on the Ubuntu Forums I wanted to post an example of using iptables.


Of course there are several types of DOS attacks; in this post I will demonstrate the use of iptables to limit the traffic on port 80.



The goal is to keep your web server “responsive” to legitimate traffic, but to throttle back on excessive (potential DOS) traffic.


In this demonstration, iptables is configured as follows:



  1. The default policy is ACCEPT (to prevent lockout in the event of flushing the rules with iptables -F).

  2. “Legitimate” traffic is then allowed. In this example I am allowing traffic only on port 80.

  3. All other traffic is then blocked at the end of the INPUT chain (the final rule in the INPUT chain is to DROP all traffic).
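Before any rate limiting is added, the three steps above sketch out like this in iptables-restore format (a minimal outline only, not a tested rule set):

```
*filter
:INPUT ACCEPT [0:0]
# Step 2: allow "legitimate" traffic -- here, only port 80
-A INPUT -p tcp --dport 80 -j ACCEPT
# Step 3: everything else is dropped at the end of the INPUT chain
-A INPUT -j DROP
COMMIT
```

Step 1, the default policy of ACCEPT, is the ":INPUT ACCEPT" line; it is what keeps the box reachable if you ever flush the rules with iptables -F.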



The rules I will demonstrate are as follows:


First rule : Limit NEW traffic on port 80


sudo iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m limit --limit 50/minute --limit-burst 200 -j ACCEPT



Let's break that rule down into intelligible chunks.


-p tcp --dport 80 => Specifies traffic on port 80 (Normally Apache, but as you can see here I am using nginx).


-m state --state NEW => This rule applies to NEW connections.


-m limit --limit 50/minute --limit-burst 200 -j ACCEPT => This is the essence of preventing DOS.




  • “--limit-burst” is a bit confusing, but in a nutshell 200 new connections (packets really) are allowed before the limit of 50 NEW connections (packets) per minute is applied.


For a more technical review of this rule, see this netfilter page. Scroll down a bit to the “limit” section.
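To get a feel for those numbers, here is some back-of-the-envelope arithmetic (my own, not from the netfilter docs): with --limit-burst 200 and --limit 50/minute, the first 200 NEW packets are accepted immediately, and everything beyond that is accepted at only 50 per minute.

```shell
# Hypothetical flood of 1000 NEW packets against the rule above.
flood=1000
burst=200   # --limit-burst: packets accepted immediately
rate=50     # --limit: packets per minute once the burst is spent

# Minutes until all 1000 packets would have been accepted:
echo $(( (flood - burst) / rate ))
# prints 16
```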


Second rule – Limit established traffic



This rule applies to all RELATED and ESTABLISHED traffic on all ports, but is very liberal (and thus should not affect traffic on port 22 or DNS).


If you understood the above rule, you should understand this one as well.


sudo iptables -A INPUT -m state --state RELATED,ESTABLISHED -m limit --limit 50/second --limit-burst 50 -j ACCEPT


In summary, 50 ESTABLISHED (and/or RELATED) connections (packets really) are allowed before the limit of 50 ESTABLISHED (and/or RELATED) connections (packets) per second is applied.


Do not let that rule fool you: although it seems very open, it does put some limits on your connections.



Test it for yourself, try using the first rule with and without the second rule.


Full set of rules


After the above commands, here is the complete set of rules I am testing:



iptables-save
# Generated by iptables-save v1.4.4 on --
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
COMMIT
# Completed on --
# Generated by iptables-save --
*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
COMMIT
# Completed on --
# Generated by iptables-save v1.4.4 on --

*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -m limit --limit 50/sec --limit-burst 50 -j ACCEPT
-A INPUT -p icmp -m limit --limit 1/sec -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -m state --state NEW -m limit --limit 50/min --limit-burst 200 -j ACCEPT
-A INPUT -j LOG
-A INPUT -j DROP
-A FORWARD -j DROP
-A OUTPUT -o lo -j ACCEPT
COMMIT
# Completed on --

This rule set is for demonstration only and is NOT a complete set of rules for a web server. Do not use this rule set unmodified on a production server.


Testing the rule set


Human interaction



Open Firefox, point it to your web page. The web page should load nice and fast.


Hit F5 repetitively, load the page as fast as you can. Your web site should remain nice and responsive.


So far, so good, we want our site to remain responsive.


Simulated DOS


Actual DOS attacks are many times faster than humans, so here I will use ab.


See this link or the Apache documentation for information on ab.



Baseline, without the above 2 rules


ab -n 100 -c 10 http://bodhi's_test_server.com/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking bodhi's_test_server.com (be patient).....done

Server Software: nginx
Server Hostname: bodhi's_test_server.com
Server Port: 80

Document Path: /
Document Length: 59786 bytes

Concurrency Level: 10
Time taken for tests: 13.174 seconds
Complete requests: 100
Failed requests: 0
Write errors: 0
Total transferred: 6002700 bytes
HTML transferred: 5978600 bytes
Requests per second: 7.59 [#/sec] (mean)
Time per request: 1317.369 [ms] (mean)
Time per request: 131.737 [ms] (mean, across all concurrent requests)
Transfer rate: 444.98 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 122 129 2.2 128 134
Processing: 1151 1182 19.1 1177 1260
Waiting: 125 132 8.2 128 170
Total: 1280 1310 19.3 1305 1390

Percentage of the requests served within a certain time (ms)
50% 1305
66% 1313
75% 1316
80% 1321
90% 1328
95% 1354
98% 1386
99% 1390
100% 1390 (longest request)

Notice:

Requests per second: 7.59 [#/sec].

Total time for requests: 13 seconds.

(Data) Transfer rate: 444.98 [Kbytes/sec].



With the above rules


First attempt:



ab -n 100 -c 10 http://bodhi's_test_server.com/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking bodhi's_test_server.com (be patient)...apr_poll: The timeout specified has expired (70007)
Total of 99 requests completed

Oh no! It timed out, LOL.


Second attempt (I reduced the number of requests to 90):



ab -n 90 -c 10 http://bodhi's_test_server.com/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking bodhi's_test_server.com (be patient).....done

Server Software: nginx
Server Hostname: bodhi's_test_server.com
Server Port: 80

Document Path: /
Document Length: 59786 bytes

Concurrency Level: 10
Time taken for tests: 69.684 seconds
Complete requests: 90
Failed requests: 0
Write errors: 0
Total transferred: 5402430 bytes
HTML transferred: 5380740 bytes
Requests per second: 1.29 [#/sec] (mean)
Time per request: 7742.658 [ms] (mean)
Time per request: 774.266 [ms] (mean, across all concurrent requests)
Transfer rate: 75.71 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 123 128 4.3 127 155
Processing: 1036 6269 10081.4 1921 51059
Waiting: 125 1240 5908.7 128 49656
Total: 1159 6396 10081.1 2044 51186

Percentage of the requests served within a certain time (ms)
50% 2044
66% 2981
75% 5478
80% 7047
90% 20358
95% 27356
98% 48218
99% 51186
100% 51186 (longest request)

Notice:

Requests per second: 1.29 [#/sec] (mean)

Total time for requests: 69 seconds.

(Data) Transfer rate: 75.71 [Kbytes/sec].


For those unfamiliar with ab, that is a “minor” DOS.


For comparison, here is what ab can do to the server (iptables was flushed [disabled]):


ab -n 1000 -c 100 http://bodhi's_test_server.com/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking bodhi's_test_server.com (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests

Server Software: nginx
Server Hostname: bodhi's_test_server.com
Server Port: 80

Document Path: /
Document Length: 58708 bytes

Concurrency Level: 100
Time taken for tests: 59.324 seconds
Complete requests: 1000
Failed requests: 945
(Connect: 0, Receive: 0, Length: 945, Exceptions: 0)
Write errors: 0
Total transferred: 59190450 bytes
HTML transferred: 58945935 bytes
Requests per second: 16.86 [#/sec] (mean)
Time per request: 5932.368 [ms] (mean)
Time per request: 59.324 [ms] (mean, across all concurrent requests)
Transfer rate: 974.37 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 127 908 817.9 788 8016
Processing: 735 4779 1805.2 4368 15707
Waiting: 128 981 827.2 811 12143
Total: 1058 5687 1880.8 5269 17450

Percentage of the requests served within a certain time (ms)
50% 5269
66% 5899
75% 6340
80% 6863
90% 8078
95% 9001
98% 10937
99% 11730
100% 17450 (longest request)



Notice:

Requests per second: 16.86 [#/sec]

Total time for requests: 59 seconds.

(Data) Transfer rate: 974.37 [Kbytes/sec].


As you can see, the server has no problem dishing out 974.37 [Kbytes/sec] !!!


Closing remarks



Hopefully you now understand this “simple” example of limiting a DOS on port 80.


With these rules your web site remains responsive to human interaction in Firefox. Go ahead, hit F5 (refresh the page) as fast as you can and see if you can get your web page to slow down =) .


The difference is that, as in a real DOS attack, ab hits the server faster than you can with F5, so your site stays responsive to “normal” activity but throttles a DOS.


Obviously this is but one example and there are several types of DOS attacks. The goal is to demonstrate the use of iptables using a few “simple” rules.



Your task is to take this knowledge and apply it to your own server.

Pandoc is a Swiss Army knife text conversion utility



I love Markdown. If you write any sort of content for the Web, you really should try it; it's a simple notation system for making text bold or italic, creating headlines and bulleted lists, and more. To make text bold, for example, you just need to surround it with asterisks.


Converting Markdown into valid HTML is a fairly common task, and there's no dearth of tools that do this. But Pandoc caught my eye because it can do this and a whole lot more. Plus it's free, open source, and cross-platform.


Pandoc understands Markdown, HTML, and several other formats, and it can output:



  • plain text – i.e., strip all HTML and give you just the text


  • Markdown – so you can convert HTML back into Markdown for editing

  • And a whole list of other formats, including HTML, LaTeX, ConTeXt, PDF, RTF, DocBook XML, OpenDocument XML, ODT, GNU Texinfo, MediaWiki markup, groff man pages, EPUB ebooks, and S5 and Slidy HTML slide shows


It even supports PDF output using a helper utility. So yes, it can convert Markdown to HTML (and vice versa), but it can do so much more, too. It's definitely one for the toolbox!
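A quick command-line sketch (the file names are my own examples; -f, -t, and -o are standard Pandoc flags for the from-format, to-format, and output file):

```shell
# Create a small Markdown sample to work with.
printf '# Hello\n\nSome *emphasis* and **bold** text.\n' > sample.md

# Convert Markdown to HTML, then back to Markdown (skipped if pandoc is absent).
if command -v pandoc >/dev/null 2>&1; then
    pandoc -f markdown -t html sample.md -o sample.html
    pandoc -f html -t markdown sample.html -o roundtrip.md
fi
```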

Defend against Firesheep by surfing securely with HTTPS


The last couple of days have seen the launch and explosive proliferation of a Firefox add-on called Firesheep. It's an incredibly simple program that snoops unsecured Wi-Fi packets to grant you one-click masquerading of other users: if you log into Facebook at the local coffee shop, someone can use Firesheep to become you. Seriously, you can go along to any location with an unsecured Wi-Fi network and steal other users' accounts.




Firesheep does this by 'scooping' cookies out of the air. Whenever you log into a website, your name and password are only sent once -- afterwards, a stored authorization token is used. This means that if someone has your cookie, they can pretend to be you -- and on an unsecured wireless network, anyone can grab your cookie.



This is a huge issue, and you have every right to be concerned -- but there is a solution!



Hopefully you've all heard about SSL and HTTPS, the encryption techniques used to secure Internet communications. The 'secure padlock' icon in your browser is most commonly found when buying things online, but most major sites also use it to secure login and registration. If you see this padlock, you are safe. If you could browse the entire Internet with that secure padlock in place then I wouldn't be writing this post.




Unfortunately, many sites redirect you to an unsecured page after you log in. Yes, your password remains secret -- but what good is that if your exposed cookie can be stolen by anyone on the same unsecured Wi-Fi network?
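This is also why a cookie's Secure attribute matters: a cookie flagged Secure is only ever sent over HTTPS, so it cannot leak onto an open Wi-Fi network even if you later browse the site over plain HTTP. A server marks its session cookie like this (an illustrative response header; the cookie name and value are made up):

```
Set-Cookie: session_id=abc123; Secure; HttpOnly
```

The HttpOnly flag, while we're at it, keeps scripts from reading the cookie, which helps against a different class of attack.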



Fortunately, there are a few solutions for Firefox, and at least one good solution for every other browser.


The key to staying safe is by forcing every connection to use HTTPS, or to go via another connection that encrypts your communication. Almost every website has HTTPS capabilities, but because of the increased overhead that encrypted communication requires, it's often only used for logins and registering. Years ago this might not even have become an issue, but with everyone storing more and more personal information on services like Facebook and Google, and with Wi-Fi blanketing our streets and coffee shops, encryption really is required.







If you use Firefox, these add-ons should do the trick:

  • HTTPS Everywhere -- this gem from the Electronic Frontier Foundation is about as good as it gets. By default it forces most popular websites to use HTTPS, and you can add your own rules for other sites. This is one of the few add-ons that I use everywhere

  • Torbutton -- this solution is slightly more involved (it's for power-users), but if you want to be really secure and anonymous, the Tor network is a fantastic solution


  • Force-TLS -- this is like HTTPS Everywhere, but doesn't come with a built-in dictionary of secure sites. Adding them is very easy, though







Chrome users, due to a limitation of the browser, aren't quite so lucky: there is no way to force HTTPS with an extension. You may have read elsewhere that KB SSL will help you, but it won't. Instead you need to use a secure SOCKS proxy. This isn't particularly hard, but it does involve a bit of work.


Opera and Internet Explorer users: you too will need to use a SOCKS proxy; just follow one of the guides above.
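If you have SSH access to any machine you trust, OpenSSH's built-in dynamic forwarding is one common way to get such a SOCKS proxy (the host names here are placeholders for your own):

```
# ~/.ssh/config -- after "ssh home", a SOCKS5 proxy listens on localhost:1080
Host home
    HostName your.server.example.com
    DynamicForward 1080
```

Point your browser's SOCKS proxy setting at localhost:1080 and your traffic travels inside the encrypted SSH tunnel as far as that server.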




Ultimately, though, if you use unsecured Wi-Fi networks you will leave yourself exposed. The best solution might not be to install add-ons, but to ask your local coffee shop owner to secure his network with WPA2. The entire problem would go away if big-name websites used HTTPS across the board, too.


Tuesday, 26 October 2010

10 things you should know about IPv6 addressing

Over the last several years, IPv6 has been inching toward becoming a mainstream technology. Yet many IT pros still don’t know where to begin when it comes to IPv6 adoption because IPv6 is so different from IPv4. In this article, I’ll share 10 pointers that will help you understand how IPv6 addressing works.



Note: This article is also available as a PDF download.


1: IPv6 addresses are 128-bit hexadecimal numbers


The IPv4 addresses we’re all used to seeing are made up of four numerical octets that combine to form a 32-bit address. IPv6 addresses look nothing like IPv4 addresses. IPv6 addresses are 128 bits long and are made up of hexadecimal numbers.


In IPv4, each octet is separated by a period. In IPv6, the groups of hexadecimal characters are separated by colons, and each group can range from one to four characters in length.


2: Link local unicast addresses are easy to identify


IPv6 reserves certain headers for different types of addresses. Probably the best known example of this is that link local unicast addresses always begin with FE80. Similarly, multicast addresses always begin with FF0x, where the x is a placeholder representing a number from 1 to 8.



3: Leading zeros are suppressed


Because of their long bit lengths, IPv6 addresses tend to contain a lot of zeros. When a section of an address starts with one or more zeros, those zeros are nothing more than placeholders. So any leading zeros can be suppressed. To get a better idea of what I mean, look at this address:


FE80:CD00:0000:0CDE:1257:0000:211E:729C

If this were a real address, any leading zero within a section could be suppressed. The result would look like this:


FE80:CD00:0:CDE:1257:0:211E:729C

As you can see, suppressing leading zeros goes a long way toward shortening the address.


4: Inline zeros can sometimes be suppressed


Real IPv6 addresses tend to contain long sections of nothing but zeros, which can also be suppressed. For example, consider the address shown below:



FE80:CD00:0000:0000:0000:0000:211E:729C

In this address, four sequential sections contain nothing but zeros. Rather than simply suppressing the leading zeros, you can get rid of all of the sequential zero sections and replace them with two colons. The two colons tell the operating system that everything in between them is a zero. The address shown above then becomes:


FE80:CD00::211E:729C

You must remember two things about inline zero suppression. First, you can suppress a section only if it contains nothing but zeros. For example, you will notice that the second part of the address shown above still contains some trailing zeros. Those zeros were retained because there are non-zero characters in the section. Second, you can use the double colon notation only once in any given address.
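Both kinds of suppression can be checked mechanically. As an aside, Python's standard ipaddress module (used here purely as a convenient validator) canonicalizes an address by applying exactly these rules:

```shell
# Prints the fully compressed canonical form (note it is lowercased).
python3 -c "import ipaddress; print(ipaddress.ip_address('FE80:CD00:0000:0000:0000:0000:211E:729C'))"
# prints fe80:cd00::211e:729c
```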


5: Loopback addresses don’t even look like addresses


In IPv4, a designated address known as a loopback address points to the local machine. The loopback address for any IPv4-enabled device is 127.0.0.1.


Like IPv4, there is also a designated loopback address for IPv6:


0000:0000:0000:0000:0000:0000:0000:0001


Once all of the zeros have been suppressed, however, the IPv6 loopback address doesn't even look like a valid address. The loopback address is usually expressed as ::1.


6: You don’t need a traditional subnet mask


In IPv4, every IP address comes with a corresponding subnet mask. IPv6 also uses subnets, but the subnet ID is built into the address.


In an IPv6 address, the first 48 bits are the network prefix. The next 16 bits (which are often all zeros) are the subnet ID, and the last 64 bits are the interface identifier. Even though there is no subnet mask, you have the option of specifying a subnet prefix length.
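That 48/16/64 split can be sanity-checked with Python's ipaddress module (the 2001:db8: prefix below is the reserved documentation range, chosen purely for illustration): a /64 network covers exactly the 2^64 addresses left over for the interface identifier.

```shell
python3 -c "
import ipaddress
net = ipaddress.ip_network('2001:db8:aaaa:1::/64')   # 48-bit prefix + 16-bit subnet ID
print(net.prefixlen)                  # prints 64
print(net.num_addresses == 2**64)     # prints True
"
```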


7: DNS is still a valid technology


In IPv4, Host (A) records are used to map an IP address to a host name. DNS is still used in IPv6, but Host (A) records are not used by IPv6 addresses. Instead, IPv6 uses AAAA resource records, which are sometimes referred to as Quad A records. The domain ip6.arpa is used for reverse hostname resolution.
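In a zone file, the IPv4 and IPv6 records for the same host simply sit side by side (the host name and addresses below are illustrative; 2001:db8::/32 is the reserved documentation prefix):

```
www  IN  A     192.0.2.10
www  IN  AAAA  2001:db8::10
```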


8: IPv6 can tunnel its way across IPv4 networks


One of the things that has caused IPv6 adoption to take so long is that IPv6 is not generally compatible with IPv4 networks. As a result, a number of transition technologies use tunneling to facilitate cross network compatibility. Two such technologies are Teredo and 6to4. Although these technologies work in different ways, the basic idea is that both encapsulate IPv6 packets inside IPv4 packets. That way, IPv6 traffic can flow across an IPv4 network. Keep in mind, however, that tunnel endpoints are required on both ends to encapsulate and extract the IPv6 packets.



9: You might already be using IPv6


Beginning with Windows Vista, Microsoft began installing and enabling IPv6 by default. Because the Windows implementation of IPv6 is self-configuring, your computers could be broadcasting IPv6 traffic without your even knowing it. Of course, this doesn’t necessarily mean that you can abandon IPv4. Not all switches and routers support IPv6, just as some applications contain hard-coded references to IPv4 addresses.


10: Windows doesn’t fully support IPv6


It’s kind of ironic, but as hard as Microsoft has been pushing IPv6 adoption, Windows does not fully support IPv6 in all the ways you might expect. For example, in Windows it is possible to include an IP address within a Universal Naming Convention path (\\127.0.0.1\C$, for example). However, you can’t do this with IPv6 addresses, because when Windows sees a colon, it assumes you’re referencing a drive letter.


To work around this issue, Microsoft has established a special domain for IPv6 address translation. If you want to include an IPv6 address within a Universal Naming Convention path, you must replace the colons with dashes and append .ipv6-literal.net to the end of the address (for example, FE80-AB00-200D-617B.ipv6-literal.net).

Friday, 22 October 2010

[Free] AutoCAD WS: Web-based CAD Drawing Tool and DWG Viewer

AutoCAD is a well-known and very capable CAD design package, and its maker recently launched AutoCAD WS, an online editing and viewing tool. Although its features cannot match the full desktop version, overall it is quite convenient and practical.


Besides letting ordinary Windows, Mac, and other computers edit drawings online directly through a web browser, it also supports handheld devices such as the iPhone, iPad, and iPod Touch. Finished drawings can be shared for others to view online, so anywhere with an Internet connection you can present and revise your designs. If you're interested, give it a try.


Main features of AutoCAD WS (reposted from the vendor's press release):

  • Web and mobile DWG viewers let you view AutoCAD designs anywhere, through a web browser or a mobile device;
  • The online DWG editor supports more than 100 common AutoCAD drawing and editing tools, with an intuitive interface for CAD professionals and non-professionals alike;
  • Built-in sharing generates a unique URL for inviting project participants to view a DWG file online, with permissions that control whether others can view, edit, or download drawings and folders;
  • Convenient online storage organizes DWG and DXF files, images, and other related project files into project folders, and supports formats including DOC, JPEG, PNG, and PDF;
  • Real-time collaboration lets several people work on the same DWG file at once and see each other's changes immediately;
  • A design timeline captures and tracks all changes to a drawing, for version control and review.
Tuesday, 19 October 2010

Scrapwalls: Arrange Photos into Cute Shapes and Print Them as Posters

Many people take lots of meaningful photos on special occasions or at events, and it's a bit of a shame if they just sit on a computer to be looked at. If you'd like to do something different with them, try the photo-processing service introduced in this article.

Scrapwalls offers 28 different shapes online and arranges your photos into various cute figures. Besides printing the finished collage on a color printer, you can also capture the image and share it with friends or post it on Facebook or a blog for everyone to enjoy. If you want to go further, a paid printing service will output it as a poster in various sizes to hang on your wall and admire at leisure. (It isn't cheap, though.)


  • Site name: Scrapwalls
  • Site URL: http://www.scrapwalls.com/
  • Service: you can create a collage online for free and capture the image yourself; for a fee it can be printed as a poster, postcard, or other products. Shipping adds US$10, and orders over US$25 reportedly ship free.
  • Sample result:

As for the output quality, I haven't paid for prints myself or seen the physical products, so if you're interested, it's worth searching for reviews from other users abroad and weighing them up first.

     


How to use it:


Step 1: Open the Scrapwalls website and click the "Get Started" button.

     

Step 2: First, choose a poster size.

     

Step 3: Choose the frame shape you want the photos composed into, then click "Continue".

     

Step 4: Enter an email address and a password to create a free account.

     

Step 5: Click "Select Photos" to pick the images you want to upload, then click "Upload Now" to start the upload.

     

Step 6: After the upload, choose in the window which photos to use and which to leave out, then click "Continue".

     

Step 7: Next, drag the photos around the canvas with your mouse to change their position and size. When everything is set, click "Add to cart" to check out and send it to print.

If you'd rather not spend money on printing, just click "Share Now" to publish the finished image on the site and share it with friends and family.

     

Step 8: Alternatively, use a screenshot tool, or the "PrtSc" key on your keyboard, to capture the result and save it as an image file.

High Availability for the Ubuntu Enterprise Cloud (UEC) – Cloud Controller (CLC)

At UDS-M, I raised the concern about the lack of High Availability for the Ubuntu Enterprise Cloud (UEC). As part of the Cluster Stack Blueprint, the effort to bring HA to UEC was defined; however, it was barely discussed due to lack of time, and the work on HA for the UEC has been deferred to Natty. In preparation for the next release cycle, though, I've been able to set up a two node HA Cluster (Master/Slave) for the Cloud Controller (CLC).

NOTE: This tutorial is an early draft and might contain typos/errors that I have not noticed. It also might not work for you, which is why I recommend first having a UEC up and running with one CLC and then adding the second CLC. If you need help or guidance, you know where to find me :) . Also note that this is only for testing purposes!, and I’ll be moving this HowTo to an Ubuntu Wiki page soon since the formatting seems to be somewhat annoying :) .



1. Installation Considerations

I’ll show you how to configure two UEC (Eucalyptus) Cloud Controllers in High Availability (Active/Passive), using the HA clustering tools (Pacemaker, Heartbeat) and DRBD for replication between CLC’s. This is shown in the following image.

The setup I used is a 4 node setup (1 CLC, 1 Walrus, 1 CC/SC, 1 NC), as detailed in the UEC Advanced Installation Doc; however, I installed the packages from the Ubuntu Server Installer. As per the UEC Advanced Installation Doc, it is assumed that there is only one network interface (eth0) in the Cloud Controller, connected to a “public network” that links it to both the outside world and the other components in the Cloud. However, to be able to provide HA we need the following:

• First, we need a Virtual IP (VIP) to allow both the clients and the other controllers to access either one of the CLC’s through that single IP. In this case, we assume that the “public network” is 192.168.0.0/24 and that the VIP is 192.168.0.100. This VIP will also be used to generate the new certificates.

• Second, we need to add a second network interface to the CLC’s to use as a replication link for DRBD. This second interface is eth1 and will have an address in 10.10.10.0/30.


2. Install Second Cloud Controller (CLC2)

Once you finish setting up the UEC and everything is working as expected, install a second cloud controller.
Once it is installed, do not start the services just yet. You will, however, need to exchange the CLC ssh keys with both the CC and the Walrus, as specified in SSH Key Authentication Setup under STEP4 of the UEC Advanced Installation doc. Please note that this second CLC will also have two interfaces, eth0 and eth1. Leave eth1 unconfigured, but configure eth0 with an IP address in the same network as the other controllers.



3. Configure Second Network Interface

Once the two CLC’s are installed (CLC1 and CLC2), we need to configure eth1. This interface will be a direct link between CLC1 and CLC2 and will serve as the DRBD replication link. In this example we use 10.10.10.0/30. In /etc/network/interfaces:


On CLC1:

auto eth1
iface eth1 inet static
    address 10.10.10.1
    netmask 255.255.255.252

On CLC2:

auto eth1
iface eth1 inet static
    address 10.10.10.2
    netmask 255.255.255.252



NOTE: Do *NOT* add a gateway, because this is a direct link between the CLC’s. If we add one, it will create a default route and the configuration of the resources will fail further along the way.


4. Setting up DRBD

Once CLC2 is installed and configured, we need to set up DRBD for replication between the CLC’s.

4.1. Create Partitions (CLC1/CLC2)

For this we need a new disk or disk partition. In my case, I’ll be using /dev/vdb1. Please note that the partitions need to be exactly the same size on both nodes. You can create them whichever way you prefer.


4.2. Install DRBD and load the module (CLC1/CLC2)

Now we need to install the DRBD utilities:

sudo apt-get install drbd8-utils




Once it is installed, we need to load the kernel module and add it to /etc/modules. Please note that the DRBD kernel module is now included in the mainline kernel.

sudo modprobe drbd
sudo -i
echo drbd >> /etc/modules



4.3. Configuring the DRBD resource (CLC1/CLC2)

Add a new resource for DRBD by editing the following file:

sudo vim /etc/drbd.d/uec-clc.res

The configuration looks similar to the following:




resource uec-clc {
    device /dev/drbd0;
    disk /dev/vdb1;
    meta-disk internal;
    on clc1 {
        address 10.10.10.1:7788;
    }
    on clc2 {
        address 10.10.10.2:7788;
    }
    syncer {
        rate 10M;
    }
}


4.4. Creating the resource (CLC1/CLC2)

Now we need to do the following on both CLC1 and CLC2:

sudo drbdadm create-md uec-clc
sudo drbdadm up uec-clc



4.5. Establishing initial communication (CLC1)

Now, we need to do the following:

sudo drbdadm -- --clear-bitmap new-current-uuid uec-clc
sudo drbdadm primary uec-clc
sudo mkfs -t ext4 /dev/drbd0



4.6. Copying the Cloud Controller Data for DRBD Replication (CLC1)

Once the DRBD nodes are in sync, we need to have the data replicated between CLC1 and CLC2 and make the necessary changes so that both can access the data at any given point in time. To do this, run the following on CLC1:

sudo mkdir /mnt/uecdata
sudo mount -t ext4 /dev/drbd0 /mnt/uecdata
sudo mv /var/lib/eucalyptus/ /mnt/uecdata
sudo mv /var/lib/image-store-proxy/ /mnt/uecdata
sudo ln -s /mnt/uecdata/eucalyptus/ /var/lib/eucalyptus
sudo ln -s /mnt/uecdata/image-store-proxy/ /var/lib/image-store-proxy
sudo umount /mnt/uecdata

What we did here is move the Cloud Controller data to the DRBD mount point so that it gets replicated to the second CLC, and then symlink from the mount point back to the original data locations.



4.7. Preparing the second Cloud Controller (CLC2)

Once we have prepared the data in CLC1, we can discard the data in CLC2, and we need to create the symlinks the same way we did in CLC1. We do this as follows:

sudo mkdir /mnt/uecdata
sudo rm -fr /var/lib/eucalyptus
sudo rm -fr /var/lib/image-store-proxy
sudo ln -s /mnt/uecdata/eucalyptus/ /var/lib/eucalyptus
sudo ln -s /mnt/uecdata/image-store-proxy/ /var/lib/image-store-proxy

After this, the data will be replicated via DRBD; whenever CLC1 fails, CLC2 will be able to take over with the same data.


5. Setup the Cluster

5.1. Install the Cluster Tools

First we need to install the clustering tools:

sudo apt-get install heartbeat pacemaker



5.2. Configure Heartbeat

Then we need to configure Heartbeat. First, create /etc/ha.d/ha.cf and add the following:

autojoin none
mcast eth0 239.0.0.43 649 1 0
warntime 5
deadtime 15
initdead 60
keepalive 2
node clc1
node clc2
crm respawn



Then create the authentication file (/etc/ha.d/authkeys) and add the following:

auth 1
1 md5 password



and change the permissions:

sudo chmod 600 /etc/ha.d/authkeys



    5.3. Removing Service Startup at Boot

    We need to let the cluster manage the resources, instead of starting them at boot.




    sudo update-rc.d -f eucalyptus remove

    sudo update-rc.d -f eucalyptus-cloud remove

    sudo update-rc.d -f eucalyptus-network remove

    sudo update-rc.d -f image-store-proxy remove




    We also need to change “start on” to “stop on” in the upstart configuration scripts under /etc/init/ for:




    eucalyptus.conf

    eucalyptus-cloud.conf

    eucalyptus-network.conf
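    The edit is mechanical, so it can be scripted. Here is a sketch demonstrated on a scratch file; on the real nodes you would apply the same sed, with sudo, to the three files listed above:

```shell
# Demonstrate the "start on" -> "stop on" change on a scratch copy.
cat > /tmp/demo-upstart.conf <<'EOF'
description "demo upstart job"
start on runlevel [2345]
stop on runlevel [!2345]
EOF
sed -i 's/^start on/stop on/' /tmp/demo-upstart.conf
grep '^stop on' /tmp/demo-upstart.conf   # no "start on" lines remain
```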




    5.4. Configuring the resources

    Then, we need to configure the cluster resources. For this do the following:




    sudo crm configure



    and paste the following:




    primitive res_fs_clc ocf:heartbeat:Filesystem params device=/dev/drbd/by-res/uec-clc directory=/mnt/uecdata fstype=ext4 options=noatime


    primitive res_ip_clc ocf:heartbeat:IPaddr2 params ip=192.168.0.100 cidr_netmask=24 nic=eth0

    primitive res_ip_clc_src ocf:heartbeat:IPsrcaddr params ipaddress="192.168.0.100"

    primitive res_uec upstart:eucalyptus op start timeout=120s op stop timeout=120s op monitor interval=30s

    primitive res_uec_image_store_proxy lsb:image-store-proxy

    group rg_uec res_fs_clc res_ip_clc res_ip_clc_src res_uec res_uec_image_store_proxy

    primitive res_drbd_uec-clc ocf:linbit:drbd params drbd_resource=uec-clc

    ms ms_drbd_uec res_drbd_uec-clc meta notify=true

    order o_drbd_before_uec inf: ms_drbd_uec:promote rg_uec:start

    colocation c_uec_on_drbd inf: rg_uec ms_drbd_uec:Master


    property stonith-enabled=False

    property no-quorum-policy=ignore
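    The pasted configuration does not take effect until it is committed. Still inside the crm configure shell (cluster-side commands, shown here only as a sketch):

```shell
# Inside the "crm configure" prompt opened above:
#   verify   # check the pending configuration for errors
#   commit   # activate it
#   exit
# Then watch the resources come online:
sudo crm_mon -1
```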



    6. Specify the Cloud IP for the CC, NC, and CLC

    Once you finish the configuration above, one of the CLCs will be active and the other passive. The cluster resource manager decides which node becomes primary, though CLC1 is expected to take that role.


    Now, as specified in the UEC Advanced Installation documentation, we need to specify the Cloud Controller VIP in the CC. It is also important to do the same in the NC. This is done in /etc/eucalyptus/eucalyptus.conf by adding:



    VNET_CLOUDIP="192.168.0.100"


    Then, log into the Web Front end (192.168.0.100:8443), and change the Cloud Configuration to have the VIP as the Cloud Host.



    By doing this, new certificates are generated with the VIP, which lets you connect to the cloud even if the primary Cloud Controller fails and the second one takes control of the service.


    Finally, restart the Walrus, CC/SC, and NC and enjoy.
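    Before relying on the setup, a manual failover drill is a good idea (cluster-side commands; the node names match the ha.cf above):

```shell
# Move the resource group to clc2, confirm, then bring clc1 back.
sudo crm node standby clc1   # resources should migrate to clc2
sudo crm_mon -1              # rg_uec should now be running on clc2
sudo crm node online clc1    # clc1 rejoins as the passive node
```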


    7. Final Thoughts

    The cluster resource manager uses the upstart script to manage the Cloud Controller. This is not optimal and is only meant for testing purposes; an OCF Resource Agent will be required to adequately start, stop, and monitor Eucalyptus. The OCF RA will be developed soon, and this will be discussed at the Ubuntu Developer Summit – Natty.




    Reference link:

    http://www.roaksoax.com/2010/10/high-availability-uec-clc-howto

    Scan files for viruses and trojans with 17 antivirus engines (NoVirusThanks Uploader)

    Much like the previously covered "VirusTotal Uploader", NoVirusThanks Uploader uploads files from your computer to its website for online scanning, checking them against 16 of the better-known antivirus engines such as Avira, NOD32, Kaspersky, AVG, Avast, and more.

    Unlike similar services, however, besides plain file upload and scanning it also offers remote file upload (paste a URL and the file is fetched for you), inspection and deletion of boot-time startup entries, a list of installed drivers, a list of loaded DLLs, and so on; each item can be selected and uploaded to check whether anything is wrong.

    It also provides a "Running Processes" window for real-time inspection of running programs: much like the system Task Manager, you can see which files are currently running, and if you suspect one is a virus or trojan, you can right-click it and upload it to the site for scanning.


    ▇ Software info ▇     (report errors or version updates)


  • Name: NoVirusThanks Uploader
  • Version: 2.4.1.0
  • Language: English
  • License: freeware
  • File size: 653KB
  • Supported systems: Windows 98/2000/XP/2003/Vista/Win7 (32/64-bit)
  • Official site: click here
  • Download: click here

  • How to use:



    Step 1: After installing, open the program, click the "Browse…" button on the "Uploader" tab to pick the file you want to upload and scan, then click "Upload" to start uploading.

    Step 2: When the scan finishes, a web page opens automatically; if "Status" shows "CLEAN", the file is most likely free of viruses or trojans. (How accurate the results are depends, of course, on each antivirus engine.)

    Step 3: If a program looks problematic, "Status" shows "INFECTED" in red, and the list below shows which antivirus engine and version reported which detection; entries in red indicate potential problems.

    Step 4: NoVirusThanks Uploader also offers a "Running Processes" view for inspecting currently running programs; if you suspect one of them, right-click it and choose "Upload To Virus Scanner".

    Step 5: The "Registry Startups" menu lists the items that start automatically at boot; many viruses and other malware register themselves here so they run whenever the computer starts. If you cannot tell whether an unfamiliar startup entry is safe, right-click it and upload it for scanning.

    Step 6: "Drivers List" shows the drivers currently installed, and the companion "Loaded DLLs" view lets you check the loaded DLL modules.

    Square Privacy Cleaner: clear computer usage traces and private data in one click

    As we use a computer we leave behind temp files, cache data, network connection records, recently opened file lists, and so on. Browsing the web also leaves URL history, search keywords, download records, cookies, and cached files in the browser, and even ordinary applications keep file lists and history of their own. If you would rather nobody dug up these secrets about you, give the Square Privacy Cleaner privacy-cleaning tool introduced here a try.

    Square Privacy Cleaner groups common usage traces and private data into five categories: "Windows General", "Web Browsers", "Temp Folders", "Applications", and "Junk Files". Tick the items you want cleared, click the "Delete Traces" button, and it searches for and deletes the selected records and private data; overall it is quite simple to use.


    ▇ Software info ▇     (report errors or version updates)


  • Name: Square Privacy Cleaner
  • Version: 1.1.0.0
  • Language: English
  • License: freeware
  • File size: 696KB
  • Supported systems: Windows 98/2000/XP/2003/Vista/Win7 (32/64-bit)
  • Official site: http://www.novirusthanks.org/

  • Download: click here

  • How to use:


    Step 1: Using Square Privacy Cleaner is simple: tick the items you want cleared, then click the "Delete Traces" button. The notes below walk through the interface and list everything that can be cleared.

    "Windows General" mostly covers system-related usage records. Unfortunately the program is English-only for now; if you are unsure what an item does, a quick Google search helps.

    Step 2: "Web Browsers" covers the browsing history, cache, cookies, URL history, download lists, and so on of common browsers, including IE, Firefox, Opera, and Google Chrome.

    Step 3: "Temp Folders" covers temporary files kept in the computer's temp folders.

    Step 4: "Applications" lists well-known software most users run, including WinRAR, Media Player, Foxit Reader, WinZIP, VNC, MS Office, 7-zip, Java, Adobe Reader, and more; tick the entries and click "Delete Traces" to clear the related history and usage traces.

    Step 5: "Junk Files" covers log files you have no use for while the computer is healthy; tick and delete them as needed. Unless you are short on disk space or have strict privacy requirements, keeping logs has its benefits: if the computer misbehaves later, you can dig them out to investigate.

    Monday, 18 October 2010

    A while ago a type of USB flash drive virus was making the rounds: the moment you plugged in an infected drive, the virus on it would infect your computer through the Windows AutoPlay feature and then spread onward to other computers through other channels, causing plenty of damage. If you often carry files around on USB drives and worry about new variants of this kind of virus, try UsbCleaner, the free USB drive disinfection tool below.



    Name: UsbCleaner
    Version: V6.0 Build 20101017
    Language: Traditional Chinese
    License: freeware
    File size: 3.2MB
    Supported systems: Windows 2000/XP/2003/Vista
    Official site: http://www.usbcleaner.cn/ (lots of ads)
    Download: click here


    Changelog:  (reposted from the official site)


  • Fixed USBCleaner's false alarm on the legitimate vmnat.exe
  • Polished parts of the FolderCure interface
  • Updated FolderCure to V4.4
  • Improved the USBCleaner update module Update.exe; even when USBCleaner itself has no new release, the online updater can now fetch all FolderCure update files
  • Adjusted and optimized the FolderCure update module
  • Fixed garbled text in USBMON's duplicate-instance warning on Traditional Chinese systems
  • Fixed one garbled-text issue in the USBCleaner main program on Traditional Chinese systems
  • FolderCure now marks Folder_ForceHidden reports as non-virus
  • Re-optimized the FolderCure virus database (note: older FolderCure versions cannot read the new database correctly, so please download the latest version)
  • Improved detection and removal of the Important.FILES.EXE series of folder-icon viruses

  • Improved removal of recycle-bin-icon viruses such as gdiplus.exe and usddriver.exe (removable via FolderCure)
  • Added detection and removal of 240 new USB drive viruses, including delautorun.bat Worm.Win32.Autorun.204800;   .exe Trojan.Win32.DownldrU.a; useinit.exe Worm.Win32.Autorun.jot; h4ck3v1l.vbs Worm.Script.VBS.Autorun.ai; cao.exe Trojan.Win32.Nodef.ihw; boot.exe Trojan.Win32.Undef.dqr, and more

    1. Check whether your computer has a USB drive virus


    Step 1: Download UsbCleaner from the link above, extract the archive, and double-click the "USBCleaner.exe" file.


    Step 2: In the UsbCleaner main window, click the [Full Scan] button on the left to let UsbCleaner scan your computer for related viruses or suspicious programs.


    2. Other system repair functions


    UsbCleaner also has various built-in Windows repair functions, including restoring hidden files, fixing drives that no longer open on double-click, repairing the right-click menu, and re-enabling other system functions disabled by malware.


    It also bundles a few handy disinfection and malware detection tools.


    3. Enable real-time monitoring to block USB drive viruses


    Step 1: If you are worried about being hit by USB flash drive viruses, click the [Background Monitor] button to get ready to enable UsbCleaner's real-time protection.


    Step 2: Then click the [Enable Monitoring] button to have UsbCleaner watch over your computer and fend off virus attacks.


    Step 3: Once real-time monitoring is on, a notification window appears at the lower right of the desktop to show that protection is active. From then on your computer has an extra layer of defence that should block the vast majority of common USB drive viruses.

    Wednesday, 13 October 2010

    FTP Rush: a free FTP client with "FXP" server-to-server transfer support

    There are plenty of free FTP clients; the open-source FileZilla, for instance, is excellent, and quite a few clients support the server-to-server transfer mechanism commonly known as "FXP". If you often need it, give the free FTP Rush introduced here a try.

    Most people use an FTP client simply to upload or download files. But beyond pulling files down from an FTP server or pushing files and documents up to a server or web host, many people regularly need to move files from one server to another, usually by downloading from host A to the local machine and then uploading to host B. Work like that can instead be handed to software supporting the "FXP" transfer mechanism: connect to both FTP servers at once and drag straight from A to B.

    Besides FXP, FTP Rush offers a multi-tab interface, letting you open many different FTP servers and operate and transfer on them at the same time. It also supports on-the-fly compression (MODE Z), which saves bandwidth and transfer time on servers that support it.

    On top of that it supports clipboard monitoring, drag-and-drop, directory caching, directory monitoring, anti-disconnect keep-alive, SFV file verification, and Unicode/UTF-8 (so you can browse and download folders and files named in Japanese or Simplified Chinese), plus file search, Socks4, Socks4A, Socks5 and HTTP tunnel proxies, and AUTH SSL, AUTH TLS and Implicit SSL encryption. It ships with 22 interface languages, including Traditional Chinese, making it a simple, fast, and handy FTP transfer tool.

    How to use:


    Step 1: FTP Rush supports 22 interface languages; the installer starts out in Traditional Chinese, installation is straightforward, and no unrelated software is bundled in.

    Step 2: After launching FTP Rush, fill in the FTP address, login account, password and other details in the toolbar at the top of the window, then press Enter to connect to the FTP site and start uploading or downloading.

    Step 3: The left pane is the remote machine and the right pane the local one. Switch each pane to the folder you want, then right-click and choose Transfer, or simply drag files to the other pane, to start uploading or downloading. You can also drag files from the desktop into the FTP Rush window for a quick upload.

    Step 4: To enable FXP server-to-server mode, click the "switch to remote view" button on the local pane's toolbar to turn it into a second remote pane, then connect it to the other FTP server you want to use.

    Step 5: As shown, once both panes are connected to FTP servers, you can drag and drop between the two panes to transfer files directly.

    Step 6: FTP Rush supports multiple tabs, so you can connect to several FTP servers and transfer on them simultaneously; right-click a tab header to open or close tabs.

    Step 7: There are also various other handy tools and extra features.